Just glossed over the paper, but it seems, in principle, simple enough (though rather brilliant IMHO). Essentially they're doing what you do when you train a neural network, only instead of adjusting weights connecting "neurons", you adjust the shape and position of gaussians, and the coefficients of the spherical harmonics for the colors. This requires the rendering step to be differentiable, so that you can back-propagate the error between the rendering and the ground-truth image. The next key step is to adjust the number of gaussians every N iterations: either fill in detail by cloning a gaussian in an area which is undercovered, or split a gaussian in an area which is overcovered. They use the gradient of the view-space position to determine if more detail is needed, i.e. those gaussians which the optimizer wants to move significantly across the screen seem to be in a region without enough detail. They then use the covariance of the gaussians to decide whether to split or to clone: gaussians with large variance get split, the others cloned.

As far as I know, that was the first commercial game with voxel rendering. It's been a bit since I played it, but the swimming jaggies make me think it was a Manhattan-distance height map offset by planar traversal (kinda like Doom raycasting) or some similar trick. I don't recall any intersections or overhangs, but, to be fair, I was a middle schooler when Comanche came out. Once acceleration hit, transformation of triangles with fixed-function pipelines took over; the ability to push textured triangles with minimal per-pixel value adjustment won out. Slowly but surely we've swung back to a high ALU balance (albeit via massive stream parallelism): we've shifted from heavy list/vertex transformers to giant array multiply/add processors. It's a pretty great time to be a processing nerd.
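The clone/split heuristic described above can be sketched in a few lines. This is a loose illustration of the adaptive density control idea, not the paper's implementation; the threshold values and function shape are my own assumptions (the scale-reduction factor of 1.6 is the one the paper reports).

```python
import numpy as np

# Illustrative thresholds -- the real ones are tuned hyperparameters.
GRAD_THRESHOLD = 0.0002   # "large" view-space positional gradient (assumed)
SCALE_THRESHOLD = 0.01    # boundary between "small" and "large" gaussians (assumed)

def densify(positions, scales, pos_grads):
    """Clone small gaussians and split large ones wherever the optimizer
    is pushing a gaussian strongly across the screen (under-reconstruction).
    positions, scales, pos_grads: (N, 3) float arrays."""
    needs_detail = np.linalg.norm(pos_grads, axis=1) > GRAD_THRESHOLD
    large = scales.max(axis=1) > SCALE_THRESHOLD

    # Clone: a small gaussian in an undercovered region is duplicated,
    # with the copy nudged along the gradient direction.
    clone_mask = needs_detail & ~large
    clones = positions[clone_mask] + pos_grads[clone_mask]

    # Split: a large gaussian covering too much is replaced by two smaller
    # ones sampled inside its extent, each with reduced scale.
    split_mask = needs_detail & large
    offsets = np.random.randn(int(split_mask.sum()), 3) * scales[split_mask]
    halves_a = positions[split_mask] + offsets
    halves_b = positions[split_mask] - offsets

    keep = ~split_mask  # split parents are removed
    new_positions = np.concatenate([positions[keep], clones, halves_a, halves_b])
    new_scales = np.concatenate([scales[keep],
                                 scales[clone_mask],
                                 scales[split_mask] / 1.6,
                                 scales[split_mask] / 1.6])
    return new_positions, new_scales
```

Each call can only grow the population by a bounded amount, which is why running it only every N iterations (with periodic pruning of near-transparent gaussians, which the paper also does) keeps the count under control.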
Gaussian voxel reconstruction is useful in medical and GIS settings, which, if memory serves, is what Kyle Freeman from Novalogic drew on for his work on Comanche. The technique worked well on the non-accelerated (CPU-only) hardware of the era, with the additive approach saving the pain of needing to keep a z-buffer or fragment list. I used additive gaussian fields (restricted by bounding regions) for this back in the late '90s for audio visualizations in a ripper/player called "Siren" (back when we actually thought we could charge money for something like that).
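The z-buffer-free property of additive fields is easy to see in a sketch: because addition commutes, blobs can be splatted in any order and the framebuffer ends up identical, so there is nothing to sort. This is a minimal 2D illustration of the general idea; the function and parameter names are mine, not anything from Siren.

```python
import numpy as np

def splat_gaussians(width, height, blobs):
    """Additively splat 2D gaussians into a float framebuffer.
    blobs: iterable of (cx, cy, sigma, intensity) tuples."""
    ys, xs = np.mgrid[0:height, 0:width]
    img = np.zeros((height, width), dtype=np.float64)
    for cx, cy, sigma, intensity in blobs:
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        img += intensity * np.exp(-d2 / (2.0 * sigma ** 2))
    return img  # clamp / tone-map before display

img = splat_gaussians(64, 64, [(20, 20, 5.0, 1.0), (40, 40, 5.0, 1.0)])
```

Since the sum is order-independent, no depth test or fragment list is needed; the trade-off is that you give up occlusion, which is fine for glowy audio visuals but not for opaque geometry.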