GLOSSARY
Gaussian Splatting
3D Gaussian Splatting represents a scene as millions of oriented, semi-transparent 3D blobs (Gaussians). It renders in real time with photorealistic quality and trains in minutes instead of hours.
Definition
Introduced by Kerbl et al. in 2023, 3D Gaussian Splatting replaces the neural network in a NeRF with an explicit set of 3D Gaussians, each with position, covariance (orientation + scale), color, and opacity. To render a view, the Gaussians are projected onto the image plane and alpha-blended in depth order.
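The projection-then-blend step above can be sketched for a single pixel. This is a minimal illustration, not the reference CUDA rasterizer: the function name and the flat (depth, color, alpha) tuples are hypothetical, and the 2D projection of each Gaussian is assumed to have already produced a per-pixel alpha.

```python
# Hypothetical sketch: front-to-back alpha blending of projected Gaussians
# for one pixel. Each entry is (depth, (r, g, b), alpha), where alpha is the
# Gaussian's opacity already evaluated at this pixel after 2D projection.
def blend_pixel(gaussians):
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed by nearer Gaussians
    for depth, rgb, alpha in sorted(gaussians, key=lambda g: g[0]):
        weight = alpha * transmittance  # this Gaussian's contribution
        for c in range(3):
            color[c] += weight * rgb[c]
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:  # stop once the pixel is effectively opaque
            break
    return color
```

Sorting by depth before compositing is what "alpha-blended in depth order" refers to; a nearer, more opaque Gaussian leaves less transmittance for those behind it.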
Training optimizes the Gaussians' parameters so that projected renders match the input photos. The result is faster to train than a NeRF, faster to render (real time on consumer GPUs), and often sharper. Polycam, Luma, Postshot, and many other capture tools have shifted to Gaussian splatting.
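The render-compare-adjust loop described above can be illustrated with a deliberately tiny toy: one pixel, fixed depth order and alphas, and only the colors being optimized by gradient descent on a photometric loss. The function name and setup are hypothetical; the real method optimizes position, covariance, opacity, and spherical-harmonic color with autodiff, which this sketch does not attempt.

```python
# Toy sketch (hypothetical): one gradient-descent step on per-Gaussian colors
# so the blended pixel matches a target photo pixel. Gaussians are assumed
# already depth-sorted, front first, with fixed alphas.
def training_step(colors, alphas, target, lr=0.5):
    # Forward pass: the same front-to-back alpha blending used for rendering.
    weights, transmittance = [], 1.0
    for a in alphas:
        weights.append(a * transmittance)
        transmittance *= (1.0 - a)
    pred = sum(w * c for w, c in zip(weights, colors))
    loss = (pred - target) ** 2  # photometric (squared-error) loss
    # Backward pass: d(loss)/d(color_i) = 2 * (pred - target) * weight_i,
    # since the render is linear in each color.
    grads = [2.0 * (pred - target) * w for w in weights]
    new_colors = [c - lr * g for c, g in zip(colors, grads)]
    return new_colors, loss
```

Repeating this step drives the rendered pixel toward the photo; the full method does the same over all pixels and all Gaussian parameters at once, periodically splitting and pruning Gaussians as it goes.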
Why it matters
Splats are currently the leading representation for capturing real scenes for novel-view synthesis. For VR, AR, and 3D web experiences where you want to present a captured scene photorealistically, Gaussian splats render faster than a NeRF and look at least as good.
Common confusion
Splats are not meshes — they are a point-cloud-like representation. You cannot 3D print a Gaussian splat directly. Mesh extraction from splats exists (SuGaR, GaussianSurfels) but is an active research area, and current results are noisy.
For object-scale 3D printing input, image-to-3D mesh models remain the better path. Splats shine for scene capture, not object reconstruction.