GLOSSARY
NeRF (Neural Radiance Fields)
A NeRF is a small neural network trained to map a 3D coordinate and viewing direction to a color and density. Querying it along rays produces photoreal novel views of the captured scene.
Definition
NeRFs were introduced in 2020 by Mildenhall et al. To capture a scene, you take a set of photos with known camera poses; an MLP is then trained so that volumetric rendering of the field reproduces those photos. The trained network can render the scene from any new viewpoint, including view-dependent effects such as reflections and specular highlights.
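As a rough illustration (not the original implementation), the sketch below composites color along a single camera ray using the standard NeRF rendering weights. The radiance_field function is a hypothetical stand-in for the trained MLP, and names like render_ray and n_samples are illustrative choices.

    import numpy as np

    def radiance_field(points, view_dir):
        # Hypothetical stand-in for the trained MLP: maps 3D points (plus a
        # viewing direction) to an RGB color and a volume density sigma.
        # Here it is just a toy: a solid red sphere of radius 0.5 at the origin.
        dist = np.linalg.norm(points, axis=-1)
        sigma = np.where(dist < 0.5, 10.0, 0.0)
        rgb = np.broadcast_to([0.8, 0.3, 0.3], points.shape)
        return rgb, sigma

    def render_ray(origin, direction, near=0.0, far=2.0, n_samples=64):
        # Sample points along the ray, query the field, and alpha-composite.
        t = np.linspace(near, far, n_samples)
        points = origin + t[:, None] * direction
        rgb, sigma = radiance_field(points, direction)
        delta = np.append(np.diff(t), 1e10)            # distance between samples
        alpha = 1.0 - np.exp(-sigma * delta)           # per-segment opacity
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
        weights = trans * alpha                        # contribution of each sample
        return (weights[:, None] * rgb).sum(axis=0)    # final pixel color

    color = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
    print(color)

In practice this compositing runs for millions of rays in parallel on a GPU, and the same rendering step is differentiated through during training so the field learns to match the input photos.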
Variants such as Instant-NGP, Mip-NeRF, and Nerfacto cut training time from days to minutes or seconds and improve rendering quality. NeRFs were the dominant representation for radiance-field capture from 2020 until roughly 2023, when 3D Gaussian splatting began to replace them for many use cases.
Why it matters
NeRF made photoreal novel-view synthesis practical, kicking off a wave of research in neural 3D representations. Tools like Luma AI, Polycam, and NeRFStudio brought it to mainstream users for scene capture.
Common confusion
A NeRF is not a mesh. To 3D print a NeRF you first have to extract a mesh, typically by running marching cubes on the density field. The result is messy and blobby, and it almost always needs cleanup. For printable output, image-to-3D meshing models are usually a better starting point than NeRF extraction.
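A rough sketch of that extraction step, assuming a hypothetical query_density helper that evaluates the trained network on a batch of 3D points; scikit-image's marching cubes turns the sampled density grid into triangles.

    import numpy as np
    from skimage import measure

    def query_density(points):
        # Hypothetical stand-in for the trained NeRF's density output on a
        # batch of 3D points; here a toy sphere of radius 0.5.
        return np.where(np.linalg.norm(points, axis=-1) < 0.5, 10.0, 0.0)

    # Evaluate the density on a regular grid covering the scene bounds.
    n = 128
    xs = np.linspace(-1.0, 1.0, n)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    density = query_density(grid.reshape(-1, 3)).reshape(n, n, n)

    # Marching cubes pulls a triangle mesh out of the volume at a chosen
    # iso-level; the level is a tuning knob and the raw mesh needs cleanup.
    verts, faces, normals, _ = measure.marching_cubes(
        density, level=5.0, spacing=(xs[1] - xs[0],) * 3)
    print(verts.shape, faces.shape)   # vertex and triangle counts

The iso-level and grid resolution trade surface detail against noise; even with good settings the mesh typically needs hole-filling and smoothing before printing.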
Standard NeRFs also assume a static scene: objects that move, or lighting that changes between captures, show up as ghosting and other artifacts.