Neural radiance field
A neural radiance field (NeRF) is a deep-learning-based method for reconstructing a three-dimensional representation of a scene from two-dimensional images. It is a graphics primitive that is optimized from a set of 2D images to produce a 3D scene.[1] The NeRF model can learn the scene geometry, camera poses, and the reflectance properties of objects in a scene, which allows it to render views of the scene from novel viewpoints.[2]
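The core idea can be illustrated with a short sketch: a neural network maps a 3D position and viewing direction to a colour and a volume density, and a pixel is rendered by alpha-compositing these values along the corresponding camera ray. The following is a minimal, illustrative NumPy sketch of that rendering step; the `radiance_field` function here is a hypothetical stand-in for the trained network, not the original implementation.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sines/cosines of increasing frequency,
    which helps an MLP represent high-frequency detail."""
    freqs = 2.0 ** np.arange(num_freqs)                 # 1, 2, 4, ...
    angles = x[..., None] * freqs                       # (..., dims, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

def render_ray(radiance_field, origin, direction,
               near=2.0, far=6.0, n_samples=64):
    """Volume-render one ray: query the field at sample points and
    alpha-composite colour weighted by accumulated transmittance."""
    t = np.linspace(near, far, n_samples)                # sample depths along the ray
    points = origin + t[:, None] * direction             # (n_samples, 3) positions
    rgb, sigma = radiance_field(points, direction)       # colours and densities
    delta = np.diff(t, append=1e10)                      # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                 # opacity of each segment
    trans = np.cumprod(1.0 - alpha + 1e-10)              # transmittance up to each sample
    trans = np.concatenate([[1.0], trans[:-1]])
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)          # composited pixel colour
```

In the full method, the radiance field is a multilayer perceptron trained by minimizing the difference between pixel colours rendered this way and the colours observed in the input photographs.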
The method was originally introduced by a team from UC Berkeley, Google Research, and UC San Diego in 2020.[3]
References
- Ogborn, Anne (30 November 2022). "NERF – Neural Radiance Fields". Hackaday.
- Knight, Will (7 February 2022). "A New Trick Lets Artificial Intelligence See in 3D". Wired.
- Mildenhall, Ben; Srinivasan, Pratul P.; Tancik, Matthew; Barron, Jonathan T.; Ramamoorthi, Ravi; Ng, Ren (2020). "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". arXiv:2003.08934 [cs.CV].