MoBluRF: A framework for creating sharp 4D reconstructions from blurry videos
Neural Radiance Fields (NeRF) is a fascinating technique that creates three-dimensional (3D) representations of a scene from a set of two-dimensional (2D) images captured from different angles. It works by training a deep neural network to predict the color and density at any point in 3D space. To do this, it casts imaginary light rays from the camera through each pixel in every input image, samples points along those rays, and feeds each point's 3D coordinates, together with the viewing direction, to the network. Using this information, NeRF reconstructs the scene in 3D and can render it from entirely new perspectives, a process known as novel view synthesis (NVS).
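The ray-casting and compositing idea can be made concrete with a small sketch. A minimal example, using a hypothetical analytic function `toy_field` in place of NeRF's trained neural network, that samples points along one ray and accumulates their colors with the standard volume-rendering quadrature:

```python
import numpy as np

def toy_field(points):
    # Hypothetical stand-in for NeRF's neural network: returns an RGB color
    # and a density for each 3D point (here, a soft sphere of radius 1).
    dist = np.linalg.norm(points, axis=-1)
    density = 5.0 * np.exp(-4.0 * (dist - 1.0) ** 2)   # density peaks on the sphere
    rgb = np.clip(0.5 + 0.5 * points, 0.0, 1.0)        # color derived from position
    return rgb, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    # Sample 3D points along the ray from the camera through one pixel.
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    rgb, sigma = toy_field(pts)

    # Composite the samples front to back: each sample's opacity depends on
    # its density and the spacing between samples; transmittance tracks how
    # much light survives to reach it.
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)        # final pixel color (RGB)

# One ray looking at the toy sphere from z = -3 along +z.
color = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

Rendering a full image repeats this per pixel, and training adjusts the network so the accumulated colors match the input photographs.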
Beyond still images, a video can also be used, with each frame of ...