FiG-NeRF:
Figure-Ground Neural Radiance Fields for
3D Object Category Modelling

Christopher Xie, Keunhong Park, Ricardo Martin-Brualla, Matthew Brown
University of Washington, Google Research

International Conference on 3D Vision (3DV), 2021

FiG-NeRF can learn high-quality 3D object category models from casually captured images of objects.

Abstract

We investigate the use of Neural Radiance Fields (NeRF) to learn high-quality 3D object category models from collections of input images. In contrast to previous work, we are able to do this whilst simultaneously separating foreground objects from their varying backgrounds. We achieve this via a 2-component NeRF model, FiG-NeRF, that prefers explanation of the scene as a geometrically constant background and a deformable foreground that represents the object category. We show that this method can learn accurate 3D object category models using only photometric supervision and casually captured images of the objects. Additionally, our 2-part decomposition allows the model to perform accurate and crisp amodal segmentation. We quantitatively evaluate our method with view synthesis and image fidelity metrics, using synthetic, lab-captured, and in-the-wild data. Our results demonstrate convincing 3D object category modelling that exceeds the performance of existing methods.
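To make the 2-component idea concrete, below is a minimal sketch (ours, not the authors' released code) of how a foreground and a background radiance field can be composited along a single ray: densities are summed, colors are blended in proportion to each component's density, and the accumulated foreground weight gives a soft amodal segmentation value. The function name and the single-ray NumPy setup are illustrative assumptions.

import numpy as np

def composite_two_component(sigma_fg, rgb_fg, sigma_bg, rgb_bg, deltas):
    # sigma_fg, sigma_bg: (N,) densities at N samples along one ray
    # rgb_fg, rgb_bg:     (N, 3) colors at those samples
    # deltas:             (N,) distances between adjacent samples
    eps = 1e-10
    sigma = sigma_fg + sigma_bg                    # combined density
    alpha = 1.0 - np.exp(-sigma * deltas)          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1] + eps)))
    weights = alpha * trans                        # standard NeRF compositing weights

    # Blend colors in proportion to each component's density at every sample.
    rgb = (sigma_fg[:, None] * rgb_fg + sigma_bg[:, None] * rgb_bg) / (sigma[:, None] + eps)
    pixel_rgb = (weights[:, None] * rgb).sum(axis=0)

    # The fraction of each sample's weight owed to the foreground, accumulated
    # along the ray, serves as a soft foreground (amodal) segmentation value.
    fg_mask = (weights * sigma_fg / (sigma + eps)).sum()
    return pixel_rgb, fg_mask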

Overview Video

More Results

We show additional results and baseline comparisons on three datasets. We demonstrate both instance interpolation (shape+color, shape only, and color only) and viewpoint interpolation/extrapolation; a rough sketch of how such interpolations are produced follows. Please see the paper for more details.
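Instance interpolation here amounts to linearly interpolating the per-instance latent codes fed to the foreground model; interpolating only one code varies shape while holding color fixed, or vice versa. A minimal sketch, assuming a hypothetical split into separate shape and color latent vectors (names ours for illustration):

import numpy as np

def interpolate_codes(shape_a, color_a, shape_b, color_b, t, mode="shape+color"):
    # shape_*, color_*: per-instance latent vectors (hypothetical layout)
    # t: interpolation factor in [0, 1]
    # mode: which codes to interpolate; the other is held at instance A's value
    lerp = lambda a, b: (1.0 - t) * a + t * b
    shape = lerp(shape_a, shape_b) if mode in ("shape+color", "shape") else shape_a
    color = lerp(color_a, color_b) if mode in ("shape+color", "color") else color_a
    return shape, color

Rendering the foreground model with the interpolated codes, under fixed or varying camera poses, produces the interpolation results shown below.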

Cars

Glasses

Cups

BibTeX

@inproceedings{xie2021fignerf,
  author      = {Xie, Christopher and Park, Keunhong and Martin-Brualla, Ricardo and Brown, Matthew},
  title       = {FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling},
  booktitle   = {International Conference on 3D Vision (3DV)},
  year        = {2021},
}