We present a novel neural representation for light field content that enables compact storage and easy local reconstruction with high fidelity. We use a fully-connected neural network to learn the mapping function between each light field pixel's coordinates and its corresponding color values. Since neural networks that simply take in raw coordinates are unable to accurately learn data containing fine details, we present an input transformation strategy based on the Gegenbauer polynomials, which have previously been shown to offer theoretical advantages over the Fourier basis. Our experiments show that this Gegenbauer-based design, combined with sinusoidal activation functions, leads to better light field reconstruction quality than a variety of network designs, including those with Fourier-inspired techniques introduced by prior works. Moreover, our SInusoidal Gegenbauer NETwork, or SIGNET, can represent light field scenes more compactly than state-of-the-art compression methods while maintaining comparable reconstruction quality. SIGNET also innately allows random access to encoded light field pixels due to its functional design. We further demonstrate SIGNET's super-resolution capability without any additional training.
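The core idea above, mapping a pixel's coordinates through a Gegenbauer polynomial expansion and then a small fully-connected network with sinusoidal activations, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the polynomial degrees, the Gegenbauer parameter `ALPHA`, the layer widths, and the random weights are all illustrative assumptions.

```python
import numpy as np
from scipy.special import eval_gegenbauer

# Illustrative hyperparameters (not taken from the paper).
DEGREES = range(1, 9)   # expand each coordinate into 8 polynomial features
ALPHA = 0.5             # Gegenbauer parameter; alpha = 0.5 recovers the Legendre polynomials

def gegenbauer_features(coords):
    """Map raw coordinates in [-1, 1] to Gegenbauer polynomial features.

    coords: (N, D) array of light field coordinates, e.g. (u, v, x, y).
    Returns an (N, D * len(DEGREES)) feature array.
    """
    feats = [eval_gegenbauer(n, ALPHA, coords) for n in DEGREES]
    return np.concatenate(feats, axis=1)

def mlp_forward(x, weights, biases):
    """Tiny fully-connected network: sinusoidal activations on the
    hidden layers, linear output layer."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.sin(x @ W + b)
    return x @ weights[-1] + biases[-1]

# Example: 4D light field coordinates -> RGB color (untrained weights).
rng = np.random.default_rng(0)
coords = rng.uniform(-1.0, 1.0, size=(16, 4))   # 16 samples of (u, v, x, y)
features = gegenbauer_features(coords)          # shape (16, 32)

sizes = [features.shape[1], 64, 64, 3]          # illustrative layer widths
weights = [rng.normal(0, 1 / np.sqrt(m), (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

rgb = mlp_forward(features, weights, biases)    # shape (16, 3): predicted colors
```

Because the scene is stored as a function of coordinates, evaluating `mlp_forward` at any coordinate gives that pixel directly, which is the random-access property the abstract mentions; querying coordinates between the trained samples is likewise what enables super-resolution without retraining.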





@inproceedings{feng2021signet,
  author={Feng, Brandon Y. and Varshney, Amitabh},
  booktitle={International Conference on Computer Vision (ICCV 2021)},
  title={SIGNET: Efficient Neural Representations for Light Fields},
  year={2021}
}

Brandon Y. Feng and Amitabh Varshney. SIGNET: Efficient Neural Representations for Light Fields. In International Conference on Computer Vision (ICCV), 2021.