Our goal is to understand the illumination of an environment. By disentangling the illumination effect from other intrinsic properties (e.g., geometry, texture, color), we can better understand how humans perceive the world. It also enables applications such as single-image relighting and color editing.

Our goal is to recover the illumination of an environment from a single image. In particular, we are interested in detecting the light sources and learning the contribution of each light source to each pixel. Current publicly available datasets, however, are either small, synthetic, or sparsely annotated. Instead of relying on such data and directly training a fully supervised model, we take a different route and aim to recover the intrinsic properties of an image with very weak or no supervision.
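To make "the contribution of each light source to each pixel" concrete, here is a minimal sketch assuming a simple linear image formation model, where the image is a weighted sum of per-light renderings. This is only an illustrative assumption, not our actual method; all array names and shapes are hypothetical placeholders.

```python
import numpy as np

# Hypothetical setup: K light sources, each with its own rendering of the scene.
H, W, K = 64, 64, 3                          # image size and number of light sources
light_images = np.random.rand(K, H, W, 3)    # placeholder per-light renderings L_k(p)
# Per-pixel mixing weights w_k(p) that sum to 1 over the K lights.
weights = np.random.dirichlet(np.ones(K), size=(H, W))   # shape (H, W, K)

# Composite image under the assumed linear model: I(p) = sum_k w_k(p) * L_k(p)
image = np.einsum('hwk,khwc->hwc', weights, light_images)
print(image.shape)  # (64, 64, 3)
```

Under this assumption, recovering illumination amounts to estimating the per-pixel weights (and the per-light renderings) from the composite image alone, which is what makes the problem ill-posed without strong supervision.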

Research Areas