Graphics and Vision

Image-Driven Navigation of Analytical BRDF Models
Specifying the parameters of analytic BRDF models is a difficult task: these parameters are often not intuitive for artists, and their effect on appearance can be non-uniform. Ideally, a given step in the parameter space should produce a predictable and perceptually uniform change in the rendered image. Systems that employ psychophysics have produced important advances in this direction; however, the requirement of user studies limits the scalability of these approaches. In this work, we propose a new and intuitive method for designing material appearance. First, we define a computational metric between BRDFs that is based on rendered images of a scene under natural illumination. We show that our metric produces results that agree with previous perceptual studies.
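
As a concrete illustration, such a metric can be sketched as an image-space distance: render the same scene under the same natural illumination with each BRDF and compare the results. The `render` callable below is a hypothetical stand-in, not part of the paper, and the plain L2 difference stands in for whatever perceptual weighting the full metric uses.

```python
import numpy as np

def image_space_brdf_distance(params_a, params_b, env_map, render):
    """Distance between two BRDF parameter settings, measured in
    image space: render the same scene under the same natural
    illumination with each BRDF, then compare the renderings.
    `render` is a user-supplied (hypothetical) callable mapping
    (brdf_params, env_map) -> HxWx3 float image."""
    img_a = render(params_a, env_map)
    img_b = render(params_b, env_map)
    # Plain L2 difference of the renderings; a perceptual transform
    # (e.g., a cube root approximating lightness) could be applied
    # to each image first.
    return np.sqrt(np.mean((img_a - img_b) ** 2))
```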

Statistical Acquisition of Texture Appearance
We propose a simple method to acquire and reconstruct material appearance from sparsely sampled data. Our technique renders elaborate view- and light-dependent effects and faithfully reproduces materials such as fabrics and knitwear. Our approach uses sparse measurements to reconstruct a full six-dimensional Bidirectional Texture Function (BTF). Our reconstruction requires only the input images from the top view to be registered, which is easy to achieve with a fixed camera setup. Bidirectional properties are acquired from a sparse set of viewing directions through image statistics, so precise registration for these views is unnecessary. Our technique is based on multi-scale histograms of image pyramids.
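
The flavor of statistics involved can be sketched as follows: build a band-pass pyramid of an input image and record a histogram per level. This is a minimal, dependency-free sketch; the box filter stands in for a proper Gaussian kernel, and the exact statistics the method matches may differ.

```python
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Simple band-pass pyramid of a grayscale image using 2x2 box
    filtering (a Gaussian kernel would normally be used)."""
    pyramid = []
    current = img.astype(np.float64)
    for _ in range(levels):
        h, w = current.shape[0] // 2 * 2, current.shape[1] // 2 * 2
        c = current[:h, :w]
        down = 0.25 * (c[0::2, 0::2] + c[1::2, 0::2]
                       + c[0::2, 1::2] + c[1::2, 1::2])
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
        pyramid.append(c - up)          # band-pass residual
        current = down
    pyramid.append(current)             # low-pass remainder
    return pyramid

def pyramid_histograms(img, levels=4, bins=64):
    """Per-level histograms: multi-scale image statistics that can
    stand in for precise per-pixel registration of side views."""
    return [np.histogram(level, bins=bins, density=True)[0]
            for level in laplacian_pyramid(img, levels)]
```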

Opacity Light Fields
We present new hardware-accelerated techniques for rendering surface light fields with opacity hulls that allow for interactive visualization of objects that have complex reflectance properties and elaborate geometrical details. The opacity hull is a shape enclosing the object with view-dependent opacity parameterized onto that shape. We call the combination of opacity hulls and surface light fields the opacity light field. Opacity light fields are ideally suited for rendering the visually complex objects and scenes obtained with 3D photography. We show how to implement opacity light fields in the framework of three surface light field rendering methods: view-dependent texture mapping, unstructured lumigraph rendering, and light field mapping.
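
As a sketch of the view-dependent blending such renderers rely on, the unstructured-lumigraph-style weights below favor the k camera views angularly closest to the desired view and fall smoothly to zero at the (k+1)-th nearest view; the angular-penalty form is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def blending_weights(view_dir, cam_dirs, k=3):
    """Unstructured-lumigraph-style blending: weight the k camera
    views whose directions are closest to the desired view, with a
    penalty that reaches zero at the (k+1)-th nearest view so the
    weights vary smoothly as cameras enter and leave the set."""
    cam_dirs = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    v = view_dir / np.linalg.norm(view_dir)
    ang = np.arccos(np.clip(cam_dirs @ v, -1.0, 1.0))  # angular penalty
    nearest = np.argsort(ang)[:k + 1]
    thresh = max(float(ang[nearest[-1]]), 1e-8)        # (k+1)-th view
    w = np.zeros(len(cam_dirs))
    sel = nearest[:-1]
    w[sel] = (1.0 - ang[sel] / thresh) / np.maximum(ang[sel], 1e-8)
    return w / w.sum()
```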

A Frequency Analysis of Light Transport
We present a signal-processing framework for light transport. We study the frequency content of radiance and how it is affected by phenomena such as shading, occlusion, and travel in free space. This extends previous work that considered either spatial or angular dimensions, and offers a comprehensive treatment of both space and angle. We characterize how the radiance signal is modified as light propagates and interacts with objects. In particular, we show that occlusion (a multiplication in the primal space) amounts in the Fourier domain to a convolution with the frequency content of the blocker. Propagation in free space corresponds to a shear in the space-angle frequency domain, while reflection on curved objects performs a different shear along the angular frequency axis.
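
In a paraxial two-plane parameterization with spatial coordinate x and angle v (a standard setup, sketched here rather than quoted from the paper), the two statements read:

```latex
% Occlusion: multiplication by a blocker function b(x) in the
% primal becomes convolution of spectra in the Fourier domain.
l'(x, v) = l(x, v)\,b(x)
  \quad\Longrightarrow\quad
  \hat{l}'(\Omega_x, \Omega_v)
    = \bigl(\hat{l} \otimes \hat{b}\bigr)(\Omega_x, \Omega_v)

% Free-space transport over a distance d is a spatial shear in
% the primal, hence an angular-frequency shear in the Fourier domain.
l_d(x, v) = l_0(x - d\,v,\; v)
  \quad\Longrightarrow\quad
  \hat{l}_d(\Omega_x, \Omega_v)
    = \hat{l}_0(\Omega_x,\; \Omega_v + d\,\Omega_x)
```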

Motion Magnification
We present motion magnification, a technique that acts like a microscope for visual motion. It can amplify subtle motions in a video sequence, allowing for visualization of deformations that would otherwise be invisible. To achieve motion magnification, we need to accurately measure visual motions, and group the pixels to be modified. After an initial image registration step, we measure motion by a robust analysis of feature point trajectories, and segment pixels based on similarity of position, color, and motion. A novel measure of motion similarity groups even very small motions according to correlation over time, which often relates to physical cause.
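
The amplification step itself is simple once reliable trajectories and a segmentation are in hand; the sketch below magnifies each feature's deviation from its mean position, leaving the registration, clustering, and rendering stages (the hard parts) out.

```python
import numpy as np

def magnify_trajectories(tracks, alpha=10.0):
    """Amplify subtle motions in feature-point trajectories.
    `tracks` has shape (num_points, num_frames, 2): the (x, y)
    position of each tracked feature in each frame. Motion is
    taken as the deviation from each point's mean position and
    scaled by `alpha`; warping pixels to follow the magnified
    tracks is omitted from this sketch."""
    mean_pos = tracks.mean(axis=1, keepdims=True)   # rest position
    return mean_pos + alpha * (tracks - mean_pos)   # amplified motion
```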

Texture Design Using a Simplicial Complex of Morphable Textures
We present a system for designing novel textures in the space of textures induced by an input database. We capture the structure of the induced space by a simplicial complex where vertices of the simplices represent input textures. A user can generate new textures by interpolating within individual simplices. We propose a morphable interpolation for textures, which also defines a metric used to build the simplicial complex. To guarantee sharpness in interpolated textures, we enforce histograms of high-frequency content using a novel method for histogram interpolation. We allow users to continuously navigate in the simplicial complex and design new textures using a simple and efficient user interface.
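
One standard way to interpolate histograms, and a plausible sketch of what such a method needs, is to blend inverse CDFs (quantile functions) rather than bin counts; whether this matches the paper's exact construction is an assumption.

```python
import numpy as np

def interpolate_histograms(samples_a, samples_b, t, n_quantiles=256):
    """Interpolate between two distributions of (e.g. high-frequency
    band) coefficients by blending their inverse CDFs. Unlike
    averaging bin counts, which washes the distribution out toward
    the mean, this preserves the histogram's shape and therefore
    the sharpness it encodes."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    inv_a = np.quantile(samples_a, q)      # inverse CDF of A
    inv_b = np.quantile(samples_b, q)      # inverse CDF of B
    return (1.0 - t) * inv_a + t * inv_b   # blended quantile function
```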

Mesh-Based Inverse Kinematics
The ability to position a small subset of mesh vertices and produce a meaningful overall deformation of the entire mesh is a fundamental task in mesh editing and animation. However, the class of meaningful deformations varies from mesh to mesh and depends on the mesh kinematics, which prescribe valid mesh configurations, and on a selection mechanism for choosing among them. Drawing an analogy to the traditional use of skeleton-based inverse kinematics for posing skeletons, we define mesh-based inverse kinematics as the problem of finding meaningful mesh deformations that meet specified vertex constraints. Our solution relies on example meshes to indicate the class of meaningful deformations.
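
A deliberately simplified, linearized sketch of the idea: restrict deformations to the span of example displacement fields and solve a least-squares problem so the handle vertices hit their targets. The actual method operates nonlinearly on deformation gradients; this toy only conveys the structure of the problem.

```python
import numpy as np

def example_based_ik(rest, examples, handle_ids, handle_pos):
    """Toy linearized variant of mesh-based inverse kinematics.
    rest:       (V, 3) rest-pose vertices
    examples:   (E, V, 3) example poses
    handle_ids: indices of constrained vertices
    handle_pos: (H, 3) target positions for those vertices
    The deformation is a blend of example displacement fields,
    with weights chosen by least squares to meet the constraints."""
    disp = examples - rest                      # example displacement fields
    A = disp[:, handle_ids, :].reshape(len(examples), -1).T  # (3H, E)
    b = (handle_pos - rest[handle_ids]).ravel()              # (3H,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)   # blending weights
    return rest + np.tensordot(w, disp, axes=1) # full-mesh deformation
```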

Defocus Video Matting
Video matting is the process of pulling a high-quality alpha matte and foreground from a video sequence. Current techniques require either a known background (e.g., a blue screen) or extensive user interaction (e.g., to specify known foreground and background elements). The matting problem is generally under-constrained, since not enough information has been collected at capture time. We propose a novel, fully autonomous method for pulling a matte using multiple synchronized video streams that share a point of view but differ in their plane of focus. The solution is obtained by directly minimizing the error in filter-based image formation equations, which are over-constrained by our rich data stream.
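
A sketch of the image formation residual such a minimization could use, with defocus approximated by per-stream Gaussian blurs of known width (a simplification of a true depth-dependent point-spread function):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def formation_residual(alpha, F, B, streams, blur_sigmas):
    """Residual of a filter-based image formation model for a stack
    of co-located grayscale video frames that differ in their plane
    of focus. Each stream observes a differently defocused composite
    of the foreground F and background B; minimizing the summed
    squared residual over (alpha, F, B) pulls the matte."""
    res = 0.0
    for img, (sf, sb) in zip(streams, blur_sigmas):
        fg = gaussian_filter(alpha * F, sf)          # defocused foreground
        bg = gaussian_filter((1.0 - alpha) * B, sb)  # defocused background
        res += np.sum((img - (fg + bg)) ** 2)
    return res
```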

Deformation Transfer for Triangle Meshes
Deformation transfer applies the deformation exhibited by a source triangle mesh onto a different target triangle mesh. Our approach is general and does not require the source and target to share the same number of vertices or triangles, or to have identical connectivity. The user builds a correspondence map between the triangles of the source and those of the target by specifying a small set of vertex markers. Deformation transfer computes the set of transformations induced by the deformation of the source mesh, maps the transformations through the correspondence from the source to the target, and solves an optimization problem to consistently apply the transformations to the target shape.
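
The per-triangle transformations can be sketched with the standard construction that augments each triangle with a normal-derived third frame vector, so the affine map between rest and deformed frames is well defined:

```python
import numpy as np

def triangle_transform(rest, deformed):
    """Per-triangle transformation induced by a deformation: build a
    frame from two edge vectors plus a scaled normal for each
    triangle, and recover the 3x3 affine map between the rest and
    deformed frames. rest/deformed: (3, 3) arrays of vertex
    positions, one row per vertex."""
    def frame(v):
        e1, e2 = v[1] - v[0], v[2] - v[0]
        n = np.cross(e1, e2)
        n /= np.sqrt(np.linalg.norm(n))  # normal scaled to sqrt(area-like)
        return np.column_stack([e1, e2, n])
    return frame(deformed) @ np.linalg.inv(frame(rest))
```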

Video Matching
This paper describes a method for bringing two videos (recorded at different times) into spatiotemporal alignment, then comparing and combining corresponding pixels for applications such as background subtraction, compositing, and increasing dynamic range. We align a pair of videos by searching for frames that best match according to a robust image registration process. This process uses locally weighted regression to interpolate and extrapolate high-likelihood image correspondences, allowing new correspondences to be discovered and refined. Image regions that cannot be matched are detected and ignored, providing robustness to changes in scene content and lighting, which allows a variety of new applications.
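
Locally weighted regression itself is compact; the one-dimensional sketch below fits a Gaussian-weighted line around each query point, which is how sparse, noisy correspondences can be both interpolated and extrapolated (the 1-D setting and Gaussian kernel are illustrative choices):

```python
import numpy as np

def locally_weighted_regression(x, y, query, bandwidth):
    """Locally weighted linear regression in one dimension: predict
    y at `query` from sparse, possibly noisy samples (x_i, y_i) by
    fitting a line with Gaussian weights centered on the query.
    Queries inside the sample range interpolate; queries outside
    extrapolate from the nearest trend."""
    w = np.exp(-0.5 * ((x - query) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x])      # design matrix [1, x]
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)     # weighted normal equations
    return beta[0] + beta[1] * query
```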