Why Matting Matters
Speaker: Rick Szeliski, Microsoft Research
Image matting (e.g., blue-screen matting) has been a mainstay of Hollywood and the visual effects industry for decades, but its relevance to computer vision is not yet fully appreciated. In this talk, I argue that the mixing of pixel color values at the boundaries of objects (or even albedo changes) is a fundamental process that must be correctly modeled to make meaningful signal-level inferences about the visual world, as well as to support high-quality imaging transformations such as de-noising and de-blurring. Starting with Ted Adelson et al.'s seminal work on layered motion models, I review early stereo matching algorithms with transparency and matting (with Polina Golland), work on layered representations with matting (with Simon Baker and Anandan), through Larry Zitnick's 2-layer representation for 3D video. I then present our recent work (with Ce Liu et al.) on image de-noising using a segmented description of the image, and Eric Bennett et al.'s work on multi-image de-mosaicing, again using a local two-color model.
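The boundary-mixing process the abstract refers to is usually written as the standard matting (compositing) equation, C = αF + (1 − α)B, where each observed pixel C is a convex mix of a foreground color F and a background color B weighted by the alpha matte α. A minimal NumPy sketch (illustrative only; not code from the talk):

```python
import numpy as np

def composite(F, B, alpha):
    """Matting equation C = alpha*F + (1 - alpha)*B, per pixel.

    F, B: (H, W, 3) foreground/background colors; alpha: (H, W) in [0, 1].
    """
    a = alpha[..., np.newaxis]  # broadcast alpha over the color channels
    return a * F + (1.0 - a) * B

# Toy 1x2 image: a red foreground over a blue background.
F = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
B = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
alpha = np.array([[1.0, 0.5]])  # one opaque pixel, one boundary (mixed) pixel

C = composite(F, B, alpha)
# The second pixel is half red, half blue: treating such mixed boundary
# pixels as pure foreground or pure background is exactly the modeling
# error the talk argues against.
```

Here `composite` is a hypothetical helper name; the point is that any pixel with 0 < α < 1 carries information about two surfaces at once, which is why boundary pixels must be modeled as mixtures rather than assigned to a single layer.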