Zooming in With Computational Photography

The Arlington Street Church in Boston as seen in the past and present. One of Durand's current projects focuses on rephotography, or retaking a historical photograph from the same viewpoint to show the passage of time.
Photo: Soonmin Bae (image on right)

Remember the Polaroid camera, that black box capable of spitting out an image within seconds of snapping the shutter? Once the star of social gatherings, a Polaroid camera now sits on a shelf in Bill Freeman’s office at CSAIL, a relic of a time gone by. The world of photography has transformed dramatically since Freeman worked for Polaroid in the field of electronic imaging during the 1980s. Digital cameras have since made photography an art form easily accessible to all, with images ready for viewing the moment they are captured, numerous computer programs available for editing, and websites providing platforms for millions to share their snapshots.

What CSAIL researchers like Bill Freeman and Frédo Durand are currently devoting their efforts to is bringing photography to an even higher level where widespread errors such as blur are but a distant memory. Computational photography, a means of expanding the capabilities of and tackling common problems in photography through algorithms, will be for the camera what the iPhone has been to the mobile phone market, according to Durand and Freeman.

Computational photography is the next generation in the quest for “how to capture the world, to reveal things about the world that you couldn’t see otherwise, to make better pictures than you could have otherwise,” said Freeman.

As technologically advanced as digital photography may seem, it was modeled after the film camera, meaning that the camera’s functions stem from the lens. Computational photography will allow users to produce far better images using algorithms designed to refine a photograph after exposure.

“Right now digital photography is very much modeled after what worked for film exposures, which is having a single snapshot where that’s the final answer. With computational photography it’s a set of data and you can go and process it,” said Freeman. “Through computational photography I think it will be much easier to get sharp images, to get high-dynamic range images, to get panoramic images, to get really nice photos from small hand-held devices.”

To get a better sense of Durand and Freeman’s work, scroll through the images on your digital camera. Current functions allow you to zoom in and out, and programs like Photoshop allow for cropping and retouching. Imagine if you could go even further into your photograph, removing imperfections such as blur and glare without compromising other aspects of the picture.

Durand and Freeman’s work with image deblurring focuses on making an image as faithful as possible, as opposed to simply sharpening it. To do this, CSAIL researchers delved into the tricky subject of defining what makes a sharp image. The answer came through examining the distribution of gradients and the difference between neighboring pixels to decipher what the image should look like and how the picture was blurred during exposure. Currently, the algorithm can correct an image where the photographer has caused blurring, but Durand and Freeman are working towards finding a solution for images where both the photographer and the subject are in motion.


This progression shows the results of Durand and Freeman’s work with image deblurring. On the left there is a picture taken with a conventional camera of an object in motion, the center image was captured with a camera designed to capture a specific type of motion and remove blur, and the photo on the right shows the resulting image after deconvolution.
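The deconvolution mentioned in the caption can be illustrated in miniature. Below is a minimal one-dimensional Wiener deconvolution sketch in Python with NumPy; it is not the CSAIL algorithm, which relies on a learned prior over image gradients, and it stands in for that prior with a single hand-picked noise-power constant.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Recover a sharp 1-D signal from a blurred one (toy Wiener filter)."""
    n = len(blurred)
    K = np.fft.fft(kernel, n)          # transfer function of the blur
    B = np.fft.fft(blurred)
    # Wiener filter: inverts the blur while damping frequencies it destroyed
    H = np.conj(K) / (np.abs(K) ** 2 + noise_power)
    return np.real(np.fft.ifft(H * B))

# Toy demo: blur a spike train with a box (motion-blur-like) kernel,
# then deconvolve to approximately recover the spikes.
sharp = np.zeros(64)
sharp[[10, 30, 50]] = 1.0
kernel = np.ones(5) / 5.0
blurred = np.real(np.fft.ifft(np.fft.fft(kernel, 64) * np.fft.fft(sharp)))
restored = wiener_deconvolve(blurred, kernel)
```

The restored signal is much closer to the original than the blurred one, though frequencies the blur wiped out can never be fully recovered; that fundamental loss is why the camera in the center image is designed so the blur it produces is invertible.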

Durand and Freeman are also interested in extracting more information from a scene. To examine motion, they developed a technique to magnify it. Through this procedure, viewers can compare the movements of two cars with different loads or closely examine an infant’s breathing. Because these algorithms can amplify subtle movements, parents and doctors could use this approach to ensure a child is breathing normally at night.
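The core idea of motion magnification can be caricatured in a few lines: treat each pixel’s intensity over time as a signal and amplify its variation around the temporal mean. This NumPy sketch is only a cartoon of the idea; the published technique filters a chosen temporal frequency band over a spatial pyramid, both of which are omitted here.

```python
import numpy as np

def magnify_motion(frames, alpha=10.0):
    """Amplify small temporal variations in a stack of frames (toy version)."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0, keepdims=True)   # static background estimate
    return mean + alpha * (frames - mean)       # boost the tiny variation

# A pixel region that flickers faintly, e.g. subtle chest motion while breathing
t = np.linspace(0, 2 * np.pi, 30)
frames = 100.0 + 0.5 * np.sin(t)[:, None] * np.ones((1, 4))
magnified = magnify_motion(frames, alpha=10.0)
```

After magnification the barely visible half-unit flicker becomes a swing an order of magnitude larger, while the static background is unchanged.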

“One thing I tend to say is that computation is the new optics,” said Durand. “If I wanted to see something smaller before I would just get a bigger microscope, but now we can create algorithms that reveal what would not be visible otherwise, and that’s really exciting.”

Computational photography not only enhances reality in snapshots, bringing landscapes and portraits into clearer focus, but can also help create new, virtual worlds. Image synthesis, a new technique being developed at CSAIL, allows users to create a virtual world by blending images together into a seamless panorama. One of Freeman’s working examples starts with an image of a tree-lined street. The scene zooms into a busy street market, and then zooms out, back down the street and into Asia, ending with the quintessential shot of the Beatles crossing Abbey Road.
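The blending step behind such seamless transitions can be sketched with a simple cross-fade. The hypothetical helper below (not the CSAIL system, which matches image content to choose where to stitch) feathers the overlap between two NumPy "images" so the seam is gradual rather than a hard cut.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Join two images side by side with a linear cross-fade over `overlap` columns."""
    alpha = np.linspace(1.0, 0.0, overlap)            # fade-out weights
    seam = alpha * left[:, -overlap:] + (1 - alpha) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])

# Two flat "images" of different brightness, blended across 4 columns
a = np.full((3, 8), 50.0)
b = np.full((3, 8), 150.0)
pano = feather_blend(a, b, overlap=4)
```

The result ramps smoothly from 50 to 150 across the overlap instead of jumping at the boundary, which is what makes a composite read as one continuous scene.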

Freeman sees both recreational and commercial uses for this technique, from organizing personal photographs to creating artificial worlds for computer games like Second Life. The program could also be used to explore dangerous areas in war zones or even outer space.

“I think that having algorithms that could understand what is inside a picture would have a huge impact on society. It’s making it easier for people to take great photos, but it has military applications too,” explained Freeman. “There’s a call for proposals for making augmented vision for a soldier, where the person could look and have the contrast fixed if there’s too much glare or have the contrast enhanced if it’s too dark or see all around or identify if there’s someone behind them with a gun.”

While one of the aims of computational photography is to allow photographers the ability to capture stunning snapshots, it also raises the issue of truth in photography. With numerous techniques available for consumers to doctor their images, how can viewers determine if what they are looking at is fact or fiction? Techniques exist to tell whether an image has been tampered with; however, most are not widely available, and doctoring remains difficult to detect.

Although computational photography has brought increased awareness to the issue of truth in photography, Durand and Freeman believe image factuality has been an issue for years, and they feel it is beneficial for consumers to be conscious of the problem.

Frédo Durand, Bill Freeman, Taeg Sang Cho and Anat Levin published a paper on their work, "Motion blur removal with orthogonal parabolic exposures," in 2010.

“I think it’s a very different issue if you are talking about photojournalism or if you are talking about fine art. With fine art, I don’t have any problem with people copying and pasting tons of different images and repainting every single pixel,” said Durand. “For photojournalism, this is a serious issue and I think it’s good that more and more journals have created ethical guidelines and are starting to have new tools to actually assess whether a photo has been retouched.”

As the technology behind computational photography improves, Durand and Freeman believe that the algorithms developed to perfect images after exposure will be implemented during the actual picture-taking process. Durand envisions a world where photographers, amateur and professional alike, regain control over their images. He also sees tools being developed to help amateur photographers mimic a professional’s style, to help professionals edit photos faster, and even to replace what he believes is an outdated mechanism: the flash.

“We’re interested both in quantitative aspects of image quality and in more artistic qualities and building tools to help users. We’ve been working on using data from good photographers, trained photographers, to improve the images of casual users,” explained Durand. “We take examples of photos that are successful, whatever that means, and try to transfer some of those qualities to everyday users.”
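As a toy illustration of “transferring qualities” from a successful photo, one of the simplest global adjustments is matching the tonal statistics of a reference image. The sketch below is a hypothetical stand-in, not the CSAIL method, which learns far richer, local adjustments from trained photographers’ data; here only the mean and spread of pixel intensities are matched.

```python
import numpy as np

def match_tone(amateur, reference):
    """Shift an image's global tone toward a reference photo (toy version)."""
    a = np.asarray(amateur, dtype=float)
    r = np.asarray(reference, dtype=float)
    # Normalize the amateur shot, then rescale to the reference's statistics
    scaled = (a - a.mean()) / (a.std() + 1e-8) * r.std() + r.mean()
    return np.clip(scaled, 0.0, 255.0)

# A dark, low-contrast snapshot nudged toward a brighter, punchier reference
snap = np.array([[40.0, 50.0], [60.0, 70.0]])
ref = np.array([[80.0, 120.0], [160.0, 200.0]])
toned = match_tone(snap, ref)
```

Even this crude statistic-matching brightens and stretches the snapshot toward the reference’s overall look, hinting at how example-based adjustment can work without any manual editing.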

Through computational photography comes an opportunity to delve further into both the fine art of photography and our understanding of the world around us. Freeman hopes the field will help humans get a better look at everything from motion to unexplored terrains like the Moon.

“It would be cool to have a motion magnifying video camera where you take a video of something and what you’d see is a motion magnified video of it,” said Freeman. “I would love it if computational photography moves beyond getting better photos and moves towards analyzing the world.”

For more information on Durand's work, visit: http://people.csail.mit.edu/fredo/. For more information on Freeman's work, visit: http://people.csail.mit.edu/billf/wtf.html.

February 18, 2011
Abby Abazorius, CSAIL