August 7, 2007
In the folklore of photo history, we've learned that Ansel Adams used to airbrush out telephone poles and wires to make his nature landscapes look "better". Dorothea Lange made similar manipulations to give her photos of migrant farmworkers more emotional impact. And JFK conspiracy theorists believe that the cover photo of Lee Harvey Oswald on LIFE magazine was a not-so-slick composite that nevertheless convinced lots of people that Oswald posed for a photo holding the proposed murder weapon. And of course Photoshop has opened up a Pandora's box that can undermine the credibility or "truth" of most any photo we see today.
Now an ingenious computer program scours the millions of images available on the web to offer dramatic ways to — automatically — cut and paste and blend parts of different images to create new convincing realities that are quite unreal.
James Hays and Alexei Efros of Carnegie Mellon University in Pittsburgh have just published a scientific paper announcing their breakthrough that "seamlessly" alters photos — automatically — to replace unwanted areas with semantically valid substitutes found on the web that "complete" the new picture in a very convincing manner.
Here is the Abstract from their paper, "Scene Completion Using Millions of Photographs":
What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.
And this is from their Introduction:
Every once in a while, we all wish we could erase something from our old photographs. A garbage truck right in the middle of a charming Italian piazza, an ex-boyfriend in a family photo, a political ally in a group portrait who has fallen out of favor [King 1997]. Other times, there is simply missing data in some areas of the image. An aged corner of an old photograph, a hole in an image-based 3D reconstruction due to occlusion, a dead bug on the camera lens. Image completion (also called inpainting or hole-filling) is the task of filling in or replacing an image region with new image data such that the modification can not be detected.
There are two fundamentally different strategies for image completion. The first aims to reconstruct, as accurately as possible, the data that should have been there, but somehow got occluded or corrupted. Methods attempting an accurate reconstruction have to use some other source of data in addition to the input image, such as video (using various background stabilization techniques, e.g. [Irani et al. 1995]) or multiple photographs of the same physical scene [Agarwala et al. 2004; Snavely et al. 2006].
The alternative is to try finding a plausible way to fill in the missing pixels, hallucinating data that could have been there. This is a much less easily quantifiable endeavor, relying instead on the studies of human visual perception...
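The data-driven idea behind their approach can be sketched in miniature: summarize each candidate photo with a coarse scene descriptor, find the database scene closest to the damaged image, and copy that scene's pixels into the hole. The toy sketch below is only an illustration of that pipeline, not the authors' actual system (they use GIST-style descriptors over millions of images, graph-cut seam finding, and Poisson blending); the grid-average descriptor and the function names here are invented for the example.

```python
import numpy as np

def descriptor(img, grid=4):
    """Coarse scene descriptor: mean intensity over a grid x grid tiling.
    A crude stand-in for the gist descriptors used in the real system."""
    h, w = img.shape
    return np.array([
        img[i * h // grid:(i + 1) * h // grid,
            j * w // grid:(j + 1) * w // grid].mean()
        for i in range(grid) for j in range(grid)
    ])

def complete(img, hole_mask, database):
    """Fill the masked hole by copying pixels from the database scene
    whose descriptor best matches the damaged image (no blending here)."""
    masked = img.copy()
    # Neutral fill inside the hole so it doesn't dominate the descriptor.
    masked[hole_mask] = masked[~hole_mask].mean()
    d = descriptor(masked)
    # Nearest scene in descriptor space.
    best = min(database, key=lambda scene: np.linalg.norm(descriptor(scene) - d))
    out = img.copy()
    out[hole_mask] = best[hole_mask]  # paste the matching region into the hole
    return out
```

A real implementation would of course blend the pasted region along a carefully chosen seam rather than copy it verbatim, and would search a database of millions of photographs rather than a handful of arrays.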
You can download the whole paper, with lots of sample photo manipulations, as a PDF (11MB), here.
Wonder what happens next...