Dave Story shows off a multi-lens array.
Written by Stephen Shankland
Today, if you want to trim all the distracting background out of a picture–say, the crowd behind your daughter playing soccer–you have to do a lot of artful selection with high-powered software such as Photoshop. But what if your computer understood the depth of the image, just as you did when you took the picture, and could be told to just erase everything that’s a certain distance behind your kid?
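The article does not say how Adobe would implement this, but the idea of erasing everything past a certain distance can be sketched in a few lines, assuming the camera delivers a per-pixel depth map alongside the image (the function name and the toy data below are illustrative, not Adobe's):

```python
import numpy as np

def erase_beyond(image, depth, max_depth):
    """Zero out every pixel whose depth exceeds max_depth."""
    mask = depth <= max_depth          # True where the pixel is close enough to keep
    return image * mask[..., None]     # broadcast the mask over the color channels

# Tiny synthetic example: a 2x2 RGB image with a per-pixel depth map in meters.
image = np.array([[[255,   0,   0], [  0, 255,   0]],
                  [[  0,   0, 255], [255, 255, 255]]], dtype=np.uint8)
depth = np.array([[1.0, 8.0],
                  [2.0, 9.0]])        # the right column is distant "background"

result = erase_beyond(image, depth, max_depth=5.0)
# The two far pixels (depths 8 and 9) are erased to black.
```

With real photos, the erased region would typically be made transparent or filled rather than blacked out, but the selection step reduces to this one comparison against the depth map.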
That’s one possible way to use technology that Adobe Systems has begun showing off–and that can be seen in a video of a news conference posted by the Audioblog.fr site last week.
Dave Story, vice president of digital imaging product development at Adobe, demonstrated aspects of how the technology works. First comes a lens that, like an insect’s compound eye, projects several smaller images into the camera. The result is a photograph with multiple sub-views, each taken from a slightly different vantage point at exactly the same time.
From this information, the computer reconstructs a model of the scene in three dimensions.
Story then showed a video with significant transformations of an image based on this 3D understanding. The image had three major elements–a statue in the foreground, a statue in the middle distance, and a wall in the background. The video showed a simulation of a person shifting vantage point left and right–natural enough given that the multiple views captured that information.
Original story on Audioblog.fr (in French)