Scientific American is doing a deep, multi-story dive on privacy issues, and this one’s a doozy. Researchers have used an irregularly shaped shiny object, like a metal bowl or a bag of potato chips, to digitally reconstruct the room it’s been photographed in:
The mathematical model used to reconstruct environments can also approximate what a known object will look like—how light will reflect off it—when it is placed in new surroundings or is seen from a new angle. These two applications are linked. “The challenge of our research area is that everything is so entangled,” says Jeong Joon Park, a Ph.D. student at the University of Washington’s Graphics and Imaging Laboratory (GRAIL). “You need to solve for lighting to get the good appearance. You need to have a good appearance model to get the good lighting. The answer might be to solve them all together—like we did.”
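The “solve them all together” idea can be illustrated with a toy problem (this is an illustrative sketch, not the team’s actual model): suppose each pixel’s observed brightness is the product of an unknown surface reflectance and an unknown per-frame light intensity. Neither factor can be recovered from a single measurement alone, but alternating between the two coupled sub-problems, solving for lighting with appearance fixed and vice versa, recovers both up to an overall scale:

```python
import numpy as np

rng = np.random.default_rng(0)
true_albedo = rng.uniform(0.2, 1.0, size=8)   # per-pixel reflectance (unknown)
true_light = rng.uniform(0.5, 2.0, size=5)    # per-frame illumination (unknown)
B = np.outer(true_albedo, true_light)         # observed brightness: pixels x frames

albedo = np.ones(8)  # arbitrary starting guess
for _ in range(50):
    # Fix appearance, solve (least squares) for the lighting that best explains B...
    light = B.T @ albedo / (albedo @ albedo)
    # ...then fix lighting and solve for the appearance.
    albedo = B @ light / (light @ light)
```

After the loop, `np.outer(albedo, light)` matches the observations `B`, even though the two factors were only ever estimated in terms of each other, which is the circularity Park describes.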
Park’s team posted a preprint of its study on the server arXiv.org earlier this year, and the paper was also accepted for presentation at the annual IEEE Conference on Computer Vision and Pattern Recognition, which will be held remotely in June.
This technology also has applications for virtual reality. In a VR landscape, users might walk around an artificial scene while wearing a headset or “pick up” a digital artifact and turn it over in their hands. When they do so, the way that item looks should change—as it would in the real world—because of ambient light conditions. Park says his team’s system can calculate the character of that light to “give you a very realistic estimate of the appearance of any viewpoint of the scene.”
This process is called view reconstruction, or novel view synthesis.
Park and his colleagues put their novel view synthesis method to the test by using it to reconstruct images of the surrounding environment. They employed a video camera to film a variety of items—the aforementioned bag of chips, as well as soda cans, ceramic bowls and even a cat statue—then reconstructed the environment that produced those reflections with their model. The results were remarkably true to life. More predictably, mirrorlike objects produced the most accurate images. “At first, we were pretty surprised because some of the environments we recovered have details that we cannot really recognize by looking at the bag of chips with our naked eye,” Park says.
He acknowledges that this technology has an obvious downside: the potential to turn an innocuous photograph into a violation of privacy. If researchers could perfectly reconstruct an environment based on reflections, any image containing a shiny object might inadvertently reveal much more than the photographer intended.