How many times have computers in science fiction done things that computers cannot do, like enhancing the resolution of an image from a security camera or picking out a perfect recording of a background conversation in a crowded room? Well, it turns out that we might be underestimating our software engineers after all. Two researchers at Carnegie Mellon University have developed an algorithm that can detect where a picture was taken, with an accuracy 30 times better than chance. And their geolocation method may not be what you'd expect.

You'd think the best way to tell where a photo was shot would be to check for important buildings, landmarks in nature, or any signposts. Not so. Alexei A. Efros, assistant professor of computer science, and James Hays, a CS graduate student, developed their program to analyze the composition of photographs by creating and scanning histograms of image properties. Their algorithm examines the full profile of color and texture in each image, and also looks at various line features and geometric patterns. Then, it groups images of unknown location with images that have known details, and the geographical matching begins.
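The matching step described above can be sketched in code. This is only a minimal illustration of the general idea (compare color histograms, then borrow the location of the closest-looking reference image); the actual system uses much richer features and a far larger reference collection, and all names, place labels, and the specific histogram here are hypothetical.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Summarize an RGB image as a normalized joint color histogram."""
    # Quantize each channel into `bins` levels, then count joint occurrences.
    quantized = (image.astype(float) / 256 * bins).astype(int)
    idx = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()  # normalize so images of any size are comparable

def geolocate(query, reference_images, reference_labels):
    """Return the location label of the reference image whose
    histogram is closest to the query's (L1 distance)."""
    q = color_histogram(query)
    dists = [np.abs(q - color_histogram(r)).sum() for r in reference_images]
    return reference_labels[int(np.argmin(dists))]

# Tiny synthetic demo: a green-tinted query should match
# the green-tinted reference, not the blue-tinted one.
rng = np.random.default_rng(0)
green = rng.integers(0, 80, (32, 32, 3)); green[..., 1] += 150
blue = rng.integers(0, 80, (32, 32, 3)); blue[..., 2] += 150
query = rng.integers(0, 80, (32, 32, 3)); query[..., 1] += 150

print(geolocate(query, [green, blue], ["countryside", "coastline"]))
```

With millions of geotagged reference photos instead of two, even this crude appearance-only comparison starts to carry geographic signal, which is the surprising result the researchers report.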

"We're not asking the computer to tell us what is depicted in the photo but to find other photos that look like it," Efros said. "It was surprising to us how effective this approach proved to be. Who would have guessed that similarity in overall image appearance would correlate to geographic proximity so well?"

So far, Efros and Hays have run their algorithm on a test set of 237 images, chosen for image quality, variety, and lack of easy geographical recognizability. When they ran their program, they successfully geolocated 16 percent of those test images to within 200 kilometers — and they also note that even geolocating an image within a country or region might still be helpful. These results, they say, are an encouraging jumping-off point for the larger field of geographical computer vision.

While a person might never be able to deconstruct and analyze a photo in this way, it's a piece of cake for a modern computer. And it probably wouldn't have been as easy to teach a computer to recognize photos the way a person would: It took a bit of outside-the-box thinking on the part of Efros and Hays to develop this new system. Perhaps we have only just scratched the surface of everything computers can do; to fully understand their extraordinary capabilities, it might be necessary to work within their limitations.

And in case you haven't figured it out yet, the picture above — one from the test set of Efros and Hays — was taken in the Netherlands.

Where in the World [Carnegie Mellon University]

IM2GPS Project