Deepfake maps can really affect your sense of the world

[Image: A macro shot of the city of Seattle, Washington, on a map.]

Satellite images showing the expansion of major detention camps in Xinjiang, China, between 2016 and 2018 provided some of the strongest evidence of a government crackdown on more than a million Muslims, leading to international condemnation and sanctions.

Other aerial photographs — of nuclear facilities in Iran and missile sites in North Korea, for example — have had a similar impact on world events. Now, image-manipulation tools powered by artificial intelligence may make it harder to take such images at face value.

In an article published online last month, University of Washington professor Bo Zhao used AI techniques similar to those behind so-called deepfakes to alter satellite images of various cities. Zhao and colleagues swapped features between images of Seattle and Beijing, adding buildings to Seattle where none exist and replacing structures in Beijing with greenery.

Zhao used an algorithm called CycleGAN to manipulate the satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all kinds of image trickery. It trains an artificial neural network to recognize the key features of certain images, such as a painting style or the features on a particular type of map. A second algorithm then helps refine the first’s performance by trying to detect when an image has been manipulated.
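For the curious, here is a minimal sketch of the adversarial setup CycleGAN builds on: one network translates tiles from one city’s style toward another’s, a discriminator tries to catch the fakes, and a cycle-consistency loss keeps the translation reversible. The toy networks, tensor sizes, and variable names below are illustrative stand-ins, not Zhao’s actual code.

```python
# Illustrative sketch of a CycleGAN-style training step (not Zhao's code):
# G_ab translates "Seattle-style" tiles toward "Beijing-style" tiles, D_b
# judges whether a tile looks like real Beijing imagery, and a cycle loss
# keeps G_ba(G_ab(x)) close to the original tile.
import torch
import torch.nn as nn

class Generator(nn.Module):          # toy stand-in for CycleGAN's ResNet generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):      # toy PatchGAN-style critic
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_ab, G_ba = Generator(), Generator()   # Seattle -> Beijing, and back
D_b = Discriminator()                   # tries to spot fake "Beijing" tiles
mse, l1 = nn.MSELoss(), nn.L1Loss()
opt_g = torch.optim.Adam(
    list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4
)

real_a = torch.rand(1, 3, 64, 64)       # stand-in for one real Seattle tile
fake_b = G_ab(real_a)                   # translated tile
pred = D_b(fake_b)

# Generator objective: fool the discriminator while staying cycle-consistent.
adv_loss = mse(pred, torch.ones_like(pred))
cycle_loss = l1(G_ba(fake_b), real_a)
opt_g.zero_grad()
(adv_loss + 10.0 * cycle_loss).backward()
opt_g.step()
```

The adversarial loss is what the article describes as one algorithm fine-tuning the other: the generator improves precisely because the discriminator keeps learning to detect its manipulations.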

Like deepfake video clips that purport to show people in compromising situations, such imagery could mislead governments or spread on social media, sowing misinformation or casting doubt on genuine visual information.

“I definitely think this is a big problem that may not affect the average citizen tomorrow, but will play a much bigger role behind the scenes in the next decade,” says Grant McKenzie, an assistant professor of spatial data science at McGill University in Canada, who was not involved in the work.

“Imagine a world where a state government, or any other actor, can realistically manipulate images to show either nothing there or a different layout,” says McKenzie. “I am not entirely sure what can be done to stop it at this point.”

A few crudely manipulated satellite images have already spread virally on social media, including a photo purporting to show India lit up during the Hindu festival of Diwali that was apparently retouched by hand. It may be only a matter of time before far more sophisticated deepfake satellite imagery is used to, for instance, hide weapons installations or falsely justify military action.

Gabrielle Lim, a researcher at Harvard Kennedy School’s Shorenstein Center who focuses on media manipulation, says maps can be used to deceive without AI. She points to images circulated online suggesting Alexandria Ocasio-Cortez was not where she claimed to be during the January 6 Capitol riot, as well as Chinese passports showing a disputed area of the South China Sea as part of China. “It’s not a fancy technology, but it can achieve similar goals,” says Lim.

Manipulated aerial images can also have commercial significance, as such images are immensely valuable for digital maps, tracking weather systems and guiding investments.

US intelligence has acknowledged that manipulated satellite images are a growing threat. “Adversaries can use false or manipulated information to influence our understanding of the world,” says a spokesperson for the National Geospatial-Intelligence Agency, the part of the Pentagon that oversees the collection, analysis, and dissemination of geospatial information.

The spokesperson says forensic analysis can help identify forged images but acknowledges that the rise of automated fakes may require new approaches. Software may be able to spot telltale signs of tampering, such as visual artifacts or changes to the data in a file. But AI can learn to remove such signals, creating a cat-and-mouse game between fakers and fake-spotters.
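As an illustration of the kind of telltale sign such software looks for, here is a minimal sketch of error level analysis, a classic forensic technique: recompressing a JPEG and differencing it against the original can highlight regions with a different compression history. The file path is hypothetical, and real forensic pipelines are far more involved than this.

```python
# Minimal error level analysis (ELA) sketch: regions that were edited and
# re-saved often recompress differently from the rest of the image.
import io
from PIL import Image, ImageChops

def error_level(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)   # recompress at a known quality
    recompressed = Image.open(buf)
    # Bright areas in the difference image suggest a different compression
    # history, one possible sign of local tampering.
    return ImageChops.difference(original, recompressed)

ela = error_level("satellite_tile.jpg")           # hypothetical input file
print(ela.getextrema())                           # per-channel min/max error levels
```

A sufficiently capable generative model can learn to suppress exactly this kind of artifact, which is the cat-and-mouse dynamic the agency describes.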

“The importance of knowing, validating and trusting our resources is only growing, and technology plays a huge role in achieving that,” the spokesperson said.

Spotting images manipulated with AI has become an important area of academic, industrial, and government research. Major tech companies like Facebook, concerned about the spread of misinformation, are supporting efforts to automate the identification of deepfake videos.

Zhao of the University of Washington plans to explore ways to automatically identify deepfake satellite images. He says studying how landscapes change over time can help highlight suspicious features. “Temporal-spatial patterns will be very important,” he says.
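A toy illustration of that temporal idea: compare the newest observation of a tile against its own history and flag pixels that change far faster than they ever have before. The synthetic arrays, shapes, and z-score threshold below are assumptions for the sketch, not taken from Zhao’s work.

```python
# Toy temporal-consistency check over synthetic stand-ins for
# co-registered satellite tiles of the same location.
import numpy as np

history = np.random.rand(12, 64, 64)       # twelve past observations of one tile
latest = np.random.rand(64, 64)            # newest observation of the same tile

mean, std = history.mean(axis=0), history.std(axis=0)
z = np.abs(latest - mean) / (std + 1e-6)   # per-pixel deviation from the past

# Pixels changing far faster than their historical variation are suspicious.
suspicious = z > 4.0
print(f"{suspicious.mean():.1%} of pixels deviate sharply from the tile's history")
```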

Zhao notes, however, that even if governments have the technology needed to spot such fakes, the public may be caught off guard. “If there’s a satellite image that’s widespread on social media, that could be a problem,” he says.

This story first appeared on wired.com
