Satellite images documenting the expansion of large prison camps in Xinjiang, China, between 2016 and 2018 provided some of the strongest evidence of government repression of more than a million Muslims, leading to international condemnation and sanctions. Other overhead images, such as those of Iran's nuclear facilities and North Korean missile launch sites, have had a similar impact on world events. Now, image-manipulation tools powered by artificial intelligence could make it harder to take such images at face value.
In a study published online last month, Bo Zhao, a professor at the University of Washington, used artificial intelligence techniques to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing, making buildings appear in Seattle where none exist and removing structures in Beijing, replacing them with green space. The result is a so-called deepfake image that eerily resembles the real one.
Zhao manipulated the satellite images with an algorithm called CycleGAN. The algorithm, developed by researchers at the University of California, Berkeley, is widely used for all kinds of visual trickery. It trains an artificial neural network to recognize the key features of certain images, such as a painting style or the features of a particular type of map. A second network then helps refine the first by trying to detect whether an image has been generated or manipulated.
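A full CycleGAN is far too large to reproduce here, but the adversarial core it shares with all GANs, one network generating data while a second tries to tell real from fake, can be sketched in one dimension. Everything below (the parameter names, the toy data distribution) is illustrative and is not taken from Zhao's paper:

```python
import numpy as np

# Toy 1-D sketch of adversarial (GAN-style) training.
# Real "data" are scalars from N(4, 1); the generator maps noise
# z ~ N(0, 1) to w*z + b; the discriminator is a logistic classifier
# D(x) = sigmoid(a*x + c). All parameter names are illustrative.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 1.0, 0.0   # generator parameters
a, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)
    z = rng.normal(0.0, 1.0, size=32)
    fake = w * z + b

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step: maximize log D(fake), so the generator
    # learns to produce samples the discriminator scores as "real".
    d_fake = sigmoid(a * fake + c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

print("generator output mean:", round(float(np.mean(w * rng.normal(size=1000) + b)), 2))
```

The two updates pull against each other, which is the "adversarial" dynamic: as the discriminator gets better at spotting fakes, the generator's gradient pushes its output distribution toward the real one.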
As with deepfake videos that show people in supposedly compromising situations, such images could mislead governments or spread on social media, sowing misinformation or casting doubt on genuine visual information.
“I absolutely think this is a big problem that may not affect the average citizen tomorrow, but it will play a much bigger role behind the scenes in the next decade,” said Grant McKenzie, a lecturer in spatial data science at McGill University in Canada, who was not involved in the research.
“Imagine a world where a state government or another actor can manipulate images in a way that looks realistic, so that they show nothing meaningful or show a different arrangement entirely. I’m not entirely sure what can be done to stop this right now,” McKenzie said.
Some crudely manipulated satellite images have already gone viral on social media, including a photo purporting to show India illuminated during the Hindu festival of Diwali that had evidently been edited by hand. It may be only a matter of time before far more sophisticated deepfake satellite imagery is used to hide weapons depots, for example, or to misleadingly justify military action.
According to Gabrielle Lim, a researcher at Harvard University who studies media manipulation, maps can be used to deceive even without artificial intelligence, and misleading maps are already widespread online. Manipulated aerial imagery could also have commercial consequences, since such images are extremely valuable for digital mapping, weather monitoring and investment management.
U.S. intelligence has acknowledged that manipulated satellite images pose a growing threat. “Adversaries may use false or manipulated information to influence our understanding of the world,” said a spokesman for the National Geospatial-Intelligence Agency, part of the Pentagon.
The spokesman said forensic analysis can help identify forged images, but acknowledged that the rise of automated forgery may require new approaches. Software may be able to spot telltale signs of manipulation, such as visual artifacts or altered data within a file. However, artificial intelligence can learn to remove those signals, creating a cat-and-mouse game between forgers and forgery detectors.
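One simple forensic heuristic, sketched below purely as an illustration rather than as any agency's actual method, looks for noise inconsistencies: a spliced-in or AI-generated region often carries different sensor-noise statistics than the rest of the image.

```python
import numpy as np

# Toy noise-inconsistency check (illustrative only): high-pass filter
# the image and compare each block's residual variance against the
# image-wide median. A pasted, denoised patch stands out as too "clean".

rng = np.random.default_rng(1)

# Synthetic 64x64 "satellite image": flat scene plus sensor noise.
img = 100.0 + rng.normal(0.0, 5.0, size=(64, 64))

# Simulate tampering: overwrite one region with an almost noise-free patch.
img[16:32, 16:32] = 100.0 + rng.normal(0.0, 0.5, size=(16, 16))

# High-pass residual: each pixel minus the mean of its 4 neighbours.
pad = np.pad(img, 1, mode="edge")
neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
residual = img - neigh

# Variance of the residual within each 16x16 block (4x4 grid of blocks).
blocks = residual.reshape(4, 16, 4, 16).transpose(0, 2, 1, 3)
block_var = blocks.reshape(4, 4, -1).var(axis=-1)

# Flag blocks whose noise level is far below the median.
suspicious = np.argwhere(block_var < 0.2 * np.median(block_var))
print("suspicious blocks (row, col):", suspicious)
```

As the article notes, a sufficiently good generative model can learn to mimic the surrounding noise, which is exactly why such static checks become a cat-and-mouse game.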
“The importance of knowing, authenticating and trusting our sources is growing, and technology has a major role to play in that,” the spokesman said.
Filtering out images manipulated by artificial intelligence has become a major area of scientific, industrial and governmental research. Big technology companies such as Facebook, worried about the spread of misinformation, are backing efforts to automate the identification of deepfake videos.
Zhao, at the University of Washington, plans to explore ways to automatically identify deepfake satellite images. He said that studying how recordings of the same location change over time and space could help expose suspicious features.
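The general temporal-consistency idea can be sketched as follows. This is a hedged illustration of the concept, not Zhao's actual method: given a stack of co-registered images of the same location over time, a frame that deviates sharply from the per-pixel median of the series is worth a closer look.

```python
import numpy as np

# Toy temporal-consistency check (illustrative only): flag the frame
# in a time series that deviates most from the per-pixel median.

rng = np.random.default_rng(2)

# Five 32x32 "acquisitions" of the same scene with mild per-pass noise.
scene = rng.uniform(0.0, 1.0, size=(32, 32))
stack = np.stack([scene + rng.normal(0.0, 0.02, size=scene.shape)
                  for _ in range(5)])

# Tamper with frame 3: "erase" a structure by flattening a region.
stack[3, 8:24, 8:24] = scene[8:24, 8:24].mean()

# Mean absolute deviation of each frame from the per-pixel median.
median = np.median(stack, axis=0)
deviation = np.abs(stack - median).mean(axis=(1, 2))

flagged = int(np.argmax(deviation))
print("most anomalous frame:", flagged)
```

Real satellite time series would of course also contain legitimate change (seasons, construction), so in practice such a check could only surface candidates for human review, not prove manipulation.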
However, the researcher believes that even if governments acquire the technology to detect such fakes, the public could still be caught off guard by them. “A fake satellite image that spreads widely on social media can be a problem,” he said.