New AI tool generates realistic satellite images of future flooding

Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird's-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit, as well as with images produced by an AI model alone, without the physics-based flood model.

The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

The team’s method is a proof-of-concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. To apply the method to other regions and depict flooding from future storms, the model will need to be trained on many more satellite images to learn how flooding would look elsewhere.

“The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

Generative adversarial images

The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

“Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between real satellite imagery and images synthesized by the first network.
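To make the adversarial setup concrete, here is a minimal, self-contained sketch of conditional-GAN training dynamics on a one-dimensional toy problem standing in for satellite imagery. The data model, the linear "networks," and the learning rate are all illustrative assumptions, not the study's actual architecture.

```python
# Toy conditional GAN: a generator and discriminator trained against each
# other on 1-D data, conditioned on a scalar "storm strength". Illustrative
# only; the real study uses image-generating neural networks.
import math
import random

random.seed(0)

def sigmoid(x):
    # Clipped for numerical stability.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

# "Real" data: observed flood depth grows with storm strength c.
def real_sample(c):
    return 2.0 * c + random.gauss(0.0, 0.1)

# Generator g(z, c): maps noise z and condition c to a fake sample.
gw_z, gw_c, gb = random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1), 0.0
# Discriminator d(x, c): probability that (x, c) is a real pair.
dw_x, dw_c, db = random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1), 0.0

lr = 0.02
for step in range(2000):
    c = random.uniform(0.0, 1.0)     # condition, e.g. storm strength
    z = random.gauss(0.0, 1.0)       # generator noise

    x_real = real_sample(c)
    x_fake = gw_z * z + gw_c * c + gb

    # Discriminator update: push d(real) -> 1 and d(fake) -> 0
    # (gradient ascent on binary cross-entropy, computed by hand).
    p_real = sigmoid(dw_x * x_real + dw_c * c + db)
    p_fake = sigmoid(dw_x * x_fake + dw_c * c + db)
    dw_x += lr * ((1 - p_real) * x_real - p_fake * x_fake)
    dw_c += lr * ((1 - p_real) * c - p_fake * c)
    db   += lr * ((1 - p_real) - p_fake)

    # Generator update: push d(fake) -> 1, i.e. fool the discriminator.
    x_fake = gw_z * z + gw_c * c + gb
    p_fake = sigmoid(dw_x * x_fake + dw_c * c + db)
    grad_x = (1 - p_fake) * dw_x     # gradient of log d(fake) w.r.t. x_fake
    gw_z += lr * grad_x * z
    gw_c += lr * grad_x * c
    gb   += lr * grad_x

# Draw generated samples under weak and strong storm conditions.
fakes_weak   = [gw_z * random.gauss(0, 1) + gw_c * 0.1 + gb for _ in range(200)]
fakes_strong = [gw_z * random.gauss(0, 1) + gw_c * 0.9 + gb for _ in range(200)]
mean_weak = sum(fakes_weak) / len(fakes_weak)
mean_strong = sum(fakes_strong) / len(fakes_strong)
print(mean_weak, mean_strong)
```

The push and pull is in the two update steps: the discriminator's parameters move to separate real from fake pairs, while the generator's parameters move to erase that separation.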

Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features in an otherwise realistic image that shouldn’t be there.

“Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, such that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

Flood hallucinations

In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions of how to prepare and potentially evacuate people out of harm’s way.

Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure and generates a visual, color-coded map of flood elevations over a particular region.
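The pipeline above can be sketched as a chain of functions, one per model stage. The formulas below are illustrative stand-ins, not real hurricane, wind, surge, or hydraulic models; all numbers are hypothetical.

```python
# Schematic flood-forecast pipeline: track -> wind -> surge -> flood depth.
# Each stage is a placeholder for a full physical model.

def hurricane_track(storm):
    """Hurricane track model: where the storm makes landfall, how strong it is."""
    return {"landfall_km": storm["landfall_km"], "intensity": storm["intensity"]}

def wind_field(track, location_km):
    """Wind model: local wind speed decays with distance from landfall (toy)."""
    distance = abs(location_km - track["landfall_km"])
    return track["intensity"] / (1.0 + 0.05 * distance)

def storm_surge(wind_speed):
    """Surge model: how far wind pushes nearby water onto land (toy relation)."""
    return 0.1 * wind_speed ** 1.5

def flood_depth(surge_m, ground_elevation_m):
    """Hydraulic model: water depth at a site, given its elevation."""
    return max(0.0, surge_m - ground_elevation_m)

# Chain the stages, as done to produce the color-coded flood maps.
storm = {"landfall_km": 0.0, "intensity": 40.0}   # hypothetical storm
track = hurricane_track(storm)
for site_km, elev_m in [(5, 1.0), (20, 3.0), (60, 8.0)]:
    wind = wind_field(track, site_km)
    depth = flood_depth(storm_surge(wind), elev_m)
    print(f"site {site_km} km, elevation {elev_m} m -> flood depth {depth:.2f} m")
```

Even in this toy chain, the near, low-lying site floods deeply while the distant, elevated site stays dry, which is the structure the color-coded maps convey.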

“The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images of Houston taken before and after Hurricane Harvey. When they tasked the generator with producing new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some of them, in the form of floods in places where flooding should not be possible (for instance, at higher elevations).
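A toy version of that elevation sanity check: given a digital elevation model, flag any generated "flood" pixels that sit above the modeled water level, where flooding is not physically possible. The grids and threshold below are made up for illustration.

```python
# Flag hallucinated flood pixels: flooded in the GAN output, but at an
# elevation above the water level the physics allows.

def flag_hallucinations(flood_mask, elevation_m, water_level_m):
    """Return (row, col) pixels marked flooded despite high elevation."""
    flags = []
    for i, row in enumerate(flood_mask):
        for j, flooded in enumerate(row):
            if flooded and elevation_m[i][j] > water_level_m:
                flags.append((i, j))
    return flags

# GAN output (1 = flooded) and an elevation grid for a 3x3 patch (meters).
gan_flood = [[1, 1, 0],
             [0, 1, 1],
             [0, 0, 1]]
elevation = [[0.5, 1.0, 9.0],
             [2.0, 8.5, 1.2],
             [3.0, 4.0, 7.5]]

bad_pixels = flag_hallucinations(gan_flood, elevation, water_level_m=5.0)
print(bad_pixels)   # pixels "flooded" above the 5 m water level
```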

To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.
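A minimal sketch of that pixel-by-pixel constraint: the flood extent comes from the physical model, and the generative model is trusted only for appearance. Here the generator is replaced by a stand-in flood texture so the compositing logic is clear; in the actual method, the GAN is conditioned on the modeled flood extent rather than composited after the fact.

```python
# Physics-constrained imagery: flooded appearance appears only where the
# flood model's mask says flooding occurs.

def physics_constrained_image(pre_storm, flood_texture, physics_mask):
    """Pixelwise composite: flood model dictates extent, generator dictates look."""
    h, w = len(pre_storm), len(pre_storm[0])
    return [[flood_texture[i][j] if physics_mask[i][j] else pre_storm[i][j]
             for j in range(w)] for i in range(h)]

# Hypothetical 2x2 patch: labels stand in for pixel values.
pre_storm     = [["grass", "road"], ["house", "grass"]]
flood_texture = [["water", "water"], ["water", "water"]]
physics_mask  = [[1, 0], [1, 0]]    # flood model: only the left column floods

after = physics_constrained_image(pre_storm, flood_texture, physics_mask)
print(after)   # [['water', 'road'], ['water', 'grass']]
```

The point of the design is that the GAN can no longer hallucinate flood extent; it only renders what the flood model forecasts.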

“We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.


Building an understanding of how drivers interact with emerging vehicle technologies

As the global conversation around assisted and automated vehicles (AVs) evolves, the MIT Advanced Vehicle Technology (AVT) Consortium continues to lead cutting-edge research aimed at understanding how drivers interact with emerging vehicle technologies. 

Since its launch in 2015, the AVT Consortium — a global academic-industry collaboration — has developed a data-driven approach to studying consumer attitudes and driving behavior across diverse populations, creating unique, multifaceted, and world-leading datasets that enable a wide range of research applications. This research offers critical insights into consumer behaviors, system performance, and how technology impacts real-world driving, helping to shape the future of transportation.

“Cultivating public trust in AI will be the most significant factor for the future of assisted and automated vehicles,” says Bryan Reimer, AVT Consortium founder and a research engineer at the MIT AgeLab within the MIT Center for Transportation and Logistics (CTL). “Without trust, technology adoption will never reach its potential, and may stall. Our research aims to bridge this gap by understanding driver behavior and translating those insights into safer, more intuitive systems that enable safe, convenient, comfortable, sustainable, and economical mobility.”

New insights from the J.D. Power Mobility Confidence Index Study

A recent Mobility Confidence Index Study, conducted in collaboration with J.D. Power, indicated that public readiness for autonomous vehicles has increased modestly after a two-year decline. While this shift is important for the broader adoption of AV technology, it is just one element of the ongoing research within the AVT Consortium, which is currently co-directed by Reimer, Bruce Mehler, and Pnina Gershon. The study, which surveys consumer attitudes toward autonomous vehicles, reflects a growing interest in the technology — but consumer perceptions are only part of the complex equation that AVT researchers are working to solve.

“The modest increase in AV readiness is encouraging,” Reimer notes. “But building lasting trust requires us to go deeper, examining how drivers interact with these systems in practice. Trust isn’t built on interest alone; it’s about creating a reliable and understandable user experience that people feel safe engaging with over time. Trust can be eroded quickly.”

Building a data-driven understanding of driving behavior

The AVT Consortium’s approach involves gathering extensive real-world data on driver interactions across age groups, experience levels, and vehicles. Together these form one of the largest datasets of their kind, enabling researchers to study system performance, driver behavior, and attitudes toward assistive and automated technologies. AVT research aims to compare and contrast the benefits of various manufacturers’ embodiments of these technologies; by identifying the most promising attributes of each manufacturer’s systems, it seeks to help new designs evolve faster by building on what already works.

“The work of the AVT Consortium exemplifies MIT’s commitment to understanding the human side of technology,” says Yossi Sheffi, director of the CTL. “By diving deep into driver behavior and attitudes toward assisted and automated systems, the AVT Consortium is laying the groundwork for a future where these technologies are both trusted and widely adopted. This research is essential for creating a transportation landscape that is safe, efficient, and adaptable to real-world human needs.”

The AVT Consortium’s insights have proven valuable in helping to shape vehicle design to meet the needs of real-world drivers. By understanding how drivers respond to these technologies, the consortium’s work supports the development of AI systems that feel trustworthy and intuitive, addressing drivers’ concerns and fostering confidence in the technology.

“We’re not just interested in whether people are open to using assistive and automated vehicle technologies,” adds Reimer. “We’re digging into how they use these technologies, what challenges they encounter, and how we can improve system design to make these technologies safer and more intuitive for all drivers.”

An interdisciplinary approach to vehicle technology

The AVT Consortium is not just a research effort — it is a community that brings together academic researchers, industry partners, and consumer organizations. By working with stakeholders from across the automotive, technology, and insurance industries, the AVT team can explore the full range of challenges and opportunities presented by emerging vehicle technologies to ensure a comprehensive, practical, and multi-stakeholder approach in the rapidly evolving mobility landscape. The interdisciplinary framework is also crucial to understanding how AI-driven systems can support humans beyond the car.

“As vehicle technologies evolve, it’s crucial to understand how they intersect with the everyday experiences of drivers across all ages,” says Joe Coughlin, director of the MIT AgeLab. “The AVT Consortium’s approach, focusing on both data and human-centered insights, reflects a profound commitment to creating mobility systems that genuinely serve people. The AgeLab is proud to support this work, which is instrumental in making future vehicle systems intuitive, safe, and empowering for everyone.”

“The future of mobility relies on our ability to build systems that drivers can trust and feel comfortable using,” says Reimer. “Our mission at AVT is not only to develop a data-driven understanding of how drivers across the lifespan use and respond to various vehicle technologies, but also to provide actionable insights into consumer attitudes to enhance safety and usability.”

Shaping the future of mobility

As assistive and automated vehicles become more common on our roads, the work of the AVT Consortium will continue to play a critical role in shaping the future of transportation. By prioritizing data-driven insights and human-centered design, the AVT Consortium is helping to lay the foundation for a safer, smarter, and more trusted mobility future.

MIT CTL is a world leader in supply chain management research and education, with over 50 years of expertise. The center’s work spans industry partnerships, cutting-edge research, and the advancement of sustainable supply chain practices.
