Improving health, one machine learning system at a time

Captivated as a child by video games and puzzles, Marzyeh Ghassemi was also fascinated from an early age by health. Luckily, she found a path where she could combine the two interests. 

“Although I had considered a career in health care, the pull of computer science and engineering was stronger,” says Ghassemi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science (IMES) and principal investigator at the Laboratory for Information and Decision Systems (LIDS). “When I found that computer science broadly, and AI/ML specifically, could be applied to health care, it was a convergence of interests.”

Today, Ghassemi and her Healthy ML research group at LIDS study in depth how machine learning (ML) can be made more robust, and then applied to improve safety and equity in health.

Growing up in Texas and New Mexico in an engineering-oriented Iranian-American family, Ghassemi had role models to follow into a STEM career. While she loved puzzle-based video games — “Solving puzzles to unlock other levels or progress further was a very attractive challenge” — her mother also engaged her in more advanced math early on, enticing her toward seeing math as more than arithmetic.

“Adding or multiplying are basic skills emphasized for good reason, but the focus can obscure the idea that much of higher-level math and science are more about logic and puzzles,” Ghassemi says. “Because of my mom’s encouragement, I knew there were fun things ahead.”

Ghassemi says that in addition to her mother, many others supported her intellectual development. As she earned her undergraduate degree at New Mexico State University, the director of the Honors College and a former Marshall Scholar — Jason Ackelson, now a senior advisor to the U.S. Department of Homeland Security — helped her to apply for a Marshall Scholarship that took her to Oxford University, where she earned a master’s degree in 2011 and first became interested in the new and rapidly evolving field of machine learning. During her PhD work at MIT, Ghassemi says she received support “from professors and peers alike,” adding, “That environment of openness and acceptance is something I try to replicate for my students.”

While working on her PhD, Ghassemi also encountered her first clue that biases in health data can hide in machine learning models.

She had trained models to predict outcomes using health data, “and the mindset at the time was to use all available data. In neural networks for images, we had seen that the right features would be learned for good performance, eliminating the need to hand-engineer specific features.”

During a meeting with Leo Celi, principal research scientist at the MIT Laboratory for Computational Physiology and IMES and a member of Ghassemi’s thesis committee, Celi asked if Ghassemi had checked how well the models performed on patients of different genders, insurance types, and self-reported races.

Ghassemi did check, and there were gaps. “We now have almost a decade of work showing that these model gaps are hard to address — they stem from existing biases in health data and default technical practices. Unless you think carefully about them, models will naively reproduce and extend biases,” she says.
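The kind of audit Celi suggested can be sketched as a stratified metric: compute the model’s performance separately for each subgroup and report the worst-case gap. A minimal illustration in Python, using invented labels and group assignments rather than any of the study’s data:

```python
def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy per subgroup, plus the worst-case gap between groups."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        stats[g] = correct / len(idx)
    gap = max(stats.values()) - min(stats.values())
    return stats, gap

# Toy example: a model that is right far more often for group "A"
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
stats, gap = subgroup_accuracy(y_true, y_pred, groups)
print(stats, gap)  # group A accuracy 1.0, group B 0.25 -> gap 0.75
```

Real audits use richer metrics (AUC, calibration) and real demographic attributes, but the stratify-and-compare pattern is the same.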

Ghassemi has been exploring such issues ever since.

Her favorite breakthrough in the work she has done came about in several parts. First, she and her research group showed that machine learning models could recognize a patient’s race from medical images like chest X-rays, which radiologists are unable to do. The group then found that models optimized to perform well “on average” did not perform as well for women and minorities. This past summer, her group combined these findings to show that the more a model learned to predict a patient’s race or gender from a medical image, the worse its performance gap would be for subgroups in those demographics. Ghassemi and her team found that the problem could be mitigated if a model was trained to account for demographic differences, instead of being focused on overall average performance — but this process has to be performed at every site where a model is deployed.

“We are emphasizing that models trained to optimize performance (balancing overall performance with lowest fairness gap) in one hospital setting are not optimal in other settings. This has an important impact on how models are developed for human use,” Ghassemi says. “One hospital might have the resources to train a model, and then be able to demonstrate that it performs well, possibly even with specific fairness constraints. However, our research shows that these performance guarantees do not hold in new settings. A model that is well-balanced in one site may not function effectively in a different environment. This impacts the utility of models in practice, and it’s essential that we work to address this issue for those who develop and deploy models.”

Ghassemi’s work is informed by her identity.

“I am a visibly Muslim woman and a mother — both have helped to shape how I see the world, which informs my research interests,” she says. “I work on the robustness of machine learning models, and how a lack of robustness can combine with existing biases. That interest is not a coincidence.”

Regarding her thought process, Ghassemi says inspiration often strikes when she is outdoors — bike-riding in New Mexico as an undergraduate, rowing at Oxford, running as a PhD student at MIT, and these days walking by the Cambridge Esplanade. She also says she has found it helpful when approaching a complicated problem to think about the parts of the larger problem and try to understand how her assumptions about each part might be incorrect.

“In my experience, the most limiting factor for new solutions is what you think you know,” she says. “Sometimes it’s hard to get past your own (partial) knowledge about something until you dig really deeply into a model, system, etc., and realize that you didn’t understand a subpart correctly or fully.”

As passionate as Ghassemi is about her work, she intentionally keeps track of life’s bigger picture.

“When you love your research, it can be hard to stop that from becoming your identity — it’s something that I think a lot of academics have to be aware of,” she says. “I try to make sure that I have interests (and knowledge) beyond my own technical expertise.

“One of the best ways to help prioritize a balance is with good people. If you have family, friends, or colleagues who encourage you to be a full person, hold on to them!”

Having won many awards and much recognition for the work that encompasses two early passions — computer science and health — Ghassemi professes a faith in seeing life as a journey.

“There’s a quote by the Persian poet Rumi that is translated as, ‘You are what you are looking for,’” she says. “At every stage of your life, you have to reinvest in finding who you are, and nudging that towards who you want to be.”

A blueprint for better cancer immunotherapies

Immune checkpoint blockade (ICB) therapies can be very effective against some cancers by helping the immune system recognize cancer cells that are masquerading as healthy cells. 

T cells are built to recognize specific pathogens or cancer cells, which they identify from the short fragments of proteins presented on the cells’ surface. These fragments are often referred to as antigens. Healthy cells will not have the same antigens on their surface, and thus will be spared from attack. 

Even with cancer-associated antigens studding their surfaces, tumor cells can still escape attack by presenting a checkpoint protein, which is built to turn off the T cell. Immune checkpoint blockade therapies bind to these “off-switch” proteins and allow the T cell to attack.

Researchers have established that how cancer-associated antigens are distributed throughout a tumor determines how it will respond to checkpoint therapies. Tumors whose cells mostly carry the same antigen signal respond well, but heterogeneous tumors, with subpopulations of cells that each express different antigens, do not. The overwhelming majority of tumors fall into this latter category. Because the mechanisms linking antigen distribution and tumor response are poorly understood, efforts to improve ICB therapy response in heterogeneous tumors have been hindered.

In a new study, MIT researchers analyzed antigen expression patterns and associated T cell responses to better understand why patients with heterogenous tumors respond poorly to ICB therapies. In addition to identifying specific antigen architectures that determine how immune systems respond to tumors, the team developed an RNA-based vaccine that, when combined with ICB therapies, was effective at controlling tumors in mouse models of lung cancer.

Stefani Spranger, associate professor of biology and member of MIT’s Koch Institute for Integrative Cancer Research, is the senior author of the study, appearing recently in the Journal for Immunotherapy of Cancer. Other contributors include Koch Institute colleague Forest White, the Ned C. (1949) and Janet Bemis Rice Professor and professor of biological engineering at MIT, and Darrell Irvine, professor of immunology and microbiology at Scripps Research Institute and a former member of the Koch Institute.

While RNA vaccines are being evaluated in clinical trials, the current practice of antigen selection is based on the predicted stability of antigens on the surface of tumor cells. 

“It’s not so black-and-white,” says Spranger. “Even antigens that don’t make the numerical cut-off could be really valuable targets. Instead of just focusing on the numbers, we need to look inside the complex interplays between antigen hierarchies to uncover new and important therapeutic strategies.”

Spranger and her team created mouse models of lung cancer with a number of different, well-defined expression patterns of cancer-associated antigens in order to analyze how each antigen impacts T cell response. They created both “clonal” tumors, with the same antigen expression pattern across cells, and “subclonal” tumors that represent a heterogeneous mix of tumor cell subpopulations expressing different antigens. In each type of tumor, they tested different combinations of antigens with strong or weak binding affinity to MHC, the major histocompatibility complex molecules that present antigens on the cell surface.

The researchers found that the keys to immune response were how widely an antigen is expressed across a tumor, which other antigens are expressed at the same time, and the relative binding strength and other characteristics of antigens expressed by multiple cell populations in the tumor.

As expected, mouse models with clonal tumors were able to mount an immune response sufficient to control tumor growth when treated with ICB therapy, no matter which combinations of weak or strong antigens were present. However, the team discovered that the relative strength of antigens present resulted in dynamics of competition and synergy between T cell populations, mediated by immune recognition specialists called cross-presenting dendritic cells in tumor-draining lymph nodes. In pairings of two weak or two strong antigens, one resulting T cell population would be reduced through competition. In pairings of weak and strong antigens, overall T cell response was enhanced. 

In subclonal tumors, with different cell populations presenting different antigen signals, competition rather than synergy was the rule, regardless of antigen combination. Tumors with a subclonal cell population expressing a strong antigen would be well-controlled under ICB treatment at first, but eventually the parts of the tumor lacking the strong antigen began to grow, evading immune attack and resisting ICB therapy.

Incorporating these insights, the researchers then designed an RNA-based vaccine to be delivered in combination with ICB treatment with the goal of strengthening immune responses suppressed by antigen-driven dynamics. Strikingly, they found that no matter the binding affinity or other characteristics of the antigen targeted, the vaccine-ICB therapy combination was able to control tumors in mouse models. The widespread availability of an antigen across tumor cells determined the vaccine’s success, even if that antigen was associated with weak immune response.

Analysis of clinical data across tumor types showed that the vaccine-ICB therapy combination may be an effective strategy for treating patients with highly heterogeneous tumors. Patterns of antigen architectures in patient tumors correlated with T cell synergy or competition in mouse models and determined responsiveness to ICB in cancer patients. In future work with the Irvine laboratory at Scripps Research Institute, the Spranger laboratory will further optimize the vaccine with the aim of testing the therapy strategy in the clinic. 

To design better water filters, MIT engineers look to manta rays

Filter feeders are everywhere in the animal world, from tiny crustaceans and certain types of coral and krill, to various molluscs, barnacles, and even massive basking sharks and baleen whales. Now, MIT engineers have found that one filter feeder has evolved to sift food in ways that could improve the design of industrial water filters.

In a paper appearing this week in the Proceedings of the National Academy of Sciences, the team characterizes the filter-feeding mechanism of the mobula ray — a family of aquatic rays that includes two manta species and seven devil rays. Mobula rays feed by swimming open-mouthed through plankton-rich regions of the ocean and filtering plankton particles into their gullet as water streams into their mouths and out through their gills.

The floor of the mobula ray’s mouth is lined on either side with parallel, comb-like structures, called plates, that siphon water into the ray’s gills. The MIT team has shown that the dimensions of these plates may allow for incoming plankton to bounce all the way across the plates and further into the ray’s cavity, rather than out through the gills. What’s more, the ray’s gills absorb oxygen from the outflowing water, helping the ray to simultaneously breathe while feeding.

“We show that the mobula ray has evolved the geometry of these plates to be the perfect size to balance feeding and breathing,” says study author Anette “Peko” Hosoi, the Pappalardo Professor of Mechanical Engineering at MIT.

The engineers fabricated a simple water filter modeled after the mobula ray’s plankton-filtering features. They studied how water flowed through the filter when it was fitted with 3D-printed plate-like structures. The team took the results of these experiments and drew up a blueprint, which they say designers can use to optimize industrial cross-flow filters, which are broadly similar in configuration to that of the mobula ray.

“We want to expand the design space of traditional cross-flow filtration with new knowledge from the manta ray,” says lead author and MIT postdoc Xinyu Mao PhD ’24. “People can choose a parameter regime of the mobula ray so they could potentially improve overall filter performance.”

Hosoi and Mao co-authored the new study with Irmgard Bischofberger, associate professor of mechanical engineering at MIT.

A better trade-off

The new study grew out of the group’s focus on filtration during the height of the Covid pandemic, when the researchers were designing face masks to filter out the virus. Since then, Mao has shifted focus to study filtration in animals and how certain filter-feeding mechanisms might improve filters used in industry, such as in water treatment plants.

Mao observed that any industrial filter must strike a balance between permeability (how easily fluid can flow through a filter) and selectivity (how successful a filter is at keeping out particles of a target size). For instance, a membrane studded with large holes might be highly permeable, meaning a lot of water can be pumped through using very little energy. However, the large holes would let many particles through, making the membrane very low in selectivity. Conversely, a membrane with much smaller pores would be more selective, yet would also require more energy to pump the water through the smaller openings.
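This trade-off can be made concrete with a toy single-pore model: per-pore flow scales roughly with the fourth power of pore diameter (the Hagen-Poiseuille scaling), while selectivity here is simply the fraction of particles too large to pass. The numbers and scaling choices below are illustrative assumptions, not the paper’s model:

```python
def pore_tradeoff(d_pore, particle_diameters, d_ref=1.0):
    """Toy permeability/selectivity trade-off for a single pore size.

    Permeability scales like (d/d_ref)^4 per pore (Hagen-Poiseuille);
    selectivity is the fraction of particles the pore excludes.
    """
    permeability = (d_pore / d_ref) ** 4
    excluded = sum(1 for p in particle_diameters if p > d_pore)
    selectivity = excluded / len(particle_diameters)
    return permeability, selectivity

particles = [0.5, 0.8, 1.2, 1.5, 2.0]  # particle diameters, arbitrary units
for d in (0.6, 1.0, 1.8):
    perm, sel = pore_tradeoff(d, particles)
    print(f"pore {d}: permeability {perm:.2f}, selectivity {sel:.1f}")
```

Shrinking the pore raises selectivity but collapses permeability, and vice versa, which is exactly the bind the mobula ray’s geometry escapes.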

“We asked ourselves, how do we do better with this tradeoff between permeability and selectivity?” Hosoi says.

As Mao looked into filter-feeding animals, he found that the mobula ray has struck an ideal balance between permeability and selectivity: The ray is highly permeable, in that it can let water into its mouth and out through its gills quickly enough to capture oxygen to breathe. At the same time, it is highly selective, filtering and feeding on plankton rather than letting the particles stream out through the gills.

The researchers realized that the ray’s filtering features are broadly similar to those of industrial cross-flow filters. These filters are designed such that fluid flows across a permeable membrane that lets through most of the fluid, while any polluting particles continue flowing across the membrane and eventually out into a waste reservoir.

The team wondered whether the mobula ray might inspire design improvements to industrial cross-flow filters. For that, they took a deeper dive into the dynamics of mobula ray filtration.

A vortex key

As part of their new study, the team fabricated a simple filter inspired by the mobula ray. The filter’s design is what engineers refer to as a “leaky channel” — effectively, a pipe with holes along its sides. In this case, the team’s “channel” consists of two flat, transparent acrylic plates that are glued together at the edges, with a slight opening between the plates through which fluid can be pumped. At one end of the channel, the researchers inserted 3D-printed structures resembling the grooved plates that run along the floor of the mobula ray’s mouth.

The team then pumped water through the channel at various rates, along with colored dye to visualize the flow. They took images across the channel and observed an interesting transition: At slow pumping rates, the flow was “very peaceful,” and fluid easily slipped through the grooves in the printed plates and out into a reservoir. When the researchers increased the pumping rate, the faster-flowing fluid did not slip through, but appeared to swirl at the mouth of each groove, creating a vortex, similar to a small knot of hair between the tips of a comb’s teeth.

“This vortex is not blocking water, but it is blocking particles,” Hosoi explains. “Whereas in a slower flow, particles go through the filter with the water, at higher flow rates, particles try to get through the filter but are blocked by this vortex and are shot down the channel instead. The vortex is helpful because it prevents particles from flowing out.”
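One hedged way to express this flow-rate dependence is a groove-scale Reynolds number with a critical value separating the two regimes. Both the use of this formula here and the threshold number are illustrative assumptions, not figures from the study:

```python
def filtration_regime(flow_speed, groove_spacing, nu=1.0e-6):
    """Classify the filter's operating regime via a groove-scale
    Reynolds number (nu = kinematic viscosity of water, m^2/s).

    The critical value of 100 is an illustrative placeholder.
    """
    reynolds = flow_speed * groove_spacing / nu
    if reynolds > 100:
        return "vortex (particles blocked)"
    return "viscous (particles pass)"

print(filtration_regime(0.05, 0.001))  # Re = 50  -> viscous (particles pass)
print(filtration_regime(0.5, 0.001))   # Re = 500 -> vortex (particles blocked)
```

The design lesson is the same one Hosoi describes: pick pumping rate and groove geometry together so the filter operates on the vortex side of the transition.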

The team surmised that vortices are the key to mobula rays’ filter-feeding ability. The ray is able to swim at just the right speed that water, streaming into its mouth, can form vortices between the grooved plates. These vortices effectively block any plankton particles — even those that are smaller than the space between plates. The particles then bounce across the plates and head further into the ray’s cavity, while the rest of the water can still flow between the plates and out through the gills.

The researchers used the results of their experiments, along with dimensions of the filtering features of mobula rays, to develop a blueprint for cross-flow filtration.

“We have provided practical guidance on how to actually filter as the mobula ray does,” Mao offers.

“You want to design a filter such that you’re in the regime where you generate vortices,” Hosoi says. “Our guidelines tell you: If you want your plant to pump at a certain rate, then your filter has to have a particular pore diameter and spacing to generate vortices that will filter out particles of this size. The mobula ray is giving us a really nice rule of thumb for rational design.”

This work was supported, in part, by the U.S. National Institutes of Health, and the Harvey P. Greenspan Fellowship Fund. 

Professor Emeritus James Harris, a scholar of Spanish language, dies at 92

James Wesley “Jim” Harris PhD ’67, professor emeritus of Spanish and linguistics, passed away on Nov. 10. He was 92.

Harris attended the University of Georgia, the Instituto Tecnológico de Estudios Superiores de Monterrey, and the Universidad Nacional Autónoma de México. He later earned a master’s degree in linguistics from Louisiana State University and a PhD in linguistics from MIT.

Harris joined the MIT faculty as an assistant professor in 1967, where he remained until his retirement in 1996. During his tenure, he served as head of what was then called the Department of Foreign Languages and Literatures.

“I met Jim when I came to MIT in 1977 as department head of the neonatal Department of Linguistics and Philosophy,” says Samuel Jay Keyser, MIT professor emeritus of linguistics. “Throughout his career in the department, he never relinquished his connection to the unit that first employed him at MIT.”

In his early days at MIT, when French, German, and Russian dominated as elite “languages of science and world literature,” Harris championed, over some opposition, the introduction of Spanish language and literature courses.

He later oversaw the inclusion of Japanese and Chinese courses as language offerings at MIT. He promoted undergraduate courses in linguistics, leading to a full undergraduate degree program and later broadening the focus of the prestigious PhD program. 

His research in linguistics centered on theoretical phonology and morphology. His books, presentations at professional meetings, and articles in peer-reviewed journals were among the most discussed — in both positive and negative assessments, as he noted — by prominent scholars in the field. His ability to teach complex technical material comfortably in Spanish, plus the status of an MIT professorship, resulted in invitations to teach at universities across Spain and Latin America. He was also highly valued as a member of the editorial boards of several professional journals.

“I remember Jim most of all for being the consummate scholar,” Keyser says. “His articles were models of argumentation. They were assembled with all the precision of an Inca wall and all the beauty of a Fabergé egg. You couldn’t slip a credit card through any of their arguments, they were so superbly sculpted.”

Having achieved national recognition as an English-Spanish bilingual teacher and teacher-trainer, Harris was engaged as a writer at the Modern Language Materials Development Center in New York. Later, he co-authored, with Guillermo Segreda, a series of popular college-level Spanish textbooks.

“Harris belonged to Noam Chomsky and Morris Halle’s first generation of graduate students,” says MIT linguist Michael John Kenstowicz. “Together they overturned the distributionalist model of the structuralists in favor of ordered generative rules.”

After retiring from MIT, he remained internationally recognized as a highly influential figure in the area of Romance linguistics, and “el decano” (“the dean”) of Spanish phonology.

Harris was married to Florence Warshawsky Harris for 50 years until her passing in 2020. In 2011, in celebration of the linguistics program’s 50th anniversary, they partnered to prepare and publish a detailed history of the program’s origins. Warshawsky Harris, formerly an MIT graduate student, also edited Chomsky and Halle’s influential “The Sound Pattern of English” and numerous other important linguistic texts.

Harris’ scholarship was reflected in the diverse body of scholarly articles and textbooks he authored, co-authored, edited, and published.

Harris was born outside Atlanta, Georgia, in 1932. During the Korean War, he performed his military service as the clarinet and saxophone instructor at the U.S. Naval School of Music in Washington. After his discharge, he directed the band at the Charlotte Hall School in Maryland, where he also taught Spanish, French, and Latin.

Harris is survived by his daughter, Lynn Corinne Harris; his son-in-law, Rabbi David Adelson; and his grandchildren, Bee Adelson and Sam Harris.

New solar projects will grow renewable energy generation for four major campus buildings

In the latest step to implement commitments made in MIT’s Fast Forward climate action plan, staff from the Department of Facilities; Office of Sustainability; and Environment, Health and Safety Office are advancing new solar panel installations this fall and winter on four major campus buildings: The Stratton Student Center (W20), the Dewey Library building (E53), and two newer buildings, New Vassar (W46) and the Theater Arts building (W97).

These four new installations, in addition to existing rooftop solar installations on campus, are “just one part of our broader strategy to reduce MIT’s carbon footprint and transition to clean energy,” says Joe Higgins, vice president for campus services and stewardship.

The installations will exceed the target for total solar energy production on campus set in the Fast Forward climate action plan, which was issued in 2021. Against the plan’s initial target of 500 kilowatts of installed solar capacity, the new installations, along with those already in place, will bring the total to roughly 650 kilowatts. The solar installations are an important facet of MIT’s approach to eliminating all direct campus emissions by 2050.

Placing solar panels on campus rooftops is far more complex than installing them on an ordinary house. The process began with a detailed assessment of the potential for reducing the campus greenhouse gas footprint. A first cut eliminated rooftops that were too shaded by trees or other buildings. Then the schedule for regular roof replacement had to be taken into account — it’s better to put new solar panels on top of a roof that will not need replacement in a few years. Other roofs, especially on lab buildings, simply had too much existing equipment on them to leave a large area free for solar panels.

Randa Ghattas, senior sustainability project manager, and Taya Dixon, assistant director for capital budgets and contracts within the Department of Facilities, spearheaded the project. Their initial assessment identified many buildings with significant solar potential, but it took the impetus of the Fast Forward plan to kick things into action. 

Even after winnowing down the list of campus buildings based on shading and the life cycle of roof replacements, there were still many other factors to consider. Some buildings that had ample roof space were of older construction that couldn’t bear the loads of a full solar installation without significant reconstruction. “That actually has proved trickier than we thought,” Ghattas says. For example, one building that seemed a good candidate, and already had some solar panels on it, proved unable to sustain the greater weight and wind loads of a full solar installation. Structural capacity, she says, turned out to be “probably the most important” factor in this case.

The roofs on the Student Center and the Dewey Library building were replaced in the last few years with the intention of adding solar panels later. And the two newer buildings were designed from the start with solar in mind, even though the panels were not part of the initial construction. “The designs were built into them to accommodate solar,” Dixon says, “so those were easy options for us because we knew the buildings were solar-ready and could support solar being integrated into their systems, both the electrical system and the structural system of the roof.”

But there were other considerations as well. The Student Center is considered a historically significant building, so the installation had to be designed to be invisible from street level, including the safety railing that had to be built around the solar array. That turned out not to be a problem: “It was fine for this building,” Ghattas says, because the geometry of the building and its roofs hides the safety railing from view below.

Each installation will connect directly to the building’s electrical system, and thus into the campus grid. The power they produce will be used in the buildings they are on, though none will be sufficient to fully power its building. Overall, the new installations, in addition to the existing ones on the MIT Sloan School of Management building (E62) and the Alumni Pool (57) and the planned array on the new Graduate Junction dorm (W87-W88), will be enough to power 5 to 10 percent of the buildings’ electric needs, and offset about 190 metric tons of carbon dioxide emissions each year, Ghattas says. This is equivalent to the electricity use of 35 homes annually.

Each building installation is expected to take just a couple of weeks. “We’re hopeful that we’re going to have everything installed and operational by the end of this calendar year,” she says.

Other buildings could be added in coming years, as their roof replacement cycles come around. With the lessons learned along the way in getting to this point, Ghattas says, “now that we have a system in place, hopefully it’s going to be much easier in the future.”

Higgins adds that “in parallel with the solar projects, we’re working on expanding electric vehicle charging stations and the electric vehicle fleet and reducing energy consumption in campus buildings.”

Besides the on-campus improvements, he says, “MIT is focused on both the local and the global.” In addition to solar installations on campus buildings, which can only mitigate a small portion of campus emissions, “large-scale aggregation partnerships are key to moving the actual market landscape for adding cleaner energy generation to power grids,” which must ultimately lead to zero emissions, he says. “We are spurring the development of new utility-grade renewable energy facilities in regions with high carbon-intensive electrical grids. These projects have an immediate and significant impact in the urgently needed decarbonization of regional power grids.”

Higgins says that other technologies, strategies, and practices are being evaluated for heating, cooling, and power for the campus, “with zero carbon emissions by 2050, utilizing cleaner energy sources.” He adds that these campus initiatives “are part of MIT’s larger Climate Project, aiming to drive progress both on campus and beyond, advancing broader partnerships, new market models, and informing approaches to climate policy.” 

New AI tool generates realistic satellite images of future flooding

Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

MIT scientists have developed a method that generates satellite imagery depicting how a region would look after a potential future flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. For comparison, they also generated images using the AI model alone, without the physics-based flood model.

The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

The team’s method is a proof of concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. Before the method can be applied to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look there.

“The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

Generative adversarial images

The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

“Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between the real satellite imagery and the imagery synthesized by the first network.

Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features that shouldn’t be there in an otherwise realistic image.
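The adversarial push and pull described above can be sketched in a toy form. The sketch below is purely illustrative and assumes nothing from the study itself: the “images” are one-dimensional Gaussian samples, and both the generator and the discriminator are reduced to two scalar parameters each, updated with hand-derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real data": a stand-in for real satellite pixels, here a 1-D Gaussian.
def real_batch(n):
    return rng.normal(3.0, 0.5, size=n)

# Generator G(z) = theta0 + theta1 * z, two scalar parameters.
theta0, theta1 = 0.0, 1.0
# Discriminator D(x) = sigmoid(w * x + b), a tiny logistic classifier.
w, b = 0.0, 0.0

lr, batch, steps = 0.05, 64, 3000
for _ in range(steps):
    x_real = real_batch(batch)
    z = rng.normal(size=batch)
    x_fake = theta0 + theta1 * z

    # --- Discriminator step: learn to tell real from fake ---
    s_r = sigmoid(w * x_real + b)
    s_f = sigmoid(w * x_fake + b)
    grad_w = np.mean(-(1 - s_r) * x_real + s_f * x_fake)
    grad_b = np.mean(-(1 - s_r) + s_f)
    w -= lr * grad_w
    b -= lr * grad_b

    # --- Generator step: fool the updated discriminator ---
    z = rng.normal(size=batch)
    x_fake = theta0 + theta1 * z
    s_f = sigmoid(w * x_fake + b)
    g_common = -(1 - s_f) * w          # d(-log D)/d x_fake
    theta0 -= lr * np.mean(g_common)
    theta1 -= lr * np.mean(g_common * z)

print(round(theta0, 2))  # the generator's mean should drift toward the data mean (3.0)
```

With each step, the discriminator’s feedback pushes the generator’s output distribution toward the real one, which is the adversarial dynamic the article describes; with many more parameters and image-valued samples, the same loop can produce satellite-like imagery, hallucinations included.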

“Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, such that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

Flood hallucinations

In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions of how to prepare and potentially evacuate people out of harm’s way.

Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure and generates a visual, color-coded map of flood elevations over a particular region.
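The modeling pipeline described above amounts to a chain of stages, each consuming the previous stage’s output. The following is a minimal sketch of that chaining; every function, field name, and number here is hypothetical, and real track, wind, surge, and hydraulic models are vastly more complex.

```python
def track_model(storm):
    # Predict the hurricane's path and intensity from storm parameters.
    return {"path": storm["heading"], "intensity": storm["category"] * 20.0}

def wind_model(track):
    # Simulate the pattern and strength of winds over the local region.
    return {"wind_speed": track["intensity"] * 1.5}

def surge_model(wind):
    # Forecast how wind pushes nearby water onto land.
    return {"surge_height_m": wind["wind_speed"] / 40.0}

def hydraulic_model(surge, drainage_capacity_m=1.0):
    # Map flood elevation given local flood infrastructure.
    return {"flood_depth_m": max(0.0, surge["surge_height_m"] - drainage_capacity_m)}

def flood_forecast(storm):
    # Chain the stages: track -> wind -> surge -> hydraulics.
    return hydraulic_model(surge_model(wind_model(track_model(storm))))

print(flood_forecast({"heading": "NW", "category": 4}))  # prints {'flood_depth_m': 2.0}
```

The final stage’s output is what gets rendered as the color-coded flood map; the researchers’ question is whether satellite-style imagery can sit on top of this same chain.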

“The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images of Houston taken before and after Hurricane Harvey. When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.
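The study conditions the GAN on the flood model’s output; a simpler way to see what pixel-by-pixel agreement with a physics model buys is to treat the physics forecast as a binary flood-extent mask and check generated pixels against it. Everything below, including the tiny 3×3 “images,” is a made-up illustration of that consistency check, not the paper’s actual conditioning mechanism.

```python
import numpy as np

# Physics-based flood model says flooding is possible only in these pixels.
physics_extent = np.array([[1, 1, 0],
                           [1, 0, 0],
                           [0, 0, 0]], dtype=bool)

# Raw GAN output: mostly matches, but one flood pixel lands on high ground.
generated_raw = np.array([[1, 1, 0],
                          [1, 0, 1],   # the (1, 2) pixel is a hallucination
                          [0, 0, 0]], dtype=bool)

# Hallucinations: generated flood where physics says flooding is impossible.
hallucinated = generated_raw & ~physics_extent
print(int(hallucinated.sum()))  # prints 1

# Physics-constrained output: keep only physically possible flood pixels.
constrained = generated_raw & physics_extent
assert not (constrained & ~physics_extent).any()
```

In the actual method the physics model shapes generation itself rather than filtering it afterward, but the goal is the same: every flooded pixel in the output should be one the flood model deems possible.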

“We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.

Building an understanding of how drivers interact with emerging vehicle technologies

As the global conversation around assisted and automated vehicles (AVs) evolves, the MIT Advanced Vehicle Technology (AVT) Consortium continues to lead cutting-edge research aimed at understanding how drivers interact with emerging vehicle technologies. 

Since its launch in 2015, the AVT Consortium — a global academic-industry collaboration — has developed a data-driven understanding of how drivers respond to commercially available vehicle technologies, studying consumer attitudes and driving behavior across diverse populations and creating unique, multifaceted, world-leading datasets that support a wide range of research applications. This research offers critical insights into consumer behaviors, system performance, and how technology impacts real-world driving, helping to shape the future of transportation.

“Cultivating public trust in AI will be the most significant factor for the future of assisted and automated vehicles,” says Bryan Reimer, AVT Consortium founder and a research engineer at the MIT AgeLab within the MIT Center for Transportation and Logistics (CTL). “Without trust, technology adoption will never reach its potential, and may stall. Our research aims to bridge this gap by understanding driver behavior and translating those insights into safer, more intuitive systems that enable safer, convenient, comfortable, sustainable and economical mobility.”

New insights from the J.D. Power Mobility Confidence Index Study

A recent Mobility Confidence Index Study, conducted in collaboration with J.D. Power, indicated that public readiness for autonomous vehicles has increased modestly after a two-year decline. While this shift is important for the broader adoption of AV technology, it is just one element of the ongoing research within the AVT Consortium, which is currently co-directed by Reimer, Bruce Mehler, and Pnina Gershon. The study, which surveys consumer attitudes toward autonomous vehicles, reflects a growing interest in the technology — but consumer perceptions are only part of the complex equation that AVT researchers are working to solve.

“The modest increase in AV readiness is encouraging,” Reimer notes. “But building lasting trust requires us to go deeper, examining how drivers interact with these systems in practice. Trust isn’t built on interest alone; it’s about creating a reliable and understandable user experience that people feel safe engaging with over time. Trust can be eroded quickly.”

Building a data-driven understanding of driving behavior

The AVT Consortium’s approach involves gathering extensive real-world data on driver interactions across age groups, experience levels, and vehicles. The result is one of the largest datasets of its kind, enabling researchers to study system performance, driver behavior, and attitudes toward assistive and automated technologies. AVT research aims to compare and contrast how different manufacturers implement these technologies; the vision is that identifying the most promising attributes across manufacturers’ systems will help new designs evolve faster by building on what already works.

“The work of the AVT Consortium exemplifies MIT’s commitment to understanding the human side of technology,” says Yossi Sheffi, director of the CTL. “By diving deep into driver behavior and attitudes toward assisted and automated systems, the AVT Consortium is laying the groundwork for a future where these technologies are both trusted and widely adopted. This research is essential for creating a transportation landscape that is safe, efficient, and adaptable to real-world human needs.”

The AVT Consortium’s insights have proven valuable in helping to shape vehicle design to meet the needs of real-world drivers. By understanding how drivers respond to these technologies, the consortium’s work supports the development of AI systems that feel trustworthy and intuitive, addressing drivers’ concerns and fostering confidence in the technology.

“We’re not just interested in whether people are open to using assistive and automated vehicle technologies,” adds Reimer. “We’re digging into how they use these technologies, what challenges they encounter, and how we can improve system design to make these technologies safer and more intuitive for all drivers.”

An interdisciplinary approach to vehicle technology

The AVT Consortium is not just a research effort — it is a community that brings together academic researchers, industry partners, and consumer organizations. By working with stakeholders from across the automotive, technology, and insurance industries, the AVT team can explore the full range of challenges and opportunities presented by emerging vehicle technologies, ensuring a comprehensive, practical, and multi-stakeholder approach in the rapidly evolving mobility landscape. This interdisciplinary framework is also crucial to understanding how AI-driven systems can support humans beyond the car.

“As vehicle technologies evolve, it’s crucial to understand how they intersect with the everyday experiences of drivers across all ages,” says Joe Coughlin, director of the MIT AgeLab. “The AVT Consortium’s approach, focusing on both data and human-centered insights, reflects a profound commitment to creating mobility systems that genuinely serve people. The AgeLab is proud to support this work, which is instrumental in making future vehicle systems intuitive, safe, and empowering for everyone.”

“The future of mobility relies on our ability to build systems that drivers can trust and feel comfortable using,” says Reimer. “Our mission at AVT is not only to develop a data-driven understanding of how drivers across the lifespan use and respond to various vehicle technologies, but also to provide actionable insights into consumer attitudes to enhance safety and usability.”

Shaping the future of mobility

As assistive and automated vehicles become more common on our roads, the work of the AVT Consortium will continue to play a critical role in shaping the future of transportation. By prioritizing data-driven insights and human-centered design, the AVT Consortium is helping to lay the foundation for a safer, smarter, and more trusted mobility future.

MIT CTL is a world leader in supply chain management research and education, with over 50 years of expertise. The center’s work spans industry partnerships, cutting-edge research, and the advancement of sustainable supply chain practices.

Consortium led by MIT, Harvard University, and Mass General Brigham spurs development of 408 MW of renewable energy

MIT is co-leading an effort to enable the development of two new large-scale renewable energy projects in regions with carbon-intensive electrical grids: Big Elm Solar in Bell County, Texas, came online this year, and the Bowman Wind Project in Bowman County, North Dakota, is expected to be operational in 2026. Together, they will add a combined 408 megawatts (MW) of new renewable energy capacity to the power grid. This work is a critical part of MIT’s strategy to achieve its goal of net-zero carbon emissions by 2026.

The Consortium for Climate Solutions, which includes MIT and 10 other Massachusetts organizations, seeks to eliminate close to 1 million metric tons of greenhouse gases each year — more than five times the annual direct emissions from MIT’s campus — by committing to purchase an estimated 1.3 million megawatt-hours of new solar and wind electricity generation annually.

“MIT has mobilized on multiple fronts to expedite solutions to climate change,” says Glen Shor, executive vice president and treasurer. “Catalyzing these large-scale renewable projects is an important part of our comprehensive efforts to reduce carbon emissions from generating energy. We are pleased to work in partnership with other local enterprises and organizations to amplify the impact we could achieve individually.”

The two new projects complement MIT’s existing 25-year power purchase agreement established with Summit Farms in 2016, which enabled the construction of a roughly 650-acre, 60 MW solar farm on farmland in North Carolina, leading to the early retirement of a coal-fired plant nearby. Its success has inspired other institutions to implement similar aggregation models.

A collective approach to enable global impact

MIT, Harvard University, and Mass General Brigham formed the consortium in 2020 to provide a structure to accelerate global emissions reductions through the development of large-scale renewable energy projects — accelerating and expanding the impact of each institution’s greenhouse gas reduction initiatives. As the project’s anchors, they collectively procured the largest volume of energy through the aggregation.  

The consortium engaged with PowerOptions, a nonprofit energy-buying consortium, which offered its members the opportunity to participate in the projects. The City of Cambridge, Beth Israel Lahey, Boston Children’s Hospital, Dana-Farber Cancer Institute, Tufts University, the Mass Convention Center Authority, the Museum of Fine Arts, and GBH later joined the consortium through PowerOptions. 
 
The consortium vetted over 125 potential projects against its rigorous project evaluation criteria. With faculty and MIT stakeholder input on a short list of the highest-ranking projects, it ultimately chose Bowman Wind and Big Elm Solar. Collectively, these two projects will achieve large greenhouse gas emissions reductions in two of the most carbon-intensive electrical grid regions in the United States and create clean energy generation sources to reduce negative health impacts.

“Enabling these projects in regions where the grids are most carbon-intensive allows them to have the greatest impact. We anticipate these projects will prevent two times more emissions per unit of generated electricity than would a similar-scale project in New England,” explains Vice President for Campus Services and Stewardship Joe Higgins.

Because all consortium institutions made significant 15-to-20-year financial commitments to buy electricity, the developer was able to secure the critical external financing needed to build the projects. Owned and operated by Apex Clean Energy, the projects will add new renewable electricity to the grid equivalent to powering 130,000 households annually, displacing over 950,000 metric tons of greenhouse gas emissions each year from highly carbon-intensive power plants in the region. 

Complementary decarbonization work underway 

In addition to investing in offsite renewable energy projects, many consortium members have developed strategies to reduce and eliminate their own direct emissions. At MIT, accomplishing this requires transformative change in how energy is generated, distributed, and used on campus. Efforts underway include the installation of solar panels on campus rooftops that will increase renewable energy generation four-fold by 2026; continuing to transition its heat distribution infrastructure from steam-based to hot water-based; utilizing design and construction that minimizes emissions and increases energy efficiency; employing AI-enabled sensors to optimize temperature set points and reduce energy use in buildings; and converting MIT’s vehicle fleet to all-electric vehicles while adding more electric car charging stations.

The Institute has also upgraded the Central Utilities Plant, which uses advanced co-generation technology to produce power that is up to 20 percent less carbon-intensive than that from the regional power grid. MIT is charting the course toward a next-generation district energy system, with a comprehensive planning initiative to revolutionize its campus energy infrastructure. The effort is exploring leading-edge technology, including industrial-scale heat pumps, geothermal exchange, micro-reactors, bio-based fuels, and green hydrogen derived from renewable sources as solutions to achieve full decarbonization of campus operations by 2050.

“At MIT, we are focused on decarbonizing our own campus as well as the role we can play in solving climate at the largest of scales, including supporting a cleaner grid in line with the call to triple renewables globally by 2030. By enabling these large-scale renewable projects, we can have an immediate and significant impact of reducing emissions through the urgently needed decarbonization of regional power grids,” says Julie Newman, MIT’s director of sustainability.  

A vision for U.S. science success

White House science advisor Arati Prabhakar expressed confidence in U.S. science and technology capacities during a talk on Wednesday about major issues the country must tackle.

“Let me start with the purpose of science and technology and innovation, which is to open possibilities so that we can achieve our great aspirations,” said Prabhakar, who is the director of the Office of Science and Technology Policy (OSTP) and a co-chair of the President’s Council of Advisors on Science and Technology (PCAST). 

“The aspirations that we have as a country today are as great as they have ever been,” she added.

Much of Prabhakar’s talk focused on three major issues in science and technology development: cancer prevention, climate change, and AI. In the process, she also emphasized the necessity for the U.S. to sustain its global leadership in research across domains of science and technology, which she called “one of America’s long-time strengths.”

“Ever since the end of the Second World War, we said we’re going in on basic research, we’re going to build our universities’ capacity to do it, we have an unparalleled basic research capacity, and we should always have that,” said Prabhakar.

“We have gotten better, I think, in recent years at commercializing technology from our basic research,” Prabhakar added, noting, “Capital moves when you can see profit and growth.” The Biden administration, she said, has invested in a variety of new ways for the public and private sector to work together to massively accelerate the movement of technology into the market.

Wednesday’s talk drew a capacity audience of nearly 300 people in MIT’s Wong Auditorium and was hosted by the Manufacturing@MIT Working Group. The event included introductory remarks by Suzanne Berger, an Institute Professor and a longtime expert on the innovation economy, and Nergis Mavalvala, dean of the School of Science and an astrophysicist and leader in gravitational-wave detection.

Introducing Mavalvala, Berger said the 2015 announcement of the discovery of gravitational waves “was the day I felt proudest and most elated to be a member of the MIT community,” and noted that U.S. government support helped make the research possible. Mavalvala, in turn, said MIT was “especially honored” to hear Prabhakar discuss leading-edge research and acknowledge the role of universities in strengthening the country’s science and technology sectors.

Prabhakar has extensive experience in both government and the private sector. She has been OSTP director and co-chair of PCAST since October of 2022. She served as director of the Defense Advanced Research Projects Agency (DARPA) from 2012 to 2017 and director of the National Institute of Standards and Technology (NIST) from 1993 to 1997.

She has also held executive positions at Raychem and Interval Research, and spent a decade at the investment firm U.S. Venture Partners. An engineer by training, Prabhakar earned a BS in electrical engineering from Texas Tech University in 1979, an MA in electrical engineering from Caltech in 1980, and a PhD in applied physics from Caltech in 1984.

Among other remarks about medicine, Prabhakar touted the Biden administration’s “Cancer Moonshot” program, which aims to cut the cancer death rate in half over the next 25 years through multiple approaches, from better health care provision and cancer detection to limiting public exposure to carcinogens. We should be striving, Prabhakar said, for “a future in which people take good health for granted and can get on with their lives.”

On AI, she addressed both the promise of the technology and concerns about it, saying, “I think it’s time for active steps to get on a path to where it actually allows people to do more and earn more.”

When it comes to climate change, Prabhakar said, “We all understand that the climate is going to change. But it’s in our hands how severe those changes get. And it’s possible that we can build a better future.” She noted the bipartisan infrastructure bill signed into law in 2021 and the Biden administration’s Inflation Reduction Act as important steps forward in this fight.

“Together those are making the single biggest investment anyone anywhere on the planet has ever made in the clean energy transition,” she said. “I used to feel hopeless about our ability to do that, and it gives me tremendous hope.”

After her talk, Prabhakar was joined onstage for a group discussion with the three co-presidents of the MIT Energy and Climate Club: Laurentiu Anton, a doctoral candidate in electrical engineering and computer science; Rosie Keller, an MBA candidate at the MIT Sloan School of Management; and Thomas Lee, a doctoral candidate in MIT’s Institute for Data, Systems, and Society.

Asked about the seemingly sagging public confidence in science today, Prabhakar offered a few thoughts.

“The first thing I would say is, don’t take it personally,” Prabhakar said, noting that any dip in public regard for science is less severe than the diminished public confidence in other institutions.

Adding some levity, she observed that in polling about which occupations are regarded as being desirable for a marriage partner to have, “scientist” still ranks highly.

“Scientists still do really well on that front, we’ve got that going for us,” she quipped.

More seriously, Prabhakar observed, rather than “preaching” at the public, scientists should recognize that “part of the job for us is to continue to be clear about what we know are the facts, and to present them clearly but humbly, and to be clear that we’re going to continue working to learn more.” At the same time, she continued, scientists can always reinforce that “oh, by the way, facts are helpful things that can actually help you make better choices about how the future turns out. I think that would be better in my view.”

Prabhakar said that her White House work had been guided, in part, by one of the overarching themes that President Biden has often reinforced.

“He thinks about America as a nation that can be described in a single word, and that word is ‘possibilities,’” she said. “And that idea, that is such a big idea, it lights me up. I think of what we do in the world of science and technology and innovation as really part and parcel of creating those possibilities.”

Ultimately, Prabhakar said, at all times and all points in American history, scientists and technologists must continue “to prove once more that when people come together and do this work … we do it in a way that builds opportunity and expands opportunity for everyone in our country. I think this is the great privilege we all have in the work we do, and it’s also our responsibility.”

Catherine Wolfram: High-energy scholar

In the mid-2000s, Catherine Wolfram PhD ’96 reached what she calls “an inflection point” in her career. After about a decade of studying U.S. electricity markets, she had come to recognize that “you couldn’t study the energy industries without thinking about climate mitigation,” as she puts it.

At the same time, Wolfram understood that the trajectory of energy use in the developing world was a massively important part of the climate picture. To get a comprehensive grasp on global dynamics, she says, “I realized I needed to start thinking about the rest of the world.”

An accomplished scholar and policy expert, Wolfram has been on the faculty at Harvard University, the University of California at Berkeley — and now MIT, where she is the William Barton Rogers Professor in Energy. She has also served as deputy assistant secretary for climate and energy economics at the U.S. Treasury.

Yet even leading experts want to keep learning. So, when she hit that inflection point, Wolfram started carving out a new phase of her research career.

“One of the things I love about being an academic is, I could just decide to do that,” Wolfram says. “I didn’t need to check with a boss. I could just pivot my career to being more focused to thinking about energy in the developing world.”

Over the last decade, Wolfram has published a wide array of original studies about energy consumption in the developing world. From Kenya to Mexico to South Asia, she has shed light on the dynamics of economic growth and energy consumption — while spending some of that time serving the government too. Last year, Wolfram joined the faculty of the MIT Sloan School of Management, where her work bolsters the Institute’s growing effort to combat climate change.

Studying at MIT

Wolfram largely grew up in Minnesota, where her father was a legal scholar; he moved to Cornell University around the time she started high school. As an undergraduate, she majored in economics at Harvard University, and after graduation she worked first for a consultant, then for the Massachusetts Department of Public Utilities, the agency regulating energy rates. 

In the latter job, Wolfram kept noticing that people often cited the research of MIT scholars Paul Joskow (now the Elizabeth and James Killian Professor of Economics Emeritus in MIT’s Department of Economics) and Richard Schmalensee (a former dean of the MIT Sloan School of Management and now the Howard W. Johnson Professor of Management Emeritus). Seeing how consequential economics research could be for policymaking, Wolfram decided to get a PhD in the field and was accepted into MIT’s doctoral program.

“I went into graduate school with an unusually specific view of what I wanted to do,” Wolfram says. “I wanted to work with Paul Joskow and Dick Schmalensee on electricity markets, and that’s how I wound up here.”

At MIT, Wolfram also ended up working extensively with Nancy Rose, the Charles P. Kindleberger Professor of Applied Economics and a former head of the Department of Economics, who helped oversee Wolfram’s thesis; Rose has extensively studied market regulation as well.

Wolfram’s dissertation research largely focused on price-setting behavior in the U.K.’s newly deregulated electricity markets, which, it turned out, applied handily to the U.S., where a similar process was taking place. “I was fortunate because this was around the time California was thinking about restructuring, as it was known,” Wolfram says. She spent four years on the faculty at Harvard, then moved to UC Berkeley. Wolfram’s studies have shown that deregulation has had some medium-term benefits, for instance in making power plants operate more efficiently.

Turning on the AC

By around 2010, though, Wolfram began shifting her scholarly focus in earnest, conducting innovative studies about energy in the developing world. One strand of her research has centered on Kenya, to better understand how more energy access for people without electricity might fit into growth in the developing world.

In this case, Wolfram’s perhaps surprising conclusion is that electrification itself is not a magic ticket to prosperity; people without electricity are more eager to adopt it when they have a practical economic need for it. Meanwhile, they have other essential needs that are not necessarily being addressed.

“The 800 million people in the world who don’t have electricity also don’t have access to good health care or running water,” Wolfram says. “Giving them better housing infrastructure is important, and harder to tackle. It’s not clear that bringing people electricity alone is the single most useful thing from a development perspective. Although electricity is a super-important component of modern living.”

Wolfram has even delved into topics such as air conditioner use in the developing world — an important driver of energy use. As her research shows, many countries, with a combined population far bigger than the U.S., are among the fastest-growing adopters of air conditioners and have an even greater need for them, based on their climates. Adoption of air conditioning within those countries also is characterized by marked economic inequality.

From early 2021 until late 2022, Wolfram also served in the administration of President Joe Biden, where her work centered on global energy issues. Among other things, Wolfram was part of the team working out a price-cap policy for Russian oil exports, a concept that she thinks could be applied to many other products globally. She notes, though, that working with countries heavily dependent on exporting energy materials will always require careful engagement.

“We need to be mindful of that dependence and importance as we go through this massive effort to decarbonize the energy sector and shift it to a whole new paradigm,” Wolfram says.

At MIT again

Still, she notes, the world does need a whole new energy paradigm, and fast. Her arrival at MIT overlaps with the emergence of a new Institute-wide effort, the Climate Project at MIT, that aims to accelerate and scale climate solutions and good climate policy, including through the new Climate Policy Center at MIT Sloan. That kind of effort, Wolfram says, matters to her.

“It’s part of why I’ve come to MIT,” Wolfram says. “Technology will be one part of the climate solution, but I do think an innovative mindset, how can we think about doing things better, can be productively applied to climate policy.” On being at MIT, she adds: “It’s great, it’s awesome. One of the things that pleasantly surprised me is how tight-knit and friendly the MIT faculty all are, and how many interactions I’ve had with people from other departments.”

Wolfram has also been enjoying her teaching at MIT, and will be offering a large class in spring 2025, 15.016 (Climate and Energy in the Global Economy), that she debuted this past academic year.

“It’s super fun to have students from around the world, who have personal stories and knowledge of energy systems in their countries and can contribute to our discussions,” she says.

When it comes to tackling climate change, many things seem daunting. But there is still a world of knowledge to be acquired while we try to keep the planet from overheating, and Wolfram has a can-do attitude about learning more and applying those lessons.

“We’ve made a lot of progress,” Wolfram says. “But we still have a lot more to do.”