New general law governs fracture energy of networks across materials and length scales

Materials like car tires, human tissues, and spider webs are diverse in composition, but all contain networks of interconnected strands. A long-standing question about the durability of these materials is: How much energy does it take to fracture these diverse networks? A recently published paper by MIT researchers offers new insights.

“Our findings reveal a simple, general law that governs the fracture energy of networks across various materials and length scales,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor and professor of mechanical engineering and civil and environmental engineering at MIT. “This discovery has significant implications for the design of new materials, structures, and metamaterials, allowing for the creation of systems that are incredibly tough, soft, and stretchable.”

Despite an established understanding of the importance of failure resistance in the design of such networks, no existing physical model effectively linked strand mechanics and connectivity to predict bulk fracture — until now. This new research reveals a universal scaling law that bridges length scales and makes it possible to predict the intrinsic fracture energy of diverse networks.

“This theory helps us predict how much energy it takes to break these networks by advancing a crack,” says graduate student Chase Hartquist, one of the paper’s lead authors. “It turns out that you can design tougher versions of these materials by making the strands longer, more stretchable, or resistant to higher forces before breaking.”
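The article doesn't reproduce the scaling law itself, but the design levers Hartquist lists (strand strength, stretchability, and length) map naturally onto a classic Lake-Thomas-style estimate, in which the intrinsic fracture energy is the energy needed to rupture the single layer of strands crossing the crack plane. Below is a minimal Python sketch of that kind of estimate; the triangular force-extension approximation and every parameter value are illustrative assumptions, not numbers from the paper.

```python
# Toy Lake-Thomas-style estimate of intrinsic fracture energy.
# Illustrative only: this is NOT the scaling law from the PRX paper,
# and all parameter values below are hypothetical.

def strand_rupture_energy(f_break, l_rest, stretch_at_break):
    """Energy (J) to rupture one strand, approximating the
    force-extension curve as a triangle up to the breaking point."""
    extension = l_rest * (stretch_at_break - 1.0)
    return 0.5 * f_break * extension

def intrinsic_fracture_energy(f_break, l_rest, stretch_at_break, areal_density):
    """Energy per unit crack area (J/m^2): advancing the crack must
    rupture the layer of strands crossing the crack plane."""
    return areal_density * strand_rupture_energy(f_break, l_rest, stretch_at_break)

# Hypothetical baseline: 1 nN breaking force, 100 nm strands,
# 3x stretch at break, 1e16 strands crossing each m^2 of crack plane.
base = intrinsic_fracture_energy(1e-9, 100e-9, 3.0, 1e16)
longer = intrinsic_fracture_energy(1e-9, 200e-9, 3.0, 1e16)    # longer strands
stronger = intrinsic_fracture_energy(2e-9, 100e-9, 3.0, 1e16)  # stronger strands
print(f"baseline {base:.1f}, longer {longer:.1f}, stronger {stronger:.1f} J/m^2")
```

Under these assumptions, doubling either the strand length or the breaking force doubles the estimated fracture energy, which is the qualitative behavior the researchers describe.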

To validate their results, the team 3D-printed a giant, stretchable network, allowing them to demonstrate fracture properties in practice. They found that, despite their differences, the networks all followed a simple and predictable rule. Beyond the changes to the strands themselves, a network can also be toughened by connecting the strands into larger loops.

“By adjusting these properties, car tires could last longer, tissues could better resist injury, and spider webs could become more durable,” says Hartquist.

Shu Wang, a postdoc in Zhao’s lab and fellow lead author of the paper, called the research findings “an extremely fulfilling moment … it meant that the same rules could be applied to describe a wide variety of materials, making it easier to design the best material for a given situation.”

The researchers explain that this work represents progress in an exciting and emerging field called “architected materials,” where the structure within the material itself gives it unique properties. They say the discovery sheds light on how to make these materials even tougher, by designing the segments within the architecture to be stronger and more stretchable. The strategy is adaptable for materials across fields and can be applied to improve the durability of soft robotic actuators, enhance the toughness of engineered tissues, or even create resilient lattices for aerospace technology.

Their open-access paper, “Scaling Law for Intrinsic Fracture Energy of Diverse Stretchable Networks,” is available now in Physical Review X, a leading journal in interdisciplinary physics.

“Forever grateful for MIT Open Learning for making knowledge accessible and fostering a network of curious minds”

Bia Adams, a London-based neuropsychologist, former professional ballet dancer, and MIT Open Learning learner, has built her career across decades of diverse, interconnected experiences and an emphasis on lifelong learning. She earned her bachelor’s degree in clinical and behavioral psychology, and then worked as a psychologist and therapist for several years before taking a sabbatical in her late 20s to study at the London Contemporary Dance School and The Royal Ballet — fulfilling a long-time dream.

“In hindsight, I think what drew me most to ballet was not so much the form itself,” says Adams, “but more of a subconscious desire to make sense of my body moving through space and time, my emotions and motivations — all within a discipline that is rigorous, meticulous, and routine-based. It’s an endeavor to make sense of the world and myself.”

After acquiring some dance-related injuries, Adams returned to psychology. She completed an online certificate program specializing in medical neuroscience via Duke University, focusing on how pathology arises out of the way the brain computes information and generates behavior.

In addition to her clinical practice, she has also worked at a data science and AI consultancy for neural network research.

In 2022, in search of new things to learn and apply to both her work and personal life, Adams discovered MIT OpenCourseWare within MIT Open Learning. She was drawn to class 8.04 (Quantum Physics I), hoping to finally gain some understanding of complex topics she had previously tried to teach herself with limited success. She credits the course’s lectures, taught by Allan Adams (physicist and principal investigator of the MIT Future Ocean Lab), with finally making these challenging topics approachable.

“I still talk to my friends at length about exciting moments in these lectures,” says Adams. “After the first class, I was hooked.”

Adams’s journey through MIT Open Learning’s educational resources quickly led to a deeper interest in computational neuroscience. She learned how to use tools from mathematics and computer science to better understand the brain, nervous system, and behavior.

She says she gained many new insights from class 6.034 (Artificial Intelligence), particularly in watching the late Professor Patrick Winston’s lectures. She appreciated learning more about the cognitive psychology aspect of AI, including how pioneers in the field looked at how the brain processes information and aimed to build programs that could solve problems. She further enhanced her understanding of AI with the Minds and Machines course on MITx Online, part of Open Learning.

Adams is now in the process of completing Introduction to Computer Science and Programming Using Python, taught by John Guttag; Eric Grimson, former interim vice president for Open Learning; and Ana Bell.

“I am multilingual, and I think the way my brain processes code is similar to the way computers code,” says Adams. “I find learning to code similar to learning a foreign language: both exhilarating and intimidating. Learning the rules, deciphering the syntax, and building my own world through code is one of the most fascinating challenges of my life.”

Adams is also pursuing a master’s degree at Duke University and University College London, focusing on the neurobiology of sleep and looking particularly at how the biochemistry of the brain can affect this critical function. As a complement to this research, she is currently exploring class 9.40 (Introduction to Neural Computation), taught by Michale Fee and Daniel Zysman, which introduces quantitative approaches to understanding brain and cognitive functions and neurons and covers foundational quantitative tools of data analysis in neuroscience.

Beyond the courses related directly to her field, MIT Open Learning also gave Adams an opportunity to explore other academic areas. She delved into philosophy for the first time, taking Paradox and Infinity, taught by Professor Agustín Rayo, the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences, and Digital Learning Lab Fellow David Balcarras, which looks at the intersection of philosophy and mathematics. She was also able to explore immunology, a subject of long-standing interest to her, in more depth through Professor Adam Martin’s lectures in class 7.016 (Introductory Biology).

“I am forever grateful for MIT Open Learning,” says Adams, “for making knowledge accessible and fostering a network of curious minds, all striving to share, expand, and apply this knowledge for the greater good.”

For MIT-WHOI Joint Program student Faith Brooks, the sky’s the limit

Faith Brooks, a graduate student in the MIT-WHOI Joint Program, has had a clear dream since the age of 4: to become a pilot.

“At around 8 years old, my neighbor knew I wanted to fly and showed me pictures of her dad landing a jet on an aircraft carrier, and I was immediately captivated,” says Brooks. Further inspired by her grandfather’s experience in the U.S. Navy (USN), and owing to a lifelong fascination with aviation, she knew nothing would stand in her way.

Brooks explored several different paths to becoming a pilot, but she says one conversation with her longtime mentor, Capt. Matt Skone, USN (Ret.), changed the trajectory of her life.

“He asked if I had heard of the Naval Academy,” she recalls. “At the time, I hadn’t … I immediately knew that that was where I wanted to go, and everything else I learned about the United States Naval Academy (USNA) reinforced that for me.”

In her “firstie” (senior) year at the USNA, Brooks was selected to go to Pensacola, Florida, and train to become a naval pilot as a student naval aviator, taking her one step closer to her dream. The USNA also helped guide her path to MIT. Her journey to joining the MIT-WHOI Joint Program began with the USNA’s professional knowledge curriculum, where she read about retired Capt. Wendy Lawrence SM ’88, a naval aviator and astronaut.

“Reading her bio prompted me to look into the program, and it sounded like the perfect program for me — where else could you get a better education in ocean engineering than MIT and Woods Hole Oceanographic Institution [WHOI]?”

In the MIT-WHOI Joint Program, Brooks is researching the impact of coastal pond breaching on preventing and mitigating harmful algal blooms. Her work focuses on the biannual mechanical breaching of Nantucket’s Sesachacha Pond to the ocean and the resultant impact on the pond’s water quality. This practice aims to improve water quality and mitigate harmful algal blooms (HABs), especially in summer.

Breaching in coastal ponds is a process that was initially used to enhance salinity for herring and shellfish habitats, but has since shifted to address water quality concerns. Traditionally, an excavator creates a breach in the pond, which naturally closes within one to five days, influenced by sediment transport and weather conditions. High winds and waves can accelerate sediment movement, limiting ocean water exchange and potentially increasing eutrophication, where excessive nutrients lead to dense plant growth and depletion of oxygen. In brackish water environments, harmful algal blooms are often driven by elevated nitrogen levels and higher temperatures, with higher nitrogen concentrations leading to more frequent and severe blooms as temperatures rise.

The Nantucket Natural Resources Department (NRD) has been collaborating with local homeowners to investigate the pond breaching process. Existing data are mainly anecdotal evidence and the NRD’s monthly sampling since 2022, which has not shown the expected decrease in eutrophication. Brooks’s research focuses on collecting data before, during, and after a breach at two pond sites, assessing changes in the water to evaluate the practice’s effectiveness in improving water quality.

When Brooks isn’t knee-deep in the waters of the Sesachacha or training with her MIT Triathlon team, she takes additional opportunities to further her education. Last year, Brooks participated in the MIT-Portugal Marine Robotics Summer School in Faial, Azores, in Portugal, and immersed herself in a combination of hands-on design projects and lectures on a variety of topics related to oceanography, engineering, and marine robotics.

“My favorite part of the program was how interdisciplinary it was. We had a combination of mechanical engineers, electrical engineers, computer scientists, marine biologists, and oceanographers, and we had teams that included each of these specialties,” she says. “Our project involved designing a lander equipped with an underwater camera connected to a surface buoy that would transmit the footage. Having worked in mostly just engineering teams previously, it was a great experience to work with a more diverse group and I gained a much better understanding of how to design instruments and systems in accordance with what the marine biologists need.”

Brooks also earned her Part 107 Small Unmanned Aircraft System (UAS) license to operate the lab’s drone with a multispectral camera for her upcoming fieldwork. When she graduates from the MIT-WHOI Joint Program next September, she’ll report to the Naval Aviation Schools Command in Pensacola, Florida, to begin flight training.

While she says she’ll miss Boston’s charm and history, as well as the Shining Sea Bikeway on crisp fall days in Woods Hole, Brooks is looking forward to putting her uniform back on, and starting her naval career and flight school. The time Brooks has spent at MIT will support her in these future endeavors. She advises others interested in a similar path to focus on research within their areas of interest.

“The biggest lesson that I’ve learned from both research theses is that any research project will change over time, and it’s often a good idea to take a step back and look at how your work fits into the larger picture,” she says. “I couldn’t recommend doing research more; it’s such a great opportunity to dig into something that you’re interested in, and is also very fulfilling.” 

Bryan Reimer named to FAA Rulemaking Committee

Bryan Reimer, a research scientist at the MIT Center for Transportation and Logistics (CTL), and the founder and co-leader of the Advanced Vehicle Technology Consortium and the Human Factors Evaluator for Automotive Demand Consortium in the MIT AgeLab, has been appointed to the Task Force on Human Factors in Aviation Safety Aviation Rulemaking Committee (HF Task Force ARC). The HF Task Force ARC will provide recommendations to the U.S. Federal Aviation Administration (FAA) on the most significant human factors and the relative contribution of these factors to aviation safety risk.

Reimer, who has worked at MIT since 2003, joins a committee whose operational or academic expertise includes air carrier operations, air traffic control, pilot experience, aeronautical information, aircraft maintenance and mechanics, psychology, human-machine integration, and general aviation operations. Their recommendations to the FAA will help ensure safety for passengers, aircraft crews, and cargo for years to come. His appointment follows a year of serving on the Transforming Transportation Advisory Committee (TTAC) for the U.S. Department of Transportation, where he has taken on the role of vice chair of the Artificial Intelligence subcommittee. The TTAC recently released a report to the Secretary of Transportation in response to its charter.

As a mobility and technology futurist working at the intersection of technology, human behavior, and public policy, Reimer brings his expertise in human-machine integration, transportation safety, and AI to the committee. The committee’s charter, established by congressional mandate through the bipartisan FAA Reauthorization Act of 2024, specifically calls for some members whose expertise is in human factors but whose experience and training are not primarily in aviation, which is precisely the profile Reimer provides.

MIT CTL creates supply chain innovation and drives it into practice through the three pillars of research, outreach, and education, working with businesses, government, and nongovernmental organizations. As a longtime advocate of collaboration across public and private sectors to ensure consumers’ safety in transportation, Reimer’s particular expertise will help the FAA more broadly consider the human element of aviation safety. Yossi Sheffi, director of MIT CTL, says, “Aviation plays a critical role in the rapid and reliable transportation of goods across vast distances, making it essential for delivering time-sensitive products globally. We must understand the current human factors involved in this process to help ensure smooth operation of this indispensable service amid potential disruptions.”

Reimer recently discussed his research on an episode of The Ojo-Yoshida Report with Phil Koopman, a professor of electrical and computer engineering.

HF Task Force ARC members will serve a two-year term. The first ARC plenary meeting was held Jan. 15-16 in Washington.

Toward sustainable decarbonization of aviation in Latin America

According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal of limiting global warming to 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal of achieving net-zero carbon emissions by 2050. That target raises a question: Are there technologically feasible and economically viable strategies to reach it within the next 25 years?

To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050 and thereby double today’s aviation-related emissions in the region.

Chief among those options is the development and deployment of sustainable aviation fuel. Currently produced from low- and zero-carbon sources (feedstocks) including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, sustainable aviation fuel (SAF) has the potential to perform just as well as petroleum-based jet fuel with as little as 20 percent of its carbon footprint.

Focused on Brazil, Chile, Colombia, Ecuador, Mexico, and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely impact fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region to reach decarbonization targets. The team’s findings appear in a CS3 Special Report.

SAF emissions, costs, and sources

Under an ambitious emissions mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected aviation emissions to be reduced by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.

As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect operating costs of the aviation sector and overall aviation demand unless strategies to manage price increases are implemented.
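Those two figures bound the fuel-cost impact of blending. Here is a quick back-of-envelope calculation in Python, treating the report's 65 percent SAF share for 2050 as a simple volumetric blend; that blending arithmetic is an assumption made here for illustration, not a calculation from the report.

```python
# Blended fuel cost under a 65 percent SAF share, using the article's
# price figures. The volumetric-blend arithmetic is illustrative.

jet_fuel_price = 0.70            # USD/liter, conventional jet fuel (article figure)
saf_price_range = (1.11, 2.86)   # USD/liter, projected SAF cost range (article figure)
saf_share = 0.65                 # SAF fraction of fuel by volume (1.5 C scenario target)

for saf_price in saf_price_range:
    blended = saf_share * saf_price + (1 - saf_share) * jet_fuel_price
    premium = 100 * (blended / jet_fuel_price - 1)
    print(f"SAF at ${saf_price:.2f}/L -> blended ${blended:.2f}/L, "
          f"{premium:.0f}% above conventional fuel")
```

Even at the low end of the projected SAF cost range, the blended price comes out roughly 40 percent above today's conventional fuel, which underlines the researchers' caution about managing price increases.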

Under the 1.5 C scenario, the total cumulative capital investments required to build new SAF-producing plants between 2025 and 2050 were estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil). The researchers identified sugarcane- and corn-based ethanol-to-jet fuel, along with palm oil- and soybean-based hydro-processed esters and fatty acids, as the most promising near-term feedstock pathways for SAF production in Latin America.

“Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.

Recommendations

The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America.

They stressed that government policy and regulatory mechanisms will be needed to create sufficient conditions to attract SAF investments in the region and make SAF commercially viable as the aviation industry decarbonizes operations. Without appropriate policy frameworks, SAF requirements will affect the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build demand for establishing economies of scale, and develop innovative pathways for producing SAF.

Finally, the research team recommended a region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region will help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.

“Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. “SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”

Financial support for this study was provided by LATAM Airlines and Airbus.

The multifaceted challenge of powering AI

Artificial intelligence has become vital in business and financial dealings, medical care, technology development, research, and much more. Without realizing it, consumers rely on AI when they stream a video, do online banking, or perform an online search. Behind these capabilities are more than 10,000 data centers globally, each one a huge warehouse containing thousands of computer servers and other infrastructure for storing, managing, and processing data. There are now over 5,000 data centers in the United States, and new ones are being built every day — in the U.S. and worldwide. Often dozens are clustered together right near where people live, attracted by policies that provide tax breaks and other incentives, and by what looks like abundant electricity.

And data centers do consume huge amounts of electricity. U.S. data centers consumed more than 4 percent of the country’s total electricity in 2023, and by 2030 that fraction could rise to 9 percent, according to the Electric Power Research Institute. A single large data center can consume as much electricity as 50,000 homes.
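Those consumption figures are easier to compare when converted to a common unit. The short sanity check below does the arithmetic; total U.S. electricity consumption (roughly 4,000 TWh per year) and average household usage (roughly 10,500 kWh per year) are outside ballpark figures assumed here, not numbers from the article.

```python
# Converting the article's data-center figures to terawatt-hours.
# us_total_twh and home_kwh_per_year are assumed ballpark values.

us_total_twh = 4000          # approx. total U.S. consumption per year (assumption)
home_kwh_per_year = 10_500   # typical U.S. household per year (assumption)

dc_2023 = us_total_twh * 0.04    # "more than 4 percent" in 2023 (EPRI figure)
dc_2030 = us_total_twh * 0.09    # possible 9 percent share by 2030 (EPRI figure)
one_large_dc = 50_000 * home_kwh_per_year / 1e9  # "as much as 50,000 homes", in TWh

print(f"2023: ~{dc_2023:.0f} TWh/yr; 2030: ~{dc_2030:.0f} TWh/yr")
print(f"one large data center: ~{one_large_dc:.2f} TWh/yr")
```

On these assumptions, U.S. data centers consumed on the order of 160 TWh in 2023, and a single large facility runs about half a terawatt-hour per year.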

The sudden need for so many data centers presents a massive challenge to the technology and energy industries, government policymakers, and everyday consumers. Research scientists and faculty members at the MIT Energy Initiative (MITEI) are exploring multiple facets of this problem — from sourcing power to grid improvement to analytical tools that increase efficiency, and more. Data centers have quickly become the energy issue of our day.

Unexpected demand brings unexpected solutions

Several companies that use data centers to provide cloud computing and data management services are announcing some surprising steps to deliver all that electricity. Proposals include building their own small nuclear plants near their data centers and even restarting one of the undamaged nuclear reactors at Three Mile Island, which has been shuttered since 2019. (A different reactor at that plant partially melted down in 1979, causing the nation’s worst nuclear power accident.) Already the need to power AI is causing delays in the planned shutdown of some coal-fired power plants and raising prices for residential consumers. Meeting the needs of data centers is not only stressing power grids, but also setting back the transition to clean energy needed to stop climate change.

There are many aspects to the data center problem from a power perspective. Here are some that MIT researchers are focusing on, and why they’re important.

An unprecedented surge in the demand for electricity

“In the past, computing was not a significant user of electricity,” says William H. Green, director of MITEI and the Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering. “Electricity was used for running industrial processes and powering household devices such as air conditioners and lights, and more recently for powering heat pumps and charging electric cars. But now all of a sudden, electricity used for computing in general, and by data centers in particular, is becoming a gigantic new demand that no one anticipated.”

Why the lack of foresight? Usually, demand for electric power increases by roughly half a percent per year, and utilities bring in new power generators and make other investments as needed to meet the expected new demand. But the data centers now coming online are creating unprecedented leaps in demand that operators didn’t see coming. In addition, the new demand is constant. It’s critical that a data center provides its services all day, every day. There can be no interruptions in processing large datasets, accessing stored data, and running the cooling equipment needed to keep all the packed-together computers churning away without overheating.

Moreover, even if enough electricity is generated, getting it to where it’s needed may be a problem, explains Deepjyoti Deka, a MITEI research scientist. “A grid is a network-wide operation, and the grid operator may have sufficient generation at another location or even elsewhere in the country, but the wires may not have sufficient capacity to carry the electricity to where it’s wanted.” So transmission capacity must be expanded — and, says Deka, that’s a slow process.

Then there’s the “interconnection queue.” Sometimes, adding either a new user (a “load”) or a new generator to an existing grid can cause instabilities or other problems for everyone else already on the grid. In that situation, bringing a new data center online may be delayed. Enough delays can result in new loads or generators having to stand in line and wait for their turn. Right now, much of the interconnection queue is already filled up with new solar and wind projects. The delay is now about five years. Meeting the demand from newly installed data centers while ensuring that the quality of service elsewhere is not hampered is a problem that needs to be addressed.

Finding clean electricity sources

To further complicate the challenge, many companies — including so-called “hyperscalers” such as Google, Microsoft, and Amazon — have made public commitments to having net-zero carbon emissions within the next 10 years. Many have been making strides toward achieving their clean-energy goals by buying “power purchase agreements.” They sign a contract to buy electricity from, say, a solar or wind facility, sometimes providing funding for the facility to be built. But that approach to accessing clean energy has its limits when faced with the extreme electricity demand of a data center.

Meanwhile, soaring power consumption is delaying coal plant closures in many states. There are simply not enough sources of renewable energy to serve both the hyperscalers and the existing users, including individual consumers. As a result, conventional plants fired by fossil fuels such as coal are needed more than ever.

As the hyperscalers look for sources of clean energy for their data centers, one option could be to build their own wind and solar installations. But such facilities would generate electricity only intermittently. Given the need for uninterrupted power, the data center would have to maintain energy storage units, which are expensive. They could instead rely on natural gas or diesel generators for backup power — but those devices would need to be coupled with equipment to capture the carbon emissions, plus a nearby site for permanently disposing of the captured carbon.

Because of such complications, several of the hyperscalers are turning to nuclear power. As Green notes, “Nuclear energy is well matched to the demand of data centers, because nuclear plants can generate lots of power reliably, without interruption.”

In a much-publicized move in September, Microsoft signed a deal to buy power for 20 years after Constellation Energy reopens one of the undamaged reactors at its now-shuttered nuclear plant at Three Mile Island, the site of the 1979 accident described above. If approved by regulators, Constellation will bring that reactor online by 2028, with Microsoft buying all of the power it produces. Amazon also reached a deal to purchase power produced by another nuclear plant threatened with closure due to financial troubles. And in early December, Meta released a request for proposals to identify nuclear energy developers to help the company meet its AI needs and its sustainability goals.

Other nuclear news focuses on small modular nuclear reactors (SMRs): factory-built, modular power plants that could be installed near data centers, potentially without the cost overruns and delays often experienced in building large plants. Google recently ordered a fleet of SMRs to generate the power needed by its data centers; the first is expected to be completed by 2030 and the remainder by 2035.

Some hyperscalers are betting on new technologies. For example, Google is pursuing next-generation geothermal projects, and Microsoft has signed a contract to purchase electricity from a startup’s fusion power plant beginning in 2028 — even though the fusion technology hasn’t yet been demonstrated.

Reducing electricity demand

Other approaches to providing sufficient clean electricity focus on making the data center and the operations it houses more energy-efficient, performing the same computing tasks with less power. Faster computer chips and optimized, less energy-intensive algorithms are already helping to reduce the load, as well as the heat generated.

Another idea being tried involves shifting computing tasks to times and places where carbon-free energy is available on the grid. Deka explains: “If a task doesn’t have to be completed immediately, but rather by a certain deadline, can it be delayed or moved to a data center elsewhere in the U.S. or overseas where electricity is more abundant, cheaper, and/or cleaner? This approach is known as ‘carbon-aware computing.’” We’re not yet sure whether every task can be moved or delayed easily, says Deka. “If you think of a generative AI-based task, can it easily be separated into small tasks that can be taken to different parts of the country, solved using clean energy, and then be brought back together? What is the cost of doing this kind of division of tasks?”
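As a concrete illustration of the idea Deka describes, a carbon-aware scheduler can rank candidate execution slots by forecast grid carbon intensity and pick the cleanest one that still meets the job's deadline. The sketch below is minimal and hypothetical: the region names, intensity values, and data structures are invented for illustration.

```python
# Minimal sketch of carbon-aware scheduling for a deferrable job.
# All regions and carbon-intensity forecasts below are hypothetical.

from dataclasses import dataclass

@dataclass
class Slot:
    region: str
    hours_from_now: int
    g_co2_per_kwh: float   # forecast grid carbon intensity

def schedule(job_kwh: float, deadline_hours: int, forecast: list) -> Slot:
    """Pick the slot with the lowest carbon intensity among those
    that start before the job's deadline."""
    feasible = [s for s in forecast if s.hours_from_now <= deadline_hours]
    best = min(feasible, key=lambda s: s.g_co2_per_kwh)
    kg = job_kwh * best.g_co2_per_kwh / 1000
    print(f"run in {best.region} at t+{best.hours_from_now}h -> {kg:.0f} kg CO2")
    return best

forecast = [
    Slot("east", 0, 420.0),    # fossil-heavy evening mix
    Slot("east", 10, 310.0),   # overnight wind
    Slot("west", 6, 120.0),    # midday solar
]
schedule(job_kwh=500, deadline_hours=12, forecast=forecast)  # picks "west" at t+6h
```

The open questions Deka raises, such as whether a generative AI job can be split, moved, and reassembled cheaply, are exactly what such a scheduler would have to fold into its cost function.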

That approach is, of course, limited by the problem of the interconnection queue. It’s difficult to access clean energy in another region or state. But efforts are under way to ease the regulatory framework to make sure that critical interconnections can be developed more quickly and easily.

What about the neighbors?

A major concern running through all the options for powering data centers is the impact on residential energy consumers. When a data center comes into a neighborhood, there are not only aesthetic concerns but also more practical worries. Will the local electricity service become less reliable? Where will the new transmission lines be located? And who will pay for the new generators, upgrades to existing equipment, and so on? When new manufacturing facilities or industrial plants go into a neighborhood, the downsides are generally offset by the availability of new jobs. Not so with a data center, which may require just a couple dozen employees.

There are standard rules about how maintenance and upgrade costs are shared and allocated. But the situation is totally changed by the presence of a new data center. As a result, utilities now need to rethink their traditional rate structures so as not to place an undue burden on residents to pay for the infrastructure changes needed to host data centers.

MIT’s contributions

At MIT, researchers are thinking about and exploring a range of options for tackling the problem of providing clean power to data centers. For example, they are investigating architectural designs that will use natural ventilation to facilitate cooling, equipment layouts that will permit better airflow and power distribution, and highly energy-efficient air conditioning systems based on novel materials. They are creating new analytical tools for evaluating the impact of data center deployments on the U.S. power system and for finding the most efficient ways to provide the facilities with clean energy. Other work looks at how to match the output of small nuclear reactors to the needs of a data center, and how to speed up the construction of such reactors.

MIT teams also focus on determining the best sources of backup power and long-duration storage, and on developing decision support systems for locating proposed new data centers. Such systems take into account the availability of electric power and water, regulatory considerations, and even the potential for putting the often-significant waste heat to use, for example, to heat nearby buildings. Technology development projects include designing faster, more efficient computer chips and more energy-efficient computing algorithms.

In addition to providing leadership and funding for many research projects, MITEI is acting as a convenor, bringing together companies and stakeholders to address this issue. At MITEI’s 2024 Annual Research Conference, a panel of representatives from two hyperscalers and two companies that design and construct data centers discussed their challenges, possible solutions, and where MIT research could be most beneficial.

As data centers continue to be built, and computing continues to create an unprecedented increase in demand for electricity, Green says, scientists and engineers are in a race to provide the ideas, innovations, and technologies that can meet this need, and at the same time continue to advance the transition to a decarbonized energy system.

Student Program for Innovation in Science and Engineering is a launching pad toward possibility

When you ask MIT students to tell you the story of how they came to Cambridge, you might hear some common themes: a favorite science teacher; an interest in computers that turned into an obsession; a bedroom decorated with NASA posters and glow-in-the-dark stars.

But for a few, the road to MIT starts with an invitation to a special summer program: not a camp with canoes or cabins or campgrounds, but instead one taking place in classrooms and labs with discussions of Arduinos, variable scope and aliasing, and Michaelis-Menten enzyme kinetics. The classroom and labs are in Barbados at the Cave Hill campus of the University of the West Indies, and all the students are gifted Caribbean high schoolers, ages 16-18, who’ve been selected for the extremely competitive Student Program for Innovation in Science and Engineering (SPISE). Their summer will not include much time for leisure or lots of sleep; instead, they’ll be tackling a five-week high-intensity curriculum with courses in university-level calculus, physics, biochemistry, computer programming, electronics and entrepreneurship, including hands-on projects in the last three. For several students currently on campus, SPISE was their gateway to MIT.

“The full story is even bigger,” says Cardinal Warde, MIT professor of electrical engineering and founder of SPISE, who is originally from Barbados in the Caribbean. “Over the past 10 years, exactly 30 of the 245 students in total from the SPISE program have attended MIT as undergrads and/or graduate students.”

While many SPISE alumni have gone on to Harvard University, Stanford University, Caltech, Princeton University, Columbia University, the University of Pennsylvania, and other prestigious schools, the emphasis on science and technology creates a natural pipeline to MIT, whose faculty and instructors volunteered their time and expertise to help Warde design a curriculum that was both challenging and engaging.

Jacob White, the Cecil H. Green Professor in Electrical Engineering, was one of the first of those volunteers. “When Covid forced SPISE to run remotely, Professor Warde felt it was critical to continue having hands-on engineering labs, and sought my help,” White explains. “Kits were cobbled together using EECS-donated microcontroller boards, motors and magnets; Dinah Sah (the SPISE director) got those kits to students spread over half-a-dozen islands.” White and several of his graduate students collaborated to write a curriculum that would give the students enough grounding in fundamentals to empower them to create their own designs.

In 2021, students worked from home due to the Covid-19 pandemic. The rigor of SPISE projects, however, remained high, thanks to the curriculum contributions of EECS Professor Jacob White, among others. Here, students show off their maglev projects.
Video: Department of Electrical Engineering and Computer Science

When SPISE returned to in-person education, Steve Leeb, the Emanuel E. Landsman (1958) Professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE), joined the effort, having been inspired by the challenge of teaching electronics remotely.

“SPISE is exactly the kind of opportunity we’re looking for in the RLE educational outreach programs: bright, enthusiastic young folks who would benefit from new perspectives on science and engineering — a community of folks where we can bring new perspectives, share energy and excitement, and, ideally, make lifelong connections to our academic programs here at MIT. It’s a natural fit that benefits us all,” says Leeb, who, together with his graduate students, adapted the portable “take-home” Electronics FIRST curriculum pioneered at MIT and taught in course 6.2030. “The Electronics FIRST exercises and lectures are designed to connect electronic circuit techniques — digital gates, microcontrollers, and other electronics technologies — that are recognizable as elements of commercial products,” says Leeb. “So the projects naturally engage students in building with components that have a connection to commercial products and product ideas. This flows naturally into a ‘final project’ that the students create in SPISE, a product of their own conception, for example a music synthesizer.”

Crucially, the curriculum isn’t simplified for the high school students. “We adapted the projects to fit the different program length — SPISE is shorter than a full MIT term,” says Leeb. “We did not reduce the rigor or challenge of the activities, and, in fact, have brought new ideas from the SPISE students back to campus to improve 6.2030.”

Departments beyond EECS pitched in to develop SPISE, with major teaching contributions coming from the Department of Physics, where Lecturer Alex Shvonski, Senior Technical Instructor Caleb Bonyun, and Senior Technical Instructor Joshua Wolfe, who also manages the Physics Instructional Resource Lab, collaborated on developing hands-on projects and on the teaching for both Physics I and Calculus I courses. Additional supplies came from the MIT Sea Grant Program, which supplied underwater robots to SPISE for six consecutive years before the Covid-19 pandemic. (In the wake of the pandemic, the program pivoted to focus on embedded systems.)

But the core inspiration for SPISE doesn’t come from an academic department at all. “SPISE was based on a model that’s proven to work: MITES,” explains Ebony Hearn, executive director of the MIT Introduction to Technology, Engineering, and Science. “The program, which offers access and opportunity to intensive courses in science, technology, engineering, and math for talented high school students in every zip code, has helped thousands of students for nearly 50 years gain admission to top universities and pursue successful careers in STEM while being immersed in a community of caring mentors and leaders in the profession.”

The shared DNA of the two programs is no coincidence. Cardinal Warde has been the faculty director of MITES for the past 27 years, and took the lessons of five decades of the transformative pre-college experience into account when envisioning an equivalent program in the Caribbean. Much like MITES, SPISE encourages its participants to develop a sense of belonging in STEM and to picture the possibilities at top schools; over the years, the program has added sessions with admissions officers from MIT, Columbia, Princeton, and U Penn. “SPISE changed my perspective of myself,” says Chenise Harper, a first-year student at MIT who is currently interested in Course 6-5 (Electrical Engineering With Computing). “It gave me the confidence to apply to universities I thought were completely out of my reach.”

Harper’s trajectory is exactly what the designers of the program hoped for. “We have been very successful with the shorter-term goal of increasing the numbers of Caribbean students pursuing advanced degrees in STEM and grooming the next generation of STEM and business leaders in the Region,” says Dinah Sah ’81, director of the program (and wife of Cardinal Warde). “We have SPISE graduates who have, or are currently pursuing, graduate degrees at the top universities around the world, including (but not limited to) MIT, Stanford, Harvard, Princeton, Dartmouth, Yale, Johns Hopkins, Carnegie Mellon, and Oxford, including a Rhodes Scholar. We fully believe that SPISE graduates represent part of the next generation of STEM and business leaders in the Caribbean and that SPISE has played a significant role in their trajectories.”

Notably, the SPISE program also includes an element of entrepreneurship, encouraging students to envision tech-based solutions to problems in their own backyards. Keonna Simon, who hails from St. Vincent and the Grenadines, developed a business pitch with other SPISE participants for an innovative “reverse vending machine.” “In the Caribbean, tourism is a key contributor to the economy, but littering is an issue that detracts from the beauty of our islands and harms our abundant marine life,” explains Simon, now a junior majoring in Course 6-7 (Computer Science and Molecular Biology). “Our project aimed to tackle this by placing reverse vending machines in heavily polluted areas. People could deposit recyclable plastic bottles, and the machine would convert the weight of the plastic into cash rewards on a card, redeemable for discounts at supermarkets.”

One SPISE alum, Quilee Simeon, decided to work on a renewable energy system at SPISE as a way of addressing global warming’s effects on his homeland of St. Lucia. “I chose to work on the renewable energy project, where we designed and built a prototype wind turbine using low-resource materials like PVC pipes. It was exciting because I thought it had real applications to developing island states like ours, where we don’t have an abundance of the manufacturing materials used in larger countries, and we are disproportionately affected by climate change,” says Simeon. “So building cheap and effective renewable energy resources was, in my view, an important problem to tackle.”

As Simeon worked on his prototype turbine and tackled late nights with his new classmates at SPISE, he realized how different the experience was from his prior schooling. For most students, the summer program is a first time away from home — but for all, it is the first exposure to the firehose-like experience of tackling multiple college-level courses with simultaneous assignments and problem sets. “It was honestly a primer to MIT,” says Simeon. “They not only challenged us with rigorous math and science, but also provided guidance on college applications and explained the vast opportunities a STEM degree could unlock. SPISE changed my view of myself as a scholar, though probably in an unexpected way. I thought I was smart before attending SPISE, but I realized how much I didn’t know and how many things were lacking or wrong with the style of education I had grown used to (rote learning, memorization, etc.). SPISE made me realize that being a scholar isn’t just about consuming knowledge — it’s about creating and applying it.”

The difficulty of the SPISE curriculum is a deliberate choice, made to aid students in preparing for higher education, confirms Sah. “When we started SPISE in 2012, [we decided] to focus on teaching the fundamentals in each of the courses … The homework problems and the quizzes would require the application of these fundamentals to solving challenging problems. This is in distinct contrast to rote memorization of facts, which is the method of learning these students had generally been exposed to. So, yes, this was in fact a very deliberate choice, and a critical change that we wanted to bring to these very high-potential students in their approach to learning and thinking.”

MIT’s emphasis on creative, outside-the-box thinking was just the beginning of the culture shocks that awaited SPISE students who made the transition to an American university from the summer program. Many are surprised by the American students’ habit of referring to their professors by first name, which would be considered disrespectful at home. Conversely, small daily interactions in the Northeast can feel remote and chilly to Caribbean students. “Moving from a small island with just around 100,000 people to Harvard was initially jarring,” says Gerard Porter, who participated in SPISE in 2017 before attending Harvard for his undergraduate degree. “In my first year, I was often met with puzzled stares when I greeted strangers in an elevator or students in my dorm whom I did not know personally. I quickly learned that politeness meant something very different in the Northeastern United States compared to the warm Caribbean.”

Other SPISE alumni report experiencing similar chilliness — literally. Quilee Simeon’s first winter in Cambridge came as a shock. “I knew about the concept of winter and was told to expect cold weather, but I never actually knew how cold ‘cold’ was until I felt it myself,” says Simeon. “That was terrible!” Ronaldo Lee, a first-year from Jamaica interested in computer science and electrical engineering, found warmth among fellow SPISE alumni here at MIT. “Nothing beats the tropical climate! But honestly, the community at MIT has been amazing. I was surprised by how quickly I felt comfortable, thanks to the incredible people around me. The Black and Caribbean community especially made me feel at home; I’ve met some truly fascinating, driven, and like-minded people who’ve become close friends. One of the biggest surprises was discovering how similar we all are, despite our different cultural backgrounds. Everyone here is incredibly smart and shares a common drive to make the world a better place and pursue exciting STEM projects.”

The common drive to improve the world through STEM is evident in the paths the SPISE alumni have taken.

Gerard Porter, now a graduate student in the Kiessling Group within the Department of Chemistry at MIT, conducts research “focusing on unraveling the biological roles of glycans that cover all cells on Earth. I work on developing chemical tools to study critical regions of the bacterial cell wall that have been relatively unexplored.” Porter hopes that learning more about the molecular mechanisms at play within cell walls will open the doorway to the development of novel antibiotics.

Quilee Simeon has discovered an affinity for computational neuroscience, and is currently developing a computational model of the C. elegans nervous system. “My hope is that this model organism will prove fruitful for computational neuroscience research as it has for biology,” says Simeon, who plans to work in industry after graduation.

Computational biology has also captured the attention of junior Keonna Simon, who is excited to take courses such as 6.8711 (Computational Systems Biology: Deep Learning in the Life Sciences), saying, “This nexus holds a lot of potential for solving complex biological problems through computational methods, and I’m eager to dive deeper into that space!”

Chenise Harper found SPISE’s emphasis on bringing tech entrepreneurship home inspiring. “Living in the Caribbean has stimulated a dream of a future where robots are partners in rebuilding our community after natural disasters,” she says. “There are also so many issues that I would like to one day contribute to, like climate change issues and even cybersecurity. Electrical Engineering with Computing is the kind of major that will allow me to at least touch on the areas I am interested in, and allow me to explore both software and hardware concepts that excite me and will inspire me to develop a concrete way to give back to the community that has lifted me up to where I am now.”

Ronaldo Lee also found his academic home in computer science and electrical engineering, fabricating and characterizing perovskite solar cells in his Undergraduate Research Opportunities Program project and building a small offshore wind turbine for the Collegiate Wind Competition as part of the MIT WIND team. “I’d love to focus on the energy sector, particularly in improving the grid system and integrating renewable energy sources to ensure more reliable access,” says Lee. “I want to help make energy access more sustainable and inclusive, driving development for the region as a whole.”

Lee’s plans are perfectly in line with the long-term goals set by Warde and Sah as they planned SPISE. “Diversifying the economies of the region and raising the standard of living by stimulating more technology-based entrepreneurship will take time,” says Sah. “We are optimistic that our SPISE graduates will, with time, change the world to make it a better place for all, including the Caribbean.”

For clean ammonia, MIT engineers propose going underground

Ammonia is the most widely produced chemical in the world today, used primarily as a source for nitrogen fertilizer. Its production is also a major source of greenhouse gas emissions — the highest in the whole chemical industry.

Now, a team of researchers at MIT has developed an innovative way of making ammonia without the usual fossil-fuel-powered chemical plants that require high heat and pressure. Instead, they have found a way to use the Earth itself as a geochemical reactor, producing ammonia underground. The process uses Earth’s naturally occurring heat and pressure, provided free of charge and free of emissions, as well as the reactivity of minerals already present in the ground.

The trick the team devised is to inject water underground, into an area of iron-rich subsurface rock. The water carries with it a source of nitrogen and particles of a metal catalyst, allowing the water to react with the iron to generate clean hydrogen, which in turn reacts with the nitrogen to make ammonia. A second well is then used to pump that ammonia up to the surface.

The process, which has been demonstrated in the lab but not yet in a natural setting, is described today in the journal Joule. The paper’s co-authors are MIT professors of materials science and engineering Iwnetim Abate and Ju Li, graduate student Yifan Gao, and five others at MIT.

“When I first produced ammonia from rock in the lab, I was so excited,” Gao recalls. “I realized this represented an entirely new and never-reported approach to ammonia synthesis.”

The standard method for making ammonia is called the Haber-Bosch process, which was developed in Germany in the early 20th century to replace natural sources of nitrogen fertilizer such as mined deposits of bat guano, which were becoming depleted. But the Haber-Bosch process is very energy intensive: It requires temperatures of 400 degrees Celsius and pressures of 200 atmospheres, and this means it needs huge installations in order to be efficient. Some areas of the world, such as sub-Saharan Africa and Southeast Asia, have few or no such plants in operation.  As a result, the shortage or extremely high cost of fertilizer in these regions has limited their agricultural production.

The Haber-Bosch process “is good. It works,” Abate says. “Without it, we wouldn’t have been able to feed 2 out of the total 8 billion people in the world right now,” he says, referring to the portion of the world’s population whose food is grown with ammonia-based fertilizers. But because of the emissions and energy demands, a better process is needed, he says.

Burning fuel to generate heat is responsible for about 20 percent of the greenhouse gases emitted from plants using the Haber-Bosch process. Making hydrogen accounts for the remaining 80 percent.  But ammonia, the molecule NH3, is made up only of nitrogen and hydrogen. There’s no carbon in the formula, so where do the carbon emissions come from? The standard way of producing the needed hydrogen is by processing methane gas with steam, breaking down the gas into pure hydrogen, which gets used, and carbon dioxide gas that gets released into the air.
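For readers who want the chemistry behind that 80 percent: the standard industrial route makes hydrogen by steam-methane reforming followed by the water-gas shift. These are well-established textbook reactions, not equations given in the article:

$$\mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \qquad \text{(steam reforming)}$$
$$\mathrm{CO + H_2O \rightarrow CO_2 + H_2} \qquad \text{(water-gas shift)}$$
$$\mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2} \qquad \text{(net)}$$

Every four molecules of hydrogen thus arrive with one molecule of carbon dioxide, which is vented, and that is why hydrogen production dominates the process's carbon footprint.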

Other processes exist for making low- or no-emissions hydrogen, such as by using solar or wind-generated electricity to split water into oxygen and hydrogen, but that process can be expensive. That’s why Abate and his team worked on developing a system to produce what they call geological hydrogen. Some places in the world, including some in Africa, have been found to naturally generate hydrogen underground through chemical reactions between water and iron-rich rocks. These pockets of naturally occurring hydrogen can be mined, just like natural methane reservoirs, but the extent and locations of such deposits are still relatively unexplored.

Abate realized this process could be created or enhanced by pumping water, laced with copper and nickel catalyst particles to speed up the process, into the ground in places where such iron-rich rocks were already present. “We can use the Earth as a factory to produce clean flows of hydrogen,” he says.

He recalls thinking about the problem of the emissions from hydrogen production for ammonia: “The ‘aha!’ moment for me was thinking, how about we link this process of geological hydrogen production with the process of making Haber-Bosch ammonia?”

That would solve the biggest problem of the underground hydrogen production process, which is how to capture and store the gas once it’s produced. Hydrogen is a very tiny molecule — the smallest of them all — and hard to contain. But by implementing the entire Haber-Bosch process underground, the only material that would need to be sent to the surface would be the ammonia itself, which is easy to capture, store, and transport.

The only extra ingredient needed to complete the process was the addition of a source of nitrogen, such as nitrate or nitrogen gas, into the water-catalyst mixture being injected into the ground. Then, as the hydrogen gets released from water molecules after interacting with the iron-rich rocks, it can immediately bond with the nitrogen atoms also carried in the water, with the deep underground environment providing the high temperatures and pressures required by the Haber-Bosch process. A second well near the injection well then pumps the ammonia out and into tanks on the surface.
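The article doesn't spell out the reaction equations, but representative chemistry of the kind described, with ferrous iron in the rock oxidized by water to release hydrogen that then reduces the injected nitrogen, would look like the following (standard reactions assumed here for illustration):

$$\mathrm{3\,FeO + H_2O \rightarrow Fe_3O_4 + H_2} \qquad \text{(iron oxidation releases hydrogen)}$$
$$\mathrm{N_2 + 3\,H_2 \rightarrow 2\,NH_3} \qquad \text{(ammonia synthesis over the injected catalyst)}$$

If nitrate rather than nitrogen gas is the injected nitrogen source, the ferrous iron can reduce it to ammonia directly, the pathway Ellis refers to below as nitrate reduction by Fe2+.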

“We call this geological ammonia,” Abate says, “because we are using subsurface temperature, pressure, chemistry, and geologically existing rocks to produce ammonia directly.”

Whereas transporting hydrogen requires expensive equipment to cool and liquefy it, and virtually no pipelines exist for its transport (except near oil refinery sites), transporting ammonia is easier and cheaper. It’s about one-sixth the cost of transporting hydrogen, and there are already more than 5,000 miles of ammonia pipelines and 10,000 terminals in place in the U.S. alone. What’s more, Abate explains, ammonia, unlike hydrogen, already has a substantial commercial market in place, with production volume projected to grow by two to three times by 2050, as it is used not only for fertilizer but also as feedstock for a wide variety of chemical processes.

For example, ammonia can be burned directly in gas turbines, engines, and industrial furnaces, providing a carbon-free alternative to fossil fuels. It is being explored for maritime shipping and aviation as an alternative fuel, and as a possible space propellant.

Another upside to geological ammonia is that untreated wastewater, including agricultural runoff, which tends to be rich in nitrogen already, could serve as the water source and be treated in the process. “We can tackle the problem of treating wastewater, while also making something of value out of this waste,” Abate says.

Gao adds that this process “involves no direct carbon emissions, presenting a potential pathway to reduce global CO2 emissions by up to 1 percent.” To arrive at this point, he says, the team “overcame numerous challenges and learned from many failed attempts. For example, we tested a wide range of conditions and catalysts before identifying the most effective one.”

The project was seed-funded under a flagship project of MIT’s Climate Grand Challenges program, the Center for the Electrification and Decarbonization of Industry. Professor Yet-Ming Chiang, co-director of the center, says “I don’t think there’s been any previous example of deliberately using the Earth as a chemical reactor. That’s one of the key novel points of this approach.”  Chiang emphasizes that even though it is a geological process, it happens very fast, not on geological timescales. “The reaction is fundamentally over in a matter of hours,” he says. “The reaction is so fast that this answers one of the key questions: Do you have to wait for geological times? And the answer is absolutely no.”

Professor Elsa Olivetti, a mission director of the newly established Climate Project at MIT, says, “The creative thinking by this team is invaluable to MIT’s ability to have impact at scale. Coupling these exciting results with, for example, advanced understanding of the geology surrounding hydrogen accumulations represents the whole-of-Institute efforts the Climate Project aims to support.”

“This is a significant breakthrough for the future of sustainable development,” says Geoffrey Ellis, a geologist at the U.S. Geological Survey, who was not associated with this work. He adds, “While there is clearly more work that needs to be done to validate this at the pilot stage and to get this to the commercial scale, the concept that has been demonstrated is truly transformative.  The approach of engineering a system to optimize the natural process of nitrate reduction by Fe2+ is ingenious and will likely lead to further innovations along these lines.”

The initial work on the process has been done in the laboratory, so the next step will be to prove the process using a real underground site. “We think that kind of experiment can be done within the next one to two years,” Abate says. This could open doors to using a similar approach for other chemical production processes, he adds.

The team has applied for a patent and aims to work towards bringing the process to market.

“Moving forward,” Gao says, “our focus will be on optimizing the process conditions and scaling up tests, with the goal of enabling practical applications for geological ammonia in the near future.”

The research team also included Ming Lei, Bachu Sravan Kumar, Hugh Smith, Seok Hee Han, and Lokesh Sangabattula, all at MIT. Additional funding was provided by the National Science Foundation, and the work was carried out, in part, through the use of MIT.nano facilities.

Modeling complex behavior with a simple organism

The roundworm C. elegans is a simple animal whose nervous system has exactly 302 neurons. Each of the connections between those neurons has been comprehensively mapped, allowing researchers to study how they work together to generate the animal’s different behaviors.

Steven Flavell, an MIT associate professor of brain and cognitive sciences and investigator with the Picower Institute for Learning and Memory at MIT and the Howard Hughes Medical Institute, uses the worm as a model to study motivated behaviors such as feeding and navigation, in hopes of shedding light on the fundamental mechanisms that may also determine how similar behaviors are controlled in other animals.

In recent studies, Flavell’s lab has uncovered neural mechanisms underlying adaptive changes in the worms’ feeding behavior, and his lab has also mapped how the activity of each neuron in the animal’s nervous system affects the worms’ different behaviors.

Such studies could help researchers gain insight into how brain activity generates behavior in humans. “It is our aim to identify molecular and neural circuit mechanisms that may generalize across organisms,” he says, noting that many fundamental biological discoveries, including those related to programmed cell death, microRNA, and RNA interference, were first made in C. elegans.

“Our lab has mostly studied motivated state-dependent behaviors, like feeding and navigation. The machinery that’s being used to control these states in C. elegans — for example, neuromodulators — are actually the same as in humans. These pathways are evolutionarily ancient,” he says.

Drawn to the lab

Born in London to an English father and a Dutch mother, Flavell came to the United States in 1982 at the age of 2, when his father became chief scientific officer at Biogen. The family lived in Sudbury, Massachusetts, and his mother worked as a computer programmer and math teacher. His father later became a professor of immunology at Yale University.

Though Flavell grew up in a science family, he thought about majoring in English when he arrived at Oberlin College. A musician as well, Flavell took jazz guitar classes at Oberlin’s conservatory, and he also plays the piano and the saxophone. However, taking classes in psychology and physiology led him to discover that the field that most captivated him was neuroscience.

“I was immediately sold on neuroscience. It combined the rigor of the biological sciences with deep questions from psychology,” he says.

While in college, Flavell worked on a summer research project related to Alzheimer’s disease, in a lab at Case Western Reserve University. He then continued the project, which involved analyzing post-mortem Alzheimer’s tissue, during his senior year at Oberlin.

“My earliest research revolved around mechanisms of disease. While my research interests have evolved since then, my earliest research experiences were the ones that really got me hooked on working at the bench: running experiments, looking at brand new results, and trying to understand what they mean,” he says.

By the end of college, Flavell was a self-described lab rat: “I just love being in the lab.” He applied to graduate school and ended up going to Harvard Medical School for a PhD in neuroscience. Working with Michael Greenberg, Flavell studied how sensory experience and resulting neural activity shapes brain development. In particular, he focused on a family of gene regulators called MEF2, which play important roles in neuronal development and synaptic plasticity.

All of that work was done using mouse models, but Flavell transitioned to studying C. elegans during a postdoctoral fellowship working with Cori Bargmann at Rockefeller University. He was interested in studying how neural circuits control behavior, which seemed to be more feasible in simpler animal models.

“Studying how neurons across the brain govern behavior felt like it would be nearly intractable in a large brain — to understand all the nuts and bolts of how neurons interact with each other and ultimately generate behavior seemed daunting,” he says. “But I quickly became excited about studying this in C. elegans because at the time it was still the only animal with a full blueprint of its brain: a map of every brain cell and how they are all wired up together.”

That wiring diagram includes about 7,000 synapses in the entire nervous system. By comparison, a single human neuron may form more than 10,000 synapses. “Relative to those larger systems, the C. elegans nervous system is mind-bogglingly simple,” Flavell says.

Despite their much simpler organization, roundworms can execute complex behaviors such as feeding, locomotion, and egg-laying. They even sleep, form memories, and find suitable mating partners. The neuromodulators and cellular machinery that give rise to those behaviors are similar to those found in humans and other mammals.

[Image: C. elegans viewed under a microscope. Photo: Bryce Vickmark]


“C. elegans has a fairly well-defined, smallish set of behaviors, which makes it attractive for research. You can really measure almost everything that the animal is doing and study it,” Flavell says.

How behavior arises

Early in his career, Flavell’s work on C. elegans revealed the neural mechanisms that underlie the animal’s stable behavioral states. When worms are foraging for food, they alternate between stably exploring the environment and pausing to feed. “The transition rates between those states really depend on all these cues in the environment. How good is the food environment? How hungry are they? Are there smells indicating a better nearby food source? The animal integrates all of those things and then adjusts their foraging strategy,” Flavell says.

These stable behavioral states are controlled by neuromodulators like serotonin. By studying serotonergic regulation of the worm’s behavioral states, Flavell’s lab has been able to uncover how this important system is organized. In a recent study, Flavell and his colleagues published an “atlas” of the C. elegans serotonin system. They identified every neuron that produces serotonin, every neuron that has serotonin receptors, and how brain activity and behavior change across the animal as serotonin is released.

“Our studies of how the serotonin system works to control behavior have already revealed basic aspects of serotonin signaling that we think ought to generalize all the way up to mammals,” Flavell says. “By studying the way that the brain implements these long-lasting states, we can tap into these basic features of neuronal function. With the resolution that you can get studying specific C. elegans neurons and the way that they implement behavior, we can uncover fundamental features of the way that neurons act.”

In parallel, Flavell’s lab has also been mapping out how neurons across the C. elegans brain control different aspects of behavior. In a 2023 study, Flavell’s lab mapped how changes in brain-wide activity relate to behavior. His lab uses special microscopes that can move along with the worms as they explore, allowing them to simultaneously track every behavior and measure the activity of every neuron in the brain. Using these data, the researchers created computational models that can accurately capture the relationship between brain activity and behavior.
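As a rough illustration of what such a model can look like, one simple baseline is a regression that decodes a behavioral variable from the recorded activity of every neuron. The sketch below is generic, not the Flavell lab’s actual pipeline; the data are synthetic stand-ins and all names are hypothetical.

```python
# Generic sketch of decoding behavior from brain-wide activity with
# ridge regression. This is NOT the Flavell lab's actual model; the
# data below are synthetic stand-ins and all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T, N = 5000, 120                 # T imaging frames, N recorded neurons

activity = rng.standard_normal((T, N))                   # neural traces
true_w = rng.standard_normal(N) * (rng.random(N) < 0.1)  # sparse "real" weights
velocity = activity @ true_w + 0.5 * rng.standard_normal(T)  # behavior signal

lam = 1.0                                 # ridge penalty strength
A = activity.T @ activity + lam * np.eye(N)
w = np.linalg.solve(A, activity.T @ velocity)  # closed-form ridge solution

pred = activity @ w
r2 = 1 - ((velocity - pred) ** 2).sum() / ((velocity - velocity.mean()) ** 2).sum()
print(f"variance explained on training data: {r2:.2f}")
```

Real models of this kind must also handle slow calcium-indicator dynamics and held-out validation, but the core idea, mapping many neural traces onto a behavioral readout, is the same.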

This type of research requires expertise in many areas, Flavell says. When looking for faculty jobs, he hoped to find a place where he could collaborate with researchers working in different fields of neuroscience, as well as scientists and engineers from other departments.

“Being at MIT has allowed my lab to be much more multidisciplinary than it could have been elsewhere,” he says. “My lab members have had undergrad degrees in physics, math, computer science, biology, neuroscience, and we use tools from all of those disciplines. We engineer microscopes, we build computational models, we come up with molecular tricks to perturb neurons in the C. elegans nervous system. And I think being able to deploy all those kinds of tools leads to exciting research outcomes.”

Explained: Generative AI’s environmental impact

In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.

The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI “gold rush” remain difficult to pin down, let alone mitigate.

Training generative AI models with billions of parameters, such as OpenAI’s GPT-4, demands a staggering amount of electricity, which leads to increased carbon dioxide emissions and puts pressure on the electric grid.

Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.

Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.

“When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take,” says Elsa A. Olivetti, professor in the Department of Materials Science and Engineering and the lead of the Decarbonization Mission of MIT’s new Climate Project.

Olivetti is senior author of a 2024 paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to an Institute-wide call for papers that explore the transformative potential of generative AI, in both positive and negative directions for society.

Demanding data centers

The electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E.

A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which has about 50,000 servers that the company uses to support cloud computing services.

While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.

“What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. This would have made data centers the 11th-largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organization for Economic Co-operation and Development.

By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).

While not all data center computation involves generative AI, the technology has been a major driver of increasing energy demands.

“The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir.

The power needed to train and deploy a model like OpenAI’s GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated the training process alone consumed 1,287 megawatt hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.
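Those two numbers are easy to sanity-check. Here is a minimal back-of-envelope script; the per-home consumption figure is our assumption (the EIA’s average is roughly 10,500 to 10,800 kilowatt-hours per U.S. home per year), while the other inputs are the figures reported above.

```python
# Back-of-envelope check of the training figures cited above.
# The reported numbers come from the 2021 Google/UC Berkeley estimate;
# the per-home consumption is our assumption (EIA average is roughly
# 10,500-10,800 kWh per U.S. home per year).
training_mwh = 1287                       # reported GPT-3 training energy
home_kwh_per_year = 10_700                # assumed average U.S. home

homes = training_mwh * 1_000 / home_kwh_per_year
print(f"~{homes:.0f} average U.S. homes powered for a year")   # ~120

co2_tons = 552                            # reported training emissions
intensity_kg_per_kwh = co2_tons * 1_000 / (training_mwh * 1_000)
print(f"implied grid intensity: {intensity_kg_per_kwh:.2f} kg CO2/kWh")  # ~0.43
```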

While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuations in energy use that occur over different phases of the training process, Bashir explains.

Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.

Increasing impacts from inference

Once a generative AI model is trained, the energy demands don’t disappear.

Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.

“But an everyday user doesn’t think too much about that,” says Bashir. “The ease-of-use of generative AI interfaces and the lack of information about the environmental impacts of my actions means that, as a user, I don’t have much incentive to cut back on my use of generative AI.”

With traditional AI, the energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.

Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.

While electricity demands of data centers may be getting the most attention in research literature, the amount of water consumed by these facilities has environmental impacts, as well.

Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt hour of energy a data center consumes, it would need two liters of water for cooling, says Bashir.
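Combining that rule of thumb with the GPT-3 training figure cited earlier gives a sense of scale. This is purely an illustrative pairing of two estimates from this article, not a published measurement of GPT-3’s actual water use.

```python
# Illustrative pairing of two estimates from this article: the ~2 L/kWh
# cooling-water rule of thumb and the 1,287 MWh GPT-3 training figure.
# Not a published measurement of GPT-3's actual water use.
liters_per_kwh = 2
training_kwh = 1_287 * 1_000              # 1,287 MWh expressed in kWh
water_liters = liters_per_kwh * training_kwh
print(f"~{water_liters / 1e6:.1f} million liters of cooling water")  # ~2.6
```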

“Just because this is called ‘cloud computing’ doesn’t mean the hardware lives in the cloud. Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity,” he says.

The computing hardware inside data centers brings its own, less direct environmental impacts.

While it is difficult to estimate how much energy is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU because the fabrication process is more complex. A GPU’s carbon footprint is compounded by the emissions related to material and product transport.

There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.

Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022. That number is expected to have increased by an even greater percentage in 2024.

The industry is on an unsustainable path, but there are ways to encourage responsible development of generative AI that supports environmental objectives, Bashir says.

He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.

“We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven’t had a chance to catch up with our abilities to measure and understand the tradeoffs,” Olivetti says.