MIT researchers develop an efficient way to train more reliable AI agents

Fields ranging from robotics to medicine to political science are attempting to train AI systems to make meaningful decisions of all kinds. For example, using an AI system to intelligently control traffic in a congested city could help motorists reach their destinations faster, while improving safety or sustainability.

Unfortunately, teaching an AI system to make good decisions is no easy task.

Reinforcement learning models, which underlie these AI decision-making systems, still often fail when faced with even small variations in the tasks they are trained to perform. In the case of traffic, a model might struggle to control a set of intersections with different speed limits, numbers of lanes, or traffic patterns.

To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them.

The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city.

By focusing on a smaller number of intersections that contribute the most to the algorithm’s overall effectiveness, this method maximizes performance while keeping the training cost low.

The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This efficiency gain helps the algorithm learn a better solution with less data and computation, ultimately improving the performance of the AI agent.

“We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand,” says senior author Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).

She is joined on the paper by lead author Jung-Hoon Cho, a CEE graduate student; Vindula Jayawardana, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); and Sirui Li, an IDSS graduate student. The research will be presented at the Conference on Neural Information Processing Systems.

Finding a middle ground

To train an algorithm to control traffic lights at many intersections in a city, an engineer would typically choose between two main approaches. She can train one algorithm for each intersection independently, using only that intersection’s data, or train a larger algorithm using data from all intersections and then apply it to each one.

But each approach comes with its share of downsides. Training a separate algorithm for each task (such as a given intersection) is a time-consuming process that requires an enormous amount of data and computation, while training one algorithm for all tasks often leads to subpar performance.

Wu and her collaborators sought a sweet spot between these two approaches.

For their method, they choose a subset of tasks and train one algorithm for each task independently. Importantly, they strategically select individual tasks which are most likely to improve the algorithm’s overall performance on all tasks.

They leverage a common trick from the reinforcement learning field called zero-shot transfer learning, in which an already trained model is applied to a new task without being further trained. With zero-shot transfer, the model often performs remarkably well on a similar, neighboring task.

“We know it would be ideal to train on all the tasks, but we wondered if we could get away with training on a subset of those tasks, apply the result to all the tasks, and still see a performance increase,” Wu says.

To identify which tasks they should select to maximize expected performance, the researchers developed an algorithm called Model-Based Transfer Learning (MBTL).

The MBTL algorithm has two pieces. First, it models how well each algorithm would perform if it were trained independently on one task. Second, it models how much each algorithm’s performance would degrade if it were transferred to each other task, a concept known as generalization performance.

Explicitly modeling generalization performance allows MBTL to estimate the value of training on a new task.

MBTL does this sequentially, choosing the task which leads to the highest performance gain first, then selecting additional tasks that provide the biggest subsequent marginal improvements to overall performance.

Since MBTL only focuses on the most promising tasks, it can dramatically improve the efficiency of the training process.
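
Conceptually, MBTL is a greedy selection loop: score each candidate task by the overall performance you would get if you trained on it and transferred everywhere else, pick the best, and repeat. The sketch below illustrates that loop under toy assumptions we invented for clarity: tasks sit on a one-dimensional axis, every task trains equally well, and transfer performance decays linearly with the distance between source and target tasks. It is a minimal illustration of the idea, not the models from the MBTL paper.

```python
# Illustrative sketch of MBTL-style greedy task selection (not the paper's code).
# Toy assumptions: tasks are indexed along one axis (e.g., intersections on a
# corridor), stand-alone training performance is given per task, and transfer
# performance decays linearly with source-target distance.

def transfer_performance(train_perf, source, target, decay=0.1):
    """Modeled generalization: performance degrades as tasks get farther apart."""
    return train_perf[source] * max(0.0, 1.0 - decay * abs(source - target))

def overall_performance(selected, train_perf, tasks):
    """Each task is served by whichever trained source transfers to it best."""
    return sum(
        max(transfer_performance(train_perf, s, t) for s in selected)
        for t in tasks
    ) / len(tasks)

def mbtl_select(tasks, train_perf, budget):
    """Greedily add the task with the largest marginal gain in overall performance."""
    selected = []
    for _ in range(budget):
        best_task, best_score = None, float("-inf")
        for candidate in tasks:
            if candidate in selected:
                continue
            score = overall_performance(selected + [candidate], train_perf, tasks)
            if score > best_score:
                best_task, best_score = candidate, score
        selected.append(best_task)
    return selected

tasks = list(range(10))               # e.g., 10 intersections along a corridor
train_perf = {t: 1.0 for t in tasks}  # placeholder: every task trains equally well
print(mbtl_select(tasks, train_perf, budget=3))  # picks spread across the task space
```

Training then happens only on the selected subset, and zero-shot transfer covers the rest, which is where the reported efficiency gains come from.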

Reducing training costs

When the researchers tested this technique on simulated tasks, including controlling traffic signals, managing real-time speed advisories, and executing several classic control tasks, it was five to 50 times more efficient than other methods.

This means they could arrive at the same solution by training on far less data. For instance, with a 50x efficiency boost, the MBTL algorithm could train on just two tasks and achieve the same performance as a standard method which uses data from 100 tasks.

“From the perspective of the two main approaches, that means data from the other 98 tasks was not necessary or that training on all 100 tasks is confusing to the algorithm, so the performance ends up worse than ours,” Wu says.

With MBTL, adding even a small amount of additional training time could lead to much better performance.

In the future, the researchers plan to design MBTL algorithms that can extend to more complex problems, such as high-dimensional task spaces. They are also interested in applying their approach to real-world problems, especially in next-generation mobility systems.

The research is funded, in part, by a National Science Foundation CAREER Award, the Kwanjeong Educational Foundation PhD Scholarship Program, and an Amazon Robotics PhD Fellowship.

Advancing urban tree monitoring with AI-powered digital twins

The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”

What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as adaptation of urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.

“We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”

Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back sides of trees, which aren’t visible in street-view photos.

The technology’s practical applications extend far beyond mere observation. City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could change urban forest management from reactive maintenance to proactive planning.

A tree grows in Brooklyn (and many other places)

The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combo helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.
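
As a rough illustration of that hybrid idea, the sketch below pairs a stand-in “learned” stage, which returns a crown envelope for an image, with a stand-in procedural stage, which samples branch tips inside that envelope. Every name, prior, and number here is an invented placeholder; the actual Tree-D Fusion system uses deep networks and established genus-specific growth models.

```python
import random

# Toy sketch of an "envelope + procedural growth" pipeline. Both stages are
# invented placeholders, not the Tree-D Fusion implementation.

def infer_envelope(image_path, genus):
    """Stand-in for the learned stage: predict a crown envelope from one image."""
    height = 12.0 if genus == "quercus" else 8.0      # assumed genus prior
    return {"height": height, "crown_radius": height * 0.4}

def grow_branches(envelope, n=50, seed=0):
    """Stand-in for the procedural stage: sample branch tips inside the envelope."""
    rng = random.Random(seed)
    tips = []
    for _ in range(n):
        z = rng.uniform(0.3 * envelope["height"], envelope["height"])
        r = rng.uniform(0.0, envelope["crown_radius"])
        theta = rng.uniform(0.0, 6.28318)
        tips.append((r, theta, z))    # cylindrical coordinates within the crown
    return tips

envelope = infer_envelope("street_view_tree.jpg", genus="quercus")
tips = grow_branches(envelope)
print(len(tips), "branch tips sampled for envelope", envelope)
```

The division of labor matters: the learned stage only has to get the coarse shape right from a single photo, while decades of procedural growth modeling supply the botanically plausible detail.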

Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could transform sweltering city blocks into more naturally cooled neighborhoods.

“Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real-time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

It’s a breeze

While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the shape of the trees in the future, depending on the environmental conditions.

“What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. Perhaps the most vexing is the “entangled tree problem”: when neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.

The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

“This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google); and four others from Purdue University: PhD student Bosheng Li, Professor and Dean’s Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.

Your child, the sophisticated language learner

As young children, how do we build our vocabulary? Even by age 1, many infants seem to think that if they hear a new word, it means something different from the words they already know. But why they think so has remained an open question among scholars for the last 40 years.

A new study carried out at the MIT Language Acquisition Lab offers a novel insight into the matter: Sentences contain subtle hints in their grammar that tell young children about the meaning of new words. The finding, based on experiments with 2-year-olds, suggests that even very young kids are capable of absorbing grammatical cues from language and leveraging that information to acquire new words.

“Even at a surprisingly young age, kids have sophisticated knowledge of the grammar of sentences and can use that to learn the meanings of new words,” says Athulya Aravind, an associate professor of linguistics at MIT.

The new insight stands in contrast to a prior explanation for how children build vocabulary: that they rely on the concept of “mutual exclusivity,” meaning they treat each new word as corresponding to a new object or category. Instead, the new research shows how extensively children respond directly to grammatical information when interpreting words.

“For us it’s very exciting because it’s a very simple idea that explains so much about how children understand language,” says Gabor Brody, a postdoc at Brown University, who is the first author of the paper.

The paper is titled, “Why Do Children Think Words Are Mutually Exclusive?” It is published in advance online form in Psychological Science. The authors are Brody; Roman Feiman, the Thomas J. and Alice M. Tisch Assistant Professor of Cognitive and Psychological Sciences and Linguistics at Brown; and Aravind, the Alfred Henry and Jean Morrison Hayes Career Development Associate Professor in MIT’s Department of Linguistics and Philosophy.

Focusing on focus

Many scholars have thought that young children, when learning new words, have an innate bias toward mutual exclusivity, which could explain how children learn some of their new words. However, the concept of mutual exclusivity has never been airtight: Words like “bat” refer to multiple kinds of objects, while any object can be described using countless words. For instance, a rabbit can be called not only a “rabbit” or a “bunny,” but also an “animal,” or a “beauty,” and in some contexts even a “delicacy.” Despite this lack of perfect one-to-one mapping between words and objects, mutual exclusivity has still been posited as a strong tendency in children’s word learning.

What Aravind, Brody, and Feiman propose is that children have no such tendency, and instead rely on so-called “focus” signals to decide what a new word means. Linguists use the term “focus” to refer to the way we emphasize or stress certain words to signal some kind of contrast. Depending on what is focused, the same sentence can have different implications. “Carlos gave Lewis a FERRARI” (stress on “Ferrari”) implies contrast with other possible cars — he could have given Lewis a Mercedes. But “Carlos gave LEWIS a Ferrari” (stress on “Lewis”) implies contrast with other people — he could have given Alexandra a Ferrari.

The researchers manipulated focus across three experiments with a total of 106 children. The participants watched videos of a cartoon fox who asked them to point to different objects.

The first experiment established how focus influences kids’ choice between two objects when they hear a label, like “toy,” that could, in principle, correspond to either of the two. After giving a name to one of the two objects (“Look, I am pointing to the blicket”), the fox told the child, “Now you point to the toy!” Children were divided into two groups. One group heard “toy” without emphasis, while the other heard it with emphasis.

In the first version, “blicket” and “toy” plausibly refer to the same object. But in the second version, the added focus, through intonation, implies that “toy” contrasts with the previously discussed “blicket.” Without focus, only 24 percent of the respondents thought the words were mutually exclusive, whereas with the focus created by emphasizing “toy,” 89 percent of participants thought “blicket” and “toy” referred to different objects.

The second and third experiments showed that focus is not just key when it comes to words like “toy,” but it also affects the interpretation of new words children have never encountered before, like “wug” or “dax.” If a new word was said without focus, children thought the word meant the previously named object 71 percent of the time. But when hearing the new word spoken with focus, they thought it must refer to a new object 87 percent of the time.

“Even though they know nothing about this new word, when it was focused, that still told them something: Focus communicated to children the presence of a contrasting alternative, and they correspondingly understood the noun to refer to an object that had not previously been labeled,” Aravind explains.

She adds: “The particular claim we’re making is that there is no inherent bias in children toward mutual exclusivity. The only reason we make the corresponding inference is because focus tells you that the word means something different from another word. When focus goes away, children don’t draw those exclusivity inferences any more.”

The researchers believe the full set of experiments sheds new light on the issue.

“Earlier explanations of mutual exclusivity introduced a whole new problem,” Feiman says. “If kids assume words are mutually exclusive, how do they learn words that are not? After all, you can call the same animal either a rabbit or a bunny, and kids have to learn both of those at some point. Our finding explains why this isn’t actually a problem. Kids won’t think the new word is mutually exclusive with the old word by default, unless adults tell them that it is — all adults have to do if the new word is not mutually exclusive is just say it without focusing it, and they’ll naturally do that if they’re thinking about it as compatible.”

Learning language from language

The experiment, the researchers note, is the result of interdisciplinary research bridging psychology and linguistics — in this case, mobilizing the linguistics concept of focus to address an issue of interest in both fields.

“We are hopeful this will be a paper that shows that small, simple theories have a place in psychology,” Brody says. “It is a very small theory, not a huge model of the mind, but it completely flips the switch on some phenomena we thought we understood.”

If the new hypothesis is correct, the researchers may have developed a more robust explanation about how children correctly apply new words.

“An influential idea in language development is that children can use their existing knowledge of language to learn more language,” Aravind says. “We’re in a sense building on that idea, and saying that even in the simplest cases, aspects of language that children already know, in this case an understanding of focus, help them grasp the meanings of unknown words.”

The scholars acknowledge that more studies could further advance our knowledge about the issue. Future research, they note in the paper, could reexamine prior studies about mutual exclusivity, record and study naturalistic interactions between parents and children to see how focus is used, and examine the issue in other languages, especially those marking focus in alternate ways, such as word order.

The research was supported, in part, by a Jacobs Foundation Fellowship awarded to Feiman.

3 Questions: Claire Wang on training the brain for memory sports

On Nov. 10, some of the country’s top memorizers converged on MIT’s Kresge Auditorium to compete in a “Tournament of Memory Champions” in front of a live audience.

The competition was split into four events: long-term memory, words-to-remember, auditory memory, and double-deck of cards, in which competitors must memorize the exact order of two decks of cards. In between the events, MIT faculty who are experts in the science of memory provided short talks and demos about memory and how to improve it. Among the competitors was MIT’s own Claire Wang, a sophomore majoring in electrical engineering and computer science. Wang has competed in memory sports for years, a hobby that has taken her around the world to learn from some of the best memorists on the planet. At the tournament, she tied for first place in the words-to-remember competition.

The event commemorated the 25th anniversary of the USA Memory Championship Organization (USAMC). USAMC sponsored the event in partnership with MIT’s McGovern Institute for Brain Research, the Department of Brain and Cognitive Sciences, the MIT Quest for Intelligence, and the company Lumosity.

MIT News sat down with Wang to learn more about her experience with memory competitions — and see if she had any advice for those of us with less-than-amazing memory skills.

Q: How did you come to get involved in memory competitions?

A: When I was in middle school, I read the book “Moonwalking with Einstein,” which is about a journalist’s journey from average memory to being named memory champion in 2006. My parents were also obsessed with this TV show where people were memorizing decks of cards and performing other feats of memory. I had already known about the concept of “memory palaces,” so I was inspired to explore memory sports. Somehow, I convinced my parents to let me take a gap year after seventh grade, and I travelled the world going to competitions and learning from memory grandmasters. I got to know the community in that time and I got to build my memory system, which was really fun. I did a lot less of those competitions after that year and some subsequent competitions with the USA memory competition, but it’s still fun to have this ability.

Q: What was the Tournament of Memory Champions like?

A: USAMC invited a lot of winners from previous years to compete, which was really cool. It was nice seeing a lot of people I haven’t seen in years. I didn’t compete in every event because I was too busy to do the long-term memory, which takes you two weeks of memorization work. But it was a really cool experience. I helped a bit with the brainstorming beforehand because I know one of the professors running it. We thought about how to give the talks and structure the event.

Then I competed in the words event, which is when they give you 300 words over 15 minutes, and the competitors have to recall each one in order in a round robin competition. You got two strikes. A lot of other competitions just make you write the words down. The round robin makes it more fun for people to watch. I tied with someone else — I made a dumb mistake — so I was kind of sad in hindsight, but being tied for first is still great.

Since I hadn’t done this in a while (and I was coming back from a trip where I didn’t get much sleep), I was a bit nervous that my brain wouldn’t be able to remember anything, and I was pleasantly surprised I didn’t just blank on stage. Also, since I hadn’t done this in a while, a lot of my loci and memory palaces were forgotten, so I had to speed-review them before the competition. The words event doesn’t get easier over time — it’s just 300 random words (which could range from “disappointment” to “chair”) and you just have to remember the order.

Q: What is your approach to improving memory?

A: The whole idea is that we memorize images, feelings, and emotions much better than numbers or random words. The way it works in practice is we make an ordered set of locations in a “memory palace.” The palace could be anything. It could be a campus or a classroom or a part of a room, but you imagine yourself walking through this space, so there’s a specific order to it, and in every location I place certain information. This is information related to what I’m trying to remember. I have pictures I associate with words and I have specific images I correlate with numbers. Once you have a correlated image system, all you need to remember is a story, and then when you recall, you translate that back to the original information.
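
Wang’s description maps neatly onto a simple ordered structure: a fixed walk through loci, each binding an image that encodes one item. As a playful analogy only (the loci and image associations below are invented), the encode-and-recall loop looks like this:

```python
# Toy analogy for a memory palace: an ordered walk through loci, each locus
# holding an image that encodes one item. All associations here are invented.

loci = ["front door", "hallway mirror", "kitchen table", "window sill"]
image_for = {"chair": "a golden throne", "disappointment": "a deflated balloon",
             "river": "a roaring torrent", "lamp": "a lighthouse beam"}

def memorize(words):
    """Bind each word's image to the next locus along the walk."""
    return {locus: image_for[word] for locus, word in zip(loci, words)}

def recall(palace):
    """Walk the loci in order and decode each image back into its word."""
    decode = {image: word for word, image in image_for.items()}
    return [decode[palace[locus]] for locus in loci if locus in palace]

palace = memorize(["chair", "river", "lamp", "disappointment"])
print(recall(palace))  # ['chair', 'river', 'lamp', 'disappointment']
```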

Doing memory sports really helps you with visualization, and being able to visualize things faster and better helps you remember things better. You start remembering with spaced repetition that you can talk yourself through. Allowing things to have an emotional connection is also important, because you remember emotions better. Doing memory competitions made me want to study neuroscience and computer science at MIT.

The specific memory sports techniques are not as useful in everyday life as you’d think, because a lot of the information we learn is more operative and requires intuitive understanding, but I do think they help in some ways. First, sometimes you have to initially remember things before you can develop a strong intuition later. Also, since I have to get really good at telling a lot of stories over time, I have gotten great at visualization and manipulating objects in my mind, which helps a lot. 

A nonflammable battery to power a safer, decarbonized future

Lithium-ion batteries are the workhorses of home electronics and are powering an electric revolution in transportation. But they are not suitable for every application.

A key drawback is their flammability and toxicity, which make large-scale lithium-ion energy storage a bad fit in densely populated city centers and near metal processing or chemical manufacturing plants.

Now Alsym Energy has developed a nonflammable, nontoxic alternative to lithium-ion batteries to help renewables like wind and solar bridge the gap in a broader range of sectors. The company’s electrodes use relatively stable, abundant materials, and its electrolyte is primarily water with some nontoxic add-ons.

“Renewables are intermittent, so you need storage, and to really solve the decarbonization problem, we need to be able to make these batteries anywhere at low cost,” says Alsym co-founder and MIT Professor Kripa Varanasi.

The company believes its batteries, which are currently being tested by potential customers around the world, hold enormous potential to decarbonize the high-emissions industrial manufacturing sector, and it sees other applications ranging from mining to powering data centers, homes, and utilities.

“We are enabling a decarbonization of markets that was not possible before,” Alsym co-founder and CEO Mukesh Chatter says. “No chemical or steel plant would dare put a lithium battery close to their premises because of the flammability, and industrial emissions are a much bigger problem than passenger cars. With this approach, we’re able to offer a new path.”

Helping 1 billion people

Chatter started a telecommunications company with serial entrepreneurs and longtime members of the MIT community Ray Stata ’57, SM ’58 and Alec Dingee ’52 in 1997. Since the company was acquired in 1999, Chatter and his wife have started other ventures and invested in some startups, but after losing his mother to cancer in 2012, Chatter decided he wanted to maximize his impact by only working on technologies that could reach 1 billion people or more.

The problem Chatter decided to focus on was electricity access.

“The intent was to light up the homes of at least 1 billion people around the world who either did not have electricity, or only got it part of the time, condemning them basically to a life of poverty in the 19th century,” Chatter says. “When you don’t have access to electricity, you also don’t have the internet, cell phones, education, etc.”

To solve the problem, Chatter decided to fund research into a new kind of battery. The battery had to be cheap enough to be adopted in low-resource settings, safe enough to be deployed in crowded areas, and work well enough to support two light bulbs, a fan, a refrigerator, and an internet modem.

At first, Chatter was surprised by how few takers he had to start the research, even from researchers at the top universities in the world.

“It’s a burning problem, but the risk of failure was so high that nobody wanted to take the chance,” Chatter recalls.

He finally found his partners in Varanasi, Rensselaer Polytechnic Institute Professor Nikhil Koratkar, and Rensselaer researcher Rahul Mukherjee. Varanasi, who notes he’s been at MIT for 22 years, says the Institute’s culture gave him the confidence to tackle big problems.

“My students, postdocs, and colleagues are inspirational to me,” he says. “The MIT ecosystem infuses us with this resolve to go after problems that look insurmountable.”

Varanasi leads an interdisciplinary lab at MIT dedicated to understanding physicochemical and biological phenomena. His research has spurred the creation of materials, devices, products, and processes to tackle challenges in energy, agriculture, and other sectors, as well as startup companies to commercialize this work.

“Working at the interfaces of matter has unlocked numerous new research pathways across various fields, and MIT has provided me the creative freedom to explore, discover, and learn, and apply that knowledge to solve critical challenges,” he says. “I was able to draw significantly from my learnings as we set out to develop the new battery technology.”

Alsym’s founding team began by trying to design a battery from scratch based on new materials that could fit the parameters defined by Chatter. To make it nonflammable and nontoxic, the founders wanted to avoid lithium and cobalt.

After evaluating many different chemistries, the founders settled on Alsym’s current approach, which was finalized in 2020.

Although the full makeup of Alsym’s battery is still under wraps as the company waits to be granted patents, one of Alsym’s electrodes is made mostly of manganese oxide while the other is primarily made of a metal oxide. The electrolyte is primarily water.

There are several advantages to Alsym’s new battery chemistry. Because the battery is inherently safer and more sustainable than lithium-ion, the company doesn’t need the same safety protections or cooling equipment, and it can pack its batteries close to each other without fear of fires or explosions. Varanasi also says the battery can be manufactured in any of today’s lithium-ion plants with minimal changes and at significantly lower operating cost.

“We are very excited right now,” Chatter says. “We started out wanting to light up 1 billion people’s homes, and now in addition to the original goal we have a chance to impact the entire globe if we are successful at cutting back industrial emissions.”

A new platform for energy storage

Although the batteries don’t quite reach the energy density of lithium-ion batteries, Varanasi says Alsym is first among alternative chemistries at the system level. He says 20-foot containers of Alsym’s batteries can provide 1.7 megawatt-hours of electricity. The batteries can also fast-charge over four hours and can be configured to discharge over anywhere from two to 110 hours.

“We’re highly configurable, and that’s important because depending on where you are, you can sometimes run on two cycles a day with solar, and in combination with wind, you could truly get 24/7 electricity,” Chatter says. “The need to do multiday or long duration storage is a small part of the market, but we support that too.”
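
The capacity and discharge figures above imply a wide span of power ratings per container. A quick back-of-the-envelope check, using only the numbers quoted in the article:

```python
# Back-of-the-envelope check on the quoted figures: a 20-foot container holds
# 1.7 MWh and can be configured to discharge over 2 to 110 hours.

capacity_mwh = 1.7
for hours in (2, 110):
    power_kw = capacity_mwh * 1000 / hours
    print(f"{hours:>3} h discharge -> {power_kw:,.0f} kW average power")

# 2 h  -> 850 kW (high-power duty); 110 h -> ~15 kW (multiday storage)
```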

Alsym has been manufacturing prototypes at a small facility in Woburn, Massachusetts, for the last two years, and early this year it expanded its capacity and began to send samples to customers for field testing.

In addition to large utilities, the company is working with municipalities, generator manufacturers, and providers of behind-the-meter power for residential and commercial buildings. The company is also in discussions with large chemical manufacturers and metal processing plants to provide energy storage systems to reduce their carbon footprints, something they say was not feasible with lithium-ion batteries, due to their flammability, or with nonlithium batteries, due to their large space requirements.

Another critical area is data centers. With the growth of AI, the demand for data centers — and their energy consumption — is set to surge.

“We must power the AI and digitization revolution without compromising our planet,” says Varanasi, adding that lithium batteries are unsuitable for co-location with data centers due to flammability risks. “Alsym batteries are well-positioned to offer a safer, more sustainable alternative. Intermittency is also a key issue for electrolyzers used in green hydrogen production and other markets.”

Varanasi sees Alsym as a platform company, and Chatter says Alsym is already working on other battery chemistries that have higher densities and maintain performance at even more extreme temperatures.

“When you use a single material in any battery, and the whole world starts to use it, you run out of that material,” Varanasi says. “What we have is a platform that has enabled us to come up with not just one chemistry, but at least three or four chemistries targeted at different applications, so no one particular set of materials will be stressed in terms of supply.”

Tunable ultrasound propagation in microscale metamaterials

Acoustic metamaterials — architected materials that have tailored geometries designed to control the propagation of acoustic or elastic waves through a medium — have been studied extensively through computational and theoretical methods. Physical realizations of these materials to date have been restricted to large sizes and low frequencies.

“The multifunctionality of metamaterials — being simultaneously lightweight and strong while having tunable acoustic properties — makes them great candidates for use in extreme-condition engineering applications,” explains Carlos Portela, the Robert N. Noyce Career Development Chair and assistant professor of mechanical engineering at MIT. “But challenges in miniaturizing and characterizing acoustic metamaterials at high frequencies have hindered progress towards realizing advanced materials that have ultrasonic-wave control capabilities.”

A new study coauthored by Portela; Rachel Sun, Jet Lem, and Yun Kai of the MIT Department of Mechanical Engineering (MechE); and Washington DeLima of the U.S. Department of Energy Kansas City National Security Campus presents a design framework for controlling ultrasound wave propagation in microscopic acoustic metamaterials. A paper on the work, “Tailored Ultrasound Propagation in Microscale Metamaterials via Inertia Design,” was recently published in the journal Science Advances. 

“Our work proposes a design framework based on precisely positioning microscale spheres to tune how ultrasound waves travel through 3D microscale metamaterials,” says Portela. “Specifically, we investigate how placing microscopic spherical masses within a metamaterial lattice affects how fast ultrasound waves travel throughout, ultimately leading to wave guiding or focusing responses.”

Through nondestructive, high-throughput laser-ultrasonics characterization, the team experimentally demonstrates tunable elastic-wave velocities within microscale materials. They use the varied wave velocities to spatially and temporally tune wave propagation in microscale materials, also demonstrating an acoustic demultiplexer (a device that separates one acoustic signal into multiple output signals). The work paves the way for microscale devices and components that could be useful for ultrasound imaging or information transmission via ultrasound.

“Using simple geometrical changes, this design framework expands the tunable dynamic property space of metamaterials, enabling straightforward design and fabrication of microscale acoustic metamaterials and devices,” says Portela.

The research also advances experimental capabilities, including fabrication and characterization, of microscale acoustic metamaterials for applications in medical ultrasound and mechanical computing. It also underscores the underlying mechanics of ultrasound wave propagation in metamaterials, showing how dynamic properties can be tuned via simple geometric changes and described as a function of changes in mass and stiffness. More importantly, the framework is amenable to other fabrication techniques beyond the microscale, requiring merely a single constituent material and one base 3D geometry to attain largely tunable properties.

“The beauty of this framework is that it fundamentally links physical material properties to geometric features. By placing spherical masses on a spring-like lattice scaffold, we could create direct analogies for how mass affects quasi-static stiffness and dynamic wave velocity,” says Sun, first author of the study. “I realized that we could obtain hundreds of different designs and corresponding material properties regardless of whether we vibrated or slowly compressed the materials.”
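
The mass-and-stiffness intuition in that quote can be made concrete with the textbook one-dimensional mass-spring chain, whose long-wavelength wave speed is a·sqrt(k/m) for lattice spacing a, spring stiffness k, and node mass m. The values below are assumed for illustration; this toy model conveys the scaling, not the study’s 3D design framework.

```python
import math

# Textbook monatomic chain: dispersion w(q) = 2*sqrt(k/m)*|sin(q*a/2)|, so the
# long-wavelength wave speed is v = a*sqrt(k/m). Values assumed for illustration.

def wave_speed(stiffness, mass, spacing):
    """Long-wavelength elastic wave speed of a 1D mass-spring chain."""
    return spacing * math.sqrt(stiffness / mass)

a = 50e-6   # 50-micron unit cell (assumed)
k = 100.0   # effective strut stiffness in N/m (assumed)
for m in (1e-12, 4e-12, 9e-12):   # progressively heavier spheres at each node
    print(f"m = {m:.0e} kg -> v = {wave_speed(k, m, a):.0f} m/s")
# Quadrupling the nodal mass halves the wave speed: v scales as 1/sqrt(m),
# which is the lever an inertia-based design framework can exploit.
```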

Reality check on technologies to remove carbon dioxide from the air

In 2015, 195 nations plus the European Union signed the Paris Agreement and pledged to undertake plans designed to limit the global temperature increase to 1.5 degrees Celsius. Yet in 2023, the world exceeded that target for most, if not all, of the year — calling into question the long-term feasibility of achieving that target.

To do so, the world must reduce the levels of greenhouse gases in the atmosphere, and strategies for achieving levels that will “stabilize the climate” have been both proposed and adopted. Many of those strategies combine dramatic cuts in carbon dioxide (CO2) emissions with the use of direct air capture (DAC), a technology that removes CO2 from the ambient air. As a reality check, a team of researchers in the MIT Energy Initiative (MITEI) examined those strategies, and what they found was alarming: The strategies rely on overly optimistic — indeed, unrealistic — assumptions about how much CO2 could be removed by DAC. As a result, the strategies won’t perform as predicted. Nevertheless, the MITEI team recommends that work to develop the DAC technology continue so that it’s ready to help with the energy transition — even if it’s not the silver bullet that solves the world’s decarbonization challenge.

DAC: The promise and the reality

Including DAC in plans to stabilize the climate makes sense. Much work is now under way to develop DAC systems, and the technology looks promising. While companies may never run their own DAC systems, they can already buy “carbon credits” based on DAC. Today, a multibillion-dollar market exists in which entities or individuals that face high costs or excessive disruptions to reduce their own carbon emissions can pay others to take emissions-reducing actions on their behalf. Those actions can involve undertaking new renewable energy projects or “carbon-removal” initiatives such as DAC or afforestation/reforestation (planting trees in areas that have never been forested or that were forested in the past).

DAC-based credits are especially appealing for several reasons, explains Howard Herzog, a senior research engineer at MITEI. With DAC, measuring and verifying the amount of carbon removed is straightforward; the removal is immediate, unlike with planting forests, which may take decades to have an impact; and when DAC is coupled with CO2 storage in geologic formations, the CO2 is kept out of the atmosphere essentially permanently — in contrast to, for example, sequestering it in trees, which may one day burn and release the stored CO2.

Will current plans that rely on DAC be effective in stabilizing the climate in the coming years? To find out, Herzog and his colleagues Jennifer Morris and Angelo Gurgel, both MITEI principal research scientists, and Sergey Paltsev, a MITEI senior research scientist — all affiliated with the MIT Center for Sustainability Science and Strategy (CS3) — took a close look at the modeling studies on which those plans are based.

Their investigation identified three unavoidable engineering challenges that together lead to a fourth challenge — high costs for removing a single ton of CO2 from the atmosphere. The details of their findings are reported in a paper published in the journal One Earth on Sept. 20.

Challenge 1: Scaling up

When it comes to removing CO2 from the air, nature presents “a major, non-negotiable challenge,” notes the MITEI team: The concentration of CO2 in the air is extremely low — just 420 parts per million, or roughly 0.04 percent. In contrast, the CO2 concentration in flue gases emitted by power plants and industrial processes ranges from 3 percent to 20 percent. Companies now use various carbon capture and sequestration (CCS) technologies to capture CO2 from their flue gases, but capturing CO2 from the air is much more difficult. To explain, the researchers offer the following analogy: “The difference is akin to needing to find 10 red marbles in a jar of 25,000 marbles of which 24,990 are blue [the task representing DAC] versus needing to find about 10 red marbles in a jar of 100 marbles of which 90 are blue [the task for CCS].”

Given that low concentration, removing a single metric ton (tonne) of CO2 from air requires processing about 1.8 million cubic meters of air, which is roughly equivalent to the volume of 720 Olympic-sized swimming pools. And all that air must be moved across a CO2-capturing sorbent — a feat requiring large equipment. For example, one recently proposed design for capturing 1 million tonnes of CO2 per year would require an “air contactor” equivalent in size to a structure about three stories high and three miles long.
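
That 1.8-million-cubic-meter figure can be roughly reproduced from the 420 ppm concentration. In the sketch below, the 75 percent capture efficiency is our illustrative assumption; the other constants are standard:

```python
# Sanity check of "~1.8 million cubic meters of air per tonne of CO2 removed."
# The 75% capture efficiency is an assumption; other constants are standard.

ppm = 420e-6                 # CO2 mole fraction in ambient air
air_density = 1.2            # kg/m^3 near sea level
m_co2, m_air = 44.0, 28.97   # molar masses, g/mol

co2_per_m3 = air_density * ppm * (m_co2 / m_air)   # kg of CO2 per m^3 of air
for capture_eff in (1.0, 0.75):
    volume = 1000.0 / (co2_per_m3 * capture_eff)   # m^3 of air per tonne removed
    print(f"capture efficiency {capture_eff:.0%}: {volume/1e6:.2f} million m^3")
# 100% -> ~1.31 million m^3; 75% -> ~1.74 million m^3, close to the quoted ~1.8M
```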

Recent modeling studies project DAC deployment on the scale of 5 to 40 gigatonnes of CO2 removed per year. (A gigatonne equals 1 billion metric tonnes.) But in their paper, the researchers conclude that the likelihood of deploying DAC at the gigatonne scale is “highly uncertain.”

Challenge 2: Energy requirement

Given the low concentration of CO2 in the air and the need to move large quantities of air to capture it, it’s no surprise that even the best DAC processes proposed today would consume large amounts of energy — energy that’s generally supplied by a combination of electricity and heat. Including the energy needed to compress the captured CO2 for transportation and storage, most proposed processes require an equivalent of at least 1.2 megawatt-hours of electricity for each tonne of CO2 removed.

The source of that electricity is critical. For example, using coal-based electricity to drive an all-electric DAC process would generate 1.2 tonnes of CO2 for each tonne of CO2 captured. The result would be a net increase in emissions, defeating the whole purpose of the DAC. So clearly, the energy requirement must be satisfied using either low-carbon electricity or electricity generated using fossil fuels with CCS. All-electric DAC deployed at large scale — say, 10 gigatonnes of CO2 removed annually — would require 12,000 terawatt-hours of electricity, which is more than 40 percent of total global electricity generation today.
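
Both headline numbers in this challenge follow from the 1.2 MWh-per-tonne figure, together with the roughly 1 tonne of CO2 per MWh carbon intensity of coal power that the article’s own arithmetic implies:

```python
# Reproducing the article's energy arithmetic.

energy_per_tonne = 1.2   # MWh of electricity per tonne of CO2 removed
coal_intensity = 1.0     # tonnes CO2 per MWh of coal power (implied by the text)

# Coal-powered DAC: CO2 emitted per tonne captured -> a net increase
print(f"coal-powered DAC emits {energy_per_tonne * coal_intensity:.1f} t per t captured")

# All-electric DAC at 10 Gt/yr
tonnes = 10e9
electricity_twh = tonnes * energy_per_tonne / 1e6   # MWh -> TWh
print(f"10 Gt/yr requires {electricity_twh:,.0f} TWh of electricity")
# ~12,000 TWh, more than 40 percent of total global generation today
```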

Electricity consumption is expected to grow due to increasing overall electrification of the world economy, so low-carbon electricity will be in high demand for many competing uses — for example, in power generation, transportation, industry, and building operations. Using clean electricity for DAC instead of for reducing CO2 emissions in other critical areas raises concerns about the best uses of clean electricity.

Many studies assume that a DAC unit could also get energy from “waste heat” generated by some industrial process or facility nearby. In the MITEI researchers’ opinion, “that may be more wishful thinking than reality.” The heat source would need to be within a few miles of the DAC plant for transporting the heat to be economical; given its high capital cost, the DAC plant would need to run nonstop, requiring constant heat delivery; and heat at the temperature required by the DAC plant would have competing uses, for example, for heating buildings. Finally, if DAC is deployed at the gigatonne per year scale, waste heat will likely be able to provide only a small fraction of the needed energy.

Challenge 3: Siting

Some analysts have asserted that, because air is everywhere, DAC units can be located anywhere. But in reality, siting a DAC plant involves many complex issues. As noted above, DAC plants require significant amounts of energy, so having access to enough low-carbon energy is critical. Likewise, having nearby options for storing the removed CO2 is also critical. If storage sites or pipelines to such sites don’t exist, major new infrastructure will need to be built, and building new infrastructure of any kind is expensive and complicated, involving issues related to permitting, environmental justice, and public acceptability — issues that are, in the words of the researchers, “commonly underestimated in the real world and neglected in models.”

Two more siting needs must be considered. First, meteorological conditions must be acceptable. By definition, any DAC unit will be exposed to the elements, and factors like temperature and humidity will affect process performance and process availability. And second, a DAC plant will require some dedicated land — though how much is unclear, as the optimal spacing of units is as yet unresolved. Like wind turbines, DAC units need to be properly spaced to ensure maximum performance such that one unit is not sucking in CO2-depleted air from another unit.

Challenge 4: Cost

Considering the first three challenges, the final challenge is clear: the cost per tonne of CO2 removed is inevitably high. Recent modeling studies assume DAC costs as low as $100 to $200 per tonne of CO2 removed. But the researchers found evidence suggesting far higher costs.

To start, they cite typical costs for power plants and industrial sites that now use CCS to remove CO2 from their flue gases. The cost of CCS in such applications is estimated to be in the range of $50 to $150 per tonne of CO2 removed. As explained above, the far lower concentration of CO2 in the air will lead to substantially higher costs.

As explained under Challenge 1, the DAC units needed to capture the required amount of air are massive. The capital cost of building them will be high, given labor, materials, permitting costs, and so on. Some estimates in the literature exceed $5,000 per tonne captured per year.

Then there are the ongoing costs of energy. As noted under Challenge 2, removing 1 tonne of CO2 requires the equivalent of 1.2 megawatt-hours of electricity. If that electricity costs $0.10 per kilowatt-hour, the cost of just the electricity needed to remove 1 tonne of CO2 is $120. The researchers point out that assuming such a low price is “questionable,” given the expected increase in electricity demand, future competition for clean energy, and higher costs on a system dominated by renewable — but intermittent — energy sources.
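
The electricity line item scales linearly with the power price, as a quick check shows (prices other than the article’s $0.10 per kilowatt-hour are our illustrative additions):

```python
# Electricity cost per tonne removed: 1.2 MWh/tonne times the electricity price.

energy_mwh = 1.2
for price_per_kwh in (0.05, 0.10, 0.15):   # $/kWh; 0.10 is the article's figure
    cost = energy_mwh * 1000 * price_per_kwh
    print(f"${price_per_kwh:.2f}/kWh -> ${cost:,.0f} per tonne (electricity only)")
# $0.10/kWh reproduces the $120/tonne figure; higher prices scale in proportion.
```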

Then there’s the cost of storage, which is ignored in many DAC cost estimates.

Clearly, many considerations show that prices of $100 to $200 per tonne are unrealistic, and assuming such low prices will distort assessments of strategies, leading them to underperform going forward.

The bottom line

In their paper, the MITEI team calls DAC a “very seductive concept.” Using DAC to suck CO2 out of the air and generate high-quality carbon-removal credits can offset reduction requirements for industries that have hard-to-abate emissions. By doing so, DAC would minimize disruptions to key parts of the world’s economy, including air travel, certain carbon-intensive industries, and agriculture. However, the world would need to generate billions of tonnes of CO2 credits at an affordable price. That prospect doesn’t look likely. The largest DAC plant in operation today removes just 4,000 tonnes of CO2 per year, and the price to buy the company’s carbon-removal credits on the market today is $1,500 per tonne.

The researchers recognize that there is room for energy efficiency improvements in the future, but DAC units will always be subject to higher work requirements than CCS applied to power plant or industrial flue gases, and there is not a clear pathway to reducing work requirements much below the levels of current DAC technologies.

Nevertheless, the researchers recommend that work to develop DAC continue “because it may be needed for meeting net-zero emissions goals, especially given the current pace of emissions.” But their paper concludes with this warning: “Given the high stakes of climate change, it is foolhardy to rely on DAC to be the hero that comes to our rescue.”

20+ Best Slideshow & Photo Gallery Templates for DaVinci Resolve

Slideshows and photo galleries are a great addition to any video presentation. They serve as a storytelling vehicle and a way to keep viewers interested. You can also use them to transition to a new scene.

These segments work wonderfully as the star of the show or as a bit player. Their flexibility is handy for product videos, documentaries, event recaps, and more.

Creating a slideshow or gallery from scratch can be time-consuming, though. Constructing a scene for your photos and adding effects will slow down even experienced video editors.

That’s why we love these DaVinci Resolve templates. All the hard work has already been done for you. They offer professional-grade effects and are easy to customize.

Add your photos and perhaps a bit of text. The result is a top-notch presentation that is sure to impress.

Look below and see which templates can improve your next video project.

Here’s a fun way to display your photos. This template includes an elaborate scene featuring your images hanging from a clothesline. It’s a unique effect that will have viewers talking. It is a perfect choice for family photo albums.

Family Photo DaVinci Resolve Slideshow Template

Show off your best work with a portfolio video slideshow template. Inside, you’ll find a place to list your skills, biography, and contact information. There’s also space to add examples of your work.

Portfolio Slideshow DaVinci Resolve Template

This video template is designed to help you relive the best moments of your vacation. It’s also a great choice for travel bloggers or hospitality companies. It features vintage film effects and sunny transitions.

Holiday Slideshow for DaVinci Resolve

Use this flexible template to recap a recent event or create a corporate presentation. Its clean, modern style also works well for video introductions. You’ll find plenty of color and bold typography here.

Event Slideshow Template for DaVinci Resolve

Here’s a slideshow that blends modern and classic looks. Polaroid-inspired photos scroll by – complete with social media icons as decoration. Beautiful lens-flare effects are included to add a professional touch. Add your travel or family photos and enjoy.

Travel Photo Slideshow for DaVinci

This template features a simple and beautiful layout in a square viewport. Photos are highlighted with a variety of border shapes and backgrounds. You’ll also find plenty of smooth animations and fun special effects.

Minimalist Photo Gallery for DaVinci Resolve

Are you looking for a unique effect? This template includes awe-inspiring 3D parallax animation. It adds depth and a new perspective to your static images. Choose from three macro presets to create just the right look.

3D Photo Gallery DaVinci Resolve Template

Bring a retro vibe to your videos with a Polaroid slideshow. Place your photos within the iconic frame and evoke memories of good times. Use it for family photos, reunions, or anywhere else you want to spread cheer.

Polaroid Slideshow Template for DaVinci Resolve

This wedding slideshow template will help you share memories from a special day. The package includes stunning reveal effects and a place for captions. The happy couple, friends, and family will be amazed at the results.

Wedding Slideshow Template for DaVinci Resolve

Introduce your team via this slick corporate slideshow. The template is modular and easy to customize with photos and text. The included vertical and horizontal versions help you target mobile and desktop devices.

Corporate Slideshow for DaVinci Resolve

This vintage slideshow template adds a classic cinematic look to any photo or video. The effects and typography used here are perfect for celebrating the past. It’s like a bit of Hollywood magic is within your reach.

Vintage Slideshow DaVinci Resolve Template

Want to add bold colors to your video? Check out this eye-catching magazine template. See colorful blocks come together as your images and text are displayed. There’s a lot of modern charm for viewers to admire.

Magazine Style DaVinci Resolve Slideshow

Here’s proof of how powerful a filmstrip can be. Add your photos to this template with vintage film effects and frayed borders. A gentle scrolling effect is easy on the eyes and creates a classic presentation style.

Film Strip Slideshow for DaVinci

Fun and unique, this slideshow template will add personality to your photos. A mix of geometric shapes and exciting animation effects make a compelling result. There’s also plenty of space for adding custom text.

Abstract Slideshow for DaVinci Resolve

Parallax effects are popular in web design but also great in video production. This video slideshow template makes adding the effect to your photos easy. You’ll find three versions included with room for dozens of photos.

Parallax Slideshow Template for DaVinci Resolve

Use this template to feature your action shots or adventure videos. It’s a fast-paced presentation that goes well with sports, outdoor lifestyle, or health-related presentations. There’s enough energy here to inspire viewers to get up and get moving.

Dynamic Slideshow for DaVinci Resolve

Historical photos are a perfect fit for this photo gallery template. The included effects evoke the past with film textures and dreamlike animations. You might use this one for a video timeline or family history project.

Retro Photo Gallery Template for DaVinci Resolve

Those looking to add dramatic flair to their photos should look no further. This cinematic slideshow includes top-notch professional effects that keep viewers glued to their screens. Use it for introductions, closings, or teaser videos. The possibilities are pretty much endless!

Cinematic Slideshow for DaVinci Resolve

Share your good times with the world using this social media template. It features a vertical viewport for easy viewing on mobile devices. There are also fun shapes and effects for dressing up your images.

Social Media DaVinci Resolve Slideshow

This template is built to help you share your company’s history. Create a beautiful timeline, complete with professional animation and an easy-to-read layout. It would make a great feature on your website’s “About Us” page or as an introduction to a corporate presentation.

Year-in-Review Slideshow Template for DaVinci Resolve

An Easy Way to Add Custom Galleries and Slideshows

You don’t need to be a pro to create a high-quality video gallery or slideshow. The templates in this collection offer a range of styles in an easy-to-edit format. Everything from classic to modern is available.

Choose your favorites from our collection and download them. You may find yourself using them again and again.



A bioinspired capsule can pump drugs directly into the walls of the GI tract

Inspired by the way that squids use jets to propel themselves through the ocean and shoot ink clouds, researchers from MIT and Novo Nordisk have developed an ingestible capsule that releases a burst of drugs directly into the wall of the stomach or other organs of the digestive tract.

This capsule could offer an alternative way to deliver drugs that normally have to be injected, such as insulin and other large proteins, including antibodies. This needle-free strategy could also be used to deliver RNA, either as a vaccine or a therapeutic molecule to treat diabetes, obesity, and other metabolic disorders.

“One of the longstanding challenges that we’ve been exploring is the development of systems that enable the oral delivery of macromolecules that usually require an injection to be administered. This work represents one of the next major advances in that progression,” says Giovanni Traverso, director of the Laboratory for Translational Engineering and an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, an associate member of the Broad Institute, and the senior author of the study.

Traverso and his students at MIT developed the new capsule along with researchers at Brigham and Women’s Hospital and Novo Nordisk. Graham Arrick SM ’20 and Novo Nordisk scientists Drago Sticker and Aghiad Ghazal are the lead authors of the paper, which appears today in Nature.

Inspired by cephalopods

Drugs that consist of large proteins or RNA typically can’t be taken orally because they are easily broken down in the digestive tract. For several years, Traverso’s lab has been working on ways to deliver such drugs orally by encapsulating them in small devices that protect the drugs from degradation and then inject them directly into the lining of the digestive tract.

Most of these capsules use a small needle or set of microneedles to deliver drugs once the device arrives in the digestive tract. In the new study, Traverso and his colleagues wanted to explore ways to deliver these molecules without any kind of needle, which could reduce the possibility of tissue damage.

To achieve that, they took inspiration from cephalopods. Squids and octopuses can propel themselves by filling their mantle cavity with water, then rapidly expelling it through their siphon. By changing the force of water expulsion and pointing the siphon in different directions, the animals can control their speed and direction of travel. The siphon organ also allows cephalopods to shoot jets of ink, forming decoy clouds to distract predators.

The researchers came up with two ways to mimic this jetting action, using compressed carbon dioxide or tightly coiled springs to generate the force needed to propel liquid drugs out of the capsule. The gas or spring is kept in a compressed state by a carbohydrate trigger, which is designed to dissolve when exposed to humidity or an acidic environment such as the stomach. When the trigger dissolves, the gas or spring is allowed to expand, propelling a jet of drugs out of the capsule.

In a series of experiments using tissue from the digestive tract, the researchers calculated the pressures needed to expel the drugs with enough force that they would penetrate the submucosal tissue and accumulate there, creating a depot that would then release drugs into the tissue.

“Aside from the elimination of sharps, another potential advantage of high-velocity collimated jets is their robustness to localization issues. In contrast to a small needle, which needs to have intimate contact with the tissue, our experiments indicated that a jet may be able to deliver most of the dose from a distance or at a slight angle,” Arrick says.
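
The paper reports the measured pressure thresholds; the sketch below is only a back-of-the-envelope illustration of the physics, assuming an ideal lossless Bernoulli jet (v = sqrt(2ΔP/ρ)), a water-like liquid drug, and hypothetical driving pressures and nozzle diameter. None of these values come from the study.

```python
# Back-of-the-envelope jet estimate (illustration only, not the authors' model).
# Assumes an ideal lossless Bernoulli jet of a water-like liquid: v = sqrt(2*dP/rho).
import math

RHO_DRUG = 1000.0  # kg/m^3, assumed water-like drug formulation


def jet_velocity(delta_p_kpa: float) -> float:
    """Jet exit velocity in m/s for a driving pressure given in kPa."""
    return math.sqrt(2.0 * delta_p_kpa * 1e3 / RHO_DRUG)


def flow_rate_ul_per_s(delta_p_kpa: float, nozzle_diameter_um: float) -> float:
    """Volumetric flow in microliters per second through a circular nozzle."""
    area_m2 = math.pi * (nozzle_diameter_um * 1e-6 / 2.0) ** 2
    return jet_velocity(delta_p_kpa) * area_m2 * 1e9  # m^3/s -> uL/s


if __name__ == "__main__":
    for pressure_kpa in (100.0, 300.0, 1000.0):  # hypothetical pressures
        v = jet_velocity(pressure_kpa)
        q = flow_rate_ul_per_s(pressure_kpa, nozzle_diameter_um=200.0)  # assumed size
        print(f"{pressure_kpa:6.0f} kPa -> {v:5.1f} m/s, ~{q:7.1f} uL/s")
```

At these assumed numbers, a payload like the 80 microliters carried by the blueberry-sized capsule would empty in a fraction of a second, consistent with the burst-style delivery the researchers describe.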

The researchers also designed the capsules so that they can target different parts of the digestive tract. One version of the capsule, which has a flat bottom and a high dome, can sit on a surface, such as the lining of the stomach, and eject drug downward into the tissue. This capsule, which was inspired by previous research from Traverso’s lab on self-orienting capsules, is about the size of a blueberry and can carry 80 microliters of drug.

The second version has a tube-like shape that allows it to align itself within a long tubular organ such as the esophagus or small intestine. In that case, the drug is ejected out toward the side wall, rather than downward. This version can deliver 200 microliters of drug.

Made of metal and plastic, the capsules can pass through the digestive tract and are excreted after releasing their drug payload.

Needle-free drug delivery

In tests in animals, the researchers showed that they could use these capsules to deliver insulin, a GLP-1 receptor agonist similar to the diabetes drug Ozempic, and a type of RNA called short interfering RNA (siRNA). This type of RNA can be used to silence genes, making it potentially useful in treating many genetic disorders.

They also showed that the concentration of the drugs in the animals’ bloodstream reached levels on the same order of magnitude as those seen when the drugs were injected with a syringe, and they did not detect any tissue damage.

The researchers envision that the ingestible capsule could be used at home by patients who need to take insulin or other injected drugs frequently. In addition to making it easier to administer drugs, especially for patients who don’t like needles, this approach also eliminates the need to dispose of sharp needles. The researchers also created and tested a version of the device that could be attached to an endoscope, allowing doctors to use it in an endoscopy suite or operating room to deliver drugs to a patient.

“This technology is a significant leap forward in oral drug delivery of macromolecule drugs like insulin and GLP-1 agonists. While many approaches for oral drug delivery have been attempted in the past, they tend to be inefficient at achieving high bioavailability. Here, the researchers demonstrate the ability to deliver drugs with high bioavailability in animal models. This is an exciting approach which could be impactful for many biologics which are currently administered through injections or intravascular infusions,” says Omid Veiseh, a professor of bioengineering at Rice University, who was not involved in the research.

The researchers now plan to further develop the capsules, in hopes of testing them in humans.

The research was funded by Novo Nordisk, the Natural Sciences and Engineering Research Council of Canada, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital, and the U.S. Advanced Research Projects Agency for Health.

Undergraduates with family income below $200,000 can expect to attend MIT tuition-free starting in 2025

Undergraduates with family income below $200,000 can expect to attend MIT tuition-free starting next fall, thanks to newly expanded financial aid. Eighty percent of American households meet this income threshold.

And for the 50 percent of American families with income below $100,000, parents can expect to pay nothing at all toward the full cost of their students’ MIT education, which includes tuition as well as housing, dining, fees, and an allowance for books and personal expenses.

The $100,000 threshold is up from this year’s $75,000, while next year’s $200,000 threshold for tuition-free attendance is up from its current level of $140,000.

These new steps to enhance MIT’s affordability for students and families are the latest in a long history of efforts by the Institute to free up more resources to make an MIT education as affordable and accessible as possible. Toward that end, MIT has earmarked $167.3 million in need-based financial aid this year for undergraduate students — up some 70 percent from a decade ago.

“MIT’s distinctive model of education — intense, demanding, and rooted in science and engineering — has profound practical value to our students and to society,” MIT President Sally Kornbluth says. “As the Wall Street Journal recently reported, MIT is better at improving the financial futures of its graduates than any other U.S. college, and the Institute also ranks number one in the world for the employability of its graduates.” 

“The cost of college is a real concern for families across the board,” Kornbluth adds, “and we’re determined to make this transformative educational experience available to the most talented students, whatever their financial circumstances. So, to every student out there who dreams of coming to MIT: Don’t let concerns about cost stand in your way.”

MIT is one of only nine colleges in the U.S. that do not consider applicants’ ability to pay as part of the admissions process and that meet the full demonstrated financial need of all undergraduates. MIT does not expect students on aid to take loans, and, unlike many other institutions, it does not give an admissions advantage to the children of alumni or donors. Indeed, 18 percent of current MIT undergraduates are first-generation college students.

“We believe MIT should be the preeminent destination for the most talented students in the country interested in an education centered on science and technology, and accessible to the best students regardless of their financial circumstances,” says Stu Schmill, MIT’s dean of admissions and student financial services.

“With the need-based financial aid we provide today, our education is much more affordable now than at any point in the past,” adds Schmill, who graduated from MIT in 1986, “even though the ‘sticker price’ of MIT is higher now than it was when I was an undergraduate.”

Last year, the median annual cost paid by an MIT undergraduate receiving financial aid was $12,938, allowing 87 percent of students in the Class of 2024 to graduate debt-free. Those who did borrow graduated with median debt of $14,844. At the same time, graduates benefit from the lifelong value of an MIT degree, with an average starting salary of $126,438 for graduates entering industry, according to MIT’s most recent survey of its graduating students.

MIT’s endowment — made up of generous gifts made by individual alumni and friends — allows the Institute to provide this level of financial aid, both now and into the future.

“Today’s announcement is a powerful expression of how much our graduates value their MIT experience,” Kornbluth says, “because our ability to provide financial aid of this scope depends on decades of individual donations to our endowment, from generations of MIT alumni and other friends. In effect, our endowment is an inter-generational gift from past MIT students to the students of today and tomorrow.”

What MIT families can expect in 2025

As noted earlier: Starting next fall, for families with income below $100,000, with typical assets, parents can expect to pay nothing for the full cost of attendance, which includes tuition, housing, dining, fees, and allowances for books and personal expenses.

For families with income from $100,000 to $200,000, with typical assets, parents can expect to pay on a sliding scale from $0 up to a maximum of around $23,970, which is this year’s total cost for MIT housing, dining, fees, and allowances for books and personal expenses.

Put another way, next year all MIT families with income below $200,000 can expect to contribute well below $27,146, which is the annual average cost for in-state students to attend and live on campus at public universities in the US, according to the Education Data Initiative. And even among families with income above $200,000, many still receive need-based financial aid from MIT, based on their unique financial circumstances. Families can use MIT’s online calculators to estimate the cost of attendance for their specific family.
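
For a rough sense of how that sliding scale works, here is a minimal sketch assuming a simple linear ramp between the two published thresholds. The linear interpolation is an assumption made for illustration; MIT’s actual formula also weighs assets and other circumstances, so the online calculators remain the authoritative estimate.

```python
# Illustrative sliding-scale estimate of the parent contribution described above.
# The linear ramp between thresholds is this sketch's assumption, not MIT's formula.

FULL_COST_FREE_INCOME = 100_000  # below this, parents pay $0 of the full cost
TUITION_FREE_INCOME = 200_000    # below this, tuition is covered
MAX_NON_TUITION_COST = 23_970    # this year's housing, dining, fees, and allowances


def estimated_parent_contribution(family_income: float) -> float | None:
    """Rough dollar estimate; returns None above $200k, where aid is still
    possible but depends on individual circumstances."""
    if family_income < FULL_COST_FREE_INCOME:
        return 0.0
    if family_income <= TUITION_FREE_INCOME:
        fraction = (family_income - FULL_COST_FREE_INCOME) / (
            TUITION_FREE_INCOME - FULL_COST_FREE_INCOME
        )
        return fraction * MAX_NON_TUITION_COST  # assumed-linear sliding scale
    return None  # consult MIT's online calculators instead


if __name__ == "__main__":
    for income in (85_000, 125_000, 175_000, 250_000):
        print(f"${income:,} -> {estimated_parent_contribution(income)}")
```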

This past summer, MIT’s faculty-led Committee on Undergraduate Admissions and Financial Aid was publicly charged by President Kornbluth with undertaking a review of the Institute’s admissions and financial aid policies, to ensure that MIT remains as fully accessible as possible to all students, regardless of their financial circumstances. The steps announced today are the first recommendations from that review to be adopted.