Making classical music and math more accessible

Senior Holden Mui appreciates the details in mathematics and music. A well-written orchestral piece and a well-designed competitive math problem both require a certain flair and a well-tuned sense of how to keep an audience’s interest.

“People want fresh, new, non-recycled approaches to math and music,” he says. Mui sees his role as a guide of sorts, someone who can take his ideas for a musical composition or a math problem and share them with audiences in an engaging way. His ideas must make the transition from his mind to the page in as precise a way as possible. Details matter.

A double major in math and music from Lisle, Illinois, Mui believes it’s important to invite people into a creative process that allows a kind of conversation to occur between a piece of music he writes and his audience, for example. Or a math problem and the people who try to solve it. “Part of math’s appeal is its ability to reveal deep truths that may be hidden in simple statements,” he argues, “while contemporary classical music should be available for enjoyment by as many people as possible.”

Mui’s first experience at MIT was as a high school student in 2017, when he visited as a member of a high school math competition team attending an event hosted by MIT and Harvard University students. The following year, Mui met other students at math camps and began thinking seriously about what was next.

“I chose math as a major because it’s been a passion of mine since high school. My interest grew through competitions and continued to develop through research,” he says. “I chose MIT because it boasts one of the most rigorous and accomplished mathematics departments in the country.”

Mui is also a math problem writer for the Harvard-MIT Math Tournament (HMMT) and performs with Ribotones, a club that travels to places like retirement homes or public spaces on the Institute’s campus to play music for free. He cites French composer Maurice Ravel as one of his major musical influences.

Mui studies piano with Timothy McFarland, an artist affiliate at MIT, through the MIT Emerson/Harris Fellowship Program, and previously studied with Kate Nir and Matthew Hagle of the Music Institute of Chicago. He started piano at the age of five.

As a music student at MIT, Mui is involved in piano performance, chamber music, collaborative piano, the MIT Symphony Orchestra as a violist, conducting, and composition.

He enjoys the incredible variety available within MIT’s music program. “It offers everything from electronic music to world music studies,” he notes, “and has broadened my understanding and appreciation of music’s diversity.”

Collaborating to create

Throughout his academic career, Mui has found himself among like-minded students such as former Yale University undergraduate Andrew Wu. Together, Mui and Wu won an Emergent Ventures grant; in this collaboration, Mui wrote the music Wu would play. Wu described his experience with one of Mui’s compositions, “Poetry,” as “demanding serious focus and continued re-readings,” yielding nuances even after repeated listens.

Another of Mui’s compositions, “Landscapes,” was performed by MIT’s Symphony Orchestra in October 2024 and offered audiences opportunities to engage with the ideas he explores in his music.

One of the challenges Mui discovered early is that academic composers sometimes create music audiences might struggle to understand. “People often say that music is a universal language, but one of the most valuable insights I’ve gained at MIT is that music isn’t as universally experienced as one might think,” he says. “There are notable differences, for example, between Western music and world music.” 

This, Mui says, broadened his perspective on how to approach music and encouraged him to consider his audience more closely when composing. He treats music as an opportunity to invite people into how he thinks. 

Creative ideas, accessible outcomes

Mui understands the value of sharing his skills and ideas with others, crediting the MIT International Science and Technology Initiatives (MISTI) program with offering multiple opportunities for travel and teaching. “I’ve been on three MISTI trips during IAP [Independent Activities Period] to teach mathematics,” he says. 

Mui says it’s important to be flexible, dynamic, and adaptable in preparation for a fulfilling professional life. Music and math both demand the development of the kinds of soft skills that can help him succeed as a musician, composer, and mathematician.

“Creating math problems is surprisingly similar to writing music,” he argues. “In both cases, the work needs to be complex enough to be interesting without becoming unapproachable.” For Mui, designing original math problems is “like trying to write down an original melody.”

“To write math problems, you have to have seen a lot of math problems before. To write music, you have to know the literature — Bach, Beethoven, Ravel, Ligeti — as diverse a group of personalities as possible.”

A future in the notes and numbers

Mui points to the professional and personal virtues of exploring different fields. “It allows me to build a more diverse network of people with unique perspectives,” he says. “Professionally, having a range of experiences and viewpoints to draw on is invaluable; the broader my knowledge and network, the more insights I can gain to succeed.”

After graduating, Mui plans to pursue doctoral study in mathematics following the completion of a cryptography internship. “The connections I’ve made at MIT, and will continue to make, are valuable because they’ll be useful regardless of the career I choose,” he says. He wants to continue researching math he finds challenging and rewarding. As with his music, he wants to strike a balance between emotion and innovation.

“I think it’s important not to put all of one’s eggs in one basket,” he says. “One important figure that comes to mind is Isaac Newton, who split his time among three fields: physics, alchemy, and theology.” Mui’s path forward will inevitably include music and math. Whether crafting compositions or designing math problems, Mui seeks to invite others into a world where notes and numbers converge to create meaning, inspire connection, and transform understanding.

MIT welcomes Frida Polli as its next visiting innovation scholar

Frida Polli, a neuroscientist, entrepreneur, investor, and inventor known for her leading-edge contributions at the crossroads of behavioral science and artificial intelligence, is MIT’s new visiting innovation scholar for the 2024-25 academic year. She is the first visiting innovation scholar to be housed within the MIT Schwarzman College of Computing.

Polli began her career in academic neuroscience with a focus on multimodal brain imaging related to health and disease. She was a fellow at the Psychiatric Neuroimaging Group at Mass General Brigham and Harvard Medical School. She then joined the Department of Brain and Cognitive Sciences at MIT as a postdoc, where she worked with John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences.

Her research has won many awards, including a Young Investigator Award from the Brain and Behavior Research Foundation. She authored over 30 peer-reviewed articles, with notable publications in the Proceedings of the National Academy of Sciences, the Journal of Neuroscience, and Brain. She transitioned from academia to entrepreneurship by completing her MBA at the Harvard Business School (HBS) as a Robert Kaplan Life Science Fellow. During this time, she also won the Life Sciences Track and the Audience Choice Award in the 2010 MIT $100K Entrepreneurship competition as a member of Aukera Therapeutics.

After HBS, Polli launched pymetrics, which harnessed advancements in cognitive science and machine learning to develop analytics-driven decision-making and performance enhancement software for the human capital sector. She holds multiple patents for the technology developed at pymetrics, which she co-founded in 2012 and led as CEO until her successful exit in 2022. Pymetrics was a World Economic Forum Technology Pioneer and Global Innovator, an Inc. 5000 fastest-growing company, and a Forbes Artificial Intelligence 50 company. Polli and pymetrics also played a pivotal role in passing the first-in-the-nation algorithmic bias law — New York’s Automated Employment Decision Tool law — which went into effect in July 2023.

Making her return to MIT as a visiting innovation scholar, Polli is collaborating closely with Sendhil Mullainathan, the Peter de Florez Professor in the departments of Electrical Engineering and Computer Science and Economics, and a principal investigator in the Laboratory for Information and Decision Systems. With Mullainathan, she is working to bring together a broad array of faculty, students, and postdocs across MIT to address concrete problems where humans and algorithms intersect, to develop a new subdomain of computer science specific to behavioral science, and to train the next generation of scientists to be bilingual in these two fields.

“Sometimes you get lucky, and sometimes you get unreasonably lucky. Frida has thrived in each of the facets we’re looking to have impact in — academia, civil society, and the marketplace. She combines a startup mentality with an abiding interest in positive social impact, while capable of ensuring the kind of intellectual rigor MIT demands. It’s an exceptionally rare combination, one we are unreasonably lucky to have,” says Mullainathan.

“People are increasingly interacting with algorithms, often with poor results, because most algorithms are not built with human interplay in mind,” says Polli. “We will focus on designing algorithms that will work synergistically with people. Only such algorithms can help us address large societal challenges in education, health care, poverty, et cetera.”

Polli was recognized as one of Inc.’s Top 100 Female Founders in 2019, followed by being named to Entrepreneur’s Top 100 Powerful Women in 2020, and to the 2024 list of 100 Brilliant Women in AI Ethics. Her work has been highlighted by major outlets including The New York Times, The Wall Street Journal, The Financial Times, The Economist, Fortune, Harvard Business Review, Fast Company, Bloomberg, and Inc.

Beyond her role at pymetrics, she founded Alethia AI in 2023, an organization focused on promoting transparency in technology, and in 2024, she launched Rosalind Ventures, dedicated to investing in women founders in science and health care. She is also an advisor at the Buck Institute’s Center for Healthy Aging in Women.

“I’m delighted to welcome Dr. Polli back to MIT. As a bilingual expert in both behavioral science and AI, she is a natural fit for the college. Her entrepreneurial background makes her a terrific inaugural visiting innovation scholar,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

Need a research hypothesis? Ask AI.

Crafting a unique and promising research hypothesis is a fundamental skill for any scientist. It can also be time consuming: New PhD candidates might spend the first year of their program trying to decide exactly what to explore in their experiments. What if artificial intelligence could help?

MIT researchers have created a way to autonomously generate and evaluate promising research hypotheses across fields, through human-AI collaboration. In a new paper, they describe how they used this framework to create evidence-driven hypotheses that align with unmet research needs in the field of biologically inspired materials.

Published Wednesday in Advanced Materials, the study was co-authored by Alireza Ghafarollahi, a postdoc in the Laboratory for Atomistic and Molecular Mechanics (LAMM), and Markus Buehler, the Jerry McAfee Professor in Engineering in MIT’s departments of Civil and Environmental Engineering and of Mechanical Engineering and director of LAMM.

The framework, which the researchers call SciAgents, consists of multiple AI agents, each with specific capabilities and access to data, that leverage “graph reasoning” methods, where AI models utilize a knowledge graph that organizes and defines relationships between diverse scientific concepts. The multi-agent approach mimics the way biological systems organize themselves as groups of elementary building blocks. Buehler notes that this “divide and conquer” principle is a prominent paradigm in biology at many levels, from materials to swarms of insects to civilizations — all examples where the total intelligence is much greater than the sum of individuals’ abilities.

“By using multiple AI agents, we’re trying to simulate the process by which communities of scientists make discoveries,” says Buehler. “At MIT, we do that by having a bunch of people with different backgrounds working together and bumping into each other at coffee shops or in MIT’s Infinite Corridor. But that’s very coincidental and slow. Our quest is to simulate the process of discovery by exploring whether AI systems can be creative and make discoveries.”

Automating good ideas

Large language models (LLMs) have demonstrated an impressive ability to answer questions, summarize information, and execute simple tasks. But they are quite limited when it comes to generating new ideas from scratch. The MIT researchers wanted to design a system that enabled AI models to perform a more sophisticated, multistep process that goes beyond recalling information learned during training, to extrapolate and create new knowledge.

The foundation of their approach is an ontological knowledge graph, which organizes and makes connections between diverse scientific concepts. To make the graphs, the researchers feed a set of scientific papers into a generative AI model. In previous work, Buehler used a field of math known as category theory to help the AI model develop abstractions of scientific concepts as graphs, rooted in defining relationships between components, in a way that could be analyzed by other models through a process called graph reasoning. This focuses AI models on developing a more principled way to understand concepts; it also allows them to generalize better across domains.
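The graph-reasoning idea can be sketched roughly in code. The following is a minimal illustration, not the study’s actual implementation: concepts become nodes, extracted relationships become labeled edges, and reasoning amounts to tracing paths between concepts. All concept names here are hypothetical stand-ins for what an LLM might extract from papers.

```python
# Illustrative sketch (not the authors' code): a knowledge graph stores
# scientific concepts as nodes and labeled relationships as directed edges,
# so that downstream agents can reason over paths between concepts.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # concept -> [(relation, concept), ...]

    def add_relation(self, source, relation, target):
        """Record a directed, labeled relationship between two concepts."""
        self.edges[source].append((relation, target))

    def path_between(self, start, goal):
        """Breadth-first search for a reasoning path linking two concepts."""
        queue = [(start, [start])]
        seen = {start}
        while queue:
            node, path = queue.pop(0)
            if node == goal:
                return path
            for _, nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
        return None

# Relations of the kind an LLM might extract from papers (hypothetical):
kg = KnowledgeGraph()
kg.add_relation("silk", "exhibits", "high tensile strength")
kg.add_relation("silk", "processed via", "energy-intensive spinning")
kg.add_relation("high tensile strength", "enables", "structural biomaterials")

print(kg.path_between("silk", "structural biomaterials"))
# ['silk', 'high tensile strength', 'structural biomaterials']
```

A path like this gives an agent an evidence-backed chain of concepts to build a hypothesis around, rather than asking the model to free-associate.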

“This is really important for us to create science-focused AI models, as scientific theories are typically rooted in generalizable principles rather than just knowledge recall,” Buehler says. “By focusing AI models on ‘thinking’ in such a manner, we can leapfrog beyond conventional methods and explore more creative uses of AI.”

For the most recent paper, the researchers used about 1,000 scientific studies on biological materials, but Buehler says the knowledge graphs could be generated using far more or fewer research papers from any field.

With the graph established, the researchers developed an AI system for scientific discovery, with multiple models specialized to play specific roles in the system. Most of the components were built off of OpenAI’s GPT-4 series models and made use of a technique known as in-context learning, in which prompts provide contextual information about the model’s role in the system while allowing it to learn from data provided.
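In practice, in-context role assignment of this kind typically looks like a role-fixing system prompt paired with task data in the user prompt. The sketch below uses the generic chat-message format common to LLM APIs; the role text and helper name are hypothetical, not the study’s actual prompts.

```python
# Illustrative sketch of in-context role prompting (the role text and
# function name are hypothetical, not the study's actual prompts): each
# agent is an LLM call whose system prompt fixes its role, while the task
# and supporting data arrive in the user prompt.
def build_agent_prompt(role_description, task, context):
    """Assemble the chat messages for one agent's LLM call."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": f"Context:\n{context}\n\nTask:\n{task}"},
    ]

messages = build_agent_prompt(
    role_description="You are the Ontologist. Define each scientific term "
                     "and the relationships between them.",
    task="Define the terms in this concept path and their connections.",
    context="silk -> energy-intensive processing -> biomaterials",
)
print(messages[0]["role"])  # system
```

Because the role lives entirely in the prompt, the same underlying model can serve as any agent in the system without retraining.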

The individual agents in the framework interact with each other to collectively solve a complex problem that none of them would be able to do alone. The first task they are given is to generate the research hypothesis. The LLM interactions start after a subgraph has been defined from the knowledge graph, which can happen randomly or by manually entering a pair of keywords discussed in the papers.

In the framework, a language model the researchers named the “Ontologist” is tasked with defining scientific terms in the papers and examining the connections between them, fleshing out the knowledge graph. A model named “Scientist 1” then crafts a research proposal based on factors like its ability to uncover unexpected properties and novelty. The proposal includes a discussion of potential findings, the impact of the research, and a guess at the underlying mechanisms of action. A “Scientist 2” model expands on the idea, suggesting specific experimental and simulation approaches and making other improvements. Finally, a “Critic” model highlights its strengths and weaknesses and suggests further improvements.
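The relay of roles described above can be summarized as a simple pipeline. In this sketch, `call_llm` is a hypothetical placeholder for a role-prompted model call; the real system's prompts and outputs are far richer, but the hand-off structure is the point.

```python
# Illustrative sketch of the agent pipeline described above. call_llm is a
# hypothetical stand-in for a role-prompted LLM call; here it just echoes
# the stage so the hand-off structure is visible.
def call_llm(role, prompt):
    return f"[{role}] output for: {prompt[:40]}"

def generate_hypothesis(subgraph):
    # Ontologist: define terms and relations in the selected subgraph.
    definitions = call_llm("Ontologist", f"Define terms and relations in {subgraph}")
    # Scientist 1: draft a proposal (findings, impact, mechanisms).
    proposal = call_llm("Scientist 1", f"Draft a research proposal from {definitions}")
    # Scientist 2: add experimental and simulation approaches.
    expanded = call_llm("Scientist 2", f"Add experiments and simulations to {proposal}")
    # Critic: highlight strengths, weaknesses, and improvements.
    critique = call_llm("Critic", f"Assess strengths and weaknesses of {expanded}")
    return {"proposal": expanded, "critique": critique}

result = generate_hypothesis("silk -- energy intensive")
print(result["critique"].startswith("[Critic]"))  # True
```

Each stage consumes the previous stage’s output, which is what lets the Critic push back on a proposal no single model would have questioned on its own.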

“It’s about building a team of experts that are not all thinking the same way,” Buehler says. “They have to think differently and have different capabilities. The Critic agent is deliberately programmed to critique the others, so you don’t have everybody agreeing and saying it’s a great idea. You have an agent saying, ‘There’s a weakness here, can you explain it better?’ That makes the output much different from single models.”

Other agents in the system are able to search existing literature, which provides the system with a way to not only assess feasibility but also create and assess the novelty of each idea.

Making the system stronger

To validate their approach, Buehler and Ghafarollahi built a knowledge graph based on the words “silk” and “energy intensive.” Using the framework, the “Scientist 1” model proposed integrating silk with dandelion-based pigments to create biomaterials with enhanced optical and mechanical properties. The model predicted the material would be significantly stronger than traditional silk materials and require less energy to process.

Scientist 2 then made suggestions, such as using specific molecular dynamic simulation tools to explore how the proposed materials would interact, adding that a good application for the material would be a bioinspired adhesive. The Critic model then highlighted several strengths of the proposed material and areas for improvement, such as its scalability, long-term stability, and the environmental impacts of solvent use. To address those concerns, the Critic suggested conducting pilot studies for process validation and performing rigorous analyses of material durability.

The researchers also conducted other experiments with randomly chosen keywords, which produced various original hypotheses about more efficient biomimetic microfluidic chips, enhancing the mechanical properties of collagen-based scaffolds, and the interaction between graphene and amyloid fibrils to create bioelectronic devices.

“The system was able to come up with these new, rigorous ideas based on the path from the knowledge graph,” Ghafarollahi says. “In terms of novelty and applicability, the materials seemed robust and novel. In future work, we’re going to generate thousands, or tens of thousands, of new research ideas, and then we can categorize them, try to understand better how these materials are generated and how they could be improved further.”

Going forward, the researchers hope to incorporate new tools for retrieving information and running simulations into their framework. They can also easily swap out the foundation models in the framework for more advanced models, allowing the system to adapt with the latest innovations in AI.

“Because of the way these agents interact, an improvement in one model, even if it’s slight, has a huge impact on the overall behaviors and output of the system,” Buehler says.

Since releasing a preprint with open-source details of their approach, the researchers have been contacted by hundreds of people interested in using the frameworks in diverse scientific fields and even areas like finance and cybersecurity.

“There’s a lot of stuff you can do without having to go to the lab,” Buehler says. “You want to basically go to the lab at the very end of the process. The lab is expensive and takes a long time, so you want a system that can drill very deep into the best ideas, formulating the best hypotheses and accurately predicting emergent behaviors. Our vision is to make this easy to use, so you can use an app to bring in other ideas or drag in datasets to really challenge the model to make new discoveries.”

Surface-based sonar system could rapidly map the ocean floor at high resolution

On June 18, 2023, the Titan submersible was about an hour-and-a-half into its two-hour descent to the Titanic wreckage at the bottom of the Atlantic Ocean when it lost contact with its support ship. This loss of communication set off a frantic search for the tourist submersible and five passengers onboard, located about two miles below the ocean’s surface.

Deep-ocean search and recovery is one of the many missions of military services like the U.S. Coast Guard Office of Search and Rescue and the U.S. Navy Supervisor of Salvage and Diving. For this mission, the longest delays come from transporting search-and-rescue equipment via ship to the area of interest and comprehensively surveying that area. A search operation on the scale of that for Titan — which was conducted 420 nautical miles from the nearest port and covered 13,000 square kilometers, an area roughly twice the size of Connecticut — could take weeks to complete. The search area for Titan is considered relatively small, focused on the immediate vicinity of the Titanic. When the area is less known, operations could take months. (A remotely operated underwater vehicle deployed by a Canadian vessel ended up finding the debris field of Titan on the seafloor, four days after the submersible had gone missing.)

A research team from MIT Lincoln Laboratory and the MIT Department of Mechanical Engineering’s Ocean Science and Engineering lab is developing a surface-based sonar system that could accelerate the timeline for small- and large-scale search operations to days. Called the Autonomous Sparse-Aperture Multibeam Echo Sounder, the system scans at surface-ship rates while providing sufficient resolution to find objects and features in the deep ocean, without the time and expense of deploying underwater vehicles. The echo sounder — which features a large sonar array using a small set of autonomous surface vehicles (ASVs) that can be deployed via aircraft into the ocean — holds the potential to map the seabed at 50 times the coverage rate of an underwater vehicle and 100 times the resolution of a surface vessel.

Video: Autonomous Sparse-Aperture Multibeam Echo Sounder (MIT Lincoln Laboratory)

“Our array provides the best of both worlds: the high resolution of underwater vehicles and the high coverage rate of surface ships,” says co–principal investigator Andrew March, assistant leader of the laboratory’s Advanced Undersea Systems and Technology Group. “Though large surface-based sonar systems at low frequency have the potential to determine the materials and profiles of the seabed, they typically do so at the expense of resolution, particularly with increasing ocean depth. Our array can likely determine this information, too, but at significantly enhanced resolution in the deep ocean.”

Underwater unknown

Oceans cover 71 percent of Earth’s surface, yet more than 80 percent of this underwater realm remains undiscovered and unexplored. Humans know more about the surface of other planets and the moon than the bottom of our oceans. High-resolution seabed maps would not only be useful to find missing objects like ships or aircraft, but also to support a host of other scientific applications: understanding Earth’s geology, improving forecasting of ocean currents and corresponding weather and climate impacts, uncovering archaeological sites, monitoring marine ecosystems and habitats, and identifying locations containing natural resources such as mineral and oil deposits.

Scientists and governments worldwide recognize the importance of creating a high-resolution global map of the seafloor; the problem is that no existing technology can achieve meter-scale resolution from the ocean surface. The average depth of our oceans is approximately 3,700 meters. However, today’s technologies capable of finding human-made objects on the seabed or identifying person-sized natural features — these technologies include sonar, lidar, cameras, and gravitational field mapping — have a maximum range of less than 1,000 meters through water.

Ships with large sonar arrays mounted on their hull map the deep ocean by emitting low-frequency sound waves that bounce off the seafloor and return as echoes to the surface. Operation at low frequencies is necessary because water readily absorbs high-frequency sound waves, especially with increasing depth; however, such operation yields low-resolution images, with each image pixel representing a football field in size. Resolution is also restricted because sonar arrays installed on large mapping ships are already using all of the available hull space, thereby capping the sonar beam’s aperture size.

By contrast, sonars on autonomous underwater vehicles (AUVs) that operate at higher frequencies within a few hundred meters of the seafloor generate maps with each pixel representing one square meter or less, resulting in 10,000 times more pixels in that same football field–sized area. However, this higher resolution comes with trade-offs: AUVs are time-consuming and expensive to deploy in the deep ocean, limiting the amount of seafloor that can be mapped; they have a maximum range of about 1,000 meters before their high-frequency sound gets absorbed; and they move at slow speeds to conserve power. The area-coverage rate of AUVs performing high-resolution mapping is about 8 square kilometers per hour; surface vessels map the deep ocean at more than 50 times that rate.
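The trade-offs above can be made concrete with rough, back-of-the-envelope numbers drawn from the figures in this article (the ~100-meter pixel size for ship sonar is an approximation inferred from "a football field in size"):

```python
# Rough numbers from the passage above. A ship-based low-frequency sonar
# pixel is roughly a football field (~100 m) on a side, while an AUV pixel
# is ~1 m on a side.
ship_pixel_m = 100.0   # approx. side length of one ship-sonar pixel
auv_pixel_m = 1.0      # approx. side length of one AUV-sonar pixel

# Pixels per ship-pixel-sized patch: the "10,000 times more pixels" figure.
pixel_ratio = (ship_pixel_m / auv_pixel_m) ** 2
print(pixel_ratio)  # 10000.0

auv_rate_km2_per_h = 8.0                        # AUV coverage rate
ship_rate_km2_per_h = 50 * auv_rate_km2_per_h   # >50x the AUV rate
search_area_km2 = 13_000                        # Titan search area

# Days of continuous mapping at each rate (idealized: no overlap, transit,
# or recharging), illustrating why AUV-only deep-ocean search is so slow:
print(search_area_km2 / auv_rate_km2_per_h / 24)   # ~67.7 days for an AUV
print(search_area_km2 / ship_rate_km2_per_h / 24)  # ~1.35 days for a ship
```

The idealized numbers overstate what either platform achieves in practice, but they show the two-orders-of-magnitude gap the new system aims to close: ship-like coverage with AUV-like resolution.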

A solution surfaces

The Autonomous Sparse-Aperture Multibeam Echo Sounder could offer a cost-effective approach to high-resolution, rapid mapping of the deep seafloor from the ocean’s surface. A collaborative fleet of about 20 ASVs, each hosting a small sonar array, effectively forms a single sonar array 100 times the size of a large sonar array installed on a ship. The large aperture achieved by the array (hundreds of meters) produces a narrow beam, which enables sound to be precisely steered to generate high-resolution maps at low frequency. Because very few sonars are installed relative to the array’s overall size (i.e., a sparse aperture), the cost is tractable.
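A quick calculation shows why a large aperture enables meter-scale resolution from the surface. The 12 kHz frequency and 300-meter aperture below are illustrative assumptions, not the system’s actual specifications; the scaling relation (angular resolution proportional to wavelength over aperture size) is the standard diffraction limit for an array.

```python
# Back-of-the-envelope beamwidth sketch. The frequency and aperture values
# are illustrative assumptions, not the system's actual specs. Angular
# resolution of an array scales as wavelength / aperture size.
sound_speed = 1500.0      # m/s, typical speed of sound in seawater
frequency = 12_000.0      # Hz, a common deep-ocean mapping frequency
aperture = 300.0          # m, "hundreds of meters" per the passage

wavelength = sound_speed / frequency   # 0.125 m
beamwidth_rad = wavelength / aperture  # ~4.2e-4 rad
depth = 3700.0                         # m, average ocean depth
footprint = beamwidth_rad * depth      # beam footprint on the seafloor

print(round(footprint, 2))  # 1.54 -> meter-scale pixels from the surface
```

A single hull-mounted array a few meters across would produce a footprint a hundred times wider at the same frequency and depth, which is why distributing a sparse array across a fleet of ASVs is so attractive.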

However, this collaborative and sparse setup introduces some operational challenges. First, for coherent 3D imaging, the relative position of each ASV’s sonar subarray must be accurately tracked through dynamic ocean-induced motions. Second, because the sonar elements are spaced apart rather than packed contiguously, the array suffers from a lower signal-to-noise ratio and is less able to reject noise coming from unintended directions. To mitigate these challenges, the team has been developing a low-cost precision-relative navigation system and leveraging acoustic signal processing tools and new ocean-field estimation algorithms. The MIT campus collaborators are developing algorithms for data processing and image formation, especially to estimate depth-integrated water-column parameters. These enabling technologies will help account for complex ocean physics, spanning physical properties like temperature, dynamic processes like currents and waves, and acoustic propagation factors like sound speed.

Processing for all required control and calculations could be completed either remotely or onboard the ASVs. For example, ASVs deployed from a ship or flying boat could be controlled and guided remotely from land via a satellite link or from a nearby support ship (with direct communications or a satellite link), and left to map the seabed for weeks or months at a time until maintenance is needed. Sonar-return health checks and coarse seabed mapping would be conducted on board, while full, high-resolution reconstruction of the seabed would require a supercomputing infrastructure on land or on a support ship.

“Deploying vehicles in an area and letting them map for extended periods of time without the need for a ship to return home to replenish supplies and rotate crews would significantly simplify logistics and operating costs,” says co–principal investigator Paul Ryu, a researcher in the Advanced Undersea Systems and Technology Group.

Since beginning their research in 2018, the team has turned their concept into a prototype. Initially, the scientists built a scale model of a sparse-aperture sonar array and tested it in a water tank at the laboratory’s Autonomous Systems Development Facility. Then, they prototyped an ASV-sized sonar subarray and demonstrated its functionality in Gloucester, Massachusetts. In follow-on sea tests in Boston Harbor, they deployed an 8-meter array containing multiple subarrays equivalent to 25 ASVs locked together; with this array, they generated 3D reconstructions of the seafloor and a shipwreck. Most recently, the team fabricated, in collaboration with Woods Hole Oceanographic Institution, a first-generation, 12-foot-long, all-electric ASV prototype carrying a sonar array underneath. With this prototype, they conducted preliminary relative navigation testing in Woods Hole, Massachusetts, and Newport, Rhode Island. Their full deep-ocean concept calls for approximately 20 such ASVs of a similar size, likely powered by wave or solar energy.

This work was funded through Lincoln Laboratory’s internally administered R&D portfolio on autonomous systems. The team is now seeking external sponsorship to continue development of their ocean floor–mapping technology, which was recognized with a 2024 R&D 100 Award. 

New autism research projects represent a broad range of approaches to achieving a shared goal

From studies of the connections between neurons to interactions between the nervous and immune systems to the complex ways in which people understand not just language, but also the unspoken nuances of conversation, new research projects at MIT supported by the Simons Center for the Social Brain are bringing a rich diversity of perspectives to advancing the field’s understanding of autism.

As six speakers lined up to describe their projects at a Simons Center symposium Nov. 15, MIT School of Science dean Nergis Mavalvala articulated what they were all striving for: “Ultimately, we want to seek understanding — not just the type that tells us how physiological differences in the inner workings of the brain produce differences in behavior and cognition, but also the kind of understanding that improves inclusion and quality of life for people living with autism spectrum disorders.”

Simons Center director Mriganka Sur, Newton Professor of Neuroscience in The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences (BCS), said that even though the field still lacks mechanism-based treatments or reliable biomarkers for autism spectrum disorders, he is optimistic about the discoveries and new research MIT has been able to contribute. MIT research has led to five clinical trials so far, and he praised the potential for future discovery, for instance in the projects showcased at the symposium.

“We are, I believe, at a frontier — at a moment where a lot of basic science is coming together with the vision that we could use that science for the betterment of people,” Sur said.

The Simons Center funds that basic science research in two main ways that each encourage collaboration, Sur said: large-scale projects led by faculty members across several labs, and fellowships for postdocs who are mentored by two faculty members, thereby bringing together two labs. The symposium featured talks and panel discussions by faculty and fellows leading new research.

In her remarks, Associate Professor Gloria Choi of The Picower Institute and BCS department described her collaboration’s efforts to explore the possibility of developing an autism therapy using the immune system. Previous research in mice by Choi and collaborator Jun Huh of Harvard Medical School has shown that injection of the immune system signaling molecule IL-17a into a particular region of the brain’s cortex can reduce neural hyperactivity and resulting differences in social and repetitive behaviors seen in autism model mice compared to non-autism models. Now Choi’s team is working on various ways to induce the immune system to target the cytokine to the brain by less invasive means than direct injection. One way under investigation, for example, is increasing the population of immune cells that produce IL-17a in the meningeal membranes that surround the brain.

Gloria Choi describes her team’s work to develop a potential immunotherapy for autism.

Photo: David Orenstein/Picower Institute


In a different vein, Associate Professor Ev Fedorenko of The McGovern Institute for Brain Research and BCS is leading a seven-lab collaboration aimed at understanding the cognitive and neural infrastructure that enables people to engage in conversation, which involves not only the language spoken but also facial expressions, tone of voice, and social context. Critical to this effort, she said, is going beyond previous work that studied each related brain area in isolation to understand the capability as a unified whole. A key insight, she said, is that these areas all lie near one another in the lateral temporal cortex.

“Going beyond these individual components, we can start asking big questions like: What are the broad organizing principles of this part of the brain?” Fedorenko said. “Why does it have this particular arrangement of areas, and how do these work together to exchange information to create the unified percept of another individual we’re interacting with?”

While Choi and Fedorenko are looking at factors that account for differences in social behavior in autism, Picower Professor Earl K. Miller of The Picower Institute and BCS is leading a project that focuses on another phenomenon: the feeling of sensory overload that many autistic people experience. Research in Miller’s lab has shown that the brain’s ability to make predictions about sensory stimuli, which is critical to filtering out mundane signals so attention can be focused on new ones, depends on a cortex-wide coordination of the activity of millions of neurons implemented by high-frequency “gamma” brain waves and lower-frequency “beta” waves. Working with animal models and human volunteers at Boston Children’s Hospital (BCH), Miller said his team is testing the idea that there may be a key difference in these brain wave dynamics in the autistic brain that could be addressed with closed-loop brain wave stimulation technology.

Simons postdoc Lukas Vogelsang, who is based in BCS Professor Pawan Sinha’s lab, is looking at potential differences in prediction between autistic and non-autistic individuals in a different way: through experiments with volunteers that aim to tease out how these differences are manifest in behavior. For instance, he’s finding that in at least one prediction task that requires participants to discern the probability of an event from provided cues, autistic people exhibit lower performance levels and undervalue the predictive significance of the cues, while non-autistic people slightly overvalue it. Vogelsang is co-advised by BCH researcher and Harvard Medical School Professor Charles Nelson.

Simons Center postdoc Chhavi Sood (with microphone) answers an audience question while fellow panelists Lace Riggs (left) and Lukas Vogelsang and moderator Michael Segel of Harvard University (right) listen.

Photo: David Orenstein/Picower Institute


Fundamentally, the broad-scale behaviors that emerge from coordinated brain-wide neural activity begin with the molecular details of how neurons connect with each other at circuit junctions called synapses. In her research based in The Picower Institute lab of Menicon Professor Troy Littleton, Simons postdoc Chhavi Sood is using the genetically manipulable model of the fruit fly to investigate how mutations in the autism-associated protein FMRP may alter the expression of molecular gates regulating ion exchange at the synapse, which would in turn affect how frequently and strongly a pre-synaptic neuron excites a post-synaptic one. The differences she is investigating may be a molecular mechanism underlying neural hyperexcitability in fragile X syndrome, a profound autism spectrum disorder.

In her talk, Simons postdoc Lace Riggs, based in The McGovern Institute lab of Poitras Professor of Neuroscience Guoping Feng, emphasized how many autism-associated mutations in synaptic proteins promote pathological anxiety. She described her research that is aimed at discerning where in the brain’s neural circuitry that vulnerability might lie. In her ongoing work, Riggs is zeroing in on a novel thalamocortical circuit between the anteromedial nucleus of the thalamus and the cingulate cortex, which she found drives anxiogenic states. Riggs is co-supervised by Professor Fan Wang.

After the wide-ranging talks, supplemented by further discussion at the panels, the last word came via video conference from Kelsey Martin, executive vice president of the Simons Foundation Autism Research Initiative. Martin emphasized that fundamental research, like that done at the Simons Center, is the key to developing future therapies and other means of supporting members of the autism community.

“We believe so strongly that understanding the basic mechanisms of autism is critical to being able to develop translational and clinical approaches that are going to impact the lives of autistic individuals and their families,” she said.

From studies of synapses to circuits to behavior, MIT researchers and their collaborators are striving for exactly that impact.

Physicists magnetize a material with light

MIT physicists have created a new and long-lasting magnetic state in a material, using only light.

In a study appearing today in Nature, the researchers report using a terahertz laser — a light source that oscillates more than a trillion times per second — to directly stimulate atoms in an antiferromagnetic material. The laser’s oscillations are tuned to the natural vibrations among the material’s atoms, in a way that shifts the balance of atomic spins toward a new magnetic state.

The results provide a new way to control and switch antiferromagnetic materials, which are of interest for their potential to advance information processing and memory chip technology.

In common magnets, known as ferromagnets, the spins of atoms point in the same direction, in a way that the whole can be easily influenced and pulled in the direction of any external magnetic field. In contrast, antiferromagnets are composed of atoms with alternating spins, each pointing in the opposite direction from its neighbor. This up, down, up, down order essentially cancels the spins out, giving antiferromagnets a net zero magnetization that is impervious to any magnetic pull.

If a memory chip could be made from antiferromagnetic material, data could be “written” into microscopic regions of the material, called domains. A certain configuration of spin orientations (for example, up-down) in a given domain would represent the classical bit “0,” and a different configuration (down-up) would mean “1.” Data written on such a chip would be robust against outside magnetic influence.
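The bit-encoding scheme described above can be sketched as a toy model. This is purely illustrative (the `encode`/`decode` functions and spin labels are hypothetical, not an actual device interface):

```python
# Toy illustration of antiferromagnetic-domain data storage: each domain's
# spin ordering (up-down vs. down-up) stands in for one classical bit.
def encode(bits):
    """Map bits to hypothetical per-domain spin configurations."""
    return [("up", "down") if b == 0 else ("down", "up") for b in bits]

def decode(domains):
    """Read the bits back from the spin configurations."""
    return [0 if d == ("up", "down") else 1 for d in domains]

data = [0, 1, 1, 0]
domains = encode(data)
assert decode(domains) == data  # round-trip: the "written" data reads back
```

The appeal of the scheme is that, unlike a ferromagnetic bit, each domain carries no net magnetization, so stray fields cannot flip the stored value.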

For this and other reasons, scientists believe antiferromagnetic materials could be a more robust alternative to existing magnetic-based storage technologies. A major hurdle, however, has been in how to control antiferromagnets in a way that reliably switches the material from one magnetic state to another.

“Antiferromagnetic materials are robust and not influenced by unwanted stray magnetic fields,” says Nuh Gedik, the Donner Professor of Physics at MIT. “However, this robustness is a double-edged sword; their insensitivity to weak magnetic fields makes these materials difficult to control.”

Using carefully tuned terahertz light, the MIT team was able to controllably switch an antiferromagnet to a new magnetic state. Antiferromagnets could be incorporated into future memory chips that store and process more data while using less energy and taking up a fraction of the space of existing devices, owing to the stability of magnetic domains.

“Generally, such antiferromagnetic materials are not easy to control,” Gedik says. “Now we have some knobs to be able to tune and tweak them.”

Gedik is the senior author of the new study, which also includes MIT co-authors Batyr Ilyas, Tianchuang Luo, Alexander von Hoegen, Zhuquan Zhang, and Keith Nelson, along with collaborators at the Max Planck Institute for the Structure and Dynamics of Matter in Germany, University of the Basque Country in Spain, Seoul National University, and the Flatiron Institute in New York.

Off balance

Gedik’s group at MIT develops techniques to manipulate quantum materials in which interactions among atoms can give rise to exotic phenomena.

“In general, we excite materials with light to learn more about what holds them together fundamentally,” Gedik says. “For instance, why is this material an antiferromagnet, and is there a way to perturb microscopic interactions such that it turns into a ferromagnet?”

In their new study, the team worked with FePS3 — a material that transitions to an antiferromagnetic phase at a critical temperature of around 118 kelvins (-247 degrees Fahrenheit).

The team suspected they might control the material’s transition by tuning into its atomic vibrations.

“In any solid, you can picture it as different atoms that are periodically arranged, and between atoms are tiny springs,” von Hoegen explains. “If you were to pull one atom, it would vibrate at a characteristic frequency which typically occurs in the terahertz range.”

The way in which atoms vibrate also relates to how their spins interact with each other. The team reasoned that if they could stimulate the atoms with a terahertz source that oscillates at the same frequency as the atoms’ collective vibrations, called phonons, the effect could also nudge the atoms’ spins out of their perfectly balanced, magnetically alternating alignment. Once knocked out of balance, atoms should have larger spins in one direction than the other, creating a preferred orientation that would shift the inherently nonmagnetized material into a new magnetic state with finite magnetization.

“The idea is that you can kill two birds with one stone: You excite the atoms’ terahertz vibrations, which also couples to the spins,” Gedik says.

Shake and write

To test this idea, the team worked with a sample of FePS3 that was synthesized by colleagues at Seoul National University. They placed the sample in a vacuum chamber, cooled it to temperatures at and below 118 K, and generated a terahertz pulse by aiming a beam of near-infrared light through an organic crystal, which transformed the light into terahertz frequencies. They then directed this terahertz light toward the sample.

“This terahertz pulse is what we use to create a change in the sample,” Luo says. “It’s like ‘writing’ a new state into the sample.”

To confirm that the pulse triggered a change in the material’s magnetism, the team also aimed two near-infrared lasers at the sample, each with an opposite circular polarization. If the terahertz pulse had no effect, the researchers should see no difference in the intensity of the transmitted infrared lasers.

“Just seeing a difference tells us the material is no longer the original antiferromagnet, and that we are inducing a new magnetic state, by essentially using terahertz light to shake the atoms,” Ilyas says.

Over repeated experiments, the team observed that a terahertz pulse successfully switched the previously antiferromagnetic material to a new magnetic state — a transition that persisted for a surprisingly long time, over several milliseconds, even after the laser was turned off.

“People have seen these light-induced phase transitions before in other systems, but typically they live for very short times on the order of a picosecond, which is a trillionth of a second,” Gedik says.
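As rough back-of-the-envelope arithmetic, using the order-of-magnitude timescales quoted above:

```python
# The new magnetic state persists for milliseconds; comparable light-induced
# phases typically last about a picosecond (a trillionth of a second).
persistence = 1e-3        # seconds: "several milliseconds" (order of magnitude)
typical_lifetime = 1e-12  # seconds: about a picosecond
ratio = persistence / typical_lifetime
print(f"longer-lived by a factor of roughly {ratio:.0e}")
```

That factor of roughly a billion is what makes the persistence experimentally useful, as the next paragraph explains.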

A window of a few milliseconds may give scientists enough time to probe the properties of the temporary new state before it settles back into its inherent antiferromagnetism. Then, they might be able to identify new knobs to tweak antiferromagnets and optimize their use in next-generation memory storage technologies.

This research was supported, in part, by the U.S. Department of Energy, Materials Science and Engineering Division, Office of Basic Energy Sciences, and the Gordon and Betty Moore Foundation. 

MIT engineers grow “high-rise” 3D chips

The electronics industry is approaching a limit to the number of transistors that can be packed onto the surface of a computer chip. So, chip manufacturers are looking to build up rather than out.

Instead of squeezing ever-smaller transistors onto a single surface, the industry is aiming to stack multiple surfaces of transistors and semiconducting elements — akin to turning a ranch house into a high-rise. Such multilayered chips could handle exponentially more data and carry out many more complex functions than today’s electronics.

A significant hurdle, however, is the platform on which chips are built. Today, bulky silicon wafers serve as the main scaffold on which high-quality, single-crystalline semiconducting elements are grown. Any stackable chip would have to include thick silicon “flooring” as part of each layer, slowing down any communication between functional semiconducting layers.

Now, MIT engineers have found a way around this hurdle, with a multilayered chip design that doesn’t require any silicon wafer substrates and works at temperatures low enough to preserve the underlying layer’s circuitry.

In a study appearing today in the journal Nature, the team reports using the new method to fabricate a multilayered chip with alternating layers of high-quality semiconducting material grown directly on top of each other.

The method enables engineers to build high-performance transistors and memory and logic elements on any random crystalline surface — not just on the bulky crystal scaffold of silicon wafers. Without these thick silicon substrates, multiple semiconducting layers can be in more direct contact, leading to better and faster communication and computation between layers, the researchers say.

The researchers envision that the method could be used to build AI hardware, in the form of stacked chips for laptops or wearable devices, that would be as fast and powerful as today’s supercomputers and could store huge amounts of data on par with physical data centers.

“This breakthrough opens up enormous potential for the semiconductor industry, allowing chips to be stacked without traditional limitations,” says study author Jeehwan Kim, associate professor of mechanical engineering at MIT. “This could lead to orders-of-magnitude improvements in computing power for applications in AI, logic, and memory.”

The study’s MIT co-authors include first author Ki Seok Kim, Seunghwan Seo, Doyoon Lee, Jung-El Ryu, Jekyung Kim, Jun Min Suh, June-chul Shin, Min-Kyu Song, Jin Feng, and Sangho Lee, along with collaborators from Samsung Advanced Institute of Technology, Sungkyunkwan University in South Korea, and the University of Texas at Dallas.

Seed pockets

In 2023, Kim’s group reported that they developed a method to grow high-quality semiconducting materials on amorphous surfaces, similar to the diverse topography of semiconducting circuitry on finished chips. The material that they grew was a type of 2D material known as transition-metal dichalcogenides, or TMDs, considered a promising successor to silicon for fabricating smaller, high-performance transistors. Such 2D materials can maintain their semiconducting properties even at scales as small as a single atom, whereas silicon’s performance sharply degrades.

In their previous work, the team grew TMDs on silicon wafers with amorphous coatings, as well as over existing TMDs. To encourage atoms to arrange themselves into high-quality single-crystalline form, rather than in random, polycrystalline disorder, Kim and his colleagues first covered a silicon wafer in a very thin film, or “mask” of silicon dioxide, which they patterned with tiny openings, or pockets. They then flowed a gas of atoms over the mask and found that atoms settled into the pockets as “seeds.” The pockets confined the seeds to grow in regular, single-crystalline patterns.

But at the time, the method only worked at around 900 degrees Celsius.

“You have to grow this single-crystalline material below 400 Celsius, otherwise the underlying circuitry is completely cooked and ruined,” Kim says. “So, our homework was, we had to do a similar technique at temperatures lower than 400 Celsius. If we could do that, the impact would be substantial.”

Building up

In their new work, Kim and his colleagues looked to fine-tune their method in order to grow single-crystalline 2D materials at temperatures low enough to preserve any underlying circuitry. They found a surprisingly simple solution in metallurgy — the science and craft of metal production. When metallurgists pour molten metal into a mold, the liquid slowly “nucleates,” or forms grains that grow and merge into a regularly patterned crystal that hardens into solid form. Metallurgists have found that this nucleation occurs most readily at the edges of a mold into which liquid metal is poured.

“It’s known that nucleating at the edges requires less energy — and heat,” Kim says. “So we borrowed this concept from metallurgy to utilize for future AI hardware.”

The team looked to grow single-crystalline TMDs on a silicon wafer that already has been fabricated with transistor circuitry. They first covered the circuitry with a mask of silicon dioxide, just as in their previous work. They then deposited “seeds” of TMD at the edges of each of the mask’s pockets and found that these edge seeds grew into single-crystalline material at temperatures as low as 380 degrees Celsius, compared to seeds that started growing in the center, away from the edges of each pocket, which required higher temperatures to form single-crystalline material.

Going a step further, the researchers used the new method to fabricate a multilayered chip with alternating layers of two different TMDs — molybdenum disulfide, a promising material candidate for fabricating n-type transistors; and tungsten diselenide, a material that has potential for being made into p-type transistors. Both p- and n-type transistors are the electronic building blocks for carrying out any logic operation. The team was able to grow both materials in single-crystalline form, directly on top of each other, without requiring any intermediate silicon wafers. Kim says the method will effectively double the density of a chip’s semiconducting elements, particularly of complementary metal-oxide semiconductor (CMOS) elements, a basic building block of modern logic circuitry.

“A product realized by our technique is not only a 3D logic chip but also 3D memory and their combinations,” Kim says. “With our growth-based monolithic 3D method, you could grow tens to hundreds of logic and memory layers, right on top of each other, and they would be able to communicate very well.”

“Conventional 3D chips have been fabricated with silicon wafers in-between, by drilling holes through the wafer — a process which limits the number of stacked layers, vertical alignment resolution, and yields,” first author Ki Seok Kim adds. “Our growth-based method addresses all of those issues at once.”

To commercialize their stackable chip design, Kim has recently spun off a company, FS2 (Future Semiconductor 2D materials).

“We have so far shown a concept at small-scale device arrays,” he says. “The next step is scaling up to show professional AI chip operation.”

This research is supported, in part, by Samsung Advanced Institute of Technology and the U.S. Air Force Office of Scientific Research. 

How humans continuously adapt while walking stably

Researchers have developed a model that explains how humans adapt continuously during complex tasks, like walking, while remaining stable.

The findings were detailed in a recent paper published in the journal Nature Communications authored by Nidhi Seethapathi, an assistant professor in MIT’s Department of Brain and Cognitive Sciences; Barrett C. Clark, a robotics software engineer at Bright Minds Inc.; and Manoj Srinivasan, an associate professor in the Department of Mechanical and Aerospace Engineering at Ohio State University.

In episodic tasks, like reaching for an object, errors during one episode do not affect the next episode. In tasks like locomotion, errors can have a cascade of short-term and long-term consequences for stability unless they are controlled. This makes the challenge of adapting locomotion in a new environment more complex.

“Much of our prior theoretical understanding of adaptation has been limited to episodic tasks, such as reaching for an object in a novel environment,” Seethapathi says. “This new theoretical model captures adaptation phenomena in continuous long-horizon tasks in multiple locomotor settings.”

To build the model, the researchers identified general principles of locomotor adaptation across a variety of task settings, and developed a unified modular and hierarchical model of locomotor adaptation, with each component having its own unique mathematical structure.

The resulting model successfully encapsulates how humans adapt their walking in novel settings such as on a split-belt treadmill with each foot at a different speed, wearing asymmetric leg weights, and wearing an exoskeleton. The authors report that the model successfully reproduced human locomotor adaptation phenomena across novel settings in 10 prior studies and correctly predicted the adaptation behavior observed in two new experiments conducted as part of the study.

The model has potential applications in sensorimotor learning, rehabilitation, and wearable robotics.

“Having a model that can predict how a person will adapt to a new environment has immense utility for engineering better rehabilitation paradigms and wearable robot control,” Seethapathi says. “You can think of a wearable robot itself as a new environment for the person to move in, and our model can be used to predict how a person will adapt for different robot settings. Understanding such human-robot adaptation is currently an experimentally intensive process, and our model could help speed up the process by narrowing the search space.”

Turning adversity into opportunity

Sujood Eldouma always knew she loved math; she just didn’t know how to use it for good in the world. 

But after a personal and educational journey that took her from Sudan to Cairo to London, all while leveraging MIT Open Learning’s online educational resources, she finally knows the answer: data science.

An early love of data

Eldouma grew up in Omdurman, Sudan, with her parents and siblings. She always had an affinity for STEM subjects, and at the University of Khartoum she majored in electrical and electronic engineering with a focus in control and instrumentation engineering.

In her second year at university, Eldouma struggled with her first coding courses in C++ and C#, which are general-purpose programming languages. When a teaching assistant introduced Eldouma and her classmates to MIT OpenCourseWare for additional support, she promptly worked through OpenCourseWare’s C++ and C courses in tandem with her in-person classes. This began Eldouma’s ongoing connection with the open educational resources available through MIT Open Learning.

OpenCourseWare, part of MIT Open Learning, offers a free collection of materials from thousands of MIT courses, spanning the entire curriculum. To date, Eldouma has explored over 20 OpenCourseWare courses, and she says it is a resource she returns to regularly.

Sujood from Sudan: An Open Learner’s Story
Video: MIT OpenCourseWare

“We started watching the videos and reading the materials, and it made our lives easier,” says Eldouma. “I took many OpenCourseWare courses in parallel with my classes throughout my undergrad, because we still did the same material. OpenCourseWare courses are structured differently and have different resources and textbooks, but at the end of the day it’s the same content.”

For her graduation thesis, Eldouma did a project on disaster response and management in complex contexts, because at the time, Sudan was suffering from heavy floods and the country had limited resources to respond.

“That’s when I realized I really love data, and I wanted to explore that more,” she says.

While Eldouma loves math, she always wanted to find ways to use it for good. Through the early exposure to data science and statistical methods at her university, she saw how data science leverages math for real-world impact.

After graduation, she took a job at the DAL Group, the largest Sudanese conglomerate, where she helped to incorporate data science and new technologies to automate processes within the company. When civil war erupted in Sudan in April 2023, life as Eldouma knew it was turned upside down, and her family was forced to make the difficult choice to relocate to Egypt.

Purpose in adversity

Soon after relocating to Egypt, Eldouma lost her job and found herself struggling to find purpose in the life circumstances she had been handed. Due to visa restrictions, challenges getting right-to-work permits, and a complicated employment market in Egypt, she was also unable to find a new job.

“I was sort of in a depressive episode, because of all that was happening,” she reflects. “It just hit me that I lost everything that I know, everything that I love. I’m in a new country. I need to start from scratch.”

Around this time, a friend who knew Eldouma was curious about data science sent her the link to apply to the MIT Emerging Talent Certificate in Computer and Data Science. With less than 24 hours before the application deadline, Eldouma hit “Submit.”

Finding community and joy through learning

Part of MIT Open Learning, MIT Emerging Talent at the MIT Jameel World Education Lab (J-WEL) develops global education programs that target the needs of talented individuals from challenging economic and social circumstances by equipping them with the knowledge and tools to advance their education and careers.

The Certificate in Computer and Data Science is a year-long online learning program that follows an agile continuous education model. It incorporates computer science and data analysis coursework from MITx, professional skill building, experiential learning, apprenticeship options, and opportunities for networking with MIT’s global community. The program is targeted toward refugees, migrants, and first-generation low-income students from historically marginalized backgrounds and underserved communities worldwide.

Although Eldouma had used data science in her role at the DAL Group, she was happy to have a proper introduction to the field and to find joy in learning again. She also found community, support, and inspiration from her classmates who were connected to each other not just by their academic pursuits, but by their shared life challenges. The cohort of 100 students stayed in close contact through the program, both for casual conversation and for group work.

“In the final step of the Emerging Talent program, learners apply their computer and data knowledge in an experiential learning opportunity,” says Megan Mitchell, associate director for Pathways for Talent and acting director of J-WEL. “The experiential learning opportunity takes the form of an internship, apprenticeship, or an independent or collaborative project, and allows students to apply their knowledge in real-world settings and build practical skills.”

Determined to apply her newly acquired knowledge in a meaningful way, Eldouma and fellow displaced Sudanese classmates designed a project to help solve a problem in their home country. The group identified access to education as a major problem facing Sudanese people, with schooling disrupted due to the conflict. Focusing on the higher education audience, the group partnered with community platform Nas Al Sudan to create a centralized database where students can search for scholarships and other opportunities to continue their education.

Eldouma completed the MIT Emerging Talent program in June 2024 with a clear vision to pursue a career in data science, and the confidence to achieve that goal. In fact, she had already taken the steps to get there: halfway through the certificate program, she applied and was accepted to both the MITx MicroMasters program in Statistics and Data Science at Open Learning and the London School of Economics (LSE) Master of Science in Data Science.

In January 2024, Eldouma started the MicroMasters program with 12 of her Emerging Talent peers. While the MIT Emerging Talent program is focused on undergraduate-level, introductory computer and data science material, the MicroMasters program in Statistics and Data Science is graduate-level learning. MicroMasters programs are a series of courses that provide deep learning in a specific career field, and learners who successfully earn the credential may receive academic credit at universities around the world. This makes the credential a pathway to over 50 master’s degree programs and other advanced degrees, including at MIT. Eldouma believes that her experience in the MicroMasters courses prepared her well for the expectations of the LSE program.

After finishing the MicroMasters and LSE programs, Eldouma aspires to a career using data science to better understand what is happening on the African continent from an economic and social point of view. She hopes to contribute to solutions to conflicts across the region. And, someday, she hopes to move back to Sudan.

“My family’s roots are there. I have memories there,” she says. “I miss walking in the street and the background noise is the same language that I am thinking in. I don’t think I will ever find that in any place like Sudan.”

Miracle, or marginal gain?

From 1960 to 1989, South Korea experienced a famous economic boom, with real GDP per capita growing by an annual average of 6.82 percent. Many observers have attributed this to industrial policy, the practice of giving government support to specific industrial sectors. In this case, industrial policy is often thought to have powered a generation of growth.

Did it, though? An innovative study by four scholars, including two MIT economists, suggests that overall GDP growth attributable to industrial policy is relatively limited. Using global trade data to evaluate changes in industrial capacity within countries, the research finds that industrial policy raises long-run GDP by only 1.08 percent in generally favorable circumstances, and up to 4.06 percent if additional factors are aligned — a distinctly smaller gain than an annually compounding rate of 6.82 percent.
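The size of that gap can be checked with quick compounding arithmetic, using the figures reported above (the 29-year span is an assumption for growth from 1960 through 1989):

```python
# South Korea's boom: average annual real GDP per capita growth of 6.82%.
annual_rate = 0.0682
years = 29  # 1960 through 1989
cumulative = (1 + annual_rate) ** years - 1
print(f"cumulative growth over the boom: {cumulative:.0%}")  # level rises nearly sevenfold

# Versus the study's estimated one-time long-run GDP gains from industrial policy:
textbook_gain = 0.0108   # generally favorable circumstances
best_case_gain = 0.0406  # with additional factors aligned
print(f"textbook gain: {textbook_gain:.2%}, best case: {best_case_gain:.2%}")
```

Even the best-case level gain of about 4 percent is smaller than a single year of the boom's average growth, which is the sense in which industrial policy cannot account for the full South Korean experience.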

The study is meaningful not just because of the bottom-line numbers, but for the reasons behind them. The research indicates, for instance, that local consumer demand can curb the impact of industrial policy. Even when a country alters its output, demand for those goods may not shift as extensively, putting a ceiling on directed growth.

“In most cases, the gains are not going to be enormous,” says MIT economist Arnaud Costinot, co-author of a new paper detailing the research. “They are there, but in terms of magnitude, the gains are nowhere near the full scope of the South Korean experience, which is the poster child for an industrial policy success story.”

The research combines empirical data and economic theory, using data to assess “textbook” conditions where industrial policy would seem most merited.

“Many think that, for countries like China, Japan, and other East Asian giants, and perhaps even the U.S., some form of industrial policy played a big role in their success stories,” says Dave Donaldson, an MIT economist and another co-author of the paper. “The question is whether the textbook argument for industrial policy fully explains those successes, and our punchline would be, no, we don’t think it can.”

The paper, “The Textbook Case for Industrial Policy: Theory Meets Data,” appears in the Journal of Political Economy. The authors are Dominick Bartelme, an independent researcher; Costinot, the Ford Professor of Economics in MIT’s Department of Economics; Donaldson, the Class of 1949 Professor of Economics in MIT’s Department of Economics; and Andres Rodriguez-Clare, the Edward G. and Nancy S. Jordan Professor of Economics at the University of California at Berkeley.

Reverse-engineering new insights

Opponents of industrial policy have long advocated a more market-centered approach to economics. And yet, over the last several decades, even where political leaders have publicly backed a laissez-faire approach, many governments around the world have still found reasons to support particular industries. Beyond that, East Asia's economic rise has long been cited as a point in favor of industrial policy.

The scholars say the “textbook case” for industrial policy is a scenario where some economic sectors are subject to external economies of scale but others are not. That means firms within an industry have an external effect on the productivity of other firms in that same industry, for instance through the spread of knowledge.

If an industry becomes both bigger and more productive, it may make cheaper goods that can be exported more competitively. The study is based on the insight that global trade statistics can tell us something important about the changes in industry-specific capacities within countries. That — combined with other metrics about national economies — allows the economists to scrutinize the overall gains deriving from those changes and to assess the possible scope of industrial policies.

As Donaldson explains, “An empirical lever here is to ask: If something makes a country’s sectors bigger, do they look more productive? If so, they would start exporting more to other countries. We reverse-engineer that.”

Costinot adds: “We are using that idea that if productivity is going up, that should be reflected in export patterns. The smoking gun for the existence of scale effects is that larger domestic markets go hand in hand with more exports.”

Ultimately, the scholars analyzed data for 61 countries at different points in time over the last few decades, covering exports in 15 manufacturing sectors. The 1.08 percent figure for long-run GDP gains is an average, with individual countries realizing gains ranging from 0.59 percent to 2.06 percent under favorable conditions. Smaller countries that are open to trade may realize larger proportional effects as well.

“We’re doing this global analysis and trying to be right on average,” Donaldson says. “It’s possible there are larger gains from industrial policy in particular settings.”

The study also suggests that, given varying levels of productivity among industries, countries have more room to redirect economic activity than relatively fixed demand realistically allows them to exploit. The paper estimates that if countries could fully reallocate workers to the industry with the most room to grow, long-run welfare gains would be as high as 12.4 percent.

But that never happens. Suppose a country’s industrial policy helped one sector double in size while becoming 20 percent more productive. In theory, the government should continue to back that industry. In reality, growth would slow as markets became saturated.

“That would be a pretty big scale effect,” Donaldson says. “But notice that in doubling the size of an industry, many forces would push back. Maybe consumers don’t want to consume twice as many manufactured goods. Just because there are large spillovers in productivity doesn’t mean optimally designed industrial policy has huge effects. It has to be in a world where people want those goods.”

Place-based policy

Costinot and Donaldson both emphasize that this study does not address all the possible factors that can be weighed either in favor of industrial policy or against it. Some governments might favor industrial policy as a way of evening out wage distributions and wealth inequality, fixing other market failures such as environmental damage, or furthering strategic geopolitical goals. In the U.S., industrial policy has sometimes been viewed as a way of revitalizing recently deindustrialized areas while reskilling workers.

In charting the limits that fairly fixed demand places on industrial policy, the study touches on still bigger issues concerning global demand and constraints on growth of any kind. Without growing demand, enterprises of all kinds run up against limits on their size.

The upshot of the paper, in any case, is not a final verdict on industrial policy, but deeper insight into its dynamics. As the authors note, the findings leave open the possibility that targeted interventions in specific sectors and regions could be very beneficial, when policy and trade conditions are right. Policymakers, however, should have a realistic sense of how much growth such policies are likely to produce.

As Costinot notes, “The conclusion is not that there is no potential gain from industrial policy, but just that the textbook case doesn’t seem to be there.” At least, not to the extent some have assumed.

The research was supported, in part, by the U.S. National Science Foundation.