Enhancing LLM collaboration for smarter, more efficient solutions

Ever been asked a question you only knew part of the answer to? To give a more informed response, your best move would be to phone a friend with more knowledge on the subject.

This collaborative process can also help large language models (LLMs) improve their accuracy. Still, it’s been difficult to teach LLMs to recognize when they should collaborate with another model on an answer. Instead of using complex formulas or large amounts of labeled data to spell out where models should work together, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have envisioned a more organic approach.

Their new algorithm, called “Co-LLM,” can pair a general-purpose base LLM with a more specialized model and help them work together. As the former crafts an answer, Co-LLM reviews each word (or token) within its response to see where it can call upon a more accurate answer from the expert model. This process leads to more accurate replies to things like medical prompts and math and reasoning problems. Since the expert model is not needed at each iteration, this also leads to more efficient response generation.

To decide when a base model needs help from an expert model, the framework uses machine learning to train a “switch variable,” or a tool that can indicate the competence of each word within the two LLMs’ responses. The switch is like a project manager, finding areas where it should call in a specialist. If you asked Co-LLM to name some examples of extinct bear species, for instance, two models would draft answers together. The general-purpose LLM begins to put together a reply, with the switch variable intervening at the parts where it can slot in a better token from the expert model, such as adding the year when the bear species became extinct.
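For readers who want to see the mechanics, the deferral loop can be sketched in a few lines of Python. This is an illustrative sketch only: the stub models, vocabulary size, and 0.5 threshold are placeholders, and Co-LLM trains its switch with the objective described in the paper rather than the toy classifier shown here.

```python
import torch
import torch.nn as nn

VOCAB, HIDDEN = 100, 16  # placeholder sizes for the toy example

class Switch(nn.Module):
    """Learned switch variable: probability the expert should emit the next token."""
    def __init__(self, hidden_dim=HIDDEN):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, hidden):
        return torch.sigmoid(self.proj(hidden))

def base_step(context):
    # Stand-in for the general-purpose LLM's forward pass: next-token logits
    # plus a hidden state for the switch to read. A real system runs a model here.
    hidden = torch.randn(HIDDEN)
    return torch.randn(VOCAB), hidden

def expert_step(context):
    # Stand-in for the specialized (e.g., biomedical or math) LLM.
    return torch.randn(VOCAB)

def generate(switch, context, max_new_tokens=16, threshold=0.5):
    for _ in range(max_new_tokens):
        base_logits, hidden = base_step(context)
        if switch(hidden).item() > threshold:
            logits = expert_step(context)  # defer: expert is called only for this token
        else:
            logits = base_logits           # otherwise keep the base model's prediction
        context = context + [int(logits.argmax())]  # greedy decoding for simplicity
    return context

print(generate(Switch(), context=[1, 2, 3]))
```

Because the expert's forward pass runs only on the tokens where the switch fires, most tokens cost a single model call, which is the source of the efficiency gain described above.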

“With Co-LLM, we’re essentially training a general-purpose LLM to ‘phone’ an expert model when needed,” says Shannon Shen, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate who’s a lead author on a new paper about the approach. “We use domain-specific data to teach the base model about its counterpart’s expertise in areas like biomedical tasks and math and reasoning questions. This process automatically finds the parts of the data that are hard for the base model to generate, and then it instructs the base model to switch to the expert LLM, which was pretrained on data from a similar field. The general-purpose model provides the ‘scaffolding’ generation, and when it calls on the specialized LLM, it prompts the expert to generate the desired tokens. Our findings indicate that the LLMs learn patterns of collaboration organically, resembling how humans recognize when to call upon an expert to fill in the blanks.”

A combination of flexibility and factuality

Imagine asking a general-purpose LLM to name the ingredients of a specific prescription drug. It may reply incorrectly, necessitating the expertise of a specialized model.

To showcase Co-LLM’s flexibility, the researchers used data like the BioASQ medical set to couple a base LLM with expert LLMs in different domains, like the Meditron model, which is pretrained on unlabeled medical data. This enabled the algorithm to help answer inquiries a biomedical expert would typically receive, such as naming the mechanisms causing a particular disease.

Returning to the prescription-drug example: where a simple LLM alone may reply incorrectly, the added expertise of a model that specializes in biomedical data yields a more accurate answer. Co-LLM also alerts users where to double-check answers.

Another example of Co-LLM’s performance boost: When tasked with solving a math problem like “a³ · a² if a=5,” the general-purpose model incorrectly calculated the answer to be 125. As Co-LLM trained the model to collaborate more with a large math LLM called Llemma, together they determined the correct solution: a³ · a² = a⁵ = 5⁵ = 3,125.

Co-LLM gave more accurate replies than fine-tuned simple LLMs and untuned specialized models working independently. Co-LLM can guide two models that were trained differently to work together, whereas other effective LLM collaboration approaches, such as “Proxy Tuning,” need all of their component models to be trained similarly. Additionally, this baseline requires each model to be used simultaneously to produce the answer, whereas MIT’s algorithm simply activates its expert model for particular tokens, leading to more efficient generation.

When to ask the expert

The MIT researchers’ algorithm highlights that imitating human teamwork more closely can increase accuracy in multi-LLM collaboration. To further elevate its factual precision, the team may draw from human self-correction: They’re considering a more robust deferral approach that can backtrack when the expert model doesn’t give a correct response. This upgrade would allow Co-LLM to course-correct so the algorithm can still give a satisfactory reply.

The team would also like to update the expert model (via only training the base model) when new information is available, keeping answers as current as possible. This would allow Co-LLM to pair the most up-to-date information with strong reasoning power. Eventually, the model could assist with enterprise documents, using the latest information it has to update them accordingly. Co-LLM could also train small, private models to work with a more powerful LLM to improve documents that must remain within the server.

“Co-LLM presents an interesting approach for learning to choose between two models to improve efficiency and performance,” says Colin Raffel, associate professor at the University of Toronto and an associate research director at the Vector Institute, who wasn’t involved in the research. “Since routing decisions are made at the token-level, Co-LLM provides a granular way of deferring difficult generation steps to a more powerful model. The unique combination of model-token-level routing also provides a great deal of flexibility that similar methods lack. Co-LLM contributes to an important line of work that aims to develop ecosystems of specialized models to outperform expensive monolithic AI systems.”

Shen wrote the paper with four other CSAIL affiliates: PhD student Hunter Lang ’17, MEng ’18; former postdoc and Apple AI/ML researcher Bailin Wang; MIT assistant professor of electrical engineering and computer science Yoon Kim; and professor and Jameel Clinic member David Sontag PhD ’10. Kim and Sontag are both part of the MIT-IBM Watson AI Lab. Their research was supported, in part, by the National Science Foundation, the National Defense Science and Engineering Graduate (NDSEG) Fellowship, the MIT-IBM Watson AI Lab, and Amazon. Their work was presented at the Annual Meeting of the Association for Computational Linguistics.

Affordable high-tech windows for comfort and energy savings

Imagine if the windows of your home didn’t transmit heat. They’d keep the heat indoors in winter and outdoors on a hot summer’s day. Your heating and cooling bills would go down; your energy consumption and carbon emissions would drop; and you’d still be comfortable all year ’round.

AeroShield, a startup spun out of MIT, is poised to start manufacturing such windows. Building operations account for 36 percent of global carbon dioxide emissions, and today’s windows are a major contributor to energy inefficiency in buildings. To improve building efficiency, AeroShield has developed a window technology that promises to reduce heat loss by up to 65 percent, significantly cutting energy use and carbon emissions in buildings. The company just announced the opening of a new facility to manufacture its breakthrough energy-efficient windows.

“Our mission is to decarbonize the built environment,” says Elise Strobach SM ’17, PhD ’20, co-founder and CEO of AeroShield. “The availability of affordable, thermally insulating windows will help us achieve that goal while also reducing homeowners’ heating and cooling bills.” According to the U.S. Department of Energy, window inefficiencies account for 30 percent of the typical homeowner’s heating and cooling bill.

Technology development at MIT

Research on AeroShield’s window technology began a decade ago in the MIT lab of Evelyn Wang, Ford Professor of Engineering, now on leave to serve as director of the Advanced Research Projects Agency-Energy (ARPA-E). In late 2014, the MIT team received funding from ARPA-E, and other sponsors followed, including the MIT Energy Initiative through the MIT Tata Center for Technology and Design in 2016.

The work focused on aerogels, remarkable materials that are ultra-porous, lighter than a marshmallow, strong enough to support a brick, and an unparalleled barrier to heat flow. Aerogels were invented in the 1930s and used by NASA and others as thermal insulation. The team at MIT saw the potential for incorporating aerogel sheets into windows to keep heat from escaping or entering buildings. But there was one problem: Nobody had been able to make aerogels transparent.

An aerogel is made of transparent, loosely connected nanoscale silica particles and is 95 percent air. But an aerogel sheet isn’t transparent because light traveling through it gets scattered by the silica particles.

After five years of theoretical and experimental work, the MIT team determined that the key to transparency was having the silica particles both small and uniform in size. This allows light to pass directly through, so the aerogel becomes transparent. Indeed, as long as the particle size is small and uniform, increasing the thickness of an aerogel sheet to achieve greater thermal insulation won’t make it less clear.
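The scaling behind that finding can be checked with a back-of-the-envelope calculation. The sketch below is an illustration with assumed particle sizes, not the MIT team’s analysis; it uses the textbook Rayleigh result that scattering from particles much smaller than the wavelength of light grows roughly as the sixth power of their diameter:

```python
# Back-of-the-envelope sketch: for silica particles much smaller than the
# wavelength of visible light, Rayleigh scattering grows roughly as
# diameter^6 / wavelength^4, so particle size dominates clarity.

def relative_scattering(diameter_nm, wavelength_nm=550.0):
    """Relative Rayleigh scattering strength (arbitrary units)."""
    return diameter_nm**6 / wavelength_nm**4

small = relative_scattering(5.0)    # hypothetical ~5 nm particles
large = relative_scattering(50.0)   # hypothetical ~50 nm particles
print(f"50 nm particles scatter ~{large / small:,.0f}x more than 5 nm ones")
# -> ~1,000,000x, which is why small, uniform particles keep the sheet clear
```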

Teams in the MIT lab looked at various applications for their super-insulating, transparent aerogels. Some focused on improving solar thermal collectors by making the systems more efficient and less expensive. But to Strobach, increasing the thermal efficiency of windows looked especially promising and potentially significant as a means of reducing climate change.

The researchers determined that aerogel sheets could be inserted into the gap in double-pane windows, making them more than twice as insulating. The windows could then be manufactured on existing production lines with minor changes, and the resulting windows would be affordable and as wide-ranging in style as the window options available today. Best of all, once purchased and installed, the windows would reduce electricity bills, energy use, and carbon emissions.

The impact on energy use in buildings could be considerable. “If we only consider winter, windows in the United States lose enough energy to power over 50 million homes,” says Strobach. “That wasted energy generates about 350 million tons of carbon dioxide — more than is emitted by 76 million cars.” Super-insulating windows could help home and building owners reduce carbon dioxide emissions by gigatons while saving billions in heating and cooling costs.

The AeroShield story

In 2019, Strobach and her MIT colleagues — Aaron Baskerville-Bridges MBA ’20, SM ’20 and Kyle Wilke PhD ’19 — co-founded AeroShield to further develop and commercialize their aerogel-based technology for windows and other applications. And in the subsequent five years, their hard work has attracted attention, recently leading to two major accomplishments.

In spring 2024, the company announced the opening of its new pilot manufacturing facility in Waltham, Massachusetts, where the team will be producing, testing, and certifying its first full-size windows and patio doors for initial product launch. The 12,000-square-foot facility will significantly expand the company’s capabilities, with cutting-edge aerogel R&D labs, manufacturing equipment, assembly lines, and testing equipment. Says Strobach, “Our pilot facility will supply window and door manufacturers as we launch our first products and will also serve as our R&D headquarters as we develop the next generation of energy-efficient products using transparent aerogels.”

Also in spring 2024, AeroShield received a $14.5 million award from ARPA-E’s “Seeding Critical Advances for Leading Energy technologies with Untapped Potential” (SCALEUP) program, which provides new funding to previous ARPA-E awardees that have “demonstrated a viable path to market.” That funding will enable the company to expand its production capacity to tens of thousands, or even hundreds of thousands, of units per year.

Strobach also cites two less-obvious benefits of the SCALEUP award.

First, the funding is enabling the company to move more quickly on the scale-up phase of its technology development. “We know from our fundamental studies and lab experiments that we can make large-area aerogel sheets that could go in an entry or patio door,” says Strobach. “The SCALEUP award allows us to go straight for that vision. We don’t have to do all the incremental sizes of aerogels to prove that we can make a big one. The award provides capital for us to buy the big equipment to make the big aerogel.”

Second, the SCALEUP award confirms the viability of the company to other potential investors and collaborators. Indeed, AeroShield recently announced $5 million of additional funding from existing investors Massachusetts Clean Energy Center and MassVentures, as well as new investor MassMutual Ventures. Strobach notes that the company now has investor, engineering, and customer partners.

She stresses the importance of partners in achieving AeroShield’s mission. “We know that what we’ve got from a fundamental perspective can change the industry,” she says. “Now we want to go out and do it. With the right partners and at the right pace, we may actually be able to increase the energy efficiency of our buildings early enough to help make a real dent in climate change.”

Finding some stability in adaptable brains

One of the brain’s most celebrated qualities is its adaptability. Changes to neural circuits, whose connections are continually adjusted as we experience and interact with the world, are key to how we learn. But to keep knowledge and memories intact, some parts of the circuitry must be resistant to this constant change.

“Brains have figured out how to navigate this landscape of balancing between stability and flexibility, so that you can have new learning and you can have lifelong memory,” says neuroscientist Mark Harnett, an investigator at MIT’s McGovern Institute for Brain Research. In the Aug. 27 issue of the journal Cell Reports, Harnett and his team show how individual neurons can contribute to both parts of this vital duality. By studying the synapses through which pyramidal neurons in the brain’s sensory cortex communicate, they have learned how the cells preserve their understanding of some of the world’s most fundamental features, while also maintaining the flexibility they need to adapt to a changing world.

Visual connections

Pyramidal neurons receive input from other neurons via thousands of connection points. Early in life, these synapses are extremely malleable; their strength can shift as a young animal takes in visual information and learns to interpret it. Most remain adaptable into adulthood, but Harnett’s team discovered that some of the cells’ synapses lose their flexibility when the animals are less than a month old. Having both stable and flexible synapses means these neurons can combine input from different sources to use visual information in flexible ways.

Postdoc Courtney Yaeger took a close look at these unusually stable synapses, which cluster together along a narrow region of the elaborately branched pyramidal cells. She was interested in the connections through which the cells receive primary visual information, so she traced their connections with neurons in a vision-processing center of the brain’s thalamus called the dorsal lateral geniculate nucleus (dLGN).

The long extensions through which a neuron receives signals from other cells are called dendrites, and they branch off from the main body of the cell into a tree-like structure. Spiny protrusions along the dendrites form the synapses that connect pyramidal neurons to other cells. Yaeger’s experiments showed that connections from the dLGN all led to a defined region of the pyramidal cells — a tight band within what she describes as the trunk of the dendritic tree.

Yaeger found several ways in which synapses in this region — formally known as the apical oblique dendrite domain — differ from other synapses on the same cells. “They’re not actually that far away from each other, but they have completely different properties,” she says.

Stable synapses

In one set of experiments, Yaeger activated synapses on the pyramidal neurons and measured the effect on the cells’ electrical potential. Changes to a neuron’s electrical potential generate the impulses the cells use to communicate with one another. It is common for a synapse’s electrical effects to amplify when synapses nearby are also activated. But when signals were delivered to the apical oblique dendrite domain, each one had the same effect, no matter how many synapses were stimulated. Synapses there don’t interact with one another at all, Harnett says. “They just do what they do. No matter what their neighbors are doing, they all just do kind of the same thing.”
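A toy model makes the contrast concrete. The sketch below is purely illustrative (the EPSP size and gain are invented numbers, not measurements from the study):

```python
# Toy contrast between the two summation behaviors (numbers are invented):
# typical dendritic synapses amplify one another when co-activated,
# while apical oblique synapses each contribute a fixed amount.

def oblique_sum(n_synapses, unit_epsp_mv=0.5):
    """Apical-oblique-like inputs: independent, so the response is linear."""
    return n_synapses * unit_epsp_mv

def typical_sum(n_synapses, unit_epsp_mv=0.5, gain=0.15):
    """Typical dendritic inputs: neighbors boost one another (supralinear)."""
    return n_synapses * unit_epsp_mv * (1.0 + gain * (n_synapses - 1))

for n in (1, 4, 8):
    print(f"{n} synapses: linear {oblique_sum(n):.2f} mV, "
          f"supralinear {typical_sum(n):.2f} mV")
```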

The team was also able to visualize the molecular contents of individual synapses. This revealed a surprising lack of a certain kind of neurotransmitter receptor, called NMDA receptors, in the apical oblique dendrites. That was notable because of NMDA receptors’ role in mediating changes in the brain. “Generally when we think about any kind of learning and memory and plasticity, it’s NMDA receptors that do it,” Harnett says. “That is by far the most common substrate of learning and memory in all brains.”

When Yaeger stimulated the apical oblique synapses with electricity, generating patterns of activity that would strengthen most synapses, the team discovered a consequence of the limited presence of NMDA receptors. The synapses’ strength did not change. “There’s no activity-dependent plasticity going on there, as far as we have tested,” Yaeger says.

That makes sense, the researchers say, because the cells’ connections from the thalamus relay primary visual information detected by the eyes. It is through these connections that the brain learns to recognize basic visual features like shapes and lines.

“These synapses are basically a robust, high-fidelity readout of this visual information,” Harnett explains. “That’s what they’re conveying, and it’s not context-sensitive. So it doesn’t matter how many other synapses are active, they just do exactly what they’re going to do, and you can’t modify them up and down based on activity. So they’re very, very stable.”

“You actually don’t want those to be plastic,” adds Yaeger. “Can you imagine going to sleep and then forgetting what a vertical line looks like? That would be disastrous.” 

By conducting the same experiments in mice of different ages, the researchers determined that the synapses that connect pyramidal neurons to the thalamus become stable a few weeks after young mice first open their eyes. By that point, Harnett says, they have learned everything they need to learn. On the other hand, if mice spend the first weeks of their lives in the dark, the synapses never stabilize — further evidence that the transition depends on visual experience.

The team’s findings not only help explain how the brain balances flexibility and stability; they could help researchers teach artificial intelligence how to do the same thing. Harnett says artificial neural networks are notoriously bad at this: when an artificial neural network that does something well is trained to do something new, it almost always experiences “catastrophic forgetting” and can no longer perform its original task. Harnett’s team is exploring how they can use what they’ve learned about real brains to overcome this problem in artificial networks.

A new way to reprogram immune cells and direct them toward anti-tumor immunity

A collaboration between four MIT groups, led by principal investigators Laura L. Kiessling, Jeremiah A. Johnson, Alex K. Shalek, and Darrell J. Irvine, in conjunction with a group at Georgia Tech led by M.G. Finn, has revealed a new strategy for enabling immune system mobilization against cancer cells. The work, which appears today in ACS Nano, produces exactly the type of anti-tumor immunity needed to function as a tumor vaccine — both prophylactically and therapeutically.

Cancer cells can look very similar to the human cells from which they are derived. In contrast, viruses, bacteria, and fungi carry carbohydrates on their surfaces that are markedly different from human carbohydrates. Dendritic cells — the immune system’s best antigen-presenting cells — carry proteins on their surfaces that help them recognize these atypical carbohydrates and bring the associated antigens inside. The antigens are then processed into smaller peptides and presented to the immune system for a response. Intriguingly, some of these carbohydrate-binding proteins can also collaborate to direct immune responses. This work presents a strategy for targeting those antigens to dendritic cells in a way that results in a stronger, more activated immune response.

Tackling tumors’ tenacity

The researchers’ new strategy shrouds the tumor antigens with foreign carbohydrates and co-delivers them with single-stranded RNA so that the dendritic cells can be programmed to recognize the tumor antigens as a potential threat. The researchers targeted the lectin (carbohydrate-binding protein) DC-SIGN because of its ability to serve as an activator of dendritic cell immunity. They decorated a virus-like particle (a particle composed of virus proteins assembled onto a piece of RNA that is noninfectious because its internal RNA is not from the virus) with DC-binding carbohydrate derivatives. The resulting glycan-costumed virus-like particles display unique sugars; therefore, the dendritic cells recognize them as something they need to attack.

“On the surface of the dendritic cells are carbohydrate-binding proteins called lectins that bind to the sugars on the surface of bacteria or viruses, and when they do that they penetrate the membrane,” explains Kiessling, the paper’s senior author. “On the cell, the DC-SIGN gets clustered upon binding the virus or bacteria, and that promotes internalization. When a virus-like particle gets internalized, it starts to fall apart and releases its RNA.” The toll-like receptor (bound to RNA) and DC-SIGN (bound to the sugar decoration) can both signal to activate the immune response.

Once the dendritic cells have sounded the alarm of a foreign invasion, a robust immune response is triggered that is significantly stronger than the immune response that would be expected with a typical untargeted vaccine. When an antigen is encountered by the dendritic cells, they send signals to T cells, the next cell in the immune system, to give different responses depending on what pathways have been activated in the dendritic cells.

Advancing cancer vaccine development

The activity of a potential vaccine developed in line with this new research is twofold. First, the vaccine glycan coat binds to lectins, providing a primary signal. Then, binding to toll-like receptors elicits potent immune activation.

The Kiessling, Finn, and Johnson groups had previously identified a synthetic DC-SIGN binding group that directed cellular immune responses when used to decorate virus-like particles. But it was unclear whether this method could be utilized as an anticancer vaccine. Collaboration between researchers in the labs at MIT and Georgia Tech demonstrated that in fact, it could.

Valerie Lensch, a chemistry PhD student from MIT’s Program in Polymers and Soft Matter and a joint member of the Kiessling and Johnson labs, took the preexisting strategy and tested it as an anticancer vaccine, learning a great deal about immunology in order to do so.

“We have developed a modular vaccine platform designed to drive antigen-specific cellular immune responses,” says Lensch. “This platform is not only pivotal in the fight against cancer, but also offers significant potential for combating challenging intracellular pathogens, including malaria parasites, HIV, and Mycobacterium tuberculosis. This technology holds promise for tackling a range of diseases where vaccine development has been particularly challenging.”

Lensch and her fellow researchers conducted in vitro experiments with extensive iterations of these glycan-costumed virus-like particles before identifying a design that demonstrated potential for success. Once that was achieved, the researchers were able to move on to an in vivo model, an exciting milestone for their research.

Adele Gabba, a postdoc in the Kiessling Lab, conducted the in vivo experiments with Lensch, and Robert Hincapie, who conducted his PhD studies with Professor M.G. Finn at Georgia Tech, built and decorated the virus-like particles with a series of glycans that were sent to him from the researchers at MIT.

“We are discovering that carbohydrates act like a language that cells use to communicate and direct the immune system,” says Gabba. “It’s thrilling that we have begun to decode this language and can now harness it to reshape immune responses.”

“The design principles behind this vaccine are rooted in extensive fundamental research conducted by previous graduate student and postdoctoral researchers over many years, focusing on optimizing lectin engagement and understanding the roles of lectins in immunity,” says Lensch. “It has been exciting to witness the translation of these concepts into therapeutic platforms across various applications.”

Protecting the rights of internet users, in Mexico and worldwide

In the wake of the Arab Spring and the Occupy movement, a single tweet or Facebook post could mobilize thousands in a matter of hours. In 2012, protests came to the streets of Mexico as young people demonstrated against the results of the general election.

A recent graduate of the National Autonomous University of Mexico at the time, Mariel García-Montes had classmates who were participating nonviolently in the protests. One was arrested and jailed, and as García-Montes pored over online surveillance videos and photos to help free her, she was struck by the power of the tools at her disposal.

“Videos and maps and photographs placed her at a different location than the one her arraignment claimed,” García-Montes says. “When she was able to walk out of jail partly because of technological evidence, I thought, ‘Maybe this is a window of opportunity to use technology for social good.’”

Over a decade later, García-Montes is still looking for more of those windows. She first came to MIT in 2016 to pursue a master’s degree in comparative media studies and is currently working with Professor Eden Medina on a PhD thesis in the Program in Science, Technology, and Society, which will chart the history of technology’s influence on surveillance and privacy, particularly in her home country.

“I would love for my work, theoretical and practical, to build into these global movements for necessary and proportionate surveillance,” she says. “It needs to have counterweights and limits, and it needs to be really thought through to preserve people’s privacy and other rights, not just security.”

“More broadly,” she continues, “I would love to be part of a generation thinking about what technology would look like if we put the public interest first.”

Growing up alongside the internet

García-Montes has been thinking about justice and the public for much of her life, thanks in large part to her mother, who taught philosophy at the university level.

“She was the ultimate professor for me,” she says. “She provided me with a moral compass and intellectual curiosity, and I’m grateful I get to live her dreams.”

Her mother was also instrumental in piquing her interest in the internet. As a professor, she had access to the internet at a time when few Mexicans did, and set García-Montes up with an email account and allowed her to use the computer at the university when she was a child. The experience was formative, as she noticed the “vast difference” between those who had access and those who did not. For example, she recalls learning online about a devastating tsunami in Asia, while none of her peers had any idea that it was happening.

As time passed and more and more people did gain internet access, the online landscape changed, particularly for young people. García-Montes quickly realized that someone needed to take responsibility for keeping those young people safe and internet-literate, and she worked with a number of organizations that did just that, such as UNICEF and Global Changemakers. The issues have only compounded since then, but she isn’t letting up either.

“There’s no silver bullet,” she says. “We need to rethink the entire ecosystem. We cannot put it on parents to teach their kids. We cannot put it on teachers. We cannot put it on online users. Instead of only centering profit and only centering page views or engagement, we need to also center pro-social behavior and the public interest.”

Raised by women — her mom, her aunt, her cousin, and her grandmother — García-Montes incorporates the feminist ideals of her upbringing into her academic work wherever she can. In 2022, she helped write a paper with MIT associate professor of urban science and planning Catherine D’Ignazio that examined the ways activists around the world are trying to address the deficiencies in government data on gender-related violence against women. The data are often absent or incomplete, so she and her co-authors highlighted the vital work being done to fill in the gaps.

“When Catherine started to work with feminicide data activists, I knew a bunch of them because I had worked with them previously,” she says. “I thought, ‘Oh, my goodness, the day has finally come that these people can have the prominence that they’ve long deserved.’ The hours of work that they put in and the emotional toll it takes on them is just outstanding, and they weren’t really getting the recognition for that labor and their technical expertise.”

Her dissertation is a study of the history of surveillance technologies in Mexico. Specifically, she is looking at the ways contemporary debates on information technologies, such as spyware and facial recognition, interact with existing governance and infrastructures.

The future of privacy and community

Her thesis research has instilled in García-Montes a deep concern for where things are headed for the average citizen.

“Different types of data collection continue to be developed because of the data broker industry,” she says. “Your power bill can be an instrument of surveillance, and facial recognition has been appearing in airports. The forms of data collection are becoming much more nuanced, much more pervasive, and much harder to evade.”

This pervasiveness has led to a general acceptance among the population, she says, but she’s also encouraged by the advocacy groups that have continued to fight on. She agrees with those groups that it should not be left to individuals to protect their own data, and that ultimately, there needs to be a legislative and cultural environment that values the preservation of privacy.

“The awareness of fights that have been won is rising,” she says. “The awareness of the loss of privacy is also rising, and so I don’t think that it’s going to be a clear win for privacy-violating companies.”

While her studies at MIT fill most of her time, García-Montes also finds purpose participating in community life in her Greater Boston neighborhood. During the coronavirus pandemic, she and her neighbors forged bonds as they provided mutual aid for the essential workers and vulnerable people of their neighborhood. The camaraderie they developed persists today.

Whether online or in real life, “There is joy in community,” she says. “At the root of it, I want to be around people. I want to know my neighbors, and being able to use technology to solve some of our mutual aid needs helps me feel good.”

Study: Early dark energy could resolve cosmology’s two biggest puzzles

A new study by MIT physicists proposes that a mysterious force known as early dark energy could solve two of the biggest puzzles in cosmology and fill in some major gaps in our understanding of how the early universe evolved.

One puzzle in question is the “Hubble tension,” which refers to a mismatch in measurements of how fast the universe is expanding. The other involves observations of numerous early, bright galaxies that existed at a time when the early universe should have been much less populated.

Now, the MIT team has found that both puzzles could be resolved if the early universe had one extra, fleeting ingredient: early dark energy. Dark energy is an unknown form of energy that physicists suspect is driving the expansion of the universe today. Early dark energy is a similar, hypothetical phenomenon that may have made only a brief appearance, influencing the expansion of the universe in its first moments before disappearing entirely.

Some physicists have suspected that early dark energy could be the key to solving the Hubble tension, as the mysterious force could accelerate the early expansion of the universe by an amount that would resolve the measurement mismatch.

The MIT researchers have now found that early dark energy could also explain the baffling number of bright galaxies that astronomers have observed in the early universe. In their new study, reported today in the Monthly Notices of the Royal Astronomical Society, the team modeled the formation of galaxies in the universe’s first few hundred million years. When they incorporated a dark energy component only in that earliest sliver of time, they found the number of galaxies that arose from the primordial environment bloomed to fit astronomers’ observations.

“You have these two looming open-ended puzzles,” says study co-author Rohan Naidu, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “We find that in fact, early dark energy is a very elegant and sparse solution to two of the most pressing problems in cosmology.”

The study’s co-authors include lead author and Kavli postdoc Xuejian (Jacob) Shen, and MIT professor of physics Mark Vogelsberger, along with Michael Boylan-Kolchin at the University of Texas at Austin, and Sandro Tacchella at the University of Cambridge.

Big city lights

Based on standard cosmological and galaxy formation models, the universe should have taken its time spinning up the first galaxies. It would have taken billions of years for primordial gas to coalesce into galaxies as large and bright as the Milky Way.

But in 2023, NASA’s James Webb Space Telescope (JWST) made a startling observation. With an ability to peer farther back in time than any observatory to date, the telescope uncovered a surprising number of bright galaxies as large as the modern Milky Way within the first 500 million years, when the universe was just 3 percent of its current age.

“The bright galaxies that JWST saw would be like seeing a clustering of lights around big cities, whereas theory predicts something like the light around more rural settings like Yellowstone National Park,” Shen says. “And we don’t expect that clustering of light so early on.”

For physicists, the observations imply that there is either something fundamentally wrong with the physics underlying the models or a missing ingredient in the early universe that scientists have not accounted for. The MIT team explored the possibility of the latter, and whether the missing ingredient might be early dark energy.

Physicists have proposed that early dark energy is a sort of antigravitational force that is turned on only at very early times. This force would counteract gravity’s inward pull and accelerate the early expansion of the universe, in a way that would resolve the mismatch in measurements. Early dark energy, therefore, is considered the most likely solution to the Hubble tension.

Galaxy skeleton

The MIT team explored whether early dark energy could also be the key to explaining the unexpected population of large, bright galaxies detected by JWST. In their new study, the physicists considered how early dark energy might affect the early structure of the universe that gave rise to the first galaxies. They focused on the formation of dark matter halos — regions of space where gravity happens to be stronger, and where matter begins to accumulate.

“We believe that dark matter halos are the invisible skeleton of the universe,” Shen explains. “Dark matter structures form first, and then galaxies form within these structures. So, we expect the number of bright galaxies should be proportional to the number of big dark matter halos.”

The team developed an empirical framework for early galaxy formation, which predicts the number, luminosity, and size of galaxies that should form in the early universe, given some measures of “cosmological parameters.” Cosmological parameters are the basic ingredients, or mathematical terms, that describe the evolution of the universe.

Physicists have determined that there are at least six main cosmological parameters, one of which is the Hubble constant — a term that describes the universe’s rate of expansion. Other parameters describe density fluctuations in the primordial soup, immediately after the Big Bang, from which dark matter halos eventually form.

The MIT team reasoned that if early dark energy affects the universe’s early expansion rate, in a way that resolves the Hubble tension, then it could affect the balance of the other cosmological parameters, in a way that might increase the number of bright galaxies that appear at early times. To test their theory, they incorporated a model of early dark energy (the same one that happens to resolve the Hubble tension) into an empirical galaxy formation framework to see how the earliest dark matter structures evolve and give rise to the first galaxies.

“What we show is, the skeletal structure of the early universe is altered in a subtle way where the amplitude of fluctuations goes up, and you get bigger halos, and brighter galaxies that are in place at earlier times, more so than in our more vanilla models,” Naidu says. “It means things were more abundant, and more clustered in the early universe.”
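One way to see the logic is a Press-Schechter-style estimate of halo abundance. The sketch below is an illustration with placeholder fluctuation amplitudes, not the team’s empirical framework; it shows how the abundance of rare, massive halos depends exponentially on the amplitude of density fluctuations, so even a modest boost multiplies their numbers:

```python
import math

# Press-Schechter-style rarity factor: halo abundance at a given mass scale
# falls off as exp(-delta_c^2 / (2 sigma^2)), where sigma is the rms density
# fluctuation at that scale and delta_c ~ 1.686 is the critical collapse
# overdensity. The sigma values below are placeholders, not fitted values.

DELTA_C = 1.686

def abundance_factor(sigma):
    """Exponential suppression of halos at a scale with rms fluctuation sigma."""
    return math.exp(-DELTA_C**2 / (2.0 * sigma**2))

baseline = abundance_factor(sigma=0.30)   # a rare, massive early halo scale
boosted = abundance_factor(sigma=0.33)    # ~10% higher fluctuation amplitude
print(f"Abundance boost: ~{boosted / baseline:.0f}x")
# A ~10% amplitude increase boosts these rare halos by more than an order
# of magnitude, in the spirit of the effect described above.
```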

“A priori, I would not have expected the abundance of JWST’s early bright galaxies to have anything to do with early dark energy, but their observation that EDE pushes cosmological parameters in a direction that boosts the early-galaxy abundance is interesting,” says Marc Kamionkowski, professor of theoretical physics at Johns Hopkins University, who was not involved with the study. “I think more work will need to be done to establish a link between early galaxies and EDE, but regardless of how things turn out, it’s a clever — and hopefully ultimately fruitful — thing to try.”

“We demonstrated the potential of early dark energy as a unified solution to the two major issues faced by cosmology. This might be evidence for its existence if the observational findings of JWST get further consolidated,” Vogelsberger concludes. “In the future, we can incorporate this into large cosmological simulations to see what detailed predictions we get.”

This research was supported, in part, by NASA and the National Science Foundation.

3 Questions: The past, present, and future of sustainability science

It was 1978, over a decade before the word “sustainable” would infiltrate environmental nomenclature, and Ronald Prinn, MIT professor of atmospheric science, had just founded the Advanced Global Atmospheric Gases Experiment (AGAGE). Today, AGAGE provides real-time measurements for well over 50 environmentally harmful trace gases, enabling us to determine emissions at the country level, a key element in verifying national adherence to the Montreal Protocol and the Paris Agreement. This, Prinn says, started him thinking about doing science that informed decision-making.

Much like global interest in sustainability, Prinn’s interest and involvement continued to grow into what would become three decades’ worth of achievements in sustainability science. The Center for Global Change Science (CGCS) and the Joint Program on the Science and Policy of Global Change, respectively founded and co-founded by Prinn, have recently joined forces to create the MIT School of Science’s new Center for Sustainability Science and Strategy (CS3), led by former CGCS postdoc turned MIT professor Noelle Selin.

As he prepares to pass the torch, Prinn reflects on how far sustainability has come, and where it all began.

Q: Tell us about the motivation for the MIT centers you helped to found around sustainability.

A: In 1990, after I founded the Center for Global Change Science, I also co-founded the Joint Program on the Science and Policy of Global Change with a very important partner, [Henry] “Jake” Jacoby. He’s now retired, but at that point he was a professor in the MIT Sloan School of Management. Together, we determined that in order to answer questions related to what we now call sustainability of human activities, you need to combine the natural and social sciences involved in these processes. Based on this, we decided to make a joint program between the CGCS and a center that he directed, the Center for Energy and Environmental Policy Research (CEEPR).

It was called the “joint program” and was joint for two reasons — not only were two centers joining, but two disciplines were joining. It was not about simply doing the same science. It was about bringing a team of people together that could tackle these coupled issues of environment, human development and economy. We were the first group in the world to fully integrate these elements together.

Q: What has been your most impactful contribution and what effect did it have on the greater public’s overall understanding?

A: Our biggest contribution is the development, and more importantly, the application of the Integrated Global System Model [IGSM] framework, which looked at human development in both developing and developed countries and had a significant impact on the way people thought about climate issues. With the IGSM, we were able to look at the interactions among human and natural components, studying the feedbacks and impacts that climate change had on human systems: how it would alter agriculture and other land activities, how it would alter things we derive from the ocean, and so on.

Policies were being developed largely by economists or climate scientists working independently, and we started showing how the real answers and analysis required a coupling of all of these components. We showed, and I think convincingly, that what people used to study independently, must be coupled together, because the impacts of climate change and air pollution affected so many things.

To address the value of policy, despite the uncertainty in climate projections, we ran multiple runs of the IGSM with and without policy, with different choices for uncertain IGSM variables. For public communication, around 2005, we introduced our signature Greenhouse Gamble interactive visualization tools; these have been renewed over time as science and policies evolved.
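The ensemble idea behind those tools can be sketched in a few lines. The toy model below is a stylized stand-in with invented numbers, not the IGSM itself: run a simple model many times over an uncertain parameter, with and without a policy, and compare the distributions of outcomes.

```python
import random

def warming(climate_sensitivity, emissions_scale):
    """Stylized stand-in for one model run (degrees C; purely illustrative)."""
    return climate_sensitivity * emissions_scale

def ensemble(policy, n=10_000):
    outcomes = []
    for _ in range(n):
        s = random.lognormvariate(1.1, 0.3)  # uncertain climate response (invented)
        e = 0.6 if policy else 1.0           # the policy scales down emissions
        outcomes.append(warming(s, e))
    return outcomes

for label, policy in (("no policy", False), ("with policy", True)):
    runs = ensemble(policy)
    mean = sum(runs) / len(runs)
    print(f"{label}: mean outcome {mean:.2f}")
```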

Q: What can MIT provide now at this critical juncture in understanding climate change and its impact?

A: We need to further push the boundaries of integrated global system modeling to ensure full sustainability of human activity in all of its beneficial dimensions, which is the exciting focus that CS3 is designed to address. We need to focus on sustainability as a central core element and use it not just to analyze existing policies but to propose new ones. Sustainability is not just climate or air pollution; it has to do with human impacts in general. Human health is central to sustainability, and so is equity. We need to expand the capability for credibly assessing what impact policies have not just on developed countries, but on developing countries, taking into account that many places around the world are at artisanal levels of their economies. They cannot be blamed for anything that is changing climate and causing air pollution and other detrimental things that are currently going on. They need our help. That’s what sustainability is in its full dimensions.

Our capabilities are evolving toward a modeling system so detailed that we can find out detrimental things about policies even at local levels before investing in changing infrastructure. This is going to require collaboration among even more disciplines and creating a seamless connection between research and decision making; not just for policies enacted in the public sector, but also for decisions that are made in the private sector. 

Startup’s displays engineer light to create immersive experiences without the headsets

One of the biggest reasons virtual reality hasn’t taken off is the clunky headsets that users have to wear. But what if you could get the benefits of virtual reality without the headsets, using screens that computationally improve the images they display?

That’s the goal of the startup Brelyon, which is commercializing a new kind of display and content-rendering approach that immerses users in virtual worlds without requiring them to strap goggles onto their heads.

The displays run light through a processing layer before it reaches users’ eyes, recalculating the image to create ultrawide visual experiences with depth. The company is also working on a new kind of content-rendering architecture to generate more visually efficient imagery. The result is a 120-inch screen that simulates the sensation of looking out a window into a virtual world, where content pops in and out of existence at different angles and depths, depending on what you feed the display.

“Our current displays use different properties of light, specifically the wavefront of the electric field,” says Brelyon co-founder and CEO Barmak Heshmat, a former postdoc in the Media Lab. “In our newest architecture, the display uses a stack of shader programming empowered with inference microservices to modify and generate content on the fly, amplifying your immersion with the screens.”

Customers are already using Brelyon’s current displays in flight simulators, gaming, defense, and teleoperations, and Heshmat says the company is actively scaling its manufacturing capacity to meet growing demand.

“Wherever you want to increase visual efficiency with screens, Brelyon can help,” Heshmat says. “Optically, these virtual displays allow us to craft a much larger, control-center-like experience without needing added space or wearing headsets, and at the compute level our rerendering architectures allow us to use every bit of that screen in the most efficient way.”

Of light and math

Heshmat came to MIT in 2013 as a postdoc in the Media Lab’s Camera Culture group, which is directed by Associate Professor Ramesh Raskar. At the Media Lab, Heshmat worked on computational imaging, which he describes as “combining mathematics with the physics of light to do interesting things.”

With Raskar, Heshmat worked on a new approach to improving ultrafast cameras that used time as an extra dimension in optical design.

“The system essentially sent light through an array of mirrors to make the photons bounce many times inside the camera,” Heshmat explains. “It allowed us to capture the image at many different times.”

Heshmat worked across campus, ultimately publishing papers with five different professors, and says his experience at MIT helped change the way he perceived himself.

“There were many things that I took from MIT,” Heshmat says. “Beyond the technical expertise, I also got the confidence and belief that I could be a leader. That’s what’s different about MIT compared to other schools: It’s a very vibrant, intellectually-triggering environment where everyone’s very driven and everyone’s creating their own universe, in a sense.”

After graduating, Heshmat worked at a virtual reality company, where he noticed that people liked the idea of virtual reality but didn’t like wearing headsets. The observation led him to explore ways of achieving immersion without strapping a device to his head.

The idea brought him back to his research with Raskar at MIT.

“There’s this relationship between imaging and displays; they’re kind of like a dual of each other,” Heshmat explains. “What you can do with imaging, the inverse of it is doable with displays. Since I’d worked on this imaging system at MIT, what’s called time-folded imaging, I thought to try the inverse of that in the world of displays. That was how Brelyon started.”

Brelyon’s first check came from the MIT-affiliated E14 Fund after Heshmat built a prototype of the first device in his living room.

Brelyon’s displays control the angles and focus of light to simulate wide, deep views and give the impression of looking through a window. Brelyon currently sells two displays, Ultra Reality and Ultra Reality Mini. The Ultra Reality offers a 10-foot-wide image with a depth of around 3 feet. The displays are fully compatible with standard laptops and computers, so users can connect their devices via an HDMI cable and run their favorite simulation or gaming software right away, which Heshmat notes is a key benefit over traditional, headset-based virtual reality displays that require companies to create custom software.

“This is a plug-and-play solution that is much smaller than setting up a projection screen, doesn’t require a dedicated room, doesn’t require a special environment, doesn’t need alignment of projectors or any of that,” Heshmat says.

Processing light

Heshmat says Brelyon has sold displays to some of the largest simulation training companies in the world.

“In simulation training, you usually care about large visualizations and large peripheral fields of view, or situational awareness,” Heshmat says. “That allows you to look around in, say, the cockpit of the airplane. Brelyon allows you to do that in the size of a single desktop monitor.”

Brelyon has been focused on selling its displays to other businesses to date, but Heshmat hopes to eventually sell to individuals and believes the company’s displays hold huge potential for anyone who wants to improve the experience of looking at a monitor.

“Imagine you’re sitting in the backseat of a car, and instead of looking at a 12-inch tablet, you have this 14-inch or 12-inch aperture, but this aperture is looking into a much larger image, so you have a window to an IMAX theater,” Heshmat says.

Ultimately, Heshmat believes Brelyon is opening up a new platform to change the way we perceive the digital world.

“We are adding a new layer of control between the world of computers and what your eyes see,” Heshmat explains. “We have this new photon-processing layer on top of displays, and we think we’re bridging the gap between the experience that you see and the world of computers. We’re trying to connect that programming all the way to the end processing of photons. There are some exciting opportunities that come from that. The displays of the future won’t just let light out like an array of lamps. They’ll run light through these photon processors and allow you to do much more with light.”

3 Questions: What does innovation look like in the field of substance use disorder?

In 2020, more than 278,000 people died from substance use disorder, with over 91,000 of those deaths from overdoses. Just three years later, deaths from overdoses alone had risen by over 25,000. Despite its magnitude, the substance use disorder crisis still faces fundamental challenges: a prevailing societal stigma, lack of knowledge about its origin in the brain, and a slow pace of innovation compared to other diseases.

Work at MIT is contributing to meaningful innovations in the field of substance use disorder, according to Hanna Adeyema MBA ’13, director of MIT Bootcamps at MIT Open Learning, and Carolina Haass-Koffler, associate professor of psychiatry and human behavior at Brown University.

Adeyema is leading an upcoming MIT Bootcamps Substance Use Disorder (SUD) Ventures program. She was the chief operating officer and co-founder of Tenacity, a startup based on research from the MIT Media Lab founded to reduce burnout for call center workers. Haass-Koffler is a translational investigator who coalesces preclinical and clinical research toward examining biobehavioral mechanisms of addiction and developing novel medications. She was a finalist for the 2023-24 MIT-Royalty Pharma Prize Competition, an award supporting female entrepreneurs in biotech, and the winner of the 2024 Brown Biomedical Innovation to Impact translational commercial development program, which supports innovative proof-of-concept projects. In 2023, Haass-Koffler produced a substance use disorder 101 course for the SUD Ventures program and secured non-dilutive funding from the NIH toward innovation in this area. Here, Adeyema and Haass-Koffler join in a discussion about the substance use disorder crisis and the future of innovation in this field.

Q: What are the major obstacles to making meaningful advances in substance use disorder research and treatment and/or innovation?

Adeyema: The complexity of the substance use disorder market and the incredible amount of knowledge required to innovate is a major obstacle to bringing research from the bench to market. Innovators must not only understand their technical domain in great detail, but also federal regulations, state regulations, and payers in the health care sector. On top of this, they must know how to pitch to specialized investors, how to sell to hospitals, and understand how to interact with vulnerable populations — often all at the same time.

Given this, solving the substance use disorder epidemic will require a multidisciplinary approach — from health care innovators to researchers to government officials and everyone in between. MIT is the right place to address innovation in the substance use disorder space because we have all of those talented people here and we know how to collaborate to solve societal problems at scale. An example of how we are working together in this way is the collaboration with the National Institutes of Health and the National Institute of Drug Abuse to create the SUD Ventures program. The goal of this program is to fuel the next generation of innovation in substance use disorder with practical applications and a pipeline to securing non-dilutive government funding from Small Business Innovation Research grants.

Haass-Koffler: Before even mentioning substance use disorder, there are a number of barriers in health care that already exist, such as health insurance reimbursement, limited availability of resources, shortage of clinicians, and more. Specifically in substance use disorder, there are additional barriers affecting patients, clinicians, and innovators. Barriers on the clinical side include, but are not limited to, lack of resources available to providers and lack of time for physicians to include additional substance use disorder assessments in the few minutes that they spend with a patient during a clinical visit. Then on the patient side, the population is often composed of individuals from low socio-economic groups, which adds issues related to stigma, confidentiality, and lack of a referral network, and generally hinders the development of novel substance use disorder treatment interventions.

At a high level, we lack the integration of substance use disorder prevention, diagnosis, and treatment in health care settings. Without a more holistic integration, advancing substance use disorder research and innovation will continue to be extremely challenging. By creating a collaborative program where we can connect researchers, clinicians, and engineers, we have the opportunity to bring together a dynamic community of peers to tackle the biggest challenges in providing treatment of this debilitating disorder.

Q: How does the SUD Ventures program approach substance use disorder innovation differently?

Adeyema: Traditionally, innovation programs in the substance use disorder space focus on entrepreneurship and business courses for researchers and inventors. These courses focus on knowledge, rather than skills and practical application, and omit an important piece of building a business — it takes an entire ecosystem to build a successful startup, particularly in the health care space.

Our program will bring together the top U.S.-based substance use disorder researchers and experts in other disciplines. We hope to tap into MIT’s engineering excellence, clinical expertise from places like Massachusetts General Hospital, and the strengths of other academic institutions such as Harvard University and Brown University, a major center for substance use disorder research. With the vibrant entrepreneurship and biomedical expertise in the Boston ecosystem, we are excited to see how we can bring these incredible forces together. Participants will work together in teams to develop solutions in specific topic areas of substance use disorder. They are guided by MIT-trained entrepreneurs who have successfully funded and scaled companies in the health care space, and they have access to a strong group of mentors like Nathaniel Sims, associate professor of anesthesia at Harvard Medical School and the Newbower/Eitan MGH Endowed Chair in Biomedical Technology Innovation at Massachusetts General Hospital.

We recognize the field has many idiosyncratic challenges, and it is also changing very fast. To shed light on the most recent and unique roadblocks, the SUD Ventures program will rely on industry case studies delivered by practitioners. These cases will be updated each year, contributing to a body of knowledge that participants can access not only during the program but also after it ends.

Q: Looking forward, what is the future of innovation in the substance use disorder field, and what are the promising innovations/therapies on the horizon?

Haass-Koffler: The opportunities to develop technologies to treat substance use disorder are infinite. Historically, the approach has been centered on neurobiology, focusing predominantly on the brain. However, substance use disorder is a complex disorder and lacks measurable biomarkers, which complicates its diagnosis and management. Given the brain’s connections with other bodily systems, targeting interventions beyond the central nervous system offers a promising avenue for more effective treatment.

To improve the efficiency of treatment by both researchers and clinicians, we need technological advancements that can probe brain function and monitor treatment responses with greater precision. Innovations in this area could lead to more tailored therapeutic approaches, enable earlier diagnosis, and improve overall patient care.

Just as glucose monitoring changed lives by managing insulin delivery in diabetes, there is a significant opportunity to create similar tools for monitoring medication responses and drug cravings and for preventing adverse events in patients with substance use disorder, which would affect their lives tremendously. The future of the substance use disorder crisis is twofold: it’s about saving lives by preventing overdoses today and improving quality of life by supporting patients throughout their extended treatment journeys. We are innovating and improving on both fronts of the crisis, and I am optimistic about the progress we will continue to make in treating this disease in the next couple of years. With government and political support, we are improving people’s lives and improving society.

The program and its research are supported by the National Institute on Drug Abuse (NIDA) of the National Institutes of Health (NIH). Cynthia Breazeal, a professor of media arts and sciences at the MIT Media Lab and dean for digital learning at MIT Open Learning, serves as the principal investigator (PI) on the grant.

Celebrating student entrepreneurship at delta v’s 2024 Demo Day

With this year’s delta v Demo Day, the Martin Trust Center for MIT Entrepreneurship proved two things: first, that students can make remarkable progress toward creating impactful new businesses over the course of a single summer; and second, that the Trust Center remains one of the best party-throwers on campus.

The Sept. 6 event, the culmination of a summer of work by students, revolved around 22 startups showcasing their business accomplishments in the delta v startup accelerator program. The event began with a member of each startup pitching to cheers and applause from a packed Kresge Auditorium, and it continued well into Friday night with a reception that also featured live music, food and drinks, cheerleaders, and a 360-degree selfie camera for good measure.

The festivities were designed to celebrate each startup’s progress as well as inspire students in the audience to get involved with entrepreneurship at MIT.

“These teams have worked hard on their ventures all year long, particularly in the summer as part of the fully immersive delta v program,” said MIT Sloan School of Management Interim Dean Georgia Perakis. “Today marks further evidence of a point the Trust Center makes all the time: Entrepreneurship is a craft that can be taught.”

Startups go full throttle

This year’s Demo Day featured 50 students from 22 startup teams. Over the course of two whirlwind hours of rapid-fire presentations, each team described the problem it is solving and noted key early business achievements to boisterous applause.

Through the Trust Center’s delta v startup accelerator program, the students received mentorship and funding, and they worked full-time through an action-oriented curriculum between June and September.

The startups are tackling problems ranging from pet adoption to workplace burnout, cardiovascular disease in India, and energy storage at data centers.

One company, LymeAlert, is creating a kit that allows families to test ticks at home for the bacteria that cause Lyme disease. The device, which resembles an at-home Covid-19 test, gives results in 20 minutes or less.

“Lyme disease is the most common vector-borne disease in the U.S.,” says LymeAlert co-founder Erin Dawicki MBA ’24, who noted that, as a physician assistant, she saw Lyme disease result in nerve damage, loss of balance, and personality changes in patients. “Our mission at LymeAlert is to improve access to health care through home tick testing. This will speed up the time to diagnosis, reduce the use of unnecessary antibiotics, and aid in local disease surveillance.”

Another company, Ogma, is using artificial intelligence to develop novel catalysts for biomanufacturing that are more sustainable than traditional enzymes. The company is seeking to reduce the industry’s reliance on petrochemical-based products and remove the pollution associated with their production.

“Getting inspiration from nature, we have engineered the first ever nanocatalysts that look and function exactly like natural ones, but are stable, cost efficient, and they’re made for complex reactions, making them the perfect fit for large-scale industrial applications,” explained co-founder Richard Robinet-Duffo.

Ogma’s technology was developed in the MIT Laboratory for Soft Materials and will be deployed in three pilots with cleaning companies this fall.

The other startups in this year’s cohort include the following:

All Unique Objects is using AI to convert sketches into 3D models, simplifying the design process for the home decor and furniture industry.

COIL provides a digital platform using machine learning to offer personalized hair care solutions for Black women with textured hair.

Continuity is developing a minimally invasive wearable device to continuously monitor real-time molecular changes in the human body.

EQORE offers smart energy storage systems to reduce demand charges and cut electricity bills for industrial facilities by up to 30 percent.

Expat AI helps immigrants complete U.S. immigration forms with AI-powered, native language assistance, similar to TurboTax for immigration.

Fount is building an AI co-pilot for insurance marketers, optimizing ad spend and acquisition strategies across platforms to target high-value customers.

Health Galaxy is promoting heart health awareness and navigation for young people in India through a connected platform.

Health+ offers an AI-powered solution for workplace mental health, preventing burnout and boosting productivity for high-stress professionals.

Helix Carbon transforms captured carbon dioxide into carbon-neutral fuels and chemicals for industries like steelmaking and petrochemicals.

Intendere offers software that helps universities scale tutoring programs, empowering students to make an impact in their communities.

LeadQualify leverages AI to analyze prospecting data, helping investment banks identify and engage with high-potential clients.

MakerSharks automates procurement processes by connecting businesses with vetted manufacturers, reducing sourcing time by up to 70 percent.

Mashi simplifies pet adoption with a universal application platform that matches adopters with pets and offers post-adoption recommendations.

Otomo offers AI-powered clinical workflows and personalized patient engagement tools to allow physicians to focus more on patient care.

Pixca uses AI to improve onboarding and communication for greenhouse workers, standardizing processes to boost agricultural productivity.

Psyche provides caregivers with tools to support their children’s mental health at home, helping reduce youth mental health crises.

Sakhi offers an AI-powered health literacy platform that provides expectant mothers in India with personalized, real-time health care information.

Tarragon Systems uses AI-backed demand forecasts to reduce waste in restaurants by optimizing food inventory and preparation processes.

Thinkstruct accelerates the literature review process for researchers by providing a platform to find, extract, and visualize academic papers.

Entrepreneurship as a discipline

The event also served to celebrate the impact of MIT’s entrepreneurial ecosystem more broadly. Trust Center Managing Director Bill Aulet noted that the students on stage benefited from entrepreneurial support resources from across the Institute.

“No one up here is doing it alone,” Aulet said. “So many of our colleagues beyond the Trust Center have supported these students in their journey from inspiration to what we call ‘escape velocity.’ MIT has the teaching and the research, and entrepreneurship is that third pillar that makes the teaching and research that much more valuable and impactful.”

Perakis pointed to the pioneering research of former MIT Sloan Professor Edward B. Roberts ’58, SM ’58, SM ’60, PhD ’62, who passed away in February. Roberts co-authored a report estimating that, as of 2014, MIT alumni had launched 30,200 active companies employing roughly 4.6 million people.

Aulet said events like Demo Day helped further Roberts’ belief that entrepreneurship should be promoted more intentionally around the world.

“People don’t take entrepreneurship as seriously as they should, but MIT is changing that,” Aulet said. “We’re making entrepreneurship into a rigorous field of study with a rigorous curriculum that’s evidence-based, just like we did for chemical engineering in the 1890s.”