Startup helps people fall asleep by aligning audio signals with brainwaves

Do you ever toss and turn in bed after a long day, wishing you could just program your brain to turn off and get some sleep?

That may sound like science fiction, but that’s the goal of the startup Elemind, which is using an electroencephalogram (EEG) headband that emits acoustic stimulation aligned with people’s brainwaves to move them into a sleep state more quickly.

In a small study of adults with sleep onset insomnia, 30 minutes of stimulation from the device decreased the time it took them to fall asleep by 10 to 15 minutes. This summer, Elemind began shipping its product to a small group of users as part of an early pilot program.

The company, which was founded by MIT Professor Ed Boyden ’99, MEng ’99; David Wang ’05, SM ’10, PhD ’15; former postdoc Nir Grossman; former Media Lab research affiliate Heather Read; and Meredith Perry, plans to collect feedback from early users before making the device more widely available.

Elemind’s team believes their device offers several advantages over sleeping pills that can cause side effects and addiction.

“We wanted to create a nonchemical option for people who wanted to get great sleep without side effects, so you could get all the benefits of natural sleep without the risks,” says Perry, Elemind’s CEO. “There are a number of people that we think would benefit from this device, whether you’re a breastfeeding mom who might not want to take a sleep drug, somebody traveling across time zones who wants to fight jet lag, or someone who simply wants to improve their next-day performance and feel like they have more control over their sleep.”

From research to product

Wang’s academic journey at MIT spanned nearly 15 years, during which he earned four degrees, culminating in a PhD in artificial intelligence in 2015. In 2014, Wang was co-teaching a class with Grossman when they began working together to noninvasively measure real-time biological oscillations in the brain and body. Through that work, they became fascinated with a technique for modulating the brain known as phase-locked stimulation, which uses precisely timed visual, physical, or auditory stimulation that lines up with brain activity.

“You’re measuring some kind of changing variable, and then you want to change your stimulus in real time in response to that variable,” explains Boyden, who pointed Wang and Grossman to a set of mathematical techniques that became some of the core intellectual property of Elemind.

For years, phase-locked stimulation has been used in conjunction with electrodes implanted in the brain to disrupt seizures and tremors. But in 2021, Wang, Grossman, Boyden, and their collaborators published a paper showing they could use electrical stimulation from outside the skull to suppress essential tremor syndrome, the most common adult movement disorder.

The results were promising, but the founders decided to start by proving their approach worked in a less regulated space: sleep. They developed a system to deliver auditory pulses timed to promote or suppress alpha oscillations in the brain, which are elevated in insomnia.

That kicked off a years-long product development process that led to the headband device Elemind uses today. The headband measures brainwaves through EEG and feeds the results into Elemind’s proprietary algorithms, which are used to dynamically generate audio through a bone conduction driver. The moment the device detects that someone is asleep, the audio is slowly tapered out.

“We have a theory that the sound that we play triggers an auditory-evoked response in the brain,” Wang says. “That means we get your auditory cortex to basically release this voltage burst that sweeps across your brain and interferes with other regions. Some people who have worn Elemind call it a brain jammer. For folks that ruminate a lot before they go to sleep, their brains are actively running. This encourages their brain to quiet down.”
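
Elemind’s production algorithms are proprietary, but the closed-loop idea Wang and Boyden describe can be sketched generically: estimate the instantaneous phase of the alpha rhythm from a short EEG window and trigger an audio pulse when a chosen target phase arrives. The sketch below is illustrative only; the sampling rate, filter band, target phase, and function names are assumptions rather than the company’s implementation, and a deployed system would need a causal filter and phase prediction to act in real time.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # Hz, assumed EEG sampling rate

def alpha_phase(eeg_window):
    """Estimate the instantaneous phase of the alpha band (8-12 Hz)
    at the most recent sample of a short EEG window."""
    b, a = butter(4, [8 / (FS / 2), 12 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, eeg_window)          # offline (non-causal) filtering
    return np.angle(hilbert(filtered))[-1]

def should_trigger_pulse(eeg_window, target_phase=np.pi, tolerance=0.2):
    """Return True when the estimated alpha phase is within `tolerance`
    radians of the target phase, i.e., when an audio pulse should fire."""
    phase = alpha_phase(eeg_window)
    wrapped_error = np.angle(np.exp(1j * (phase - target_phase)))
    return abs(wrapped_error) < tolerance

# In a closed loop, this check would run on a sliding window every few
# milliseconds, with the pulse routed to the bone-conduction driver and
# the stimulation tapered off once sleep is detected.
```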

Beyond sleep

Elemind has established a collaboration with eight universities that allows researchers to explore the effectiveness of the company’s approach in a range of use cases, from tremors to memory formation, Alzheimer’s progression, and more.

“We’re not only developing this product, but also advancing the field of neuroscience by collecting high-resolution data to hopefully also help others conduct new research,” Wang says.

The collaborations have led to some exciting results. Researchers at McGill University found that using Elemind’s acoustic stimulation during sleep increased activity in areas of the cortex related to motor function and improved healthy adults’ performance in memory tasks. Other studies have shown the approach can be used to reduce essential tremors in patients and enhance sedation recovery.

Elemind is focused on its sleep application for now, but the company plans to develop other solutions, from medical interventions to memory and focus augmentation, as the science evolves.

“The vision is how do we move beyond sleep into what could ultimately become like an app store for the brain, where you can download a brain state like you download an app?” Perry says. “How can we make this a tool that can be applied to a bunch of different applications with a single piece of hardware that has a lot of different stimulation protocols?”

Study evaluates impacts of summer heat in U.S. prison environments

When summer temperatures spike, so does our vulnerability to heat-related illness or even death. For the most part, people can take measures to reduce their heat exposure by opening a window, turning up the air conditioning, or simply getting a glass of water. But people who are incarcerated often lack the freedom to take such measures. Prison populations are therefore especially vulnerable to heat exposure because of their conditions of confinement.

A new study by MIT researchers examines summertime heat exposure in prisons across the United States and identifies characteristics within prison facilities that can further contribute to a population’s vulnerability to summer heat.

The study’s authors used high-spatial-resolution air temperature data to determine the daily average outdoor temperature for each of 1,614 prisons in the U.S., for every summer between the years 1990 and 2023. They found that the prisons that are exposed to the most extreme heat are located in the southwestern U.S., while prisons with the biggest changes in summertime heat, compared to the historical record, are in the Pacific Northwest, the Northeast, and parts of the Midwest.

Those findings are not entirely unique to prisons, as any non-prison facility or community in the same geographic locations would be exposed to similar outdoor air temperatures. But the team also looked at characteristics specific to prison facilities that could further exacerbate an incarcerated person’s vulnerability to heat exposure. They identified nine such facility-level characteristics, such as highly restricted movement, poor staffing, and inadequate mental health treatment. People living and working in prisons with any one of these characteristics may experience compounded risk to summertime heat. 

The team also looked at the demographics of 1,260 prisons in their study and found that the prisons with higher heat exposure on average also had higher proportions of non-white and Hispanic populations. The study, appearing today in the journal GeoHealth, provides policymakers and community leaders with ways to estimate, and take steps to address, a prison population’s heat risk, which they anticipate could worsen with climate change.

“This isn’t a problem because of climate change. It’s becoming a worse problem because of climate change,” says study lead author Ufuoma Ovienmhada SM ’20, PhD ’24, a graduate of the MIT Media Lab, who recently completed her doctorate in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “A lot of these prisons were not built to be comfortable or humane in the first place. Climate change is just aggravating the fact that prisons are not designed to enable incarcerated populations to moderate their own exposure to environmental risk factors such as extreme heat.”

The study’s co-authors include Danielle Wood, MIT associate professor of media arts and sciences, and of AeroAstro; and Brent Minchew, MIT associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences; along with Ahmed Diongue ’24, Mia Hines-Shanks of Grinnell College, and Michael Krisch of Columbia University.

Environmental intersections

The new study is an extension of work carried out at the Media Lab, where Wood leads the Space Enabled research group. The group aims to advance social and environmental justice issues through the use of satellite data and other space-enabled technologies.

The group’s motivation to look at heat exposure in prisons came in 2020 when, as co-president of MIT’s Black Graduate Student Union, Ovienmhada took part in community organizing efforts following the murder of George Floyd by Minneapolis police.

“We started to do more organizing on campus around policing and reimagining public safety. Through that lens I learned more about police and prisons as interconnected systems, and came across this intersection between prisons and environmental hazards,” says Ovienmhada, who is leading an effort to map the various environmental hazards that prisons, jails, and detention centers face. “In terms of environmental hazards, extreme heat causes some of the most acute impacts for incarcerated people.”

She, Wood, and their colleagues set out to use Earth observation data to characterize U.S. prison populations’ vulnerability, or their risk of experiencing negative impacts, from heat.

The team first looked through a database maintained by the U.S. Department of Homeland Security that lists the location and boundaries of carceral facilities in the U.S. From the database’s more than 6,000 prisons, jails, and detention centers, the researchers highlighted 1,614 prison-specific facilities, which together incarcerate nearly 1.4 million people, and employ about 337,000 staff.

They then looked to Daymet, a detailed weather and climate database that tracks daily temperatures across the United States at a 1-kilometer resolution. For each of the 1,614 prison locations, they mapped the daily outdoor temperature for every summer from 1990 to 2023, noting that the majority of current state and federal correctional facilities in the U.S. were built by 1990.
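
As a rough sketch of the kind of per-facility aggregation such an analysis involves (an illustration only, not the authors’ pipeline: the input file, column names, and the simple “days hotter than the 1990–2022 summer average” threshold are all assumptions made here), daily temperatures extracted at each prison’s location could be summarized like this:

```python
import pandas as pd

# Assumed input: one row per facility per day, already extracted from the
# 1-km gridded temperature product at each prison's location.
# Columns: facility_id, date, tavg_c (daily mean outdoor air temperature, deg C)
daily = pd.read_csv("prison_daily_temps.csv", parse_dates=["date"])

# Keep June-August and label each row with its summer year
summer = daily[daily["date"].dt.month.isin([6, 7, 8])].copy()
summer["year"] = summer["date"].dt.year

# Mean summer temperature for each facility, for each year
summer_means = summer.groupby(["facility_id", "year"])["tavg_c"].mean().unstack("year")

# Illustrative exposure metric (invented here, not the paper's definition):
# count 2023 summer days hotter than each facility's 1990-2022 summer average.
baseline = summer_means.loc[:, 1990:2022].mean(axis=1)
recent = summer[summer["year"] == 2023].copy()
recent["hot"] = recent["tavg_c"] > recent["facility_id"].map(baseline)
hot_days_2023 = recent.groupby("facility_id")["hot"].sum()
print(hot_days_2023.describe())
```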

The team also obtained U.S. Census data on each facility’s demographic and facility-level characteristics, such as prison labor activities and conditions of confinement. One limitation of the study that the researchers acknowledge is a lack of information regarding a prison’s climate control.

“There’s no comprehensive public resource where you can look up whether a facility has air conditioning,” Ovienmhada notes. “Even in facilities with air conditioning, incarcerated people may not have regular access to those cooling systems, so our measurements of outdoor air temperature may not be far off from reality.”

Heat factors

From their analysis, the researchers found that more than 98 percent of all prisons in the U.S. experienced at least 10 days in the summer that were hotter than every previous summer, on average, for a given location. Their analysis also revealed that the most heat-exposed prisons, and the prisons that experienced the highest temperatures on average, were mostly in the southwestern U.S. The researchers note that, with the exception of New Mexico, the Southwest is a region where there are no universal air conditioning regulations in state-operated prisons.

“States run their own prison systems, and there is no uniformity of data collection or policy regarding air conditioning,” says Wood, who notes that there is some information on cooling systems in some states and individual prison facilities, but the data is sparse overall, and too inconsistent to include in the group’s nationwide study.

While the researchers could not incorporate air conditioning data, they did consider other facility-level factors that could worsen the effects that outdoor heat triggers. They looked through the scientific literature on heat, health impacts, and prison conditions, and focused on 17 measurable facility-level variables that contribute to heat-related health problems. These include factors such as overcrowding and understaffing.

“We know that whenever you’re in a room that has a lot of people, it’s going to feel hotter, even if there’s air conditioning in that environment,” Ovienmhada says. “Also, staffing is a huge factor. Facilities that don’t have air conditioning but still try to do heat risk-mitigation procedures might rely on staff to distribute ice or water every few hours. If that facility is understaffed or has neglectful staff, that may increase people’s susceptibility to hot days.”

The study found that prisons with any of nine of the 17 variables showed statistically significant greater heat exposures than the prisons without those variables. Additionally, if a prison exhibits any one of the nine variables, this could worsen people’s heat risk through the combination of elevated heat exposure and vulnerability. The variables, they say, could help state regulators and activists identify prisons to prioritize for heat interventions.

“The prison population is aging, and even if you’re not in a ‘hot state,’ every state has responsibility to respond,” Wood emphasizes. “For instance, areas in the Northwest, where you might expect to be temperate overall, have experienced a number of days in recent years of increasing heat risk. A few days out of the year can still be dangerous, particularly for a population with reduced agency to regulate their own exposure to heat.”

This work was supported, in part, by NASA, the MIT Media Lab, and MIT’s Institute for Data, Systems and Society’s Research Initiative on Combatting Systemic Racism.

Fifteen Lincoln Laboratory technologies receive 2024 R&D 100 Awards

Fifteen technologies developed either wholly or in part by MIT Lincoln Laboratory have been named recipients of 2024 R&D 100 Awards. The awards are given by R&D World, an online publication that serves research scientists and engineers worldwide. Dubbed the “Oscars of Innovation,” the awards recognize the 100 most significant technologies transitioned to use or introduced into the marketplace in the past year. An independent panel of expert judges selects the winners.

“The R&D 100 Awards are a significant recognition of the laboratory’s technical capabilities and its role in transitioning technology for real-world impact,” says Melissa Choi, director of Lincoln Laboratory. “It is exciting to see so many projects selected for this honor, and we are proud of everyone whose creativity, curiosity, and technical excellence made these and many other Lincoln Laboratory innovations possible.”

The awarded technologies have a wide range of applications. A handful of them are poised to prevent human harm — for example, by monitoring for heat stroke or cognitive injury. Others present new processes for 3D printing glass, fabricating silicon imaging sensors, and interconnecting integrated circuits. Some technologies take on long-held challenges, such as mapping the human brain and the ocean floor. Together, the winners exemplify the creativity and breadth of Lincoln Laboratory innovation. Since 2010, the laboratory has received 101 R&D 100 Awards.

This year’s R&D 100 Award–winning technologies are described below.

Protecting human health and safety

The Neuron Tracing and Active Learning Environment (NeuroTrALE) software uses artificial intelligence techniques to create high-resolution maps, or atlases, of the brain’s network of neurons from high-dimensional biomedical data. NeuroTrALE addresses a major challenge in AI-assisted brain mapping: a lack of labeled data for training AI systems to build atlases essential for study of the brain’s neural structures and mechanisms. The software is the first end-to-end system to perform processing and annotation of dense microscopy data; generate segmentations of neurons; and enable experts to review, correct, and edit NeuroTrALE’s annotations from a web browser. This award is shared with the lab of Kwanghun (KC) Chung, associate professor in MIT’s Department of Chemical Engineering, Institute for Medical Engineering and Science, and Picower Institute for Learning and Memory.

Many military and law enforcement personnel are routinely exposed to low-level blasts in training settings. Often, these blasts don’t cause immediate diagnosable injury, but exposure over time has been linked to anxiety, depression, and other cognitive conditions. The Electrooculography and Balance Blast Overpressure Monitoring (EYEBOOM) is a wearable system developed to monitor individuals’ blast exposure and notify them if they are at an increased risk of harm. It uses two body-worn sensors, one to capture continuous eye and body movements and another to measure blast energy. An algorithm analyzes these data to detect subtle changes in physiology, which, when combined with cumulative blast exposure, can be predictive of cognitive injury. Today, the system is in use by select U.S. Special Forces units. The laboratory co-developed EYEBOOM with Creare LLC and Lifelens LLC.

Tunable knitted stem cell scaffolds: The development of artificial-tissue constructs that mimic the natural stretchability and toughness of living tissue is in high demand for regenerative medicine applications. A team from Lincoln Laboratory and the MIT Department of Mechanical Engineering developed new forms of biocompatible fabrics that mimic the mechanical properties of native tissues while nurturing growing stem cells. These wearable stem-cell scaffolds can expedite the regeneration of skin, muscle, and other soft tissues to reduce recovery time and limit complications from severe burns, lacerations, and other bodily wounds.

Mixture deconvolution pipeline for forensic investigative genetic genealogy: A rapidly growing field of forensic science is investigative genetic genealogy, wherein investigators submit a DNA profile to commercial genealogy databases to identify a missing person or criminal suspect. Lincoln Laboratory’s software invention addresses a large unmet need in this field: the ability to deconvolve, or unravel, mixed DNA profiles of multiple unknown persons to enable database searching. The software pipeline estimates the number of contributors in a DNA mixture, the percentage of DNA present from each contributor, and the sex of each contributor; then, it deconvolves the different DNA profiles in the mixture to isolate two contributors, without needing to match them to a reference profile of a known contributor, as required by previous software.

Each year, hundreds of people die or suffer serious injuries from heat stroke, especially personnel in high-risk outdoor occupations such as military, construction, or first response. The Heat Injury Prevention System (HIPS) provides accurate, early warning of heat stroke several minutes in advance of visible symptoms. The system collects data from a sensor worn on a chest strap and employs algorithms for estimating body temperature, gait instability, and adaptive physiological strain index. The system then provides an individual’s heat-injury prediction on a mobile app. The affordability, accuracy, and user-acceptability of HIPS have led to its integration into operational environments for the military.

Observing the world

More than 80 percent of the ocean floor remains virtually unmapped and unexplored. Historically, deep sea maps have been generated either at low resolution from a large sonar array mounted on a ship, or at higher resolution with slow and expensive underwater vehicles. New autonomous sparse-aperture multibeam echo sounder technology uses a swarm of about 20 autonomous surface vehicles that work together as a single large sonar array to achieve the best of both worlds: mapping the deep seabed at 100 times the resolution of a ship-mounted sonar and 50 times the coverage rate of an underwater vehicle. New estimation algorithms and acoustic signal processing techniques enable this technology. The system holds potential for significantly improving humanitarian search-and-rescue capabilities and ocean and climate modeling. The R&D 100 Award is shared with the MIT Department of Mechanical Engineering.

FocusNet is a machine-learning architecture for analyzing airborne ground-mapping lidar data. Airborne lidar works by scanning the ground with a laser and creating a digital 3D representation of the area, called a point cloud. Humans or algorithms then analyze the point cloud to categorize scene features such as buildings or roads. In recent years, lidar technology has both improved and diversified, and methods to analyze the data have struggled to keep up. FocusNet fills this gap by using a convolutional neural network — an algorithm that finds patterns in images to recognize objects — to automatically categorize objects within the point cloud. It can achieve this object recognition across different types of lidar system data without needing to be retrained, representing a major advancement in understanding 3D lidar scenes.

Atmospheric observations collected from aircraft, such as temperature and wind, provide the highest-value inputs to weather forecasting models. However, these data collections are sparse and delayed, currently obtained through specialized systems installed on select aircraft. The Portable Aircraft Derived Weather Observation System (PADWOS) offers a way to significantly expand the quality and quantity of these data by leveraging Mode S Enhanced Surveillance (EHS) transponders, which are already installed on more than 95 percent of commercial aircraft and the majority of general aviation aircraft. From the ground, PADWOS interrogates Mode S EHS–equipped aircraft, collecting in milliseconds aircraft state data reported by the transponder to make wind and temperature estimates. The system holds promise for improving forecasts, monitoring climate, and supporting other weather applications.
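
The basic arithmetic for turning transponder reports into weather observations is well established: the wind vector is the difference between the aircraft’s ground velocity and its air velocity, and static air temperature follows from true airspeed and Mach number. The sketch below illustrates only those two relationships (ignoring magnetic-declination and other corrections); PADWOS’s actual interrogation, quality control, and estimation are considerably more involved.

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)

def wind_from_mode_s(ground_speed, track_deg, true_airspeed, heading_deg):
    """Wind speed (same units as the input speeds) and the direction the wind
    blows FROM (degrees), from ground velocity and Mode S EHS air velocity."""
    gs_e = ground_speed * math.sin(math.radians(track_deg))
    gs_n = ground_speed * math.cos(math.radians(track_deg))
    tas_e = true_airspeed * math.sin(math.radians(heading_deg))
    tas_n = true_airspeed * math.cos(math.radians(heading_deg))
    we, wn = gs_e - tas_e, gs_n - tas_n            # wind = ground velocity - air velocity
    speed = math.hypot(we, wn)
    blowing_to = math.degrees(math.atan2(we, wn)) % 360
    return speed, (blowing_to + 180) % 360         # meteorological "from" convention

def temperature_from_mode_s(true_airspeed_ms, mach):
    """Static air temperature (K) from TAS = Mach * sqrt(gamma * R * T)."""
    return (true_airspeed_ms / (mach * math.sqrt(GAMMA * R_AIR))) ** 2

# Example: a single (made-up) report
print(wind_from_mode_s(230.0, 90.0, 240.0, 95.0))   # (wind speed, direction from)
print(temperature_from_mode_s(236.0, 0.78))          # roughly 228 K
```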

Advancing computing and communications

Quantum networking has the potential to revolutionize connectivity across the globe, unlocking unprecedented capabilities in computing, sensing, and communications. To realize this potential, entangled photons distributed across a quantum network must arrive and interact with other photons in precisely controlled ways. Lincoln Laboratory’s precision photon synchronization system for quantum networking is the first to provide an efficient solution to synchronize space-to-ground quantum networking links to sub-picosecond precision. Unlike other technologies, the system performs free-space quantum entanglement distribution via a satellite, without needing to locate complex entanglement sources in space. These sources are instead located on the ground, providing an easily accessible test environment that can be upgraded as new quantum entanglement generation technologies emerge.

Superconductive many-state memory and comparison logic: Lincoln Laboratory developed circuits that natively store and compare more than two discrete states, utilizing the quantized magnetic fields of superconductive materials. This property allows the creation of digital logic circuitry that goes beyond binary logic to ternary logic, improving memory throughput without significantly increasing the number of devices required or the surface area of the circuits. Comparing their superconducting ternary-logic memory to a conventional memory, the research team found that the ternary memory could pattern match across the entire digital Library of Congress nearly 30 times faster. The circuits represent fundamental building blocks for advanced, ultrahigh-speed and low-power digital logic.

The Megachip is an approach to interconnect many small, specialized chips (called chiplets) into a single-chip-like monolithic integrated circuit. Capable of incorporating billions of transistors, this interconnected structure extends device performance beyond the limits imposed by traditional wafer-level packaging. Megachips can address the increasing size and performance demands made on microelectronics used for AI processing and high-performance computing, and in mobile devices and servers.

An in-band full-duplex (IBFD) wireless system with advanced interference mitigation addresses the growing congestion of wireless networks. Previous IBFD systems have demonstrated the ability for a wireless device to transmit and receive on the same frequency at the same time by suppressing self-interference, effectively doubling the device’s efficiency on the frequency spectrum. These systems, however, haven’t addressed interference from external wireless sources on the same frequency. Lincoln Laboratory’s technology, for the first time, allows IBFD to mitigate multiple interference sources, resulting in a wireless system that can increase the number of devices supported, their data rate, and their communications range. This IBFD system could enable future smart vehicles to simultaneously connect to wireless networks, share road information, and self-drive — a capability not possible today.

Fabricating with novel processes

Lincoln Laboratory developed a nanocomposite ink system for 3D printing functional materials. Deposition using an active-mixing nozzle allows the generation of graded structures that transition gradually from one material to another. This ability to control the electromagnetic and geometric properties of a material can enable smaller, lighter, and less-power-hungry RF components while accommodating large frequency bandwidths. Furthermore, introducing different particles into the ink in a modular fashion allows the absorption of a wide range of radiation types. This 3D-printed shielding is expected to be used for protecting electronics in small satellites. This award is shared with Professor Jennifer Lewis’ research group at Harvard University.

The laboratory’s engineered substrates for rapid advanced imaging sensor development dramatically reduce the time and cost of developing advanced silicon imaging sensors. These substrates prebuild most steps of the back-illumination process (a method to increase the amount of light that hits a pixel) directly into the starting wafer, before device fabrication begins. Then, a specialized process allows the detector substrate and readout circuits to be mated together and uniformly thinned to microns in thickness at the die level rather than at the wafer level. Both aspects can save a project millions of dollars in fabrication costs by enabling the production of small batches of detectors, instead of a full wafer run, while improving sensor noise and performance. This platform has allowed researchers to prototype new imaging sensor concepts — including detectors for future NASA autonomous lander missions — that would have taken years to develop in a traditional process.

Additive manufacturing, or 3D printing, holds promise for fabricating complex glass structures that would be unattainable with traditional glass manufacturing techniques. Lincoln Laboratory’s low-temperature additive manufacturing of glass composites allows 3D printing of multimaterial glass items without the need for costly high-temperature processing. This low-temperature technique, which cures the glass at 250 degrees Celsius as compared to the standard 1,000 C, relies on simple components: a liquid silicate solution, a structural filler, a fumed nanoparticle, and an optional functional additive to produce glass with optical, electrical, or chemical properties. The technique could facilitate the widespread adoption of 3D printing for glass devices such as microfluidic systems, free-form optical lenses or fiber, and high-temperature electronic components.

The researchers behind each R&D 100 Award–winning technology will be honored at an awards gala on Nov. 21 in Palm Springs, California.

Research quantifying “nociception” could help improve management of surgical pain

The degree to which a surgical patient’s subconscious processing of pain, or “nociception,” is properly managed by their anesthesiologist directly affects the post-operative drug side effects they’ll experience and the further pain management they’ll require. But pain is subjective and hard to measure, even when patients are awake, much less when they are unconscious.

In a new study appearing in the Proceedings of the National Academy of Sciences, MIT and Massachusetts General Hospital (MGH) researchers describe a set of statistical models that objectively quantified nociception during surgery. Ultimately, they hope to help anesthesiologists optimize drug dose and minimize post-operative pain and side effects.

The new models integrate data meticulously logged over 18,582 minutes of 101 abdominal surgeries in men and women at MGH. Led by Sandya Subramanian PhD ’21, an assistant professor at the University of California at Berkeley and the University of California at San Francisco, the researchers collected and analyzed data from five physiological sensors as patients experienced a total of 49,878 distinct “nociceptive stimuli” (such as incisions or cautery). Moreover, the team recorded what drugs were administered, and how much and when, to factor in their effects on nociception or cardiovascular measures. They then used all the data to develop a set of statistical models that performed well in retrospectively indicating the body’s response to nociceptive stimuli.

The team’s goal is to furnish such accurate, objective, and physiologically principled information in real time to anesthesiologists who currently have to rely heavily on intuition and past experience in deciding how to administer pain-control drugs during surgery. If anesthesiologists give too much, patients can experience side effects ranging from nausea to delirium. If they give too little, patients may feel excessive pain after they awaken.

“Sandya’s work has helped us establish a principled way to understand and measure nociception (unconscious pain) during general anesthesia,” says study senior author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT. Brown is also an anesthesiologist at MGH and a professor at Harvard Medical School. “Our next objective is to make the insights that we have gained from Sandya’s studies reliable and practical for anesthesiologists to use during surgery.”

Surgery and statistics

The research began as Subramanian’s doctoral thesis project in Brown’s lab in 2017. The best prior attempts to objectively model nociception have relied either solely on the electrocardiogram (ECG, an indirect indicator of heart-rate variability) or on systems that incorporate more than one measurement, but those approaches were either based on lab experiments using pain stimuli that do not compare in intensity to surgical pain or were validated by statistically aggregating just a few time points across multiple patients’ surgeries, Subramanian says.

“There’s no other place to study surgical pain except for the operating room,” Subramanian says. “We wanted to not only develop the algorithms using data from surgery, but also actually validate it in the context in which we want someone to use it. If we are asking them to track moment-to-moment nociception during an individual surgery, we need to validate it in that same way.”

So she and Brown worked to advance the state of the art by collecting multi-sensor data during the whole course of actual surgeries and by accounting for the confounding effects of the drugs administered. In that way, they hoped to develop a model that could make accurate predictions that remained valid for the same patient all the way through their operation.

Part of the improvement the team achieved came from tracking patterns of both heart rate and skin conductance. Changes in both of these physiological factors can indicate the body’s primal “fight or flight” response to nociception or pain, but some drugs used during surgery directly affect cardiovascular state, while skin conductance (or “EDA,” electrodermal activity) remains unaffected. The study measured not only ECG but also backed it up with PPG, an optical measure of heart rate (like the oxygen sensor on a smartwatch), because ECG signals can sometimes be made noisy by all the electrical equipment buzzing away in the operating room. Similarly, Subramanian backstopped the EDA measures with measures of skin temperature to ensure that changes in skin conductance from sweat were because of nociception and not simply the patient being too warm. The study also tracked respiration.

Then the authors performed statistical analyses to develop physiologically relevant indices from each of the cardiovascular and skin conductance signals. And once each index was established, further statistical analysis enabled tracking the indices together to produce models that could make accurate, principled predictions of when nociception was occurring and the body’s response.

Nailing nociception

Subramanian “supervised” four versions of the model by feeding them information on when actual nociceptive stimuli occurred, so that they could learn the association between the physiological measurements and the incidence of pain-inducing events. In some of these trained versions she left out drug information, and in some she used different statistical approaches (either “linear regression” or “random forest”). In a fifth version, based on a “state space” approach, she left the model unsupervised, meaning it had to learn to infer moments of nociception purely from the physiological indices. She compared all five versions of her model to one of the current industry standards, an ECG-tracking model called ANI.
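
As a generic illustration of what one supervised version might look like in code (a sketch only: the data, feature set, and windowing below are hypothetical placeholders, and the study’s physiological indices and state-space model are more elaborate), a random-forest classifier can be trained on per-window indices labeled by whether a nociceptive stimulus occurred:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per time window, columns are derived
# physiological indices (e.g., heart-rate index from ECG/PPG, skin-conductance
# index from EDA, respiration rate, and drug infusion rates).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))             # placeholder data, not the study's
y = rng.integers(0, 2, size=5000)          # 1 = nociceptive stimulus in the window

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# The per-window probability of nociception can be plotted over the course of
# a surgery, analogous to the model outputs described in the study.
scores = model.predict_proba(X_test)[:, 1]
print("AUC on held-out windows:", roc_auc_score(y_test, scores))
```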

Each model’s output can be visualized as a graph plotting the predicted degree of nociception over time. ANI performed just above chance, though it runs in real time. The unsupervised model performed better than ANI, though not quite as well as the supervised models. The best-performing of those was one that incorporated drug information and used a “random forest” approach. Still, the authors note, the fact that the unsupervised model performed significantly better than chance suggests that there is indeed an objectively detectable signature of the body’s nociceptive state, even when looking across different patients.

“A state space framework using multisensory physiological observations is effective in uncovering this implicit nociceptive state with a consistent definition across multiple subjects,” wrote Subramanian, Brown, and their co-authors. “This is an important step toward defining a metric to track nociception without including nociceptive ‘ground truth’ information, most practical for scalability and implementation in clinical settings.”

Indeed, the next steps for the research are to increase the data sampling and to further refine the models so that they can eventually be put into practice in the operating room. That will require enabling them to predict nociception in real time, rather than in post-hoc analysis. Once that advance is made, anesthesiologists or intensivists could use the models to inform their pain-drug dosing judgments. Further into the future, the models could inform closed-loop systems that automatically dose drugs under the anesthesiologist’s supervision.

“Our study is an important first step toward developing objective markers to track surgical nociception,” the authors concluded. “These markers will enable objective assessment of nociception in other complex clinical settings, such as the ICU [intensive care unit], as well as catalyze future development of closed-loop control systems for nociception.”

In addition to Subramanian and Brown, the paper’s other authors are Bryan Tseng, Marcela del Carmen, Annekathryn Goodman, Douglas Dahl, and Riccardo Barbieri.

Funding from The JPB Foundation; The Picower Institute; George J. Elbaum ’59, SM ’63, PhD ’67; Mimi Jensen; Diane B. Greene SM ’78; Mendel Rosenblum; Bill Swanson; Cathy and Lou Paglia; annual donors to the Anesthesia Initiative Fund; the National Science Foundation; and an MIT Office of Graduate Education Collamore-Rogers Fellowship supported the research.

3 Questions: Should we label AI systems like we do prescription drugs?

AI systems are increasingly being deployed in safety-critical health care situations. Yet these models sometimes hallucinate incorrect information, make biased predictions, or fail for unexpected reasons, which could have serious consequences for patients and clinicians.

In a commentary article published today in Nature Computational Science, MIT Associate Professor Marzyeh Ghassemi and Boston University Associate Professor Elaine Nsoesie argue that, to mitigate these potential harms, AI systems should be accompanied by responsible-use labels, similar to U.S. Food and Drug Administration-mandated labels placed on prescription medications.

MIT News spoke with Ghassemi about the need for such labels, the information they should convey, and how labeling procedures could be implemented.

Q: Why do we need responsible use labels for AI systems in health care settings?

A: In a health setting, we have an interesting situation where doctors often rely on technology or treatments that are not fully understood. Sometimes this lack of understanding is fundamental — the mechanism behind acetaminophen, for instance — but other times this is just a limit of specialization. We don’t expect clinicians to know how to service an MRI machine, for instance. Instead, we have certification systems through the FDA or other federal agencies that certify the use of a medical device or drug in a specific setting.

Importantly, medical devices also have service contracts — a technician from the manufacturer will fix your MRI machine if it is miscalibrated. For approved drugs, there are postmarket surveillance and reporting systems so that adverse effects or events can be addressed, for instance if a lot of people taking a drug seem to be developing a condition or allergy.

Models and algorithms, whether they incorporate AI or not, skirt a lot of these approval and long-term monitoring processes, and that is something we need to be wary of. Many prior studies have shown that predictive models need more careful evaluation and monitoring. With more recent generative AI specifically, we cite work that has demonstrated generation is not guaranteed to be appropriate, robust, or unbiased. Because we don’t have the same level of surveillance on model predictions or generation, it would be even more difficult to catch a model’s problematic responses. The generative models being used by hospitals right now could be biased. Having use labels is one way of ensuring that models don’t automate biases that are learned from human practitioners or miscalibrated clinical decision support scores of the past.      

Q: Your article describes several components of a responsible use label for AI, following the FDA approach for creating prescription labels, including approved usage, ingredients, potential side effects, etc. What core information should these labels convey?

A: The things a label should make obvious are the time, place, and manner of a model’s intended use. For instance, the user should know that models were trained at a specific time with data from a specific time point. For example, does it include data that did or did not include the Covid-19 pandemic? There were very different health practices during Covid that could impact the data. This is why we advocate for the model “ingredients” and “completed studies” to be disclosed.

For place, we know from prior research that models trained in one location tend to have worse performance when moved to another location. Knowing where the data were from and how a model was optimized within that population can help to ensure that users are aware of “potential side effects,” any “warnings and precautions,” and “adverse reactions.”

With a model trained to predict one outcome, knowing the time and place of training could help you make intelligent judgements about deployment. But many generative models are incredibly flexible and can be used for many tasks. Here, time and place may not be as informative, and more explicit direction about “conditions of labeling” and “approved usage” versus “unapproved usage” come into play. If a developer has evaluated a generative model for reading a patient’s clinical notes and generating prospective billing codes, they can disclose that it has bias toward overbilling for specific conditions or underrecognizing others. A user wouldn’t want to use this same generative model to decide who gets a referral to a specialist, even though they could. This flexibility is why we advocate for additional details on the manner in which models should be used.

In general, we advocate that you should train the best model you can, using the tools available to you. But even then, there should be a lot of disclosure. No model is going to be perfect. As a society, we now understand that no pill is perfect — there is always some risk. We should have the same understanding of AI models. Any model — with or without AI — is limited. It may be giving you realistic, well-trained, forecasts of potential futures, but take that with whatever grain of salt is appropriate.

Q: If AI labels were to be implemented, who would do the labeling and how would labels be regulated and enforced?

A: If you don’t intend for your model to be used in practice, then the disclosures you would make for a high-quality research publication are sufficient. But once you intend your model to be deployed in a human-facing setting, developers and deployers should do an initial labeling, based on some of the established frameworks. There should be a validation of these claims prior to deployment; in a safety-critical setting like health care, many agencies of the Department of Health and Human Services could be involved.

For model developers, I think that knowing you will need to label the limitations of a system induces more careful consideration of the process itself. If I know that at some point I am going to have to disclose the population upon which a model was trained, I would not want to disclose that it was trained only on dialogue from male chatbot users, for instance.

Thinking about things like who the data are collected on, over what time period, what the sample size was, and how you decided what data to include or exclude, can open your mind up to potential problems at deployment. 

Playing a new tune

For generations, Andrew Sutherland’s family had the same calling: bagpipes. Growing up in Halifax, Nova Scotia, in a family with Scottish roots, Sutherland’s father, grandfather, and great-grandfather all played the bagpipes competitively, criss-crossing North America. Sutherland’s aunts and uncles were pipers too.

But Sutherland did not take to the instrument. He liked math, went to college, entered a PhD program, and emerged as a professor at the MIT Sloan School of Management. Sutherland is an enterprising scholar whose work delves into issues around the financing and auditing of private firms, the effects of financial technology, and even detecting business fraud.

“I was actually the first male in my family to not play the bagpipes, and the first to go to university,” Sutherland explains. “The joke is that I’m the shame of the family, since I never picked up the pipes and continued the tradition.”

The family bagpiping loss is MIT’s gain. While Sutherland’s area of specialty is nominally accounting, his work has illuminated business practices more broadly.

“A lot of what we know about the financial system and how companies perform, and about financial statements, comes from big public companies,” Sutherland says. “But we have a lot of entrepreneurs come through Sloan looking to found startups, and in the U.S., private firms generate more than half of employment and investment. Until recently, we haven’t known a lot about how they get capital, how they make decisions.”

For his research and teaching, Sutherland was awarded tenure at MIT last year.

Piper at the gates of college

Sutherland is proud of his family history; his grandfather and great-grandfather have taught generations of bagpipe players in Nova Scotia, with many of their students becoming successful pipers around the world. But Sutherland took to math and business studies, receiving his undergraduate degree in commerce, with honors in accounting, from York University in Toronto. Then he received an MBA from Carnegie Mellon University, with concentrations in finance and quantitative analysis.

Sutherland still wanted to research financial markets, though. How did banks evaluate the private businesses they were lending to? How much were those firms disclosing to investors? How much just comes down to trust? He entered the PhD program at the University of Chicago’s Booth School of Business and found scholars encouraging him to pursue those questions.

That included Sutherland’s advisor, Christian Leuz; the long-time Chicago professor Douglas Diamond, now a Nobel Prize winner, whom Sutherland calls “one of the most generous researchers I’ve met” in academia; and a then-assistant professor, Michael Minnis, who shared Sutherland’s interest in studying private firms and entrepreneurs.

Sutherland earned his PhD from Chicago in 2015, with a dissertation about the changing nature of banker-to-business relationships, published in 2018. That research studied the effects of transparency-improving technologies on how small businesses obtained credit.

“Twenty years ago, banking was very relationship-based,” Sutherland says. “You might play golf with your loan officer once a year and they knew your business and maybe your employees, and they would sponsor the local softball team. Whereas now banking has been really influenced by technology. A lot of companies provide credit through online applications, and the days when you had to supply audited financial statements have gone away.” As a result of the expansion in technology-based lending, credit markets have shifted from a relationship basis to a transactional focus.

Sutherland, who is currently an associate professor at MIT, joined the faculty in 2015 and has remained at the Institute ever since. A fan of modern art, Sutherland decorates his MIT Sloan office with an Andy Warhol print, borrowed through MIT’s art-lending program, as well as reproductions of some of Harold “Doc” Edgerton’s famous high-speed photographs.

Sutherland has since written five papers with Minnis (now a deputy dean at Chicago Booth), and other co-authors. Many of their findings highlight the variation in lending and contracting practices in the small business sector. In a 2017 study, they found that banks collected fewer verified financial statements from construction companies during the pre-2008 housing bubble than afterward; before 2008, lending had become lax, similar to what happened in the mortgage markets, and this contributed to the crisis. In another study from that year, they showed how banks with extensive industry and geographic expertise rely more on soft than hard information in lending.

“We’re trying to understand the ‘Wild West’ in accounting and finance more broadly,” Sutherland says. “For firms like entrepreneurs and privately held companies, largely unfettered by regulation, what choices do they make, and why? And how can we use economic theory to understand these choices?”

Business, trust, and fraud

Indeed, Sutherland has often homed in on issues around trust, rules, and financial misconduct, something students care about greatly.

“Students are always interested in talking about fraud,” Sutherland says. “Our financial system is based on trust. So many of us invest on an entirely anonymous basis — we don’t personally know our fund manager or closely watch what they do with our money.” And while regulations and a functioning justice system protect against problems, Sutherland notes, finance works partly because “people have some trust in the financial system. But that’s a fragile thing. Once people are swindled, they just keep their money in the bank or under the mattress. Often we’ll have students from countries with weak institutions or corruption, and they’ll say, ‘You would never do the things you can do in the U.S., in terms of investing your money.’ Without trust, it becomes harder for entrepreneurs to raise capital and undermines the whole vibrant economic system we have.”

Some measures can make a big difference. In a 2020 paper published in the Journal of Financial Economics, Sutherland and two co-authors found that a 2010 change to the investment adviser qualification exam, which reduced its focus on ethics, had significant effects: People who passed the exam when it featured more rules and ethics material are one-fourth less likely to commit misconduct. They are also more likely to depart employers during or even before scandals.

“It does seem to matter,” Sutherland says. “The person who has had less ethics training is more likely to get in trouble with the industry. You can predict future fraud in a firm by who is quitting. Those with more ethics training are more likely to leave before a scandal breaks.”

In the classroom

Sutherland also believes his interests are well-suited to the MIT Sloan School of Management, since many students are looking to found startups.

“One thing that really stands out about Sloan is that we attract a lot of entrepreneurs,” Sutherland says. “They’re curious about all this stuff: How do I get financing? Should I go to a bank? Should I raise equity? How do I compare myself to competitors? It’s striking to me that if that person wanted to work for a big public firm, I could hand them a textbook that answers many of these questions. But when it comes to private firms, a lot of that is unknown. And it motivates me to find answers.”

And while Sutherland is a prolific researcher, he views classroom time as being just as important. 

“What I hope with every project I work on is that I could take the findings to the classroom, and the students would find it relevant and interesting,” Sutherland says.

As much as Sutherland made a big departure from the family business, he still gets to teach, and in a sense perform for an audience. Ask Sutherland about his students, and he sounds an emphatically upbeat note.

“One of the best things about teaching at MIT,” Sutherland says, “is that the students are smart enough that you can explain how you did the study, and someone will put up a hand and say: ‘What about this, or that?’ You can bring research findings to the classroom and they absorb them and challenge you on them. It’s the best place in the world to teach, because the students are just so curious and so smart.”

MIT named No. 2 university by U.S. News for 2024-25

MIT has placed second in U.S. News and World Report’s annual rankings of the nation’s best colleges and universities, announced today. 

As in past years, MIT’s engineering program continues to lead the list of undergraduate engineering programs at a doctoral institution. The Institute also placed first in six out of nine engineering disciplines.

U.S. News placed MIT second in its evaluation of undergraduate computer science programs, along with Carnegie Mellon University and the University of California at Berkeley. The Institute placed first in four out of 10 computer science disciplines.

MIT remains the No. 2 undergraduate business program, a ranking it shares with UC Berkeley. Among business subfields, MIT is ranked first in three out of 10 specialties.

Within the magazine’s rankings of “academic programs to look for,” MIT topped the list in the category of undergraduate research and creative projects. The Institute also ranks as the third most innovative national university and the third best value, according to the U.S. News peer assessment survey of top academics.

MIT placed first in six engineering specialties: aerospace/aeronautical/astronautical engineering; chemical engineering; computer engineering; electrical/electronic/communication engineering; materials engineering; and mechanical engineering. It placed within the top five in two other engineering areas: biomedical engineering and civil engineering.

Other schools in the top five overall for undergraduate engineering programs are Stanford University, UC Berkeley, Georgia Tech, Caltech, the University of Illinois at Urbana-Champaign, and the University of Michigan at Ann Arbor.

In computer science, MIT placed first in four specialties: biocomputing/bioinformatics/biotechnology; computer systems; programming languages; and theory. It placed in the top five of five other disciplines: artificial intelligence; cybersecurity; data analytics/science; mobile/web applications; and software engineering.

The No. 1-ranked undergraduate computer science program overall is at Stanford. Other schools in the top five overall for undergraduate computer science programs are Carnegie Mellon, UC Berkeley, Princeton University, and the University of Illinois at Urbana-Champaign.

Among undergraduate business specialties, the MIT Sloan School of Management leads in analytics; production/operations management; and quantitative analysis. It also placed within the top five in three other categories: entrepreneurship; management information systems; and supply chain management/logistics.

The No. 1-ranked undergraduate business program overall is at the University of Pennsylvania; other schools ranking in the top five include UC Berkeley, the University of Michigan at Ann Arbor, and New York University.

Accelerating particle size distribution estimation

The pharmaceutical manufacturing industry has long struggled with the issue of monitoring the characteristics of a drying mixture, a critical step in producing medication and chemical compounds. At present, two noninvasive characterization approaches are typically used: a sample is either imaged and individual particles are counted, or researchers use scattered light to estimate the particle size distribution (PSD). The former is time-intensive and leads to increased waste, making the latter a more attractive option.

In recent years, MIT engineers and researchers developed a physics- and machine-learning-based scattered-light approach that has been shown to improve manufacturing processes for pharmaceutical pills and powders, increasing efficiency and accuracy and resulting in fewer failed batches of products. A new open-access paper, “Non-invasive estimation of the powder size distribution from a single speckle image,” published in the journal Light: Science & Applications, expands on this work, introducing an even faster approach.

“Understanding the behavior of scattered light is one of the most important topics in optics,” says Qihang Zhang PhD ’23, an associate researcher at Tsinghua University. “By making progress in analyzing scattered light, we also invented a useful tool for the pharmaceutical industry. Locating the pain point and solving it by investigating the fundamental rule is the most exciting thing to the research team.”

The paper proposes a new PSD estimation method, based on pupil engineering, that reduces the number of frames needed for analysis. “Our learning-based model can estimate the powder size distribution from a single snapshot speckle image, consequently reducing the reconstruction time from 15 seconds to a mere 0.25 seconds,” the researchers explain.
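
The paper’s network and training procedure are its own; purely as a sketch of the kind of learning-based mapping described (the layer sizes, image resolution, and number of size bins below are assumptions), a small convolutional network could regress a discretized particle size distribution from a single speckle image:

```python
import torch
import torch.nn as nn

class SpeckleToPSD(nn.Module):
    """Map a single-channel speckle image to a discretized particle size
    distribution over n_bins size classes (illustrative architecture only)."""
    def __init__(self, n_bins: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_bins)

    def forward(self, x):
        z = self.features(x).flatten(1)
        # Softmax so the output is a normalized distribution over size bins
        return torch.softmax(self.head(z), dim=1)

# One forward pass on a single 256x256 speckle image (placeholder input)
model = SpeckleToPSD()
image = torch.rand(1, 1, 256, 256)
psd = model(image)          # shape (1, 32), entries sum to 1
print(psd.shape, float(psd.sum()))
```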

“Our main contribution in this work is accelerating a particle size detection method by 60 times, with a collective optimization of both algorithm and hardware,” says Zhang. “This high-speed probe is capable of detecting the size evolution in fast dynamical systems, providing a platform to study models of processes in the pharmaceutical industry, including drying, mixing, and blending.”

The technique offers a low-cost, noninvasive particle size probe that collects back-scattered light from powder surfaces. The compact, portable prototype is compatible with most drying systems on the market, as long as there is an observation window. This online measurement approach may help control manufacturing processes, improving efficiency and product quality. Further, the previous lack of online monitoring prevented systematic study of dynamical models of manufacturing processes; this probe could provide a new platform for research and modeling of particle size evolution.

This work, a successful collaboration between physicists and engineers, grew out of the MIT-Takeda Program. Collaborators are affiliated with three MIT departments: Mechanical Engineering, Chemical Engineering, and Electrical Engineering and Computer Science. George Barbastathis, professor of mechanical engineering at MIT, is the article’s senior author.

A two-dose schedule could make HIV vaccines more effective

One major reason why it has been difficult to develop an effective HIV vaccine is that the virus mutates very rapidly, allowing it to evade the antibody response generated by vaccines.

Several years ago, MIT researchers showed that administering a series of escalating doses of an HIV vaccine over a two-week period could help overcome a part of that challenge by generating larger quantities of neutralizing antibodies. However, a multidose vaccine regimen administered over a short time is not practical for mass vaccination campaigns.

In a new study, the researchers have now found that they can achieve a similar immune response with just two doses, given one week apart. The first dose, which is much smaller, prepares the immune system to respond more powerfully to the second, larger dose.

This study, which was performed by bringing together computational modeling and experiments in mice, used an HIV envelope protein as the vaccine. A single-dose version of this vaccine is now in clinical trials, and the researchers hope to establish another study group that will receive the vaccine on a two-dose schedule.

“By bringing together the physical and life sciences, we shed light on some basic immunological questions that helped develop this two-dose schedule to mimic the multiple-dose regimen,” says Arup Chakraborty, the John M. Deutch Institute Professor at MIT and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MIT, MGH and Harvard University.

This approach may also generalize to vaccines for other diseases, Chakraborty notes.

Chakraborty and Darrell Irvine, a former MIT professor of biological engineering and materials science and engineering and member of the Koch Institute for Integrative Cancer Research, who is now a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the study, which appears today in Science Immunology. The lead authors of the paper are Sachin Bhagchandani PhD ’23 and Leerang Yang PhD ’24.

Neutralizing antibodies

Each year, HIV infects more than 1 million people around the world, and some of those people do not have access to antiviral drugs. An effective vaccine could prevent many of those infections. One promising vaccine now in clinical trials consists of an HIV protein called an envelope trimer, along with a nanoparticle called SMNP. The nanoparticle, developed by Irvine’s lab, acts as an adjuvant that helps recruit a stronger B cell response to the vaccine.

In clinical trials, this vaccine and other experimental vaccines have been given as just one dose. However, there is growing evidence that a series of doses is more effective at generating broadly neutralizing antibodies. The seven-dose regimen, the researchers believe, works well because it mimics what happens when the body is exposed to a virus: The immune system builds up a strong response as more viral proteins, or antigens, accumulate in the body.

In the new study, the MIT team investigated how this response develops and explored whether they could achieve the same effect using a smaller number of vaccine doses.

“Giving seven doses just isn’t feasible for mass vaccination,” Bhagchandani says. “We wanted to identify some of the critical elements necessary for the success of this escalating dose, and to explore whether that knowledge could allow us to reduce the number of doses.”

The researchers began by comparing the effects of one, two, three, four, five, six, or seven doses, all given over a 12-day period. They initially found that while three or more doses generated strong antibody responses, two doses did not. However, by tweaking the dose intervals and ratios, the researchers discovered that giving 20 percent of the vaccine in the first dose and 80 percent in a second dose, seven days later, achieved just as good a response as the seven-dose schedule.
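As a rough illustration of the dosing arithmetic, the sketch below splits a fixed total antigen amount across the two kinds of schedules compared above. Only the 20/80 two-dose split and the seven-day gap come from the study; the exponentially escalating shape of the hypothetical seven-dose schedule is an assumption made purely for illustration.

```python
# Illustrative arithmetic only: splitting a fixed total antigen amount across
# the schedules compared in the study. The doubling pattern of the seven-dose
# escalation is an assumption; only the 20/80 split and 7-day gap are from the study.
total_dose = 1.0

# Hypothetical escalating schedule: seven doses over ~12 days, each roughly
# double the previous one, normalized to sum to the total dose.
weights = [2**i for i in range(7)]                  # 1, 2, 4, ..., 64
escalating = [total_dose * w / sum(weights) for w in weights]

# Two-dose schedule from the study: 20% on day 0, 80% on day 7.
two_dose = [0.2 * total_dose, 0.8 * total_dose]

print([round(f, 3) for f in escalating])            # ~[0.008, 0.016, ..., 0.504]
print(two_dose)                                     # [0.2, 0.8]
```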

“It was clear that understanding the mechanisms behind this phenomenon would be crucial for future clinical translation,” Yang says. “Even if the ideal dosing ratio and timing may differ for humans, the underlying mechanistic principles will likely remain the same.”

Using a computational model, the researchers explored what was happening in each of these dosing scenarios. This work showed that when all of the vaccine is given as one dose, most of the antigen gets chopped into fragments before it reaches the lymph nodes. Lymph nodes are where B cells become activated to target a particular antigen, within structures known as germinal centers.

When only a tiny amount of the intact antigen reaches these germinal centers, B cells can’t come up with a strong response against that antigen.

However, a very small number of B cells do arise that produce antibodies targeting the intact antigen. So, giving a small amount in the first dose does not “waste” much antigen but allows some B cells and antibodies to develop. If a second, larger dose is given a week later, those antibodies bind to the antigen before it can be broken down and escort it into the lymph node. This allows more B cells to be exposed to that antigen and eventually leads to a large population of B cells that can target it.

“The early doses generate some small amounts of antibody, and that’s enough to then bind to the vaccine of the later doses, protect it, and target it to the lymph node. That’s how we realized that we don’t need to give seven doses,” Bhagchandani says. “A small initial dose will generate this antibody and then when you give the larger dose, it can again be protected because that antibody will bind to it and traffic it to the lymph node.”

T-cell boost

Those antigens may stay in the germinal centers for weeks or even longer, allowing more B cells to come in and be exposed to them, making it more likely that diverse types of antibodies will develop.

The researchers also found that the two-dose schedule induces a stronger T-cell response. The first dose activates dendritic cells, which promote inflammation and T-cell activation. Then, when the second dose arrives, even more dendritic cells are stimulated, further boosting the T-cell response.

Overall, the two-dose regimen resulted in a fivefold improvement in the T-cell response and a 60-fold improvement in the antibody response, compared to a single vaccine dose.

“Reducing the ‘escalating dose’ strategy down to two shots makes it much more practical for clinical implementation. Further, a number of technologies are in development that could mimic the two-dose exposure in a single shot, which could become ideal for mass vaccination campaigns,” Irvine says.

The researchers are now studying this vaccine strategy in a nonhuman primate model. They are also working on specialized materials that can deliver the second dose over an extended period of time, which could further enhance the immune response.

The research was funded by the Koch Institute Support (core) Grant from the National Cancer Institute, the National Institutes of Health, and the Ragon Institute of MIT, MGH, and Harvard.

Engineers 3D print sturdy glass bricks for building structures

What if construction materials could be put together and taken apart as easily as LEGO bricks? Such reconfigurable masonry would be disassembled at the end of a building’s lifetime and reassembled into a new structure, in a sustainable cycle that could supply generations of buildings using the same physical building blocks.

That’s the idea behind circular construction, which aims to reuse and repurpose a building’s materials whenever possible, to minimize the manufacturing of new materials and reduce the construction industry’s “embodied carbon,” which refers to the greenhouse gas emissions associated with every process throughout a building’s construction, from manufacturing to demolition.

Now MIT engineers, motivated by circular construction’s eco potential, are developing a new kind of reconfigurable masonry made from 3D-printed, recycled glass. Using a custom 3D glass printing technology provided by MIT spinoff Evenline, the team has made strong, multilayered glass bricks, each in the shape of a figure eight, that are designed to interlock, much like LEGO bricks.

In mechanical testing, a single glass brick withstood pressures similar to those a concrete block can withstand. As a structural demonstration, the researchers constructed a wall of interlocking glass bricks. They envision that 3D-printable glass masonry could be reused many times over as recyclable bricks for building facades and internal walls.

Video: Courtesy of Evenline

“Glass is a highly recyclable material,” says Kaitlyn Becker, assistant professor of mechanical engineering at MIT. “We’re taking glass and turning it into masonry that, at the end of a structure’s life, can be disassembled and reassembled into a new structure, or can be stuck back into the printer and turned into a completely different shape. All this builds into our idea of a sustainable, circular building material.”

“Glass as a structural material kind of breaks people’s brains a little bit,” says Michael Stern, a former MIT graduate student and researcher in both MIT’s Media Lab and Lincoln Laboratory, who is also founder and director of Evenline. “We’re showing this is an opportunity to push the limits of what’s been done in architecture.”

Becker and Stern, with their colleagues, detail their glass brick design in a study appearing today in the journal Glass Structures & Engineering. Their MIT co-authors include lead author Daniel Massimino and Charlotte Folinus, along with Ethan Townsend at Evenline.

Lock step

The inspiration for the new circular masonry design arose partly in MIT’s Glass Lab, where Becker and Stern, then undergraduate students, first learned the art and science of blowing glass.

“I found the material fascinating,” says Stern, who later designed a 3D printer capable of printing molten recycled glass — a project he took on while studying in the mechanical engineering department. “I started thinking of how glass printing can find its place and do interesting things, construction being one possible route.”

Meanwhile, Becker, who accepted a faculty position at MIT, began exploring the intersection of manufacturing and design, and ways to develop new processes that enable innovative designs.

“I get excited about expanding design and manufacturing spaces for challenging materials with interesting characteristics, like glass and its optical properties and recyclability,” Becker says. “As long as it’s not contaminated, you can recycle glass almost infinitely.”

She and Stern teamed up to see whether and how 3D-printable glass could be made into a structural masonry unit as sturdy and stackable as traditional bricks. For their new study, the team used the Glass 3D Printer 3 (G3DP3), the latest version of Evenline’s glass printer, which pairs with a furnace to melt crushed glass bottles into a molten, printable form that the printer then deposits in layered patterns.

The team printed prototype glass bricks using soda-lime glass that is typically used in a glassblowing studio. They incorporated two round pegs onto each printed brick, similar to the studs on a LEGO brick. Like the toy blocks, the pegs enable bricks to interlock and assemble into larger structures. Another material, placed between the bricks, prevents scratches or cracks between glass surfaces; it can be removed if a brick structure is dismantled and recycled, allowing the bricks to be remelted in the printer and formed into new shapes. The team decided to make the blocks into a figure-eight shape.

“With the figure-eight shape, we can constrain the bricks while also assembling them into walls that have some curvature,” Massimino says.

Stepping stones

The team printed glass bricks and tested their mechanical strength in an industrial hydraulic press that squeezed the bricks until they began to fracture. The researchers found that the strongest bricks were able to hold up to pressures that are comparable to what concrete blocks can withstand. Those strongest bricks were made mostly from printed glass, with a separately manufactured interlocking feature that attached to the bottom of the brick. These results suggest that most of a masonry brick could be made from printed glass, with an interlocking feature that could be printed, cast, or separately manufactured from a different material.

“Glass is a complicated material to work with,” Becker says. “The interlocking elements, made from a different material, showed the most promise at this stage.”

The group is looking into whether more of a brick’s interlocking feature could be made from printed glass, but doesn’t see this as a dealbreaker in moving forward to scale up the design. To demonstrate glass masonry’s potential, they constructed a curved wall of interlocking glass bricks. Next, they aim to build progressively bigger, self-supporting glass structures.

“We have more understanding of what the material’s limits are, and how to scale,” Stern says. “We’re thinking of stepping stones to buildings, and want to start with something like a pavilion — a temporary structure that humans can interact with, and that you could then reconfigure into a second design. And you could imagine that these blocks could go through a lot of lives.”

This research was supported, in part, by the Bose Research Grant Program and MIT’s Research Support Committee.