Christie Mealo discusses AI in health, her AI networking app, and her role in Philly’s tech scene, highlighting generative AI’s impact….
10 Best AI Tools for Supply Chain Management (September 2024)
As we are well aware by now, artificial intelligence has emerged as a game-changing force in supply chain management. As organizations strive to navigate increasingly complex global networks, AI-powered solutions are providing unprecedented levels of visibility, optimization, and predictive capabilities. This article explores some of the…
Startup helps people fall asleep by aligning audio signals with brainwaves
Do you ever toss and turn in bed after a long day, wishing you could just program your brain to turn off and get some sleep?
That may sound like science fiction, but that’s the goal of the startup Elemind, which is using an electroencephalogram (EEG) headband that emits acoustic stimulation aligned with people’s brainwaves to move them into a sleep state more quickly.
In a small study of adults with sleep onset insomnia, 30 minutes of stimulation from the device decreased the time it took them to fall asleep by 10 to 15 minutes. This summer, Elemind began shipping its product to a small group of users as part of an early pilot program.
The company, which was founded by MIT Professor Ed Boyden ’99, MNG ’99; David Wang ’05, SM ’10, PhD ’15; former postdoc Nir Grossman; former Media Lab research affiliate Heather Read; and Meredith Perry, plans to collect feedback from early users before making the device more widely available.
Elemind’s team believes their device offers several advantages over sleeping pills, which can cause side effects and addiction.
“We wanted to create a nonchemical option for people who wanted to get great sleep without side effects, so you could get all the benefits of natural sleep without the risks,” says Perry, Elemind’s CEO. “There’s a number of people that we think would benefit from this device, whether you’re a breastfeeding mom that might not want to take a sleep drug, somebody traveling across time zones that wants to fight jet lag, or someone that simply wants to improve your next-day performance and feel like you have more control over your sleep.”
From research to product
Wang’s academic journey at MIT spanned nearly 15 years, during which he earned four degrees, culminating in a PhD in artificial intelligence in 2015. In 2014, Wang was co-teaching a class with Grossman when they began working together to noninvasively measure real-time biological oscillations in the brain and body. Through that work, they became fascinated with a technique for modulating the brain known as phase-locked stimulation, which uses precisely timed visual, physical, or auditory stimulation that lines up with brain activity.
“You’re measuring some kind of changing variable, and then you want to change your stimulus in real time in response to that variable,” explains Boyden, who pointed Wang and Grossman to a set of mathematical techniques that became some of the core intellectual property of Elemind.
Phase-locked stimulation has been used for years in conjunction with electrodes implanted in the brain to disrupt seizures and tremors. But in 2021, Wang, Grossman, Boyden, and their collaborators published a paper showing they could use electrical stimulation from outside the skull to suppress essential tremor syndrome, the most common adult movement disorder.
The results were promising, but the founders decided to start by proving their approach worked in a less regulated space: sleep. They developed a system to deliver auditory pulses timed to promote or suppress alpha oscillations in the brain, which are elevated in insomnia.
That kicked off a years-long product development process that led to the headband device Elemind uses today. The headband measures brainwaves through EEG and feeds the results into Elemind’s proprietary algorithms, which are used to dynamically generate audio through a bone conduction driver. The moment the device detects that someone is asleep, the audio is slowly tapered out.
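Elemind’s algorithms are proprietary, but the general idea of phase-locked auditory stimulation can be sketched in a few lines of code. The toy Python example below is only an illustration under assumed parameters (a 250 Hz sampling rate, an 8–12 Hz alpha band, and an arbitrary target phase): it band-pass filters an EEG trace, estimates the instantaneous alpha phase with a Hilbert transform, and reports the moments at which a phase-locked audio pulse would be triggered. A real device would need a causal, real-time phase estimator; the offline filtering here is for clarity only.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250               # EEG sampling rate in Hz (assumed)
ALPHA_BAND = (8, 12)   # alpha oscillations, Hz
TARGET_PHASE = 0.0     # phase at which to deliver each audio pulse (assumed)

def alpha_phase(eeg):
    # Band-pass the EEG around the alpha band, then take the instantaneous phase.
    b, a = butter(4, ALPHA_BAND, btype="bandpass", fs=FS)
    alpha = filtfilt(b, a, eeg)
    return np.angle(hilbert(alpha))

def pulse_times(eeg, tolerance=0.1):
    # Return the times (in seconds) at which the alpha phase crosses the target,
    # i.e. when a phase-locked audio pulse would be triggered.
    phase = alpha_phase(eeg)
    near_target = np.abs(np.angle(np.exp(1j * (phase - TARGET_PHASE)))) < tolerance
    onsets = np.flatnonzero(near_target & ~np.roll(near_target, 1))  # first sample of each crossing
    return onsets / FS

# Synthetic 10-second EEG: a 10 Hz rhythm plus noise
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(pulse_times(eeg)[:5])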
“We have a theory that the sound that we play triggers an auditory-evoked response in the brain,” Wang says. “That means we get your auditory cortex to basically release this voltage burst that sweeps across your brain and interferes with other regions. Some people who have worn Elemind call it a brain jammer. For folks that ruminate a lot before they go to sleep, their brains are actively running. This encourages their brain to quiet down.”
Beyond sleep
Elemind has established collaborations with eight universities that allow researchers to explore the effectiveness of the company’s approach in a range of use cases, including tremors, memory formation, and Alzheimer’s progression.
“We’re not only developing this product, but also advancing the field of neuroscience by collecting high-resolution data to hopefully also help others conduct new research,” Wang says.
The collaborations have led to some exciting results. Researchers at McGill University found that using Elemind’s acoustic stimulation during sleep increased activity in areas of the cortex related to motor function and improved healthy adults’ performance in memory tasks. Other studies have shown the approach can be used to reduce essential tremors in patients and enhance sedation recovery.
Elemind is focused on its sleep application for now, but the company plans to develop other solutions, from medical interventions to memory and focus augmentation, as the science evolves.
“The vision is how do we move beyond sleep into what could ultimately become like an app store for the brain, where you can download a brain state like you download an app?” Perry says. “How can we make this a tool that can be applied to a bunch of different applications with a single piece of hardware that has a lot of different stimulation protocols?”
Color Mixing With Animation Composition
Mixing colors in CSS is pretty much a solved deal, thanks to the more recent color-mix() function as it gains support. Pass in two color values — any two color values at all — and optionally set the proportions.
background-color: color-mix(in oklab, red, blue);
…
Study evaluates impacts of summer heat in U.S. prison environments
When summer temperatures spike, so does our vulnerability to heat-related illness or even death. For the most part, people can take measures to reduce their heat exposure by opening a window, turning up the air conditioning, or simply getting a glass of water. But for people who are incarcerated, freedom to take such measures is often not an option. Prison populations therefore are especially vulnerable to heat exposure, due to their conditions of confinement.
A new study by MIT researchers examines summertime heat exposure in prisons across the United States and identifies characteristics within prison facilities that can further contribute to a population’s vulnerability to summer heat.
The study’s authors used high-spatial-resolution air temperature data to determine the daily average outdoor temperature for each of 1,614 prisons in the U.S., for every summer between the years 1990 and 2023. They found that the prisons that are exposed to the most extreme heat are located in the southwestern U.S., while prisons with the biggest changes in summertime heat, compared to the historical record, are in the Pacific Northwest, the Northeast, and parts of the Midwest.
Those findings are not entirely unique to prisons, as any non-prison facility or community in the same geographic locations would be exposed to similar outdoor air temperatures. But the team also looked at characteristics specific to prison facilities that could further exacerbate an incarcerated person’s vulnerability to heat exposure. They identified nine such facility-level characteristics, such as highly restricted movement, poor staffing, and inadequate mental health treatment. People living and working in prisons with any one of these characteristics may experience compounded risk to summertime heat.
The team also looked at the demographics of 1,260 prisons in their study and found that the prisons with higher heat exposure on average also had higher proportions of non-white and Hispanic populations. The study, appearing today in the journal GeoHealth, provides policymakers and community leaders with ways to estimate, and take steps to address, a prison population’s heat risk, which they anticipate could worsen with climate change.
“This isn’t a problem because of climate change. It’s becoming a worse problem because of climate change,” says study lead author Ufuoma Ovienmhada SM ’20, PhD ’24, a graduate of the MIT Media Lab, who recently completed her doctorate in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “A lot of these prisons were not built to be comfortable or humane in the first place. Climate change is just aggravating the fact that prisons are not designed to enable incarcerated populations to moderate their own exposure to environmental risk factors such as extreme heat.”
The study’s co-authors include Danielle Wood, MIT associate professor of media arts and sciences, and of AeroAstro; and Brent Minchew, MIT associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences; along with Ahmed Diongue ’24, Mia Hines-Shanks of Grinnell College, and Michael Krisch of Columbia University.
Environmental intersections
The new study is an extension of work carried out at the Media Lab, where Wood leads the Space Enabled research group. The group aims to advance social and environmental justice issues through the use of satellite data and other space-enabled technologies.
The group’s motivation to look at heat exposure in prisons came in 2020 when, as co-president of MIT’s Black Graduate Student Union, Ovienmhada took part in community organizing efforts following the murder of George Floyd by Minneapolis police.
“We started to do more organizing on campus around policing and reimagining public safety. Through that lens I learned more about police and prisons as interconnected systems, and came across this intersection between prisons and environmental hazards,” says Ovienmhada, who is leading an effort to map the various environmental hazards that prisons, jails, and detention centers face. “In terms of environmental hazards, extreme heat causes some of the most acute impacts for incarcerated people.”
She, Wood, and their colleagues set out to use Earth observation data to characterize U.S. prison populations’ vulnerability, or their risk of experiencing negative impacts, from heat.
The team first looked through a database maintained by the U.S. Department of Homeland Security that lists the location and boundaries of carceral facilities in the U.S. From the database’s more than 6,000 prisons, jails, and detention centers, the researchers highlighted 1,614 prison-specific facilities, which together incarcerate nearly 1.4 million people, and employ about 337,000 staff.
They then looked to Daymet, a detailed weather and climate database that tracks daily temperatures across the United States at a 1-kilometer resolution. For each of the 1,614 prison locations, they mapped the daily outdoor temperature for every summer from 1990 to 2023, noting that the majority of current state and federal correctional facilities in the U.S. were built by 1990.
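As a rough illustration of that kind of spatial lookup, the hypothetical Python sketch below builds a small synthetic latitude/longitude grid of daily temperatures, picks the grid cell nearest each facility, and averages June–August temperatures by year. It is not the study’s actual pipeline: Daymet itself uses a projected x/y grid and its own variable names, and the facility coordinates here are made up.

import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-in for a gridded daily-temperature product
time = pd.date_range("1990-01-01", "1992-12-31", freq="D")
lat = np.arange(30.0, 50.0, 1.0)
lon = np.arange(-125.0, -100.0, 1.0)
rng = np.random.default_rng(0)
tmax = xr.DataArray(
    20 + 10 * rng.standard_normal((time.size, lat.size, lon.size)),
    coords={"time": time, "lat": lat, "lon": lon},
    dims=("time", "lat", "lon"),
)
ds = xr.Dataset({"tmax": tmax, "tmin": tmax - 10})

# Hypothetical facility locations
facilities = pd.DataFrame(
    {"facility": ["A", "B"], "lat": [33.4, 45.5], "lon": [-112.1, -122.7]}
)

# Daily mean temperature, restricted to meteorological summer (June-August)
tmean = (ds["tmax"] + ds["tmin"]) / 2
summer = tmean.where(tmean["time"].dt.month.isin([6, 7, 8]), drop=True)

# Nearest grid cell to each facility, averaged over each summer
summaries = {
    row.facility: summer.sel(lat=row.lat, lon=row.lon, method="nearest")
    .groupby("time.year")
    .mean()
    .to_series()
    for row in facilities.itertuples()
}
print(pd.DataFrame(summaries))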
The team also obtained U.S. Census data on each facility’s demographic and facility-level characteristics, such as prison labor activities and conditions of confinement. One limitation of the study that the researchers acknowledge is a lack of information regarding a prison’s climate control.
“There’s no comprehensive public resource where you can look up whether a facility has air conditioning,” Ovienmhada notes. “Even in facilities with air conditioning, incarcerated people may not have regular access to those cooling systems, so our measurements of outdoor air temperature may not be far off from reality.”
Heat factors
From their analysis, the researchers found that more than 98 percent of all prisons in the U.S. experienced at least 10 days in the summer that were hotter than every previous summer, on average, for a given location. Their analysis also revealed that the most heat-exposed prisons, and the prisons that experienced the highest temperatures on average, were mostly in the southwestern U.S. The researchers note that, with the exception of New Mexico, the Southwest is a region where there are no universal air conditioning regulations in state-operated prisons.
“States run their own prison systems, and there is no uniformity of data collection or policy regarding air conditioning,” says Wood, who notes that there is some information on cooling systems in some states and individual prison facilities, but the data is sparse overall, and too inconsistent to include in the group’s nationwide study.
While the researchers could not incorporate air conditioning data, they did consider other facility-level factors that could worsen the effects that outdoor heat triggers. They looked through the scientific literature on heat, health impacts, and prison conditions, and focused on 17 measurable facility-level variables that contribute to heat-related health problems. These include factors such as overcrowding and understaffing.
“We know that whenever you’re in a room that has a lot of people, it’s going to feel hotter, even if there’s air conditioning in that environment,” Ovienmhada says. “Also, staffing is a huge factor. Facilities that don’t have air conditioning but still try to do heat risk-mitigation procedures might rely on staff to distribute ice or water every few hours. If that facility is understaffed or has neglectful staff, that may increase people’s susceptibility to hot days.”
The study found that prisons with any of nine of the 17 variables showed statistically significantly greater heat exposure than prisons without those variables. Additionally, any one of the nine variables could worsen people’s heat risk by combining elevated heat exposure with heightened vulnerability. The variables, the researchers say, could help state regulators and activists identify prisons to prioritize for heat interventions.
“The prison population is aging, and even if you’re not in a ‘hot state,’ every state has responsibility to respond,” Wood emphasizes. “For instance, areas in the Northwest, where you might expect to be temperate overall, have experienced a number of days in recent years of increasing heat risk. A few days out of the year can still be dangerous, particularly for a population with reduced agency to regulate their own exposure to heat.”
This work was supported, in part, by NASA, the MIT Media Lab, and MIT’s Institute for Data, Systems and Society’s Research Initiative on Combatting Systemic Racism.
Vladislav Tankov, Department Lead at JetBrains AI – Interview Series
Vladislav Tankov is Director of AI at JetBrains, where he leads the development of the JetBrains AI and Grazie products and is responsible for AI Assistant in JetBrains IDEs. JetBrains is a global software company specializing in the creation of intelligent, productivity-enhancing tools for software developers and teams. Can you provide an overview…
Deploying AI at Scale: How NVIDIA NIM and LangChain are Revolutionizing AI Integration and Performance
Artificial Intelligence (AI) has moved from a futuristic idea to a powerful force changing industries worldwide. AI-driven solutions are transforming how businesses operate in sectors like healthcare, finance, manufacturing, and retail. They are not only improving efficiency and accuracy but also enhancing decision-making. The growing value…
Fifteen Lincoln Laboratory technologies receive 2024 R&D 100 Awards
Fifteen technologies developed either wholly or in part by MIT Lincoln Laboratory have been named recipients of 2024 R&D 100 Awards. The awards are given by R&D World, an online publication that serves research scientists and engineers worldwide. Dubbed the “Oscars of Innovation,” the awards recognize the 100 most significant technologies transitioned to use or introduced into the marketplace in the past year. An independent panel of expert judges selects the winners.
“The R&D 100 Awards are a significant recognition of the laboratory’s technical capabilities and its role in transitioning technology for real-world impact,” says Melissa Choi, director of Lincoln Laboratory. “It is exciting to see so many projects selected for this honor, and we are proud of everyone whose creativity, curiosity, and technical excellence made these and many other Lincoln Laboratory innovations possible.”
The awarded technologies have a wide range of applications. A handful of them are poised to prevent human harm — for example, by monitoring for heat stroke or cognitive injury. Others present new processes for 3D printing glass, fabricating silicon imaging sensors, and interconnecting integrated circuits. Some technologies take on long-held challenges, such as mapping the human brain and the ocean floor. Together, the winners exemplify the creativity and breadth of Lincoln Laboratory innovation. Since 2010, the laboratory has received 101 R&D 100 Awards.
This year’s R&D 100 Award–winning technologies are described below.
Protecting human health and safety
The Neuron Tracing and Active Learning Environment (NeuroTrALE) software uses artificial intelligence techniques to create high-resolution maps, or atlases, of the brain’s network of neurons from high-dimensional biomedical data. NeuroTrALE addresses a major challenge in AI-assisted brain mapping: a lack of labeled data for training AI systems to build atlases essential for study of the brain’s neural structures and mechanisms. The software is the first end-to-end system to perform processing and annotation of dense microscopy data; generate segmentations of neurons; and enable experts to review, correct, and edit NeuroTrALE’s annotations from a web browser. This award is shared with the lab of Kwanghun (KC) Chung, associate professor in MIT’s Department of Chemical Engineering, Institute for Medical Engineering and Science, and Picower Institute for Learning and Memory.
Many military and law enforcement personnel are routinely exposed to low-level blasts in training settings. Often, these blasts don’t cause immediate diagnosable injury, but exposure over time has been linked to anxiety, depression, and other cognitive conditions. The Electrooculography and Balance Blast Overpressure Monitoring (EYEBOOM) is a wearable system developed to monitor individuals’ blast exposure and notify them if they are at an increased risk of harm. It uses two body-worn sensors, one to capture continuous eye and body movements and another to measure blast energy. An algorithm analyzes these data to detect subtle changes in physiology, which, when combined with cumulative blast exposure, can be predictive of cognitive injury. Today, the system is in use by select U.S. Special Forces units. The laboratory co-developed EYEBOOM with Creare LLC and Lifelens LLC.
Tunable knitted stem cell scaffolds: The development of artificial-tissue constructs that mimic the natural stretchability and toughness of living tissue is in high demand for regenerative medicine applications. A team from Lincoln Laboratory and the MIT Department of Mechanical Engineering developed new forms of biocompatible fabrics that mimic the mechanical properties of native tissues while nurturing growing stem cells. These wearable stem-cell scaffolds can expedite the regeneration of skin, muscle, and other soft tissues to reduce recovery time and limit complications from severe burns, lacerations, and other bodily wounds.
Mixture deconvolution pipeline for forensic investigative genetic genealogy: A rapidly growing field of forensic science is investigative genetic genealogy, wherein investigators submit a DNA profile to commercial genealogy databases to identify a missing person or criminal suspect. Lincoln Laboratory’s software invention addresses a large unmet need in this field: the ability to deconvolve, or unravel, mixed DNA profiles of multiple unknown persons to enable database searching. The software pipeline estimates the number of contributors in a DNA mixture, the percentage of DNA present from each contributor, and the sex of each contributor; then, it deconvolves the different DNA profiles in the mixture to isolate two contributors, without needing to match them to a reference profile of a known contributor, as required by previous software.
Each year, hundreds of people die or suffer serious injuries from heat stroke, especially personnel in high-risk outdoor occupations such as military, construction, or first response. The Heat Injury Prevention System (HIPS) provides accurate, early warning of heat stroke several minutes in advance of visible symptoms. The system collects data from a sensor worn on a chest strap and employs algorithms for estimating body temperature, gait instability, and adaptive physiological strain index. The system then provides an individual’s heat-injury prediction on a mobile app. The affordability, accuracy, and user-acceptability of HIPS have led to its integration into operational environments for the military.
Observing the world
More than 80 percent of the ocean floor remains virtually unmapped and unexplored. Historically, deep sea maps have been generated either at low resolution from a large sonar array mounted on a ship, or at higher resolution with slow and expensive underwater vehicles. New autonomous sparse-aperture multibeam echo sounder technology uses a swarm of about 20 autonomous surface vehicles that work together as a single large sonar array to achieve the best of both worlds: mapping the deep seabed at 100 times the resolution of a ship-mounted sonar and 50 times the coverage rate of an underwater vehicle. New estimation algorithms and acoustic signal processing techniques enable this technology. The system holds potential for significantly improving humanitarian search-and-rescue capabilities and ocean and climate modeling. The R&D 100 Award is shared with the MIT Department of Mechanical Engineering.
FocusNet is a machine-learning architecture for analyzing airborne ground-mapping lidar data. Airborne lidar works by scanning the ground with a laser and creating a digital 3D representation of the area, called a point cloud. Humans or algorithms then analyze the point cloud to categorize scene features such as buildings or roads. In recent years, lidar technology has both improved and diversified, and methods to analyze the data have struggled to keep up. FocusNet fills this gap by using a convolutional neural network — an algorithm that finds patterns in images to recognize objects — to automatically categorize objects within the point cloud. It can achieve this object recognition across different types of lidar system data without needing to be retrained, representing a major advancement in understanding 3D lidar scenes.
Atmospheric observations collected from aircraft, such as temperature and wind, provide the highest-value inputs to weather forecasting models. However, these data collections are sparse and delayed, currently obtained through specialized systems installed on select aircraft. The Portable Aircraft Derived Weather Observation System (PADWOS) offers a way to significantly expand the quality and quantity of these data by leveraging Mode S Enhanced Surveillance (EHS) transponders, which are already installed on more than 95 percent of commercial aircraft and the majority of general aviation aircraft. From the ground, PADWOS interrogates Mode S EHS–equipped aircraft, collecting in milliseconds aircraft state data reported by the transponder to make wind and temperature estimates. The system holds promise for improving forecasts, monitoring climate, and supporting other weather applications.
Advancing computing and communications
Quantum networking has the potential to revolutionize connectivity across the globe, unlocking unprecedented capabilities in computing, sensing, and communications. To realize this potential, entangled photons distributed across a quantum network must arrive and interact with other photons in precisely controlled ways. Lincoln Laboratory’s precision photon synchronization system for quantum networking is the first to provide an efficient solution to synchronize space-to-ground quantum networking links to sub-picosecond precision. Unlike other technologies, the system performs free-space quantum entanglement distribution via a satellite, without needing to locate complex entanglement sources in space. These sources are instead located on the ground, providing an easily accessible test environment that can be upgraded as new quantum entanglement generation technologies emerge.
Superconductive many-state memory and comparison logic: Lincoln Laboratory developed circuits that natively store and compare more than two discrete states, using the quantized magnetic fields of superconductive materials. This property allows the creation of digital logic circuitry that goes beyond binary logic to ternary logic, improving memory throughput without significantly increasing the number of devices required or the surface area of the circuits. Comparing their superconducting ternary-logic memory to a conventional memory, the research team found that the ternary memory could pattern-match across the entire digital Library of Congress nearly 30 times faster. The circuits represent fundamental building blocks for advanced, ultrahigh-speed and low-power digital logic.
The Megachip is an approach to interconnect many small, specialized chips (called chiplets) into a single-chip-like monolithic integrated circuit. Capable of incorporating billions of transistors, this interconnected structure extends device performance beyond the limits imposed by traditional wafer-level packaging. Megachips can address the increasing size and performance demands made on microelectronics used for AI processing and high-performance computing, and in mobile devices and servers.
An in-band full-duplex (IBFD) wireless system with advanced interference mitigation addresses the growing congestion of wireless networks. Previous IBFD systems have demonstrated the ability for a wireless device to transmit and receive on the same frequency at the same time by suppressing self-interference, effectively doubling the device’s efficiency on the frequency spectrum. These systems, however, haven’t addressed interference from external wireless sources on the same frequency. Lincoln Laboratory’s technology, for the first time, allows IBFD to mitigate multiple interference sources, resulting in a wireless system that can increase the number of devices supported, their data rate, and their communications range. This IBFD system could enable future smart vehicles to simultaneously connect to wireless networks, share road information, and self-drive — a capability not possible today.
Fabricating with novel processes
Lincoln Laboratory developed a nanocomposite ink system for 3D printing functional materials. Deposition using an active-mixing nozzle allows the generation of graded structures that transition gradually from one material to another. This ability to control the electromagnetic and geometric properties of a material can enable smaller, lighter, and less-power-hungry RF components while accommodating large frequency bandwidths. Furthermore, introducing different particles into the ink in a modular fashion allows the absorption of a wide range of radiation types. This 3D-printed shielding is expected to be used for protecting electronics in small satellites. This award is shared with Professor Jennifer Lewis’ research group at Harvard University.
The laboratory’s engineered substrates for rapid advanced imaging sensor development dramatically reduce the time and cost of developing advanced silicon imaging sensors. These substrates prebuild most steps of the back-illumination process (a method to increase the amount of light that hits a pixel) directly into the starting wafer, before device fabrication begins. Then, a specialized process allows the detector substrate and readout circuits to be mated together and uniformly thinned to microns in thickness at the die level rather than at the wafer level. Both aspects can save a project millions of dollars in fabrication costs by enabling the production of small batches of detectors, instead of a full wafer run, while improving sensor noise and performance. This platform has allowed researchers to prototype new imaging sensor concepts — including detectors for future NASA autonomous lander missions — that would have taken years to develop in a traditional process.
Additive manufacturing, or 3D printing, holds promise for fabricating complex glass structures that would be unattainable with traditional glass manufacturing techniques. Lincoln Laboratory’s low-temperature additive manufacturing of glass composites allows 3D printing of multimaterial glass items without the need for costly high-temperature processing. This low-temperature technique, which cures the glass at 250 degrees Celsius as compared to the standard 1,000 C, relies on simple components: a liquid silicate solution, a structural filler, a fumed nanoparticle, and an optional functional additive to produce glass with optical, electrical, or chemical properties. The technique could facilitate the widespread adoption of 3D printing for glass devices such as microfluidic systems, free-form optical lenses or fiber, and high-temperature electronic components.
The researchers behind each R&D 100 Award–winning technology will be honored at an awards gala on Nov. 21 in Palm Springs, California.
Research quantifying “nociception” could help improve management of surgical pain
The degree to which a surgical patient’s subconscious processing of pain, or “nociception,” is properly managed by their anesthesiologist will directly affect the degree of post-operative drug side effects they’ll experience and the need for further pain management they’ll require. But pain is subjective and difficult to measure, even when patients are awake, much less when they are unconscious.
In a new study appearing in the Proceedings of the National Academy of Sciences, MIT and Massachusetts General Hospital (MGH) researchers describe a set of statistical models that objectively quantified nociception during surgery. Ultimately, they hope to help anesthesiologists optimize drug dose and minimize post-operative pain and side effects.
The new models integrate data meticulously logged over 18,582 minutes of 101 abdominal surgeries in men and women at MGH. Led by Sandya Subramanian PhD ’21, an assistant professor at the University of California at Berkeley and the University of California at San Francisco, the researchers collected and analyzed data from five physiological sensors as patients experienced a total of 49,878 distinct “nociceptive stimuli” (such as incisions or cautery). Moreover, the team recorded what drugs were administered, and how much and when, to factor in their effects on nociception or cardiovascular measures. They then used all the data to develop a set of statistical models that performed well in retrospectively indicating the body’s response to nociceptive stimuli.
The team’s goal is to furnish such accurate, objective, and physiologically principled information in real time to anesthesiologists who currently have to rely heavily on intuition and past experience in deciding how to administer pain-control drugs during surgery. If anesthesiologists give too much, patients can experience side effects ranging from nausea to delirium. If they give too little, patients may feel excessive pain after they awaken.
“Sandya’s work has helped us establish a principled way to understand and measure nociception (unconscious pain) during general anesthesia,” says study senior author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT. Brown is also an anesthesiologist at MGH and a professor at Harvard Medical School. “Our next objective is to make the insights that we have gained from Sandya’s studies reliable and practical for anesthesiologists to use during surgery.”
Surgery and statistics
The research began as Subramanian’s doctoral thesis project in Brown’s lab in 2017. The best prior attempts to objectively model nociception have relied either solely on the electrocardiogram (ECG, an indirect indicator of heart-rate variability) or on systems that incorporate more than one measurement, but those were either based on lab experiments using pain stimuli that do not compare in intensity to surgical pain or were validated by statistically aggregating just a few time points across multiple patients’ surgeries, Subramanian says.
“There’s no other place to study surgical pain except for the operating room,” Subramanian says. “We wanted to not only develop the algorithms using data from surgery, but also actually validate it in the context in which we want someone to use it. If we are asking them to track moment-to-moment nociception during an individual surgery, we need to validate it in that same way.”
So she and Brown worked to advance the state of the art by collecting multi-sensor data during the whole course of actual surgeries and by accounting for the confounding effects of the drugs administered. In that way, they hoped to develop a model that could make accurate predictions that remained valid for the same patient all the way through their operation.
Part of the improvement the team achieved came from tracking patterns of heart rate and also skin conductance. Changes in both of these physiological factors can be indications of the body’s primal “fight or flight” response to nociception or pain, but some drugs used during surgery directly affect cardiovascular state, while skin conductance (or “EDA,” electrodermal activity) remains unaffected. The study not only measures ECG but also backs it up with PPG, an optical measure of heart rate (like the oxygen sensor on a smartwatch), because ECG signals can sometimes be made noisy by all the electrical equipment buzzing away in the operating room. Similarly, Subramanian backstopped the EDA measures with measures of skin temperature, to ensure that changes in skin conductance from sweat were due to nociception and not simply the patient being too warm. The study also tracked respiration.
Then the authors performed statistical analyses to develop physiologically relevant indices from each of the cardiovascular and skin conductance signals. And once each index was established, further statistical analysis enabled tracking the indices together to produce models that could make accurate, principled predictions of when nociception was occurring and the body’s response.
Nailing nociception
Subramanian “supervised” four versions of the model by feeding them information on when actual nociceptive stimuli occurred, so that they could learn the association between the physiological measurements and the incidence of pain-inducing events. In some of these trained versions she left out drug information, and in some she used different statistical approaches (either “linear regression” or “random forest”). A fifth version of the model, based on a “state space” approach, was left unsupervised, meaning it had to learn to infer moments of nociception purely from the physiological indices. She compared all five versions of her model to one of the current industry standards, an ECG-tracking model called ANI.
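Conceptually, the supervised versions amount to a standard classification setup. The sketch below is a toy stand-in rather than the authors’ pipeline: it generates synthetic per-window features meant to mimic a heart-rate index, a skin-conductance index, and a drug-dose covariate, then trains a scikit-learn random forest to predict whether a nociceptive stimulus occurred in each window. Including the drug dose as a feature echoes the paper’s point that administered drugs confound the cardiovascular response; the unsupervised state-space version would instead have to infer the nociceptive state without the stimulus labels.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic per-window data standing in for the physiological indices
n_windows = 2000
stimulus = rng.integers(0, 2, n_windows)                # 1 if a nociceptive stimulus occurred
heart_rate_idx = 0.8 * stimulus + rng.normal(0, 1, n_windows)
eda_idx = 0.6 * stimulus + rng.normal(0, 1, n_windows)  # skin-conductance index
opioid_dose = rng.uniform(0, 1, n_windows)              # administered drug (confounder)
heart_rate_idx -= 0.5 * opioid_dose                     # drugs blunt the cardiovascular response

X = np.column_stack([heart_rate_idx, eda_idx, opioid_dose])
X_train, X_test, y_train, y_test = train_test_split(
    X, stimulus, test_size=0.25, random_state=0
)

# Supervised model: learn the association between indices and stimulus windows
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]              # per-window nociception score
print("AUC:", round(roc_auc_score(y_test, scores), 3))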
Each model’s output can be visualized as a graph plotting the predicted degree of nociception over time. ANI performed just above chance, though it does run in real time. The unsupervised model performed better than ANI, though not quite as well as the supervised models. The best-performing of those was one that incorporated drug information and used a “random forest” approach. Still, the authors note, the fact that the unsupervised model performed significantly better than chance suggests that there is indeed an objectively detectable signature of the body’s nociceptive state, even when looking across different patients.
“A state space framework using multisensory physiological observations is effective in uncovering this implicit nociceptive state with a consistent definition across multiple subjects,” wrote Subramanian, Brown, and their co-authors. “This is an important step toward defining a metric to track nociception without including nociceptive ‘ground truth’ information, most practical for scalability and implementation in clinical settings.”
Indeed, the next steps for the research are to increase the data sampling and to further refine the models so that they can eventually be put into practice in the operating room. That will require enabling them to predict nociception in real time, rather than in post-hoc analysis. Once that advance is made, anesthesiologists or intensivists will be able to use the models to inform their pain-drug dosing judgments. Further into the future, the model could inform closed-loop systems that automatically dose drugs under the anesthesiologist’s supervision.
“Our study is an important first step toward developing objective markers to track surgical nociception,” the authors concluded. “These markers will enable objective assessment of nociception in other complex clinical settings, such as the ICU [intensive care unit], as well as catalyze future development of closed-loop control systems for nociception.”
In addition to Subramanian and Brown, the paper’s other authors are Bryan Tseng, Marcela del Carmen, Annekathryn Goodman, Douglas Dahl, and Riccardo Barbieri.
Funding from The JPB Foundation; The Picower Institute; George J. Elbaum ’59, SM ’63, PhD ’67; Mimi Jensen; Diane B. Greene SM ’78; Mendel Rosenblum; Bill Swanson; Cathy and Lou Paglia; annual donors to the Anesthesia Initiative Fund; the National Science Foundation; and an MIT Office of Graduate Education Collamore-Rogers Fellowship supported the research.