An implantable sensor could reverse opioid overdoses

In 2023, more than 100,000 Americans died from opioid overdoses. The most effective way to save someone who has overdosed is to administer a drug called naloxone, but a first responder or bystander can’t always reach the person who has overdosed in time.

Researchers at MIT and Brigham and Women’s Hospital have developed a new device that they hope will help to eliminate those delays and potentially save the lives of people who overdose. The device, about the size of a stick of gum, can be implanted under the skin, where it monitors heart rate, breathing rate, and other vital signs. When it determines that an overdose has occurred, it rapidly pumps out a dose of naloxone.

In a study appearing today in the journal Device, the researchers showed that the device can successfully reverse overdoses in animals. With further development, the researchers envision that this approach could provide a new option for helping to prevent overdose deaths in high-risk populations, such as people who have already survived an overdose.

“This could really address a significant unmet need in the population that suffers from substance abuse and opiate dependency to help mitigate overdoses, with the initial focus on the high-risk population,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

The paper’s lead authors are Hen-Wei Huang, a former MIT visiting scientist and currently an assistant professor of electrical and electronic engineering at Nanyang Technological University in Singapore; Peter Chai, an associate professor of emergency medicine at Brigham and Women’s Hospital; SeungHo Lee, a research scientist at MIT’s Koch Institute for Integrative Cancer Research; Tom Kerssemakers and Ali Imani, former master’s students at Brigham and Women’s Hospital; and Jack Chen, a doctoral student in mechanical engineering at MIT.

An implantable device

Naloxone is an opioid antagonist, meaning that it can bind to opioid receptors and block the effects of other opioids, including heroin and fentanyl. The drug, which is given by injection or as a nasal spray, can restore normal breathing within just a few minutes of being administered.

However, many people are alone when they overdose, and may not receive assistance in time to save their lives. Additionally, with a new wave of synthetic, more potent opioids sweeping the U.S., opioid overdoses can be more rapid in onset and unpredictable. To try to overcome that, some researchers are developing wearable devices that could detect an overdose and administer naloxone, but none of those have yet proven successful. The MIT/BWH team set out to design an implantable device that would be less bulky, provide direct injection of naloxone into the subcutaneous tissue, and eliminate the need for the patient to remember to wear it.

The device that the researchers came up with includes sensors that can detect heart rate, breathing rate, blood pressure, and oxygen saturation. In an animal study, the researchers used the sensors to measure all of these signals and determine exactly how they change during an overdose of fentanyl. From those measurements they developed an algorithm that lets the device detect an opioid overdose with high sensitivity and distinguish it from other conditions in which breathing slows, such as sleep apnea.

This study showed that fentanyl first leads to a drop in heart rate, followed quickly by a slowdown of breathing. By measuring how these signals changed, the researchers were able to calculate the point at which naloxone administration should be triggered.
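To make the closed-loop trigger concrete, here is a minimal Python sketch of the kind of logic described above. The percentage thresholds, the baseline comparison, and the `release_naloxone` actuator hook are hypothetical placeholders for illustration, not values or interfaces from the study.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float       # beats per minute
    breathing_rate: float   # breaths per minute

def overdose_detected(baseline: Vitals, latest: Vitals,
                      hr_drop_pct: float = 20.0,
                      br_drop_pct: float = 40.0) -> bool:
    """Mirror the fentanyl signature described above: a drop in heart
    rate followed by a slowdown of breathing. The percent thresholds
    are hypothetical placeholders, not values from the study."""
    hr_drop = 100 * (baseline.heart_rate - latest.heart_rate) / baseline.heart_rate
    br_drop = 100 * (baseline.breathing_rate - latest.breathing_rate) / baseline.breathing_rate
    return hr_drop >= hr_drop_pct and br_drop >= br_drop_pct

def release_naloxone() -> None:
    # Hypothetical actuator hook: in the real device this would start
    # the micropump that ejects the naloxone dose.
    print("Pump triggered: ejecting naloxone")

def control_loop(baseline: Vitals, latest: Vitals) -> None:
    if overdose_detected(baseline, latest):
        release_naloxone()
```

In a real closed-loop implant, the baseline would be updated continuously from the wearer’s own vital signs rather than fixed, and the decision rule would need to reject confounders such as sleep apnea.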

“The most challenging aspect of developing an engineering solution to prevent overdose mortality is simultaneously addressing patient adherence and willingness to adopt new technology, combating stigma, minimizing false positive detections, and ensuring the rapid delivery of antidotes,” says Huang. “Our proposed solution tackles these unmet needs by developing a miniaturized robotic implant equipped with multisensing modalities, continuous monitoring capabilities, on-board decision making, and an innovative micropumping mechanism.”

The device also includes a small reservoir that can carry up to 10 milligrams of naloxone. When an overdose is detected, it triggers a pump that ejects the naloxone, which is released within about 10 seconds.

In their animal studies, the researchers found that this drug administration could reverse the effects of an overdose 96 percent of the time.

“We created a closed-loop system that can sense the onset of the opiate overdose and then release the antidote, and then you see that recovery,” Traverso says.

Preventing overdoses

The researchers envision that this technology could be used to help people who are at the highest risk of overdose, beginning with people who have had a previous overdose. They now plan to investigate how to make the device as user-friendly as possible, studying factors such as the optimal location for implantation.

“A key pillar of addressing the opioid epidemic is providing naloxone to individuals at key moments of risk. Our vision for this device is for it to integrate into the cascade of harm-reduction strategies to efficiently and safely deliver naloxone, preventing death from opioid overdose and providing the opportunity to support individuals with opioid use disorder,” says Chai.

The researchers hope to be able to test the device in humans within the next three to five years. They are now working on miniaturizing the device further and optimizing the on-board battery, which currently can provide power for about two weeks.

The research was funded by Novo Nordisk, the McGraw Family Foundation at Brigham and Women’s Hospital, and the MIT Department of Mechanical Engineering.

AI in Search? The Grumpy Designer Isn’t Impressed So Far

Tech companies are baking AI into everything these days. It seems like you can’t avoid a heaping helping of bots and large language models (LLMs). I think I ingested some in my breakfast cereal this morning.

Thus, it’s no surprise that search engines have become best pals with AI. Google and Bing are joyfully adding it to their results. These generated answers are the first thing you see for some queries.

Both companies have a stake in the technology. Google’s Gemini and Microsoft’s Copilot will be keys to their future success. We’ll continue to see these tools added to flagship products.

The early results have been interesting – and perhaps a bit unsettling. For example, AI has recommended that we put glue on pizza. It has also displayed plagiarized content above the original works.

It’s just a reminder that no technology is perfect. And AI is still in its infancy. But there’s more to it. The relationship between AI and search represents a fundamental shift. I’m not so sure I like it. Here’s a look at why.

The Shift to Becoming an Answer Engine

The way search engines work has evolved. In the early days, it was all about matching the keywords used in a query.

That’s why keyword-stuffing and other nefarious SEO techniques worked. Search engines were looking for exact (or fuzzy) matches of keywords. It led to less-than-ideal results. Spammers were great at gaming this system.

Modern search now considers context. It combines content and structured data to determine results. That’s why we can search for “pizza shops near me” and get local results.

As always, these services pick winners. The top results favor sites that match the search engine’s indexing criteria and algorithm. The algorithms are mysterious to us mortals – but fair enough.

So, how does the current use of AI impact this process? For one, it attempts to provide us with a definitive answer.

Let’s forget about accuracy for a moment. Displaying this information first lends confidence to the answer. If it’s first, it must be right – right? Maybe we can skip all the results down the page.

We are no longer encouraged to look for the result that fits our needs. We are instead fed an answer – potentially discouraging us from digging deeper.

Google adds AI-generated answers to the top of the results page

The Cost of AI-generated Answers

Perhaps the convenience of an AI-generated answer is favorable. But it also comes with some costs.

Website owners could see a drop in traffic. They already had to contend with sponsors clogging up the top portion of the results page. AI answers are just one more thing to hamper their visibility.

The other elephant in the room is that AI scrapes content from all over the web. The benefits for site owners are questionable at best.

A site that feeds the top result could see some extra clicks. Newfangled services like Perplexity are even offering to pay publishers. However, you might have better odds of winning the lottery than securing this arrangement.

We should also dive back into AI’s potential to discourage further research. Some users may accept that first answer and not bother to think twice.

Maybe this doesn’t impact longtime users. I’m used to scrolling through search results and clicking multiple links. I don’t foresee AI changing my behavior.

But what about younger generations? AI will be just the way things work for them. They may not realize that there’s more information available. After all, Google has already given them the “best” answer.

Here’s where accuracy comes into the picture. There are times when search engines will get it wrong. That seems like an unavoidable situation.

Users who aren’t familiar with researching answers will be misinformed. That could be dangerous, depending on the subject.

Most people won’t put glue on their pizza. But this type of “advice” could be taken seriously by someone. And that has real consequences.

Bing tells us that putting glue on pizza is a bad idea after all

Is This the End of Search as We Knew It?

I believe the relationship between search and AI is a long-term one. Companies like Google and Microsoft aren’t spending truckloads of money for nothing. Well, sometimes they do. But I digress.

The current phase is an experimental one. Search providers are trying to figure out where AI fits in. And, oh yeah, they want to monetize it.

There’s been some backlash at the technology’s integration so far. That has led to adjustments. It’s a matter of finding what users will and won’t tolerate.

Regardless, searching the web is going to look quite different. Sponsored and AI-generated results will continue to push organic results down the page. Large websites will rank higher than small ones.

Search is a pay-to-play proposition these days. AI is only going to amplify this practice.

That changes how we search as consumers. We may need to scroll past a lot of nonsense to find what we came for.

It might also change our expectations as website owners. That free traffic we’ve optimized for may not be as plentiful. We’ll have to adjust accordingly.

The old ways of searching the web appear to be obsolete

What Will the Future Bring?

Search is another area where web designers and marketers will feel the impact of the move toward AI. The techniques that previously performed well for us may be obsolete.

SEO will still be a worthwhile endeavor, though. Getting your websites indexed shouldn’t go out of style anytime soon.

However, using SEO as a primary marketing strategy doesn’t seem sustainable. Unless your clients are large or in a unique niche, you may struggle to make headway without paid promotion.

Such is life on the web. We can never get too comfortable! Search engines are just another in a long line of seismic shifts.


Study: Rocks from Mars’ Jezero Crater, which likely predate life on Earth, contain signs of water

In a new study appearing today in the journal AGU Advances, scientists at MIT and NASA report that seven rock samples collected along the “fan front” of Mars’ Jezero Crater contain minerals that are typically formed in water. The findings suggest that the rocks were originally deposited by water, or may have formed in the presence of water.

The seven samples were collected by NASA’s Perseverance rover in 2022 during its exploration of the crater’s western slope, where some rocks were hypothesized to have formed in what is now a dried-up ancient lake. Members of the Perseverance science team, including MIT scientists, have studied the rover’s images and chemical analyses of the samples, and confirmed that the rocks indeed contain signs of water, and that the crater was likely once a watery, habitable environment.

Whether the crater was actually inhabited remains unknown. The team found that the presence of organic matter — the starting material for life — cannot be confirmed, at least based on the rover’s measurements. But judging from the rocks’ mineral content, scientists believe the samples are their best chance of finding signs of ancient Martian life once the rocks are returned to Earth for more detailed analysis.

“These rocks confirm the presence, at least temporarily, of habitable environments on Mars,” says the study’s lead author, Tanja Bosak, professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What we’ve found is that indeed there was a lot of water activity. For how long, we don’t know, but certainly for long enough to create these big sedimentary deposits.”

What’s more, some of the collected samples may have originally been deposited in the ancient lake more than 3.5 billion years ago — before even the first signs of life on Earth.

“These are the oldest rocks that may have been deposited by water, that we’ve ever laid hands or rover arms on,” says co-author Benjamin Weiss, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “That’s exciting, because it means these are the most promising rocks that may have preserved fossils, and signatures of life.”

The study’s MIT co-authors include postdoc Eva Scheller and research scientist Elias Mansbach, along with members of the Perseverance science team.

At the front


NASA’s Perseverance rover collected rock samples from two locations seen in this image of Mars’ Jezero Crater: “Wildcat Ridge” (lower left) and “Skinner Ridge” (upper right).

Credit: NASA/JPL-Caltech/ASU/MSSS


The new rock samples were collected in 2022 as part of the rover’s Fan Front Campaign — an exploratory phase during which Perseverance traversed Jezero Crater’s western slope, where a fan-like region contains sedimentary, layered rocks. Scientists suspect that this “fan front” is an ancient delta that was created by sediment that flowed with a river and settled into a now bone-dry lakebed. If life existed on Mars, scientists believe that it could be preserved in the layers of sediment along the fan front.

In the end, Perseverance collected seven samples from various locations along the fan front. The rover obtained each sample by drilling into the Martian bedrock and extracting a pencil-sized core, which it then sealed in a tube to one day be retrieved and returned to Earth for detailed analysis.


Composed of multiple images from NASA’s Perseverance Mars rover, this mosaic shows a rocky outcrop called “Wildcat Ridge,” where the rover extracted two rock cores and abraded a circular patch to investigate the rock’s composition.

Credit: NASA/JPL-Caltech/ASU/MSSS


Prior to extracting the cores, the rover took images of the surrounding sediments at each of the seven locations. The science team then processed the imaging data to estimate a sediment’s average grain size and mineral composition. This analysis showed that all seven collected samples likely contain signs of water, suggesting that they were initially deposited by water.

Specifically, Bosak and her colleagues found evidence of certain minerals in the sediments that are known to precipitate out of water.

“We found lots of minerals like carbonates, which are what make reefs on Earth,” Bosak says. “And it’s really an ideal material that can preserve fossils of microbial life.”

Interestingly, the researchers also identified sulfates in some samples that were collected at the base of the fan front. Sulfates are minerals that form in very salty water — another sign that water was present in the crater at one time — though very salty water, Bosak notes, “is not necessarily the best thing for life.” If the entire crater was once filled with very salty water, it would have been difficult for any form of life to thrive there. But if only the bottom of the lake were briny, that could be an advantage, at least for preservation: organisms living further up, in less salty layers, could have died, drifted down to the bottom, and been well preserved in the brine.

“However salty it was, if there were any organics present, it’s like pickling something in salt,” Bosak says. “If there was life that fell into the salty layer, it would be very well-preserved.”

Fuzzy fingerprints

But the team emphasizes that organic matter has not been confidently detected by the rover’s instruments. Organic matter can be a sign of life, but it can also be produced by certain geological processes that have nothing to do with living matter. Perseverance’s predecessor, the Curiosity rover, detected organic matter throughout Mars’ Gale Crater, which scientists suspect may have come from asteroids that struck Mars in the past.

And in a previous campaign, Perseverance detected what appeared to be organic molecules at multiple locations along Jezero Crater’s floor. These observations were taken by the rover’s Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) instrument, which uses ultraviolet light to scan the Martian surface. If organics are present, they can glow, similar to material under a blacklight. The wavelengths at which the material glows act as a sort of fingerprint for the kind of organic molecules that are present.

In Perseverance’s previous exploration of the crater floor, SHERLOC appeared to pick up signs of organic molecules throughout the region, and later, at some locations along the fan front. But a careful analysis, led by MIT’s Eva Scheller, has found that while the particular wavelengths observed could be signs of organic matter, they could just as well be signatures of substances that have nothing to do with organic matter.

“It turns out that cerium metals incorporated in minerals actually produce very similar signals as the organic matter,” Scheller says. “When investigated, the potential organic signals were strongly correlated with phosphate minerals, which always contain some cerium.”

Scheller’s work shows that the rover’s measurements cannot be interpreted definitively as organic matter.

“This is not bad news,” Bosak says. “It just tells us there is not very abundant organic matter. It’s still possible that it’s there. It’s just below the rover’s detection limit.”

When the collected samples are finally sent back to Earth, Bosak says laboratory instruments will have more than enough sensitivity to detect any organic matter that might lie within.

“On Earth, once we have microscopes with nanometer-scale resolution, and various types of instruments that we cannot staff on one rover, then we can actually attempt to look for life,” she says.

This work was supported, in part, by NASA.


MIT researchers use large language models to flag problems in complex systems

Identifying one faulty turbine in a wind farm, which can involve looking at hundreds of signals and millions of data points, is akin to finding a needle in a haystack.

Engineers often streamline this complex problem using deep-learning models that can detect anomalies in measurements taken repeatedly over time by each turbine, known as time-series data.

But with hundreds of wind turbines recording dozens of signals each hour, training a deep-learning model to analyze time-series data is costly and cumbersome. This is compounded by the fact that the model may need to be retrained after deployment, and wind farm operators may lack the necessary machine-learning expertise.

In a new study, MIT researchers found that large language models (LLMs) hold the potential to be more efficient anomaly detectors for time-series data. Importantly, these pretrained models can be deployed right out of the box.

The researchers developed a framework, called SigLLM, which includes a component that converts time-series data into text-based inputs an LLM can process. A user can feed these prepared data to the model and ask it to start identifying anomalies. The LLM can also be used to forecast future time-series data points as part of an anomaly detection pipeline.

While LLMs could not beat state-of-the-art deep learning models at anomaly detection, they did perform as well as some other AI approaches. If researchers can improve the performance of LLMs, this framework could help technicians flag potential problems in equipment like heavy machinery or satellites before they occur, without the need to train an expensive deep-learning model.

“Since this is just the first iteration, we didn’t expect to get there from the first go, but these results show that there’s an opportunity here to leverage LLMs for complex anomaly detection tasks,” says Sarah Alnegheimish, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on SigLLM.

Her co-authors include Linh Nguyen, an EECS graduate student; Laure Berti-Equille, a research director at the French National Research Institute for Sustainable Development; and senior author Kalyan Veeramachaneni, a principal research scientist in the Laboratory for Information and Decision Systems. The research will be presented at the IEEE Conference on Data Science and Advanced Analytics.

An off-the-shelf solution

Large language models are autoregressive, which means they can understand that the newest values in sequential data depend on previous values. For instance, models like GPT-4 can predict the next word in a sentence using the words that precede it.

Since time-series data are sequential, the researchers thought the autoregressive nature of LLMs might make them well-suited for detecting anomalies in this type of data.

However, they wanted to develop a technique that avoids fine-tuning, a process in which engineers retrain a general-purpose LLM on a small amount of task-specific data to make it an expert at one task. Instead, the researchers deploy an LLM off the shelf, with no additional training steps.

But before they could deploy it, they had to convert time-series data into text-based inputs the language model could handle.

They accomplished this through a sequence of transformations that capture the most important parts of the time series while representing data with the fewest number of tokens. Tokens are the basic inputs for an LLM, and more tokens require more computation.
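As a rough illustration of this kind of transformation, here is a simplified Python sketch that rescales a signal and serializes it as compact integer text. The scale-and-quantize scheme shown is an assumption for illustration; the actual SigLLM pipeline’s steps and parameters may differ.

```python
import numpy as np

def series_to_text(values: np.ndarray, decimals: int = 2) -> str:
    """Convert a numeric time series into compact text an LLM can
    ingest: shift to non-negative values, round to fixed precision,
    and serialize as comma-separated integers so each reading costs
    only a few tokens. A simplified sketch, not SigLLM's exact steps."""
    shifted = values - values.min()                        # drop negatives and offsets
    quantized = np.round(shifted * 10**decimals).astype(int)
    return ",".join(str(v) for v in quantized)

# Example: a short signal with one obvious spike
signal = np.array([0.51, 0.49, 0.50, 3.20, 0.52])
print(series_to_text(signal))  # -> "2,0,1,271,3"
```

The trade-off the quote below describes lives in choices like `decimals`: too coarse a quantization throws away meaningful variation, while too fine a one inflates the token count.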

“If you don’t handle these steps very carefully, you might end up chopping off some part of your data that does matter, losing that information,” Alnegheimish says.

Once they had figured out how to transform time-series data, the researchers developed two anomaly detection approaches.

Approaches for anomaly detection

For the first, which they call Prompter, they feed the prepared data into the model and prompt it to locate anomalous values.

“We had to iterate a number of times to figure out the right prompts for one specific time series. It is not easy to understand how these LLMs ingest and process the data,” Alnegheimish adds.
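For illustration, a Prompter-style query might look like the sketch below. The prompt wording is hypothetical; the paper’s exact prompts are not reproduced here.

```python
# Hypothetical Prompter-style query; the exact wording used in SigLLM differs.
series_text = "2,0,1,271,3,2,0"  # output of the text-conversion step above

prompt = (
    "Below is a sequence of sensor readings, one value per comma.\n"
    f"Sequence: {series_text}\n"
    "List the 0-based indices of any anomalous values, separated by "
    "commas. Respond with indices only."
)
# The prompt is sent to an off-the-shelf LLM, and the returned
# indices are parsed as the detected anomalies.
```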

For the second approach, called Detector, they use the LLM as a forecaster to predict the next value from a time series. The researchers compare the predicted value to the actual value. A large discrepancy suggests that the real value is likely an anomaly.
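A minimal sketch of that comparison step, assuming a simple z-score rule on the absolute forecast error (the paper’s exact scoring method may differ):

```python
import numpy as np

def flag_anomalies(actual: np.ndarray, predicted: np.ndarray,
                   z_threshold: float = 3.0) -> np.ndarray:
    """Return indices where the LLM forecast diverges sharply from the
    observed signal. The z-score rule and threshold here are one common
    choice, not necessarily what the paper uses."""
    errors = np.abs(actual - predicted)
    z_scores = (errors - errors.mean()) / (errors.std() + 1e-9)
    return np.flatnonzero(z_scores > z_threshold)
```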

With Detector, the LLM would be part of an anomaly detection pipeline, while Prompter would complete the task on its own. In practice, Detector performed better than Prompter, which generated many false positives.

“I think, with the Prompter approach, we were asking the LLM to jump through too many hoops. We were giving it a harder problem to solve,” says Veeramachaneni.

When they compared both approaches to current techniques, Detector outperformed transformer-based AI models on seven of the 11 datasets they evaluated, even though the LLM required no training or fine-tuning.

In the future, an LLM may also be able to provide plain language explanations with its predictions, so an operator could be better able to understand why an LLM identified a certain data point as anomalous.

However, state-of-the-art deep learning models outperformed LLMs by a wide margin, showing that there is still work to do before an LLM could be used for anomaly detection.

“What will it take to get to the point where it is doing as well as these state-of-the-art models? That is the million-dollar question staring at us right now. An LLM-based anomaly detector needs to be a game-changer for us to justify this sort of effort,” Veeramachaneni says.

Moving forward, the researchers want to see if fine-tuning can improve performance, though that would require additional time, cost, and expertise for training.

Their LLM approaches also take between 30 minutes and two hours to produce results, so increasing the speed is a key area of future work. The researchers also want to probe LLMs to understand how they perform anomaly detection, in the hopes of finding a way to boost their performance.

“When it comes to complex tasks like anomaly detection in time series, LLMs really are a contender. Maybe other complex tasks can be addressed with LLMs, as well?” says Alnegheimish.

This research was supported by SES S.A., Iberdrola and ScottishPower Renewables, and Hyundai Motor Company.
