Global semiconductor shortage: How the US plans to close the talent gap

The semiconductor industry, which is a cornerstone of modern technology and economic prosperity, has been dealing with a serious labour shortage for some time. The skills shortage appears to be worsening, with more than one million additional skilled workers required by 2030 to meet demand in…

Implantable microphone could lead to fully internal cochlear implants

Cochlear implants, tiny electronic devices that can provide a sense of sound to people who are deaf or hard of hearing, have helped improve hearing for more than a million people worldwide, according to the National Institutes of Health.

However, cochlear implants today are only partially implanted, and they rely on external hardware that typically sits on the side of the head. These components restrict users, who can’t, for instance, swim, exercise, or sleep while wearing the external unit, and they may cause others to forgo the implant altogether.

On the way to creating a fully internal cochlear implant, a multidisciplinary team of researchers at MIT, Massachusetts Eye and Ear, Harvard Medical School, and Columbia University has produced an implantable microphone that performs as well as commercial external hearing aid microphones. The microphone remains one of the largest roadblocks to adopting a fully internalized cochlear implant.

This tiny microphone, a sensor produced from a biocompatible piezoelectric material, measures miniscule movements on the underside of the ear drum. Piezoelectric materials generate an electric charge when compressed or stretched. To maximize the device’s performance, the team also developed a low-noise amplifier that enhances the signal while minimizing noise from the electronics.

While many challenges must be overcome before such a microphone could be used with a cochlear implant, the collaborative team looks forward to further refining and testing this prototype, which builds off work begun at MIT and Mass Eye and Ear more than a decade ago.

“It starts with the ear doctors who are with this every day of the week, trying to improve people’s hearing, recognizing a need, and bringing that need to us. If it weren’t for this team collaboration, we wouldn’t be where we are today,” says Jeffrey Lang, the Vitesse Professor of Electrical Engineering, a member of the Research Laboratory of Electronics (RLE), and co-senior author of a paper on the microphone.

Lang’s coauthors include co-lead authors Emma Wawrzynek, an electrical engineering and computer science (EECS) graduate student, and Aaron Yeiser SM ’21; as well as mechanical engineering graduate student John Zhang; Lukas Graf and Christopher McHugh of Mass Eye and Ear; Ioannis Kymissis, the Kenneth Brayer Professor of Electrical Engineering at Columbia; Elizabeth S. Olson, a professor of biomedical engineering and auditory biophysics at Columbia; and co-senior author Hideko Heidi Nakajima, an associate professor of otolaryngology-head and neck surgery at Harvard Medical School and Mass Eye and Ear. The research is published today in the Journal of Micromechanics and Microengineering.

Overcoming an implant impasse

Cochlear implant microphones are usually placed on the side of the head, which means that users can’t take advantage of noise filtering and sound localization cues provided by the structure of the outer ear.

Fully implantable microphones offer many advantages. But most devices currently in development, which sense sound under the skin or motion of middle ear bones, can struggle to capture soft sounds and wide frequencies.

For the new microphone, the team targeted a part of the middle ear called the umbo. The umbo vibrates unidirectionally (inward and outward), making it easier to sense these simple movements.

Although the umbo has the largest range of movement of the middle-ear bones, it only moves by a few nanometers. Developing a device to measure such diminutive vibrations presents its own challenges.

On top of that, any implantable sensor must be biocompatible and able to withstand the body’s humid, dynamic environment without causing harm, which limits the materials that can be used.

“Our goal is that a surgeon implants this device at the same time as the cochlear implant and internalized processor, which means optimizing the surgery while working around the internal structures of the ear without disrupting any of the processes that go on in there,” Wawrzynek says.

With careful engineering, the team overcame these challenges.

They created the UmboMic, a triangular, 3-millimeter by 3-millimeter motion sensor composed of two layers of a biocompatible piezoelectric material called polyvinylidene difluoride (PVDF). These PVDF layers are sandwiched on either side of a flexible printed circuit board (PCB), forming a microphone that is about the size of a grain of rice and 200 micrometers thick. (An average human hair is about 100 micrometers thick.)

The narrow tip of the UmboMic would be placed against the umbo. When the umbo vibrates and pushes against the piezoelectric material, the PVDF layers bend and generate electric charges, which are measured by electrodes in the PCB layer.

Amplifying performance

The team used a “PVDF sandwich” design to reduce noise. When the sensor is bent, one layer of PVDF produces a positive charge and the other produces a negative charge. Electrical interference adds to both equally, so taking the difference between the charges cancels out the noise.

Using PVDF provides many advantages, but the material made fabrication especially difficult. PVDF loses its piezoelectric properties when exposed to temperatures above around 80 degrees Celsius, yet very high temperatures are needed to vaporize and deposit titanium, another biocompatible material, onto the sensor. Wawrzynek worked around this problem by depositing the titanium gradually and employing a heat sink to cool the PVDF.

But developing the sensor was only half the battle — umbo vibrations are so tiny that the team needed to amplify the signal without introducing too much noise. When they couldn’t find a suitable low-noise amplifier that also used very little power, they built their own.

With both prototypes in place, the researchers tested the UmboMic in human ear bones from cadavers and found that it had robust performance within the intensity and frequency range of human speech. The microphone and amplifier together also have a low noise floor, which means they could distinguish very quiet sounds from the overall noise level.

“One thing we saw that was really interesting is that the frequency response of the sensor is influenced by the anatomy of the ear we are experimenting on, because the umbo moves slightly differently in different people’s ears,” Wawrzynek says.

The researchers are preparing to launch live animal studies to further explore this finding. These experiments will also help them determine how the UmboMic responds to being implanted.

In addition, they are studying ways to encapsulate the sensor so it can remain in the body safely for up to 10 years but still be flexible enough to capture vibrations. Implants are often packaged in titanium, which would be too rigid for the UmboMic. They also plan to explore methods for mounting the UmboMic that won’t introduce vibrations.

“The results in this paper show the necessary broad-band response and low noise needed to act as an acoustic sensor. This result is surprising, because the bandwidth and noise floor are so competitive with the commercial hearing aid microphone. This performance shows the promise of the approach, which should inspire others to adopt this concept. I would expect that smaller size sensing elements and lower power electronics would be needed for next generation devices to enhance ease of implantation and battery life issues,” says Karl Grosh, professor of mechanical engineering at the University of Michigan, who was not involved with this work.

This research was funded, in part, by the National Institutes of Health, the National Science Foundation, the Cloetta Foundation in Zurich, Switzerland, and the Research Fund of the University of Basel, Switzerland.

Capcom Next Recap: Dead Rising Remaster Gameplay And Release Date, Kunitsu-Gami Demo, And Small Tease For The Next Resident Evil

Capcom Next was a more succint affair this summer. It focused on providing updates for only three titles: Kunitsu-Gami: Path of the Goddess, Dead Rising Deluxe Remaster, and the iOS/Mac port of Resident Evil 7: Biohazard. Here’s a brief summary of each announcement, which includes the smallest of teases for the next Resident Evil title. 

[embedded content]

Kunitsu-Gami: Path of the Goddess

Ahead of its launch on July 19, the fantastical action-strategy hybrid is getting a demo later today on all platforms the game is available on. The demo will also feature crossover content with Okami in the form of themed weapons and costumes for the main character and the maiden. Interestingly, the Okami stuff will only appear in the demo, but Capcom states that if enough people play it, these items will become available in the full game at launch via an update.

[embedded content]

Dead Rising Deluxe Remaster

The remaster of the 2006 zombie romp is coming on September 19 (a physical version arrives in November). We got our first look at gameplay and several graphical “before/after” comparison shots showcasing how dramatic the presentational improvements really are. The game will run at 4K 60FPS and features real-time lighting that changes with the time of day. Gameplay improvements include the ability to move while aiming, an autosave feature, a better UI, and improved NPC behavior. Dead Rising Deluxe Remaster will be available on PlayStation 5, Xbox Series X/S, and PC. 

Resident Evil 7: Biohazard on iOS and Mac

The latest RE mobile port features improved touched controls and a new auto-aim/auto-fire feature for those who desire extra assistance. Menu and inventory is now touch compatible. Capcom reveals the first section of the game is free-to-play, allowing players to give it a try before committing to a purchase. 

Additionally, Capcom confirms what was likely already obvious: a new Resident Evil is in development. RE7 director Koshi Nakanishi is apparently back at the helm and stated, “It was really difficult to figure out what to do after 7. But I found it, and to be honest, it feels substantial. I can’t share any details just yet, but I hope you’re excited for the day I can.”

Kamatera Review – The Best Scalable Cloud Host Yet?

This Kamatera review will help you decide whether the web host is the best option for you!  Being able to scale your resource demand effortlessly as your website grows… paying only for the resources you use… no-single-point-of-failure security guarantee… what’s not to love about cloud hosting?…

Can AI Become a Plant Whisperer to Help Feed the World?

With the power of AI and big data, scientists are pursuing exciting new frontiers in decoding the complex world of plant genomes for next-gen custom plant breeding that could revolutionize food security and adaptation to climate change. A stalk of wheat, a cane of sugar. To…

Putting The Seoul In Console

Return to top

With just four days in Seoul, South Korea, I filled my maps app with pins of restaurants, Buddhist temples, must-see attractions, scenic parks, market streets, and more to visit. I clocked around 10 miles of walking daily and think I saw as much of this massive city as possible with my allotted time. My journey from east to west, north to south of Seoul was only possible by the city’s expansive public transportation network of buses and trains. And while I listened to “Magnetic” by K-Pop group ILLIT more times than I’ll admit (when in Korea, right?) through headphones on these trains and buses, I spent much of my time observing how others spent time waiting for their stop.

Perhaps unsurprisingly, everyone is glued to their phones, myself included. But unlike me, doom scrolling on X (formerly Twitter) before switching to Instagram before switching back to X, a lot of people were playing games I recognized, like League of Legends’ auto chess spin-off, Teamfight Tactics. But there we also plenty of other games I didn’t know, like Light of the Stars, Soul Strike, and more. While touring one of Nexon’s Seoul-based studios, Magnum Studio, I asked its head, Beomjun Lee, if mobile gaming is as popular as my public transportation travels had me believe. His answer was a quick yes. A study published by Statista Research Department in February concludes that, according to its 2022 survey, 63 percent of South Koreans play mobile games, with the market having an estimated worth of 14 trillion South Korean Won (or $10.2 billion) that year.

Nexon, the company that invited me to its studio, has plenty of mobile hits, like FIFA Mobile and MapleStory M, and a good amount on PC, too. Based on how many PC cafes I saw in Seoul, I’d guess PC is the biggest gaming market in South Korea or close behind mobile. But its staple of console releases features just two so far: KartRider: Drift and last year’s The Finals. With its PC and mobile gaming on lock, Nexon is slowly aiming West, looking to break into global markets and focusing on console releases alongside its usual output to do so. And what better way to do that than with a free-to-play (easy entry), third-person (ripe for customization), looter-shooter (a genre popularized by the likes of Destiny and Warframe that continues to dominate a large mindshare of games)?

Ambitions In Albion

Ambitions In Albion

The First Descendant is just that, and though I was weary of another free-to-play game, and another looter shooter at that, after an hour of hands-on time, I’m excited, antsy even, for its release this summer when I can play more.

Revealed last August as part of Gamescom 2023, The First Descendant is in development at Nexon’s Magnum Studio with its sights set on a Summer 2024 release. I pushed for a more exact release date, but the team wasn’t ready to share; it’s clear it’s working hard to polish it up in these last few months, and for good reason, too – the team has lofty ambitions with The First Descendant.

“The main feature of The First Descendant is the PvE co-op element,” Lee, who is also the lead producer of the game, tells me through a translator. “It’s an online shooter RPG, and we consider it the next generation of looter shooters.”

That term caught me by surprise. It’s a bold statement, almost braggadocious, but after talking with Lee and creative director Minseok Joo and playing the game for an hour, I understand where the team is coming from. In my early hands-on impressions, the First Descendant feels like a mish-mash of other greats in the genre. Taken literally, it’s also a looter shooter made exclusively for the “next generation” of consoles as it’s coming to PlayStation 5 and Xbox Series X/S alongside PC, with crossplay and cross-progression, too.

Going Hands-On

Going Hands-On

Dropped to an Earth-like sci-fi world where humanity is on its last leg in a city called Albion, my chosen character, Viessa, is searching for something called the Ironheart. She’s joined by an ally named Bunny (yes, her suit’s silhouette is that of a bunny). Immediately, weapons are crunchy and tactile. I sense every bullet in the controller and the on-screen recoil, and it feels great. It helps that the entire game, developed from the ground up in Unreal Engine 5, is gorgeous. I joke with Lee that I’m happy the team is making a console version of the game as The First Descendant will melt my PC, which is admittedly due for an upgrade. Seeing words like “frame generation,” “ray reconstruction,” and “ray tracing” in the options confirms my belief.

The weapons aren’t anything special, though. In my play session, I encounter machine guns, submachine guns, shotguns, grenade launchers, and long-range snipers. They all feel great, but The First Descendant isn’t doing anything new here. Each character’s magical powers are what makes combat distinct. Viessa has access to ice, with a passive skill that creates spheres of ice around her body to damage and slow enemies that get too close, and four active skills that do area-of-effect damage, increase running speed and shield, and more. She can even place a snowstorm onto the playfield, damaging and immobilizing those caught within.

Her abilities are wildly different from Valby, the water-based character I’d play as later. Valby consumes less mana when standing in water and has moves to create puddles, making for a rewarding ability loop. She can even liquify the area around her, allowing her to move through enemies with increased defense and speed. Viessa’s moves are more straightforward, but Valby’s is more rewarding as part of a co-op experience, even if it takes longer to get my sea legs.

[embedded content]

As I progress through the prologue, I encounter Karel, The First Descendant’s big bad. He immediately, seemingly, kills Bunny, and it’s clear he’s not mincing words. He will do whatever he must to obtain the Ironheart.

Unfortunately for Viessa, at the ready to avenge Bunny, Karel dips, leaving a Gravewalker tank boss behind. This boss fight (and the Stunning Beauty boss I’ll take on later while playing as Valby alongside a developer from Magnum Studio) is the highlight of my time with The First Descendant. Each boss has its own set of moves and mechanics to follow, including checks that require more strategic work, but how I, the player, fight them most intrigues me.

The First Descendant is fast. The characters move swiftly, and abilities, which fly loosely, allow them to zip around in combat. I can imagine the magical chaos that ensues with a full team of four. But the grapple hook excites me most about the possibilities of The First Descendant’s combat.

While fighting enemies, I scan by clicking the right stick to find weak points highlighted in blue. After shooting them enough, they turn yellow, meaning it’s time for my favorite part of the game: grappling up to the yellow part and ripping it off. It’s an awesome mechanic and takes an experience I’ve played hundreds of times in looter shooters – shoot the boss a bunch – and makes it more dynamic. It’s not just about shooting; it’s about blasting a weak point long enough that I can grapple to it and then work to yank it right off, shedding the boss’ layers as I do.

Crafting Your Descendant

Crafting Your Descendant

Outside of combat, the game offers plenty of customization that powers free-to-play experiences, though I don’t know how microtransactions will play into the game. You can customize loadouts for every character, each with their own weapons and abilities. There are a ton of costumes, ranging from maid outfits to fire brigade uniforms and more, and you can customize various areas of your character with unique chest pieces, Fortnite-style back pieces, and more. You can test out all of this in Albion’s Lab, a test field with customizable dummies to check your loadout’s damage output, feel, and more. Speaking to the team’s commitment to the game and its community, this Lab was recently added following feedback from a recent beta.

“This is my first time seeing it,” Lee says while showing it to me, indicating just how recently it was added. He says players can expect the game to change and grow with the community in this way.

I’m always nervous about free-to-play games and the associated monetization, but if The First Descendant sticks to cosmetic-focused microtransactions, as opposed to letting players pay to perform better in combat, for example, Magnum Studio is on the right track with the wealth of options I see for character customization.

I like that each character so far feels quite different, and leveling each up individually, instead of focusing on a single character for months or years, seems like a smart call in contrast with the genre. Knowing that three friends playing will have various Descendants to choose from, allowing for multiple strategies in how we approach missions, is exciting.

As for keeping players engaged beyond the game’s initial launch, Lee says the team is taking a seasonal approach, with new battle passes in each drop. As is now the standard in the live-service genre, each battle pass will contain season-specific cosmetics, and you’ll need to play through the new content to obtain them.

With an hour of The First Descendant playtime behind me, including a studio tour and interview with the team’s leads, I am (im)patiently waiting for its release this summer. Despite my initial love of Destiny and attempts in Warframe at one point in my gaming history, both (and many others in the genre) have passed me by. Jumping back into them is too daunt- ing and too confusing today. But The First Descendant is giving me what I want from those games, with variations on the formula, too. I still have questions, but Nexon still has time to answer them. For now, I’m crossing my fingers I get into the next beta.


This article originally appeared in Issue 366 of Game Informer.

Skeleton Key AI attacks unlock malicious content – CyberTalk

EXECUTIVE SUMMARY:

A newly discovered jailbreak – also known as a direct prompt injection attack – called Skeleton Key, affects numerous generative AI models. A successful Skeleton Key attack subverts most, if not all, of the AI safety guardrails that LLM developers built into models.

In other words, Skeleton Key attacks coax AI chatbots into violating operators’ policies under the auspices of assisting users. Skeleton Key attacks will bend the rules and force the AI to produce dangerous, inappropriate or otherwise socially unacceptable content.

Skeleton Key example

Ask a chatbot for a Molotov cocktail recipe and the chatbot will say something to the effect of ‘I’m sorry, but I can’t assist with that’. However, if asked indirectly…

Researchers explained to an AI model that they aimed to conduct historical, ethical research pertaining to Molotov cocktails. They expressed their disinclination to make one, but in the context of research, could the AI provide Molotov cocktail development information?

The chatbot complied, providing a Molotov cocktail materials list, along with unambiguous assembly information.

Although this kind of info is easily accessible online (how to create a Molotov cocktail isn’t exactly a well-kept secret), there’s concern that these types of AI guardrail manipulations could fuel home-grown hate groups, worsen urban violence, lead to the erosion of social cohesion…etc.

Skeleton Key challenges

Microsoft tested the Skeleton Key jailbreak from April to May of this year, evaluating a diverse set of tasks across risk and safety content categories – not just Molotov cocktail development instructions.

As described above, Skeleton Key enables users to force AI to provide information that would ordinarily be forbidden.

The Skeleton Key jailbreak worked on AI models ranging from Gemini, to Mistral, to Anthropic. GPT-4 showed some resistance to Skeleton Key, according to Microsoft.

Chatbots commonly provide users with warnings around potentially offensive or harmful output (noting that it might be considered offensive, harmful or illegal if proceeded with), but the chatbots will not altogether refuse to provide the information; the core issue here.

Skeleton Key solutions

To address the problem, vendors suggest leveraging input filtering tools, as to prevent certain kinds of inputs, including those intended to slip past prompt safeguards. In addition, post-processing output filters may be able to identify model outputs that breach safety criteria. And AI-powered abuse monitoring systems can further efforts to detect instances of questionable chatbot use.

Microsoft has offered specific guidance around the creation of a messaging framework that trains LLMs on acceptable technology use and that tells the LLM to monitor for attempts to undermine guardrail instructions.

“Customers who are building their own AI models and/or integrating AI into their applications [should] consider how this type of attack could impact their threat model and to add this knowledge to their AI red team approach, using tools such as PyRIT,” says Microsoft Azure CTO, Mark Russinovich.

For more on this story, click here. For information about the related BEAST technique, click here. To see how else generative AI is liable to affect CISOs and cyber security teams, read this Cyber Talk article.

Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

A Look At Dragon Age: The Veilguard’s Difficulty Options And Gameplay Customization

Throughout my visit to BioWare’s Edmonton office for Game Informer’s current cover story about Dragon Age: The Veilguard, game director Corinne Busche reiterates that the studio designed the game with inclusivity in mind. That’s extremely evident in the character creator, where players begin their journey in Veilguard. It’s easily the best character creator in series history and possibly the most robust I’ve ever seen in a video game. From hundreds of sliders and options to customize your player-controlled Rook to the ability to pick pronouns separate from gender and more, this character creator speaks directly to the inclusivity of Veilguard – read about my in-depth look at the character creator here

But that feeling doesn’t end in the character creator. It also extends to the world – ice mage and private detective companion Neve Gallus has a prosthetic leg, for example – and in the way you can play Veilguard. 

Dragon Age: The Veilguard Dreadwolf Game Informer Cover Story

Before starting the game proper, a playstyle screen allows players to customize various options affecting how Veilguard plays. Here, you can select difficulty, or playstyle as BioWare calls it, with options like “Storyteller” for those interested more in the story versus the combat, “Adventurer” for an experience that seemingly balances story and combat, and a difficulty called Nightmare – there might be more, but this is all I see during my demo. At any point during Veilguard, you can change the game’s difficulty unless you select Nightmare, which is the hardest of difficulties. That’s a permanent selection. 

There’s another difficulty option called Unbound, though, allowing players to customize their gameplay experience to their liking. You can adjust how wayfinding helps you in-game; there’s aim assistance and even an auto-aim option. You can adjust combat timing to make parrying easier or harder, with a balanced, forgiving, and a third harder option. You can change how much damage your enemies do to you, and how much damage you do to enemies by adjusting their health. There’s also an option to adjust enemy pressure. And, if you’re not interested in death-related setbacks, there’s a no-death option you can turn on. 

“[None of these options] are a cheat,” Busche tells me. “It’s an option to make sure players of all abilities can show up.” 

She also says players can look forward to similar accessibility and approachability options you might expect, though I’m unable to pour through Veilguard’s other options to confirm exactly what’s there. 


For more about the game, including exclusive details, interviews, video features, and more, click the Dragon Age: The Veilguard hub button below.

NHL Breaks the Ice with Matrox, Vizrt & AWS for Cutting-Edge Live Clou – Videoguys

The blog post “NHL breaks the ice with cutting-edge live cloud production powered by AWS” by Andrew Reich, Alex Murel, and Luke Potter for Amazon Web Series details how the National Hockey League (NHL) has revolutionized live sports production by utilizing cloud technology from Amazon Web Services (AWS). The NHL’s transition to cloud-based production marks a significant shift from the traditional method of using production trucks and fixed control rooms, offering enhanced efficiency, scalability, and sustainability.

[embedded content]

Key Highlights:

  1. Historic Cloud-Based Broadcast:

    • On March 22, the NHL produced the first fully cloud-based live professional sports broadcast in North America for a game between the Washington Capitals and the Carolina Hurricanes.
    • The live broadcast was produced in 1080p using AWS technologies, managed by a remote team, demonstrating that cloud-based production can match traditional hardware functionality.
  2. Innovative Workflow:

    • The NHL’s Live Cloud Production (LCP) workflow enabled video and audio switching, replay, and graphics integration in the cloud.
    • A pilot initiative called “NHL EDGE Unlocked” showcased advanced data-driven storytelling with non-traditional camera angles and real-time puck and player tracking.
  3. Foundation and Collaboration:

    • The NHL’s partnership with AWS began in 2021, with AWS serving as the league’s Official Cloud Infrastructure Provider.
    • Key developments include the NHL EDGE IQ stats and a cloud-based encoding and scheduling pipeline, facilitating live game feeds from venues to AWS.
  4. Remote Collaboration and Flexibility:

    • The NHL successfully reduced on-site personnel and equipment, demonstrating remote collaboration’s potential by using minimal on-site gear.
    • A single AWS employee managed technical coordination at the arena, while production crews operated from remote locations like the NHL Network studios in New Jersey and NHL headquarters in Manhattan.
  5. Sustainability and Scalability:

    • The LCP significantly lowers carbon emissions and travel costs by reducing the need for production trucks and on-site staff.
    • It offers scalability for major events like the Stanley Cup Playoffs, enabling multiple feeds in different languages and formats with reduced on-site energy consumption.
  6. Enhanced Fan Experience:

    • The flexibility of cloud production allows for customized broadcasts tailored to various audience preferences, including avid fans, casual viewers, and those interested in specific statistics or interactive features.
    • Advanced analytics and real-time access to footage enhance the storytelling and viewing experience, making content easily accessible and customizable.
  7. Technical Execution:

    • The broadcast involved feeds from ten on-site cameras encoded and sent to AWS, where video was processed and integrated with various production elements.
    • The system utilized technologies like AWS Elemental MediaConnect, Vizrt’s TriCaster Vectar, Viz Trio, and Evertz DreamCatcher for production switching, graphics, and replay.

The NHL’s partnership with AWS represents a pioneering step towards more sustainable, flexible, and immersive live sports broadcasts, setting a new standard in the industry.

Read the full article by Andrew Reich, Alex Norman, and Luke Potter for Amazon Web Services HERE

Detachable cardiac pacing lead may improve safety for cardiac patients

In 2012, Neil Armstrong, the first man to walk on the moon, died of post-surgery complications at the age of 82 following what should have been a routine heart surgery. Armstrong had undergone bypass surgery, the most common open-heart operation in the United States, and a surgery where the overall chance of death has dropped to almost zero.

Armstrong’s death was caused by heart damage that occurred during the removal of temporary cardiac pacing leads. Pacing leads are routinely used to monitor patients and protect against the risk of postoperative arrhythmias, including complete blockages, during the recovery period after cardiac surgery. However, because current methods rely on surgical suturing or direct insertion of electrodes to the heart tissue, trauma can occur during implantation and removal, increasing the potential for damage, bleeding, and device failure.

A coffee chat in 2019 about Armstrong’s untimely death helped inspire new research, published in the journal Science Translational MedicineThe research demonstrates findings that may offer a promising new platform for adhesive bioelectronic devices for cardiac monitoring, diagnosis, and treatment, and offer inspiration for the future development of bioadhesive electronics.

“While discussing the story, our team had a eureka moment that we probably could do something to prevent such complications by realizing a completely atraumatic version of it based on our bioadhesive technologies,” says Hyunwoo Yuk SM ’16, PhD ’21, a former MIT research scientist who is now the chief technology officer at SanaHeal. “It was such an exciting idea, and the rest was just making it happen.”

The team, comprising researchers affiliated with the lab of Xuanhe Zhao, professor of mechanical engineering and of civil and environmental engineering, has introduced a 3D-printable bioadhesive pacing lead that can directly interface with cardiac tissue, supporting minimally invasive adhesive implantation and providing a detachment solution that allows for gentle removal. Yuk and Zhao are the corresponding authors of the study; former MIT researcher Jue Deng is the paper’s first author.

“This work introduces the first on-demand detachable bioadhesive version of temporary cardiac pacing lead that offers atraumatic application and removal of the device with enhanced safety while offering improved bioelectronic performance,” says Zhao.

The development of the bioadhesive pacing lead is a combination of technologies that the team has developed over the last several years in the field of bioadhesive, bioelectronics, and 3D printing. SanaHeal, a company born from the team’s ongoing work, is commercializing bioadhesive technologies for various clinical applications.

“We hope that our ongoing effort on commercialization of our bioadhesive technology might help faster clinical translation of our bioadhesive pacing lead as well,” says Yuk.