Researchers safely integrate fragile 2D materials into devices

Two-dimensional materials, which are only a few atoms thick, can exhibit some incredible properties, such as the ability to carry electric charge extremely efficiently, which could boost the performance of next-generation electronic devices.

But integrating 2D materials into devices and systems like computer chips is notoriously difficult. These ultrathin structures can be damaged by conventional fabrication techniques, which often rely on the use of chemicals, high temperatures, or destructive processes like etching.

To overcome this challenge, researchers from MIT and elsewhere have developed a new technique to integrate 2D materials into devices in a single step while keeping the surfaces of the materials and the resulting interfaces pristine and free from defects.

Their method relies on engineering surface forces available at the nanoscale to allow the 2D material to be physically stacked onto other prebuilt device layers. Because the 2D material remains undamaged, the researchers can take full advantage of its unique optical and electrical properties.

They used this approach to fabricate arrays of 2D transistors that achieved new functionalities compared to devices produced using conventional fabrication techniques. Their method, which is versatile enough to be used with many materials, could have diverse applications in high-performance computing, sensing, and flexible electronics.

Core to unlocking these new functionalities is the ability to form clean interfaces, held together by special forces that exist between all matter, called van der Waals forces.

However, such van der Waals integration of materials into fully functional devices is not always easy, says Farnaz Niroui, assistant professor of electrical engineering and computer science (EECS), a member of the Research Laboratory of Electronics (RLE), and senior author of a new paper describing the work.

“Van der Waals integration has a fundamental limit,” she explains. “Since these forces depend on the intrinsic properties of the materials, they cannot be readily tuned. As a result, there are some materials that cannot be directly integrated with each other using their van der Waals interactions alone. We have come up with a platform to address this limit to help make van der Waals integration more versatile, to promote the development of 2D-materials-based devices with new and improved functionalities.”

Niroui wrote the paper with lead author Peter Satterthwaite, an electrical engineering and computer science graduate student; Jing Kong, professor of EECS and a member of RLE; and others at MIT, Boston University, National Tsing Hua University in Taiwan, the National Science and Technology Council of Taiwan, and National Cheng Kung University in Taiwan. The research is published today in Nature Electronics.  

Advantageous attraction

Making complex systems such as a computer chip with conventional fabrication techniques can get messy. Typically, a rigid material like silicon is chiseled down to the nanoscale, then interfaced with other components like metal electrodes and insulating layers to form an active device. Such processing can cause damage to the materials.

Recently, researchers have focused on building devices and systems from the bottom up, using 2D materials and a process that requires sequential physical stacking. In this approach, rather than using chemical glues or high temperatures to bond a fragile 2D material to a conventional surface like silicon, researchers leverage van der Waals forces to physically integrate a layer of 2D material onto a device.

Van der Waals forces are natural forces of attraction that exist between all matter; they are what let a gecko’s feet stick, temporarily, to a wall. But while all materials exhibit van der Waals interactions, the forces are not always strong enough to hold two given materials together. For instance, molybdenum disulfide, a popular semiconducting 2D material, will stick to gold, a metal, but won’t transfer directly onto an insulator like silicon dioxide simply by being brought into physical contact with that surface.
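For a rough sense of scale (a standard textbook continuum model, not a formula from the paper), the van der Waals adhesion energy per unit area between two flat surfaces a small distance d apart is often estimated with the Hamaker expression:

```latex
% Hamaker continuum approximation for two flat surfaces (textbook model,
% not taken from the paper). A_H is the material-dependent Hamaker
% constant, typically on the order of 10^{-20} to 10^{-19} J.
W(d) = -\frac{A_H}{12\pi d^{2}}
```

Because the attraction falls off as 1/d², even nanoscale gaps or contamination at an interface sharply weaken it, which is why the smooth, clean surfaces discussed below matter so much.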

However, heterostructures made by integrating semiconductor and insulating layers are key building blocks of electronic devices. Previously, this integration was enabled by bonding the 2D material to an intermediate layer such as gold, using that intermediate layer to transfer the 2D material onto the insulator, and then removing the intermediate layer with chemicals or high temperatures.

Instead of using this sacrificial layer, the MIT researchers embed the low-adhesion insulator in a high-adhesion matrix. This adhesive matrix is what makes the 2D material stick to the embedded low-adhesion surface, providing the forces needed to create a van der Waals interface between the 2D material and the insulator.

Making the matrix

To make electronic devices, they form a hybrid surface of metals and insulators on a carrier substrate. This surface is then peeled off and flipped over to reveal a completely smooth top surface that contains the building blocks of the desired device.

This smoothness is important, since gaps between the surface and 2D material can hamper van der Waals interactions. Then, the researchers prepare the 2D material separately, in a completely clean environment, and bring it into direct contact with the prepared device stack.

“Once the hybrid surface is brought into contact with the 2D layer, without needing any high temperatures, solvents, or sacrificial layers, it can pick up the 2D layer and integrate it with the surface. This way, we are allowing a van der Waals integration that would be traditionally forbidden, but now is possible and allows formation of fully functioning devices in a single step,” Satterthwaite explains.

This single-step process keeps the 2D material interface completely clean, which enables the material to reach its fundamental limits of performance without being held back by defects or contamination.

And because the surfaces also remain pristine, researchers can engineer the surface of the 2D material to form features or connections to other components. For example, they used this technique to create p-type transistors, which are generally challenging to make with 2D materials. Their transistors improve on previous demonstrations and can provide a platform for studying and achieving the performance needed for practical electronics.

Their approach can be done at scale to make larger arrays of devices. The adhesive matrix technique can also be used with a range of materials, and even with other forces to enhance the versatility of this platform. For instance, the researchers integrated graphene onto a device, forming the desired van der Waals interfaces using a matrix made with a polymer. In this case, adhesion relies on chemical interactions rather than van der Waals forces alone.

In the future, the researchers want to build on this platform to enable integration of a diverse library of 2D materials to study their intrinsic properties without the influence of processing damage, and develop new device platforms that leverage these superior functionalities.  

This research is funded, in part, by the U.S. National Science Foundation, the U.S. Department of Energy, the BUnano Cross-Disciplinary Fellowship at Boston University, and the U.S. Army Research Office. The fabrication and characterization procedures were carried out, largely, in the MIT.nano shared facilities.

Automated system teaches users when to collaborate with an AI assistant

Artificial intelligence models that pick out patterns in images can often do so better than human eyes — but not always. If a radiologist is using an AI model to help her determine whether a patient’s X-rays show signs of pneumonia, when should she trust the model’s advice and when should she ignore it?

A customized onboarding process could help this radiologist answer that question, according to researchers at MIT and the MIT-IBM Watson AI Lab. They designed a system that teaches a user when to collaborate with an AI assistant.

In this case, the training method might find situations where the radiologist trusts the model’s advice — except she shouldn’t because the model is wrong. The system automatically learns rules for how she should collaborate with the AI, and describes them with natural language.

During onboarding, the radiologist practices collaborating with the AI using training exercises based on these rules, receiving feedback about her performance and the AI’s performance.

The researchers found that this onboarding procedure led to about a 5 percent improvement in accuracy when humans and AI collaborated on an image prediction task. Their results also show that just telling the user when to trust the AI, without training, led to worse performance.

Importantly, the researchers’ system is fully automated, so it learns to create the onboarding process based on data from the human and AI performing a specific task. It can also adapt to different tasks, so it can be scaled up and used in many situations where humans and AI models work together, such as in social media content moderation, writing, and programming.

“So often, people are given these AI tools to use without any training to help them figure out when it is going to be helpful. That’s not what we do with nearly every other tool that people use — there is almost always some kind of tutorial that comes with it. But for AI, this seems to be missing. We are trying to tackle this problem from a methodological and behavioral perspective,” says Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS) and lead author of a paper about this training process.

The researchers envision that such onboarding will be a crucial part of training for medical professionals.

“One could imagine, for example, that doctors making treatment decisions with the help of AI will first have to do training similar to what we propose. We may need to rethink everything from continuing medical education to the way clinical trials are designed,” says senior author David Sontag, a professor of EECS, a member of the MIT-IBM Watson AI Lab and the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Mozannar, who is also a researcher with the Clinical Machine Learning Group, is joined on the paper by Jimin J. Lee, an undergraduate in electrical engineering and computer science; Dennis Wei, a senior research scientist at IBM Research; and Prasanna Sattigeri and Subhro Das, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented at the Conference on Neural Information Processing Systems.

Training that evolves

Existing onboarding methods for human-AI collaboration are often composed of training materials produced by human experts for specific use cases, making them difficult to scale up. Some related techniques rely on explanations, where the AI tells the user its confidence in each decision, but research has shown that explanations are rarely helpful, Mozannar says.

“The AI model’s capabilities are constantly evolving, so the use cases where the human could potentially benefit from it are growing over time. At the same time, the user’s perception of the model continues changing. So, we need a training procedure that also evolves over time,” he adds.

To accomplish this, their onboarding method is automatically learned from data. It is built from a dataset that contains many instances of a task, such as detecting the presence of a traffic light from a blurry image.

The system’s first step is to collect data on the human and AI performing this task. In this case, the human would try to predict, with the help of AI, whether blurry images contain traffic lights.

The system embeds these data points into a latent space, a representation of data in which similar data points are closer together. Using an algorithm, it discovers regions of this space where the human collaborates incorrectly with the AI. These regions capture instances where the human trusted the AI’s prediction but the prediction was wrong, and vice versa.

Perhaps the human mistakenly trusts the AI when images show a highway at night.
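As a loose illustration of this step (a minimal sketch, not the authors’ algorithm; the embedding source, the use of k-means, and the region count are all assumptions), the mistake regions can be approximated by clustering the embedded instances on which the human-AI team erred:

```python
# Minimal sketch of "mistake region" discovery (illustrative only, not the
# paper's method): keep the embedded instances where the human-AI team was
# wrong, then cluster them in the latent space.
import numpy as np
from sklearn.cluster import KMeans

def find_mistake_regions(embeddings, team_correct, n_regions=5):
    """embeddings: (n, d) latent vectors for task instances.
    team_correct: length-n booleans, True where the team's answer was right."""
    mistakes = embeddings[~np.asarray(team_correct)]
    # Each cluster center marks a neighborhood of latent space where
    # collaboration tends to fail (e.g., "highway at night" images).
    return KMeans(n_clusters=n_regions, n_init=10).fit(mistakes)

# Usage with random stand-in data:
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 16))       # stand-in image embeddings
correct = rng.random(200) > 0.3        # stand-in correctness labels
regions = find_mistake_regions(emb, correct)
print(regions.cluster_centers_.shape)  # (5, 16)
```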

After discovering the regions, a second algorithm utilizes a large language model to describe each region as a rule, using natural language. The algorithm iteratively fine-tunes that rule by finding contrasting examples. It might describe this region as “ignore AI when it is a highway during the night.”
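A hedged sketch of how that description step could be prompted (the prompt wording and the caption inputs are assumptions; the paper’s actual prompting and fine-tuning loop may differ), with the resulting string sent to whatever LLM API is available:

```python
# Illustrative prompt construction for turning a discovered region into a
# natural-language rule. Captions for in-region and contrasting nearby
# examples are assumed to be available from the dataset.
def build_rule_prompt(in_region_captions, contrast_captions):
    lines = ["These examples fall in a region where the human wrongly "
             "trusted the AI:"]
    lines += [f"- {c}" for c in in_region_captions]
    lines.append("These nearby examples do NOT fall in that region:")
    lines += [f"- {c}" for c in contrast_captions]
    lines.append("State one short rule, such as 'ignore AI when it is a "
                 "highway during the night', that separates the two sets.")
    return "\n".join(lines)

print(build_rule_prompt(
    ["dark highway, headlights visible", "night freeway overpass"],
    ["daytime city intersection", "suburban street at dusk"],
))
```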

These rules are used to build training exercises. The onboarding system shows an example to the human, in this case a blurry highway scene at night, as well as the AI’s prediction, and asks the user if the image shows traffic lights. The user can answer yes, no, or use the AI’s prediction.

If the human is wrong, they are shown the correct answer and performance statistics for the human and AI on these instances of the task. The system does this for each region, and at the end of the training process, repeats the exercises the human got wrong.
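Concretely, the exercise-and-repeat flow might look like the following toy console sketch (purely illustrative; the exercise format, answer options, and feedback are assumptions based on the description above):

```python
# Toy version of the onboarding exercise loop (illustrative only): show
# each example with the AI's prediction, let the user answer or defer to
# the AI, give feedback, then repeat the exercises the user got wrong.
def ask(ex):
    print(f"\n{ex['description']}\nAI says: {ex['ai_prediction']}")
    ans = input("Traffic lights? (yes/no/ai): ").strip().lower()
    if ans == "ai":                      # defer to the AI's prediction
        ans = ex["ai_prediction"]
    correct = ans == ex["truth"]
    print("Correct." if correct else f"Incorrect; the answer is {ex['truth']}.")
    return correct

def run_onboarding(exercises):
    missed = [ex for ex in exercises if not ask(ex)]
    if missed:                           # one review pass over the misses
        print(f"\nRepeating {len(missed)} missed exercise(s)...")
        for ex in missed:
            ask(ex)

# Example with one hypothetical exercise:
# run_onboarding([{"description": "Blurry highway scene at night",
#                  "ai_prediction": "yes", "truth": "no"}])
```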

“After that, the human has learned something about these regions that we hope they will take away in the future to make more accurate predictions,” Mozannar says.

Onboarding boosts accuracy

The researchers tested this system with users on two tasks — detecting traffic lights in blurry images and answering multiple-choice questions from many domains, such as biology, philosophy, and computer science.

They first showed users a card with information about the AI model, how it was trained, and a breakdown of its performance on broad categories. Users were split into five groups: some were only shown the card; some went through the researchers’ onboarding procedure; some went through a baseline onboarding procedure; some went through the researchers’ onboarding procedure and were given recommendations of when they should or should not trust the AI; and others were only given the recommendations.

Only the researchers’ onboarding procedure without recommendations improved users’ accuracy significantly, boosting their performance on the traffic light prediction task by about 5 percent without slowing them down. However, onboarding was not as effective for the question-answering task. The researchers believe this is because the AI model, ChatGPT, provided explanations with each answer that conveyed whether the answer should be trusted.

But providing recommendations without onboarding had the opposite effect — users not only performed worse, they took more time to make predictions.

“When you only give someone recommendations, it seems like they get confused and don’t know what to do. It derails their process. People also don’t like being told what to do, so that is a factor as well,” Mozannar says.

Providing recommendations alone could harm the user if those recommendations are wrong, he adds. With onboarding, on the other hand, the biggest limitation is the amount of available data. If there aren’t enough data, the onboarding stage won’t be as effective, he says.

In the future, he and his collaborators want to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also want to leverage unlabeled data for the onboarding process, and find methods to effectively reduce the number of regions without omitting important examples.

“People are adopting AI systems willy-nilly, and indeed AI offers great potential, but these AI agents still sometimes make mistakes. Thus, it’s crucial for AI developers to devise methods that help humans know when it’s safe to rely on the AI’s suggestions,” says Dan Weld, professor emeritus at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, who was not involved with this research. “Mozannar et al. have created an innovative method for identifying situations where the AI is trustworthy, and (importantly) for describing them to people in a way that leads to better human-AI team interactions.”

This work is funded, in part, by the MIT-IBM Watson AI Lab.

Ori Team Moon Studios Reveals Action RPG, No Rest For The Wicked

Moon Studios, the team behind Ori and the Blind Forest and Ori and the Will of the Wisps, has revealed No Rest for the Wicked, its first action RPG. It will be released as an Early Access title on Steam sometime during the first quarter of 2024 before later launching on PlayStation 5 and Xbox Series X/S.

No Rest for the Wicked features a hand-crafted world with a painterly art style. In it, players explore an island called Isola Sacra, spelunking through cavernous depths, lush forests, treacherous mountain passes, and more, according to publisher Private Division. Each location on Isola Sacra is home to protagonists “with their own problems,” a press release reads, as well as hidden treasures, creatures, and secrets. 

Check out the No Rest for the Wicked reveal trailer for yourself below:

[Video: No Rest for the Wicked reveal trailer]

Moon Studios says fights in the game are animation-driven, direct, and tactile, “allowing skilled players to combine visceral strikes and deadly moves.” No Rest for the Wicked prioritizes skill and timing over button-mashing, according to the team. 

“We have been dreaming of being able to contribute to the [action RPG] genre that we all grew up with and love,” Moon Studios co-founder and creative director Thomas Mahler writes in a press release. “After the success of Ori, it was clear to us that Moon was now mature enough to finally realize those dreams. We can’t wait to see how players will react to this entirely new take on the genre.” 

Here’s more about the game, straight from Moon Studios and Private Division: 

“In the year 841, a pivotal moment dawns upon the kingdom, marked by the passing of King Harol Bolein. A devastating conflict arises when a peaceful transition of power devolves into chaos.  In addition to this political turmoil, a deadly plague has reemerged on the remote island of Sacra, twisting the land and its inhabitants. Players must brandish their arms in an effort to quell both the grotesque beasts and the Kingdom’s invading army throughout a turbulent atmosphere where they are pulled in every direction.”

Outside of the single-player experience, No Rest for the Wicked’s multiplayer mode allows players to share their world and progress with up to three friends at their side in the campaign’s online co-op. Every quest, boss, and hidden secret is shared with those you play with. 

While No Rest for the Wicked’s debut at The Game Awards 2023 was nice, Moon Studios fans can watch the Wicked Inside digital showcase airing on March 1, 2024, for additional information about the game. 


What do you think of No Rest for the Wicked’s debut? Let us know in the comments below!

The Casting Of Frank Stone Is Supermassive’s Single-Player Horror Game Set In The Dead By Daylight Universe

Earlier this year, Supermassive Games, the team behind The Quarry, The Dark Pictures Anthology, and Until Dawn, announced it was collaborating with Behaviour Interactive to create a single-player game set in the latter’s Dead by Daylight universe. After teasing an appearance online yesterday, the teams have officially revealed The Casting of Frank Stone at The Game Awards 2023. It hits PlayStation 5, Xbox Series X/S, and PC sometime in 2024.

“The shadow of Frank Stone looms over Cedar Hills, a town forever altered by his violent past,” a press release reads. “As a group of young friends are about to discover, Stone’s blood-soaked legacy cuts deep, leaving scars across families, generations, and the very fabric of reality itself.” 

Check out The Casting of Frank Stone reveal trailer for yourself below:

[Video: The Casting of Frank Stone reveal trailer]

The Casting of Frank Stone is set in Cedar Hills, Oregon. In the depths of a steel mill, the gruesome crimes of a sadistic killer spawn horrors beyond comprehension, according to a press release. Players will dive into the mysteries of these horrors with an all-new cast of characters in the Dead by Daylight universe. In classic Supermassive Games fashion, every decision players make shapes the story and impacts the fate of its various characters.  

Behaviour Interactive head of partnerships Mathieu Côté says the team knows its players have been interested in single-player narrative experiences for some time, noting that it’s “excited to expand the Dead by Daylight universe and explore new territory with Supermassive Games, a studio that is at the forefront of modern video game storytelling.” 

Supermassive Games studio director Steve Goss said in yesterday’s tease that this game is a “brand-new, single-player interactive story game set in the terrifying omniverse of Dead by Daylight.” Supermassive Games executive producer Traci Tufte adds, “Our game will be set outside the Entity’s Realm and feature the story of a new cast of characters who players will follow for an unprecedented experience beyond the fog.” 

The Casting of Frank Stone hits PlayStation 5, Xbox Series X/S, and PC via Steam, the Epic Games Store, and the Microsoft Store in 2024. 

Supermassive Games is also working on Little Nightmares III, which we learned about during Gamescom Opening Night Live 2023, a showcase also created and hosted by The Game Awards’ Geoff Keighley. Check out 18 minutes of unsettling Little Nightmares III co-op gameplay here.


Are you excited about The Casting of Frank Stone? Let us know in the comments below!