Turning automotive engines into modular chemical plants to make green fuels

Reducing methane emissions is a top priority in the fight against climate change because of the gas’s propensity to trap heat in the atmosphere: Methane’s warming effect is 84 times more potent than that of CO2 over a 20-year timescale.

And yet, as the main component of natural gas, methane is also a valuable fuel and a precursor to several important chemicals. The main barrier to using methane emissions to create carbon-negative materials is that human sources of methane gas — landfills, farms, and oil and gas wells — are relatively small and spread out across large areas, while traditional chemical processing facilities are huge and centralized. That makes it prohibitively expensive to capture, transport, and convert methane gas into anything useful. As a result, most companies burn or “flare” their methane at the site where it’s emitted, seeing it as a sunk cost and an environmental liability.

The MIT spinout Emvolon is taking a new approach to processing methane by repurposing automotive engines to serve as modular, cost-effective chemical plants. The company’s systems can take methane gas and produce liquid fuels like methanol and ammonia on-site; these fuels can then be used or transported in standard truck containers.

“We see this as a new way of chemical manufacturing,” Emvolon co-founder and CEO Emmanuel Kasseris SM ’07, PhD ’11 says. “We’re starting with methane because methane is an abundant emission that we can use as a resource. With methane, we can solve two problems at the same time: About 15 percent of global greenhouse gas emissions come from hard-to-abate sectors that need green fuel, like shipping, aviation, heavy-duty trucks, and rail. Then another 15 percent of emissions come from distributed methane emissions like landfills and oil wells.”

By using mass-produced engines and eliminating the need to invest in infrastructure like pipelines, the company says it’s making methane conversion economically attractive enough to be adopted at scale. The system can also take green hydrogen produced by intermittent renewables and turn it into ammonia, another fuel that can also be used to decarbonize fertilizers.

“In the future, we’re going to need green fuels because you can’t electrify a large ship or plane — you have to use a high-energy-density, low-carbon-footprint, low-cost liquid fuel,” Kasseris says. “The energy resources to produce those green fuels are either distributed, as is the case with methane, or variable, like wind. So, you cannot have a massive plant [producing green fuels] that has its own zip code. You either have to be distributed or variable, and both of those approaches lend themselves to this modular design.”

From a “crazy idea” to a company

Kasseris first came to MIT to study mechanical engineering as a graduate student in 2004, when he worked in the Sloan Automotive Lab on a report on the future of transportation. For his PhD, he developed a novel technology for improving internal combustion engine fuel efficiency for a consortium of automotive and energy companies, which he then went to work for after graduation.

Around 2014, he was approached by Leslie Bromberg ’73, PhD ’77, a serial inventor with more than 100 patents, who has been a principal research engineer in MIT’s Plasma Science and Fusion Center for nearly 50 years.

“Leslie had this crazy idea of repurposing an internal combustion engine as a reactor,” Kasseris recalls. “I had looked at that while working in industry, and I liked it, but my company at the time thought the work needed more validation.”

Bromberg had done that validation through a U.S. Department of Energy-funded project in which he used a diesel engine to “reform” methane — a high-pressure chemical reaction in which methane is combined with steam and oxygen to produce hydrogen. The work impressed Kasseris enough to bring him back to MIT as a research scientist in 2016.

“We worked on that idea in addition to some other projects, and eventually it had reached the point where we decided to license the work from MIT and go full throttle,” Kasseris recalls. “It’s very easy to work with MIT’s Technology Licensing Office when you are an MIT inventor. You can get a low-cost licensing option, and you can do a lot with that, which is important for a new company. Then, once you are ready, you can finalize the license, so MIT was instrumental.”

Emvolon continued working with MIT’s research community, sponsoring projects with Professor Emeritus John Heywood and participating in the MIT Venture Mentoring Service and the MIT Industrial Liaison Program.

An engine-powered chemical plant

At the core of Emvolon’s system is an off-the-shelf automotive engine that runs “fuel rich” — with a higher ratio of fuel to air than what is needed for complete combustion.

“That’s easy to say, but it takes a lot of [intellectual property], and that’s what was developed at MIT,” Kasseris says. “Instead of burning the methane in the gas to carbon dioxide and water, you partially burn it, or partially oxidize it, to carbon monoxide and hydrogen, which are the building blocks to synthesize a variety of chemicals.”

The hydrogen and carbon monoxide are intermediate products used to synthesize different chemicals through further reactions. Those processing steps take place right next to the engine, which makes its own power. Each of Emvolon’s standalone systems fits within a 40-foot shipping container and can produce about 8 tons of methanol per day from 300,000 standard cubic feet of methane gas.
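Those quoted figures can be sanity-checked with a back-of-envelope mass balance. The sketch below is illustrative only: it assumes ideal-gas behavior at 15 °C and 1 atm for the "standard" cubic feet and a one-carbon-per-molecule path from methane to methanol; neither assumption comes from Emvolon.

```python
# Rough mass balance for the quoted figures: 300,000 scf of methane in,
# 8 tons of methanol out per day. Assumptions (not from the article):
# ideal gas at 15 C and 1 atm, where one mole occupies about 23.64 L.
SCF_TO_LITERS = 28.3168
MOLAR_VOLUME_L = 23.64   # L/mol at the assumed "standard" conditions
M_CH4 = 16.04            # g/mol, methane
M_CH3OH = 32.04          # g/mol, methanol

moles_ch4 = 300_000 * SCF_TO_LITERS / MOLAR_VOLUME_L
tonnes_ch4_in = moles_ch4 * M_CH4 / 1e6       # grams -> metric tons
tonnes_meoh_max = moles_ch4 * M_CH3OH / 1e6   # one carbon per methanol

print(f"methane in: {tonnes_ch4_in:.1f} t/day")
print(f"stoichiometric methanol ceiling: {tonnes_meoh_max:.1f} t/day")
```

Under these assumptions, roughly 5.8 tons of methane could yield at most about 11.5 tons of methanol, so the quoted 8 tons per day would correspond to a carbon yield on the order of 70 percent, which is at least physically plausible.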

The company is starting with green methanol because it’s an ideal fuel for hard-to-abate sectors such as shipping and heavy-duty transport, as well as an excellent feedstock for other high-value chemicals, such as sustainable aviation fuel. Many shipping vessels have already converted to run on green methanol in an effort to meet decarbonization goals.

This summer, the company also received a grant from the Department of Energy to adapt its process to produce clean liquid fuels from power sources like solar and wind.

“We’d like to expand to other chemicals like ammonia, but also other feedstocks, such as biomass and hydrogen from renewable electricity, and we already have promising results in that direction,” Kasseris says. “We think we have a good solution for the energy transition and, in the later stages of the transition, for e-manufacturing.”

A scalable approach

Emvolon has already built a system capable of producing up to six barrels of green methanol a day in its 5,000-square-foot headquarters in Woburn, Massachusetts.

“For chemical technologies, people talk about scale up risk, but with an engine, if it works in a single cylinder, we know it will work in a multicylinder engine,” Kasseris says. “It’s just engineering.”

Last month, Emvolon announced an agreement with Montauk Renewables to build a commercial-scale demonstration unit next to a Texas landfill that will initially produce up to 15,000 gallons of green methanol a year and later scale up to 2.5 million gallons. That project could be expanded tenfold by scaling across Montauk’s other sites.

“Our whole process was designed to be a very realistic approach to the energy transition,” Kasseris says. “Our solution is designed to produce green fuels and chemicals at prices that the markets are willing to pay today, without the need for subsidies. Using the engines as chemical plants, we can get the capital expenditure per unit output close to that of a large plant, but at a modular scale that enables us to be next to low-cost feedstock. Furthermore, our modular systems require small investments — of $1 million to $10 million — that are quickly deployed, one at a time, within weeks, as opposed to massive chemical plants that require multiyear capital construction projects and cost hundreds of millions.”

Curiosity, images, and scientific exploration

When we gaze at nature’s remarkable phenomena, we might feel a mix of awe, curiosity, and determination to understand what we are looking at. That is certainly a common response for MIT’s Alan Lightman, a trained physicist and prolific author of books about physics, science, and our understanding of the world around us.

“One of my favorite quotes from Einstein is to the effect that the most beautiful experience we can have is the mysterious,” Lightman says. “It’s the fundamental emotion that is the cradle of true art and true science.”

Lightman explores those concepts in his latest book, “The Miraculous from the Material,” published today by Penguin Random House. In it, Lightman has penned 35 essays about scientific understanding, each following photos of spectacular natural phenomena, from spider webs to sunsets, and from galaxies to hummingbirds.

Lightman, who is a professor of the practice of the humanities at MIT, calls himself a “spiritual materialist,” who finds wonder in the world while grounding his grasp of nature in scientific explanation.

Alan Lightman offers essays about scientific understanding, each corresponding to photos of spectacular natural phenomena, including the aurora borealis, fall foliage, and the rings of Saturn.

Credit: Courtesy of Alan Lightman


“Understanding the material and scientific underpinnings of these spectacular phenomena hasn’t diminished my awe and amazement one iota,” Lightman writes in the book. MIT News talked to Lightman about a handful of the book’s chapters, and the relationship between seeing and scientific curiosity.

Aurora borealis

In 2024, many people ventured outside for a glimpse of the aurora borealis, or northern lights, the brilliant phenomenon caused by solar storms. Auroras occur when unusually large amounts of electrons from the sun energize oxygen and nitrogen molecules in the upper atmosphere. The Earth’s magnetic field creates the folding shapes.

Among much else, the aurora borealis — and aurora australis, in southern latitudes — are a testament to the way unusual things fire our curiosity.

“I think we respond emotionally as well as intellectually, with appreciation and plain old awe at nature,” Lightman says. “If we go back to the earliest times when people were thinking scientifically, the emotional connection to the natural world was probably as important as the intellectual connection. The wonder and curiosity stimulated by the night sky makes us want to understand it.”

He adds: “The aurora borealis is certainly very striking and makes us aware that we’re part of the cosmos; we’re not just living in the world of tables, and chairs, and houses. It does give us a cosmic sense of being on a planet out in the universe.”

Galileo coined the term “aurora borealis,” referring to the Roman goddess of the dawn and the Greek god of the north wind. People have created many suggestive accounts of the northern lights. As Lightman notes in the book, the Native American Cree regarded the lights as dead spirits in the sky; the Algonquin people saw them as a fire made by their creator; the Inuit tribes regarded the lights as spirits playing; and to the Vikings, the lights were a reflection off the armor of the Valkyries. It wasn’t until the 1900s that geomagnetic storms driven by the sun were proposed as an explanation.

“It’s all a search for meaning and understanding,” Lightman says. “Before we had modern science, we still wanted meaning, so we constructed these mythologies. And then as we developed science we had other tools. But the nonscientific accounts were also trying to explain this strange cosmos we find ourselves in.”

Fall foliage

The aurora borealis is unearthly; fall leaves and their colors are literally a down-to-earth matter. Still, Lightman says, while the aurora borealis “is more exotic,” fall foliage can also leave us gazing in wonder. In his book, he constructs a multilayered explanation of the subject, ranging from the chemical compounds in leaves to the properties of color to the mechanics of planetary motion.

First, the leaves. The fall hues come from chemical compounds in leaves called carotenoids (which produce yellow and orange colors) and anthocyanins (which create red hues). Those effects are usually hidden because of the presence of chlorophyll, which helps plants absorb sunlight and store energy, and gives off a green hue. But less sunlight in the fall means less chlorophyll at work in plants, so green leaves turn yellow, orange, or red.

To jump ahead, there are seasons because the Earth does not rotate on a vertical axis relative to the plane of its path around the sun. It tilts at about 23.5 degrees, so different parts of the planet receive differing amounts of sunlight during a yearlong revolution around the sun.
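That tilt is often summarized with the standard sine approximation for solar declination, the latitude at which the sun sits directly overhead. The sketch below is illustrative, not from the text: the 23.5-degree amplitude, the 365-day year, and placing the March equinox near day 80 are the usual simplifications.

```python
import math

# Approximate solar declination (degrees) for a planet tilted 23.5 degrees,
# using the common sine approximation with the March equinox near day 80.
TILT_DEG = 23.5

def declination(day_of_year: int) -> float:
    """Approximate solar declination in degrees for a given day (1-365)."""
    return TILT_DEG * math.sin(2 * math.pi * (day_of_year - 80) / 365)

# Declination swings through zero at the equinox and peaks at the solstices,
# which is why sunlight received at a given latitude varies over the year.
for label, day in [("March equinox", 80), ("June solstice", 172),
                   ("December solstice", 355)]:
    print(f"{label:17s} day {day:3d}: {declination(day):+6.1f} degrees")
```

With zero tilt the declination would be flat at zero all year, and the seasonal swing in sunlight (and hence chlorophyll production) would vanish.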

That tilt stems from cosmic collisions billions of years ago. Solar systems are formed from rotating clouds of gas and dust, with planets and moons condensing due to gravity. The Earth likely got knocked off its vertical axis when loose matter slammed into it, which has happened to most planets: In our solar system, only Mercury has almost no tilt.

Lightman muses, “I think there’s a kind of poetry in understanding that beautiful fall foliage was caused in part by a cosmic accident 4 billion years ago. That’s poetic and mind-blowing at the same time.”

Mandarinfish

It can seem astonishing to behold the mandarinfish, a native of the Pacific Ocean that sports bright color patterns only a bit less intricate than an ikat rug.

But what appears nearly miraculous can also be highly explainable in material terms. There are evolutionary advantages from brilliant coloration, something many scientists have recognized, from Charles Darwin to the present.

“There are a number of living organisms in the book that have striking features,” Lightman says. “I think scientists agree that most features of living organisms have some survival benefits, or are byproducts of features that once had survival benefits.”

Unusual coloration may serve as camouflage, help attract mates, or warn off predators. In this case, the mandarinfish is toxic and its spectacular coat helps remind its main predator, the scorpionfish, that the wrong snack comes with unfortunate consequences.

“For mandarinfish it’s related to the fact that it’s poisonous,” Lightman says. Here, the sense of wonder we may feel comes attached to a scientific mechanism: In a food chain, what is spectacular can be highly functional as well.

Paramecia

Paramecia are single-celled microorganisms that propel themselves thanks to thousands of tiny cilia, or hairs, which move back and forth like oars. People first observed paramecia after the development of the microscope in the 1600s; they may have been first seen by the Dutch scientist Antonie van Leeuwenhoek.

“I judged that some of these little creatures were about a thousand times smaller than the smallest ones I have ever yet seen upon the rind of cheese,” van Leeuwenhoek wrote.

“The first microscopes in the 17th century uncovered an entire universe at a tiny scale,” Lightman observes.

When we look at a picture of a paramecium, then, we are partly observing our own ingenuity. However, Lightman is most focused on paramecia as an evolutionary advance. In the book, he underscores the emerging sophistication represented by their arrival 600 million years ago, processing significant amounts of energy and applying it to motion.

“What interested me about the paramecium is not only that it was one of the first microorganisms discovered,” Lightman says, “but the mechanisms of its locomotion, the little cilia that wave back and forth and can propel it at relatively great speed. That was a big landmark in evolution. It requires energy, and a mechanical system, all developed by natural selection.”

He adds: “One beautiful thought that comes out of that is the commonality of all living things on the planet. We’re all related, in a very profound way.”

The rings of Saturn

The first time Lightman looked at the rings of Saturn, of which there are roughly 1,000, he was at the Harvard-Smithsonian Center for Astrophysics, using a telescope in the late 1970s.

“I saw the rings of Saturn and I was totally blown away because they’re so perfect,” Lightman says. “I just couldn’t believe there was that kind of construction of such a huge scale. That sense of amazement has stayed with me. They are a visually stunning natural phenomenon.”

The rings are statistically stunning, too. The width of the rings is about 240,000 miles, roughly the same as the distance from the Earth to the moon. But the thickness of the rings is only about that of a football field. “That’s a pretty big ratio between diameter and thickness,” Lightman says. The mass of the rings is just 1/50 of 1 percent of our moon.
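Those two figures imply a startling aspect ratio. A minimal check, assuming a football field of 360 feet including end zones (the exact thickness is only a loose analogy in the text):

```python
# Width-to-thickness ratio for Saturn's rings, using the figures quoted
# above: ~240,000 miles across, roughly a football field (~360 ft) thick.
MILES_TO_FEET = 5280
width_ft = 240_000 * MILES_TO_FEET
thickness_ft = 360  # assumed football-field length, including end zones

ratio = width_ft / thickness_ft
print(f"width-to-thickness ratio: about {ratio:,.0f} to 1")
# For comparison, a sheet of printer paper (~8.5 in wide, ~0.004 in thick)
# has a ratio of only about 2,000 to 1.
```

The rings come out millions of times wider than they are thick, far flatter, proportionally, than a sheet of paper.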

Most likely, the rings were formed from the matter of a moon that approached Saturn — which has 146 known moons — and was ripped apart, its material scattering into the rings. Over time, gravity pulled the rings into their circular shape.

“The roundness of planets, the circularity of planetary rings, and so many other beautiful phenomena follow naturally from the laws of physics,” Lightman writes in the book. “Which are themselves beautiful.”

Over the years, he has been able to look many times at the rings of Saturn, always regarding them as a “natural miracle” to behold.

“Every time you see them, you are amazed by it,” Lightman says.  

A cell protector collaborates with a killer

From early development to old age, cell death is a part of life. Without enough of a critical type of cell death known as apoptosis, animals wind up with too many cells, which can set the stage for cancer or autoimmune disease. But careful control is essential, because when apoptosis eliminates the wrong cells, the effects can be just as dire, helping to drive many kinds of neurodegenerative disease.

By studying the microscopic roundworm Caenorhabditis elegans — which was honored with its fourth Nobel Prize last month — scientists at MIT’s McGovern Institute for Brain Research have begun to unravel a longstanding mystery about the factors that control apoptosis: how a protein capable of preventing programmed cell death can also promote it. Their study, led by Robert Horvitz, the David H. Koch Professor of Biology at MIT, and reported Oct. 9 in the journal Science Advances, sheds light on the process of cell death in both health and disease.

“These findings, by graduate student Nolan Tucker and former graduate student, now MIT faculty colleague, Peter Reddien, have revealed that a protein interaction long thought to block apoptosis in C. elegans likely instead has the opposite effect,” says Horvitz, who is also an investigator at the Howard Hughes Medical Institute and the McGovern Institute. Horvitz shared the 2002 Nobel Prize in Physiology or Medicine for discovering and characterizing the genes controlling cell death in C. elegans.

Mechanisms of cell death

Horvitz, Tucker, Reddien, and colleagues have provided foundational insights in the field of apoptosis by using C. elegans to analyze the mechanisms that drive apoptosis, as well as the mechanisms that determine how cells ensure apoptosis happens when and where it should. Unlike humans and other mammals, which depend on dozens of proteins to control apoptosis, these worms use just a few. And when things go awry, it’s easy to tell: When there’s not enough apoptosis, researchers can see that there are too many cells inside the worms’ translucent bodies. And when there’s too much, the worms lack certain biological functions, can’t reproduce, or, in the most extreme cases, die during embryonic development.

Work in the Horvitz lab defined the roles of many of the genes and proteins that control apoptosis in worms. These regulators proved to have counterparts in human cells, and for that reason studies of worms have helped reveal how human cells govern cell death and pointed toward potential targets for treating disease.

A protein’s dual role

Three of C. elegans’ primary regulators of apoptosis actively promote cell death, whereas just one, CED-9, reins in the apoptosis-promoting proteins to keep cells alive. As early as the 1990s, however, Horvitz and colleagues recognized that CED-9 was not exclusively a protector of cells. Their experiments indicated that the protector protein also plays a role in promoting cell death. But while researchers thought they knew how CED-9 protected against apoptosis, its pro-apoptotic role was more puzzling.

CED-9’s dual role means that mutations in the gene that encodes it can impact apoptosis in multiple ways. Most ced-9 mutations interfere with the protein’s ability to protect against cell death and result in excess cell death. Conversely, mutations that abnormally activate ced-9 cause too little cell death, just like mutations that inactivate any of the three killer genes.

An atypical ced-9 mutation, identified by Reddien when he was a PhD student in Horvitz’s lab, hinted at how CED-9 promotes cell death. That mutation altered the part of the CED-9 protein that interacts with the protein CED-4, which is proapoptotic. Since the mutation specifically leads to a reduction in apoptosis, this suggested that CED-9 might need to interact with CED-4 to promote cell death.

The idea was particularly intriguing because researchers had long thought that CED-9’s interaction with CED-4 had exactly the opposite effect: In the canonical model, CED-9 anchors CED-4 to cells’ mitochondria, sequestering the CED-4 killer protein and preventing it from associating with and activating another key killer, the CED-3 protein — thereby preventing apoptosis.

To test the hypothesis that CED-9’s interactions with the killer CED-4 protein enhance apoptosis, the team needed more evidence. So graduate student Nolan Tucker used CRISPR gene editing tools to create more worms with mutations in CED-9, each one targeting a different spot in the CED-4-binding region. Then he examined the worms. “What I saw with this particular class of mutations was extra cells and viability,” he says — clear signs that the altered CED-9 was still protecting against cell death, but could no longer promote it. “Those observations strongly supported the hypothesis that the ability to bind CED-4 is needed for the pro-apoptotic function of CED-9,” Tucker explains. Their observations also suggested that, contrary to earlier thinking, CED-9 doesn’t need to bind with CED-4 to protect against apoptosis.

When he looked inside the cells of the mutant worms, Tucker found additional evidence that these mutations prevented CED-9’s ability to interact with CED-4. When both CED-9 and CED-4 are intact, CED-4 appears associated with cells’ mitochondria. But in the presence of these mutations, CED-4 was instead at the edge of the cell nucleus. CED-9’s ability to bind CED-4 to mitochondria appeared to be necessary to promote apoptosis, not to protect against it.

Looking ahead

While the team’s findings begin to explain a long-unanswered question about one of the primary regulators of apoptosis, they raise new ones, as well. “I think that this main pathway of apoptosis has been seen by a lot of people as more-or-less settled science. Our findings should change that view,” Tucker says.

The researchers see important parallels between their findings from this study of worms and what’s known about cell death pathways in mammals. The mammalian counterpart to CED-9 is a protein called BCL-2, mutations in which can lead to cancer.  BCL-2, like CED-9, can both promote and protect against apoptosis. As with CED-9, the pro-apoptotic function of BCL-2 has been mysterious. In mammals, too, mitochondria play a key role in activating apoptosis. The Horvitz lab’s discovery opens opportunities to better understand how apoptosis is regulated not only in worms but also in humans, and how dysregulation of apoptosis in humans can lead to such disorders as cancer, autoimmune disease, and neurodegeneration.

MIT physicists predict exotic form of matter with potential for quantum computing

MIT physicists have shown that it should be possible to create an exotic form of matter that could be manipulated to form the qubit (quantum bit) building blocks of future quantum computers that are even more powerful than the quantum computers in development today.

The work builds on a discovery last year of materials that host electrons that can split into fractions of themselves but, importantly, can do so without the application of a magnetic field. 

The general phenomenon of electron fractionalization was first discovered in 1982 and resulted in a Nobel Prize. That work, however, required the application of a magnetic field. The ability to create the fractionalized electrons without a magnetic field opens new possibilities for basic research and makes the materials hosting them more useful for applications.

When electrons split into fractions of themselves, those fractions are known as anyons. Anyons come in a variety of flavors, or classes. The anyons discovered in the 2023 materials are known as Abelian anyons. Now, in a paper reported in the Oct. 17 issue of Physical Review Letters, the MIT team notes that it should be possible to create the most exotic class of anyons, non-Abelian anyons.

“Non-Abelian anyons have the bewildering capacity of ‘remembering’ their spacetime trajectories; this memory effect can be useful for quantum computing,” says Liang Fu, a professor in MIT’s Department of Physics and leader of the work. 
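The “memory” Fu describes comes from the mathematics of exchange: swapping Abelian anyons multiplies the state by a phase factor, and phases commute, so only the number of swaps matters; swapping non-Abelian anyons applies a matrix, and matrices need not commute, so the final state records the order of the swaps. A toy illustration follows, with two braid matrices chosen purely for demonstration (they are not the operators of the proposed phase):

```python
import cmath

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

phase = cmath.exp(1j * cmath.pi / 4)

# Two hypothetical exchange (braid) operators, unitary and chosen only
# to show non-commutativity -- the hallmark of the non-Abelian case.
B1 = [[phase, 0], [0, phase.conjugate()]]
B2 = [[0, 1], [1, 0]]

order_12 = matmul(B1, B2)  # swap 1 first, then swap 2
order_21 = matmul(B2, B1)  # same swaps, opposite order

print("same final state regardless of order?", order_12 == order_21)
```

Because the two products differ, a state acted on by these operators ends up depending on which exchange happened first, which is the property proposals for topological qubits aim to exploit.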

Fu further notes that “the 2023 experiments on electron fractionalization greatly exceeded theoretical expectations. My takeaway is that we theorists should be bolder.”

Fu is also affiliated with the MIT Materials Research Laboratory. His colleagues on the current work are graduate students Aidan P. Reddy and Nisarga Paul, and postdoc Ahmed Abouelkomsan, all of the MIT Department of Physics. Reddy and Paul are co-first authors of the Physical Review Letters paper.

The MIT work and two related studies were also featured in an Oct. 17 story in Physics Magazine. “If this prediction is confirmed experimentally, it could lead to more reliable quantum computers that can execute a wider range of tasks … Theorists have already devised ways to harness non-Abelian states as workable qubits and manipulate the excitations of these states to enable robust quantum computation,” writes Ryan Wilkinson.

The current work was guided by recent advances in 2D materials, or those consisting of only one or a few layers of atoms. “The whole world of two-dimensional materials is very interesting because you can stack them and twist them, and sort of play Legos with them to get all sorts of cool sandwich structures with unusual properties,” says Paul. Those sandwich structures, in turn, are called moiré materials.

Anyons can only form in two-dimensional materials. Could they form in moiré materials? The 2023 experiments were the first to show that they can. Soon afterwards, a group led by Long Ju, an MIT assistant professor of physics, reported evidence of anyons in another moiré material. (Fu and Reddy were also involved in the Ju work.)

In the current work, the physicists showed that it should be possible to create non-Abelian anyons in a moiré material composed of atomically thin layers of molybdenum ditelluride. Says Paul, “moiré materials have already revealed fascinating phases of matter in recent years, and our work shows that non-Abelian phases could be added to the list.”

Adds Reddy, “our work shows that when electrons are added at a density of 3/2 or 5/2 per unit cell, they can organize into an intriguing quantum state that hosts non-Abelian anyons.”

The work was exciting, says Reddy, in part because “oftentimes there’s subtlety in interpreting your results and what they are actually telling you. So it was fun to think through our arguments” in support of non-Abelian anyons.

Says Paul, “this project ranged from really concrete numerical calculations to pretty abstract theory and connected the two. I learned a lot from my collaborators about some very interesting topics.”

This work was supported by the U.S. Air Force Office of Scientific Research. The authors also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center, the Kavli Institute for Theoretical Physics, the Knut and Alice Wallenberg Foundation, and the Simons Foundation.

How can electrons split into fractions of themselves?

MIT physicists have taken a key step toward solving the puzzle of what leads electrons to split into fractions of themselves. Their solution sheds light on the conditions that give rise to exotic electronic states in graphene and other two-dimensional systems.

The new work is an effort to make sense of a discovery that was reported earlier this year by a different group of physicists at MIT, led by Assistant Professor Long Ju. Ju’s team found that electrons appear to exhibit “fractional charge” in pentalayer graphene — a configuration of five graphene layers that are stacked atop a similarly structured sheet of boron nitride.

Ju discovered that when he sent an electric current through the pentalayer structure, the electrons seemed to pass through as fractions of their total charge, even in the absence of a magnetic field. Scientists had already shown that electrons can split into fractions under a very strong magnetic field, in what is known as the fractional quantum Hall effect. Ju’s work was the first to find that this effect was possible in graphene without a magnetic field — which until recently was not expected to exhibit such an effect.

The phenomenon was dubbed the “fractional quantum anomalous Hall effect,” and theorists have been keen to find an explanation for how fractional charge can emerge from pentalayer graphene.

The new study, led by MIT professor of physics Senthil Todadri, provides a crucial piece of the answer. Through calculations of quantum mechanical interactions, he and his colleagues show that the electrons form a sort of crystal structure, the properties of which are ideal for fractions of electrons to emerge.

“This is a completely new mechanism, meaning in the decades-long history, people have never had a system go toward these kinds of fractional electron phenomena,” Todadri says. “It’s really exciting because it makes possible all kinds of new experiments that previously one could only dream about.”

The team’s study appeared last week in the journal Physical Review Letters. Two other research teams — one from Johns Hopkins University, and the other from Harvard University, the University of California at Berkeley, and Lawrence Berkeley National Laboratory  — have each published similar results in the same issue. The MIT team includes Zhihuan Dong PhD ’24 and former postdoc Adarsh Patri.

“Fractional phenomena”

In 2018, MIT professor of physics Pablo Jarillo-Herrero and his colleagues were the first to observe that new electronic behavior could emerge from stacking and twisting two sheets of graphene. Each layer of graphene is as thin as a single atom and structured in a chicken-wire lattice of hexagonal carbon atoms. By stacking two sheets at a very specific angle to each other, he found that the resulting interference, or moiré pattern, induced unexpected phenomena, such as superconducting and insulating properties in the same material. This “magic-angle graphene,” as the material was soon called, ignited a new field known as twistronics, the study of electronic behavior in twisted, two-dimensional materials.
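The moiré length scale behind these effects follows from simple geometry: twisting two identical lattices by a small angle θ produces a superlattice whose period is λ = a / (2 sin(θ/2)). A minimal sketch of that relation, assuming only the graphene lattice constant a = 0.246 nm and the roughly 1.1-degree magic angle:

```python
import math

def moire_period(theta_deg, a=0.246):
    """Moire superlattice period (in nm) for two identical hexagonal
    lattices twisted by theta_deg, via lambda = a / (2 sin(theta/2)).
    a = 0.246 nm is the graphene lattice constant."""
    theta = math.radians(theta_deg)
    return a / (2 * math.sin(theta / 2))

# At the ~1.1-degree "magic angle," the moire period is ~13 nm,
# roughly 50 times the atomic spacing.
print(round(moire_period(1.1), 1))  # 12.8
```

Smaller twist angles give longer moiré periods, which is why tiny angle changes so dramatically reshape the electronic landscape.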

“Shortly after his experiments, we realized these moiré systems would be ideal platforms in general to find the kinds of conditions that enable these fractional electron phases to emerge,” says Todadri, who collaborated with Jarillo-Herrero on a study that same year to show that, in theory, such twisted systems could exhibit fractional charge without a magnetic field. “We were advocating these as the best systems to look for these kinds of fractional phenomena,” he says.

Then, in September of 2023, Todadri hopped on a Zoom call with Ju, who was familiar with Todadri’s theoretical work and had kept in touch with him through Ju’s own experimental work.

“He called me on a Saturday and showed me the data in which he saw these [electron] fractions in pentalayer graphene,” Todadri recalls. “And that was a big surprise because it didn’t play out the way we thought.”

In his 2018 paper, Todadri predicted that fractional charge should emerge from a precursor phase characterized by a particular twisting of the electron wavefunction. Broadly speaking, he theorized that an electron’s wavefunction should carry a certain twisting, or winding, a topological property that cannot be undone without changing the state’s inherent structure. This winding, he predicted, should increase with the number of graphene layers added to a given moiré structure.

“For pentalayer graphene, we thought the wavefunction would wind around five times, and that would be a precursor for electron fractions,” Todadri says. “But he did his experiments and discovered that it does wind around, but only once. That then raised this big question: How should we think about whatever we are seeing?”
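The winding being discussed can be illustrated numerically: track a complex phase once around a closed loop, and the winding number is the net accumulated phase divided by 2π. The sketch below is a toy illustration with hypothetical phase functions, not the actual Bloch-wavefunction calculation, but it shows how "winding once" and "winding five times" are counted:

```python
import numpy as np

def winding_number(f, n_samples=2001):
    """Count how many times the complex function f(theta) winds around
    the origin as theta traverses the unit circle once, by accumulating
    the unwrapped phase and dividing the net change by 2*pi."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples)
    phase = np.unwrap(np.angle(f(theta)))
    return round((phase[-1] - phase[0]) / (2.0 * np.pi))

# A phase that wraps once vs. five times around the loop:
print(winding_number(lambda t: np.exp(1j * t)))  # 1
print(winding_number(lambda t: np.exp(5j * t)))  # 5
```

Because the result is an integer, it cannot change under small smooth deformations of f, which is what makes winding a robust, topological label.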

Extraordinary crystal

In the team’s new study, Todadri went back to work out how electron fractions could emerge from pentalayer graphene if not through the path he initially predicted. The physicists revisited their original hypothesis and realized they may have missed a key ingredient.

“The standard strategy in the field when figuring out what’s happening in any electronic system is to treat electrons as independent actors, and from that, figure out their topology, or winding,” Todadri explains. “But from Long’s experiments, we knew this approximation must be incorrect.”

While in most materials, electrons have plenty of space to repel each other and zing about as independent agents, the particles are much more confined in two-dimensional structures such as pentalayer graphene. In such tight quarters, the team realized that electrons should also be forced to interact, behaving according to their quantum correlations in addition to their natural repulsion. When the physicists added interelectron interactions to their theory, they found it correctly predicted the winding that Ju observed for pentalayer graphene.

Once they had a theoretical prediction that matched with observations, the team could work from this prediction to identify a mechanism by which pentalayer graphene gave rise to fractional charge.

They found that the moiré arrangement of pentalayer graphene, in which each lattice-like layer of carbon atoms is arranged atop the other and on top of the boron nitride, induces a weak electrical potential. When electrons pass through this potential, they form a sort of crystal, or periodic formation, that confines them and forces them to interact through their quantum correlations. This electron tug-of-war creates a sort of cloud of possible physical states for each electron. Each cloud interacts with every other electron cloud in the crystal, producing a shared pattern of quantum correlations, a wavefunction, with the winding that should set the stage for electrons to split into fractions of themselves.
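The textbook version of how a weak periodic potential reorganizes free electrons is the nearly-free-electron model: a potential V·cos(Gx) mixes the plane waves at momenta k and k − G, and at the zone boundary k = G/2 the two degenerate states split by a gap of |V|. The sketch below is a deliberately simplified one-dimensional analogue (with ħ = m = 1), not the pentalayer calculation itself:

```python
import numpy as np

def zone_boundary_gap(V, G=1.0):
    """Nearly-free-electron model in 1D (hbar = m = 1): a weak potential
    V*cos(G*x) has Fourier components V/2 coupling plane waves |k> and
    |k - G>.  At the zone boundary k = G/2 the 2x2 Hamiltonian is
    [[k^2/2, V/2], [V/2, (k - G)^2/2]]; the degenerate kinetic energies
    split, opening a gap of 2*(V/2) = |V|."""
    k = G / 2
    H = np.array([[k**2 / 2, V / 2],
                  [V / 2, (k - G)**2 / 2]])
    e = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
    return e[1] - e[0]

print(round(zone_boundary_gap(0.2), 6))  # 0.2
```

Even an arbitrarily weak potential opens a gap here, which is the sense in which a faint moiré potential can still reorganize the electrons into a crystal-like state.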

“This crystal has a whole set of unusual properties that are different from ordinary crystals, and leads to many fascinating questions for future research,” Todadri says. “For the short term, this mechanism provides the theoretical foundation for understanding the observations of fractions of electrons in pentalayer graphene and for predicting other systems with similar physics.”

This work was supported, in part, by the National Science Foundation and the Simons Foundation. 

Is AI Making Jobs Harder? Not for Hourly Workers

Has AI forever changed the way we work? That depends on which “AI” you’re talking about. Artificial Intelligence describes a wide set of computing technologies that perform various functions. It’s not uncommon to have multiple types of AI in use within the same workplace – or…

Matthew Ikle, Chief Science Officer at SingularityNET – Interview Series

Matthew Ikle is the Chief Science Officer at SingularityNET, a company founded with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence. An ‘AGI’ that is not dependent on any central entity, that is open for anyone and not restricted to the…