Fast control methods enable record-setting fidelity in superconducting qubit

Quantum computing promises to solve certain complex problems exponentially faster than classical computers by using the principles of quantum mechanics to encode and manipulate information in quantum bits (qubits).

Qubits are the building blocks of a quantum computer. One challenge to scaling, however, is that qubits are highly sensitive to background noise and control imperfections, which introduce errors into the quantum operations and ultimately limit the complexity and duration of a quantum algorithm. To improve the situation, MIT researchers and researchers worldwide have continually focused on improving qubit performance. 

In new work, using a superconducting qubit called fluxonium, MIT researchers in the Department of Physics, the Research Laboratory of Electronics (RLE), and the Department of Electrical Engineering and Computer Science (EECS) developed two new control techniques to achieve a world-record single-qubit fidelity of 99.998 percent. This result complements then-MIT researcher Leon Ding’s demonstration last year of a 99.92 percent two-qubit gate fidelity.

The paper’s senior authors are David Rower PhD ’24, a recent physics postdoc in MIT’s Engineering Quantum Systems (EQuS) group and now a research scientist at the Google Quantum AI laboratory; Leon Ding PhD ’23 from EQuS, now leading the Calibration team at Atlantic Quantum; and William D. Oliver, the Henry Ellis Warren Professor of EECS and professor of physics, leader of EQuS, director of the Center for Quantum Engineering, and RLE associate director. The paper recently appeared in the journal PRX Quantum.

Decoherence and counter-rotating errors

A major challenge with quantum computation is decoherence, a process by which qubits lose their quantum information. For platforms such as superconducting qubits, decoherence stands in the way of realizing higher-fidelity quantum gates.

Quantum computers need to achieve high gate fidelities in order to implement sustained computation through protocols like quantum error correction. The higher the gate fidelity, the easier it is to realize practical quantum computing.

MIT researchers are developing techniques to make quantum gates, the basic operations of a quantum computer, as fast as possible in order to reduce the impact of decoherence. However, as gates get faster, another type of error, arising from counter-rotating dynamics, can be introduced because of the way qubits are controlled using electromagnetic waves. 

Single-qubit gates are usually implemented with a resonant pulse, which induces Rabi oscillations between the qubit states. When the pulses are too fast, however, these “Rabi gates” become less consistent, owing to unwanted errors from counter-rotating effects. The faster the gate, the more pronounced the counter-rotating error becomes. For low-frequency qubits such as fluxonium, counter-rotating errors limit the fidelity of fast gates.

“Getting rid of these errors was a fun challenge for us,” says Rower. “Initially, Leon had the idea to utilize circularly polarized microwave drives, analogous to circularly polarized light, but realized by controlling the relative phase of charge and flux drives of a superconducting qubit. Such a circularly polarized drive would ideally be immune to counter-rotating errors.”

While Ding’s idea worked immediately, the fidelities achieved with circularly polarized drives were not as high as expected from coherence measurements.

“Eventually, we stumbled on a beautifully simple idea,” says Rower. “If we applied pulses at exactly the right times, we should be able to make counter-rotating errors consistent from pulse-to-pulse. This would make the counter-rotating errors correctable. Even better, they would be automatically accounted for with our usual Rabi gate calibrations!”

They called this idea “commensurate pulses,” since the pulses needed to be applied at times commensurate with the qubit’s oscillation period (the inverse of its frequency). Commensurate pulses are defined simply by timing constraints and can be applied to a single linear qubit drive. In contrast, circularly polarized microwaves require two drives and some extra calibration.
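In a simplified single-qubit picture (a schematic sketch, not the paper’s full model), the origin of the error and the commensurability condition look like this:

```latex
% Linear resonant drive (\omega = \omega_q) on a qubit:
H(t) = \frac{\Omega(t)}{2}\cos(\omega t + \phi)\,\sigma_x

% In the frame rotating at \omega_q, the rotating-wave approximation (RWA)
% keeps only the slow terms:
H_{\mathrm{RWA}} = \frac{\Omega(t)}{4}\left(e^{-i\phi}\sigma_+ + e^{i\phi}\sigma_-\right)

% The discarded counter-rotating terms oscillate at 2\omega:
H_{\mathrm{CR}}(t) = \frac{\Omega(t)}{4}\left(e^{i(2\omega t + \phi)}\sigma_+ + \mathrm{h.c.}\right)

% Commensurate pulses: start each pulse at a time t_k satisfying
%   \omega_q t_k = 2\pi k, \quad k \in \mathbb{Z},
% so the counter-rotating phase 2\omega t_k is identical (mod 2\pi) for every
% pulse; the residual error then repeats pulse-to-pulse and can be absorbed
% into the usual gate calibration.
```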

“I had much fun developing the commensurate technique,” says Rower. “It was simple, we understood why it worked so well, and it should be portable to any qubit suffering from counter-rotating errors!”

“This project makes it clear that counter-rotating errors can be dealt with easily. This is a wonderful thing for low-frequency qubits such as fluxonium, which are looking more and more promising for quantum computing.”

Fluxonium’s promise

Fluxonium is a type of superconducting qubit made up of a capacitor and a Josephson junction; unlike transmon qubits, however, fluxonium also includes a large “superinductor,” which by design helps protect the qubit from environmental noise. As a result, its logical operations, or gates, can be performed with greater accuracy.

Despite its higher coherence, however, fluxonium has a lower qubit frequency, which is generally associated with proportionally longer gates.

“Here, we’ve demonstrated a gate that is among the fastest and highest-fidelity across all superconducting qubits,” says Ding. “Our experiments really show that fluxonium is a qubit that supports both interesting physical explorations and also absolutely delivers in terms of engineering performance.”

With further research, they hope to reveal new limitations and yield even faster and higher-fidelity gates.

“Counter-rotating dynamics have been understudied in the context of superconducting quantum computing because of how well the rotating-wave approximation holds in common scenarios,” says Ding. “Our paper shows how to precisely calibrate fast, low-frequency gates where the rotating-wave approximation does not hold.”

Physics and engineering team up

“This is a wonderful example of the type of work we like to do in EQuS, because it leverages fundamental concepts in both physics and electrical engineering to achieve a better outcome,” says Oliver. “It builds on our earlier work with non-adiabatic qubit control, applies it to a new qubit — fluxonium — and makes a beautiful connection with counter-rotating dynamics.”

The science and engineering teams enabled the high fidelity in two ways. First, the team demonstrated “commensurate” (synchronous) non-adiabatic control, which goes beyond the “rotating-wave approximation” of standard Rabi approaches. This leverages ideas that won the 2023 Nobel Prize in Physics for ultrafast “attosecond” pulses of light.

Secondly, they demonstrated it using an analog to circularly polarized light. Rather than a physical electromagnetic field with a rotating polarization vector in real x-y space, they realized a synthetic version of circularly polarized light using the qubit’s x-y space, which in this case corresponds to its magnetic flux and electric charge.
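Schematically (an idealized two-level sketch, not the experimental Hamiltonian), driving the two quadratures with a 90-degree relative phase removes the counter-rotating term entirely:

```latex
% Two drives (here, flux and charge) acting along \sigma_x and \sigma_y
% with a 90-degree relative phase:
H_{\mathrm{circ}}(t) = \frac{\Omega}{2}\left[\cos(\omega t)\,\sigma_x + \sin(\omega t)\,\sigma_y\right]
                     = \frac{\Omega}{2}\left(e^{-i\omega t}\sigma_+ + e^{i\omega t}\sigma_-\right)

% In the frame rotating at \omega = \omega_q this becomes time-independent,
% with no counter-rotating (2\omega) term left over:
\tilde{H}_{\mathrm{circ}} = \frac{\Omega}{2}\,\sigma_x
```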

The combination of a new take on an existing qubit design (fluxonium) and the application of advanced control methods grounded in an understanding of the underlying physics enabled this result.

Platform-independent and requiring no additional calibration overhead, this work establishes straightforward strategies for mitigating counter-rotating effects from strong drives in circuit quantum electrodynamics and other platforms, which the researchers expect to be helpful in the effort to realize high-fidelity control for fault-tolerant quantum computing.

Adds Oliver, “With the recent announcement of Google’s Willow quantum chip that demonstrated quantum error correction beyond threshold for the first time, this is a timely result, as we have pushed performance even higher. Higher-performing qubits will lead to lower overhead requirements for implementing error correction.”

Other researchers on the paper are RLE’s Helin Zhang, Max Hays, Patrick M. Harrington, Ilan T. Rosen, Simon Gustavsson, Kyle Serniak, Jeffrey A. Grover, and Junyoung An, who is also with EECS; and MIT Lincoln Laboratory’s Jeffrey M. Gertler, Thomas M. Hazard, Bethany M. Niedzielski, and Mollie E. Schwartz.

This research was funded, in part, by the U.S. Army Research Office, the U.S. Department of Energy Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage, U.S. Air Force, the U.S. Office of the Director of National Intelligence, and the U.S. National Science Foundation.  

New computational chemistry techniques accelerate the prediction of molecules and materials

Back in the old days — the really old days — the task of designing materials was laborious. Investigators, over the course of 1,000-plus years, tried to make gold by combining things like lead, mercury, and sulfur, mixed in what they hoped would be just the right proportions. Even famous scientists like Tycho Brahe, Robert Boyle, and Isaac Newton tried their hands at the fruitless endeavor we call alchemy.

Materials science has, of course, come a long way. For the past 150 years, researchers have had the benefit of the periodic table of elements to draw upon, which tells them that different elements have different properties, and one can’t magically transform into another. Moreover, in the past decade or so, machine learning tools have considerably boosted our capacity to determine the structure and physical properties of various molecules and substances. New research by a group led by Ju Li — the Tokyo Electric Power Company Professor of Nuclear Engineering at MIT and professor of materials science and engineering — offers the promise of a major leap in capabilities that can facilitate materials design. The results of their investigation are reported in a December 2024 issue of Nature Computational Science.

At present, most of the machine-learning models that are used to characterize molecular systems are based on density functional theory (DFT), which offers a quantum mechanical approach to determining the total energy of a molecule or crystal by looking at the electron density distribution — which is, basically, the average number of electrons located in a unit volume around each given point in space near the molecule. (Walter Kohn, who co-invented this theory 60 years ago, received a Nobel Prize in Chemistry for it in 1998.) While the method has been very successful, it has some drawbacks, according to Li: “First, the accuracy is not uniformly great. And, second, it only tells you one thing: the lowest total energy of the molecular system.”

“Couples therapy” to the rescue

His team is now relying on a different computational chemistry technique, also derived from quantum mechanics, known as coupled-cluster theory, or CCSD(T). “This is the gold standard of quantum chemistry,” Li comments. The results of CCSD(T) calculations are much more accurate than what you get from DFT calculations, and they can be as trustworthy as those currently obtainable from experiments. The problem is that carrying out these calculations on a computer is very slow, he says, “and the scaling is bad: If you double the number of electrons in the system, the computations become 100 times more expensive.” For that reason, CCSD(T) calculations have normally been limited to molecules with a small number of atoms — on the order of about 10. Anything much beyond that would simply take too long.
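Li’s “double the electrons, 100 times the cost” remark reflects the steep polynomial scaling of CCSD(T), conventionally quoted as O(N^7); the exponents below are standard textbook values, not figures from this paper:

```python
# CCSD(T) cost scales roughly as N^7 in system size N (electrons / basis
# functions); DFT, by comparison, scales closer to N^3.
def relative_cost(scale_factor: float, exponent: int) -> float:
    """Cost multiplier when the system size grows by scale_factor."""
    return scale_factor ** exponent

print(relative_cost(2, 7))  # doubling the system: 2^7 = 128, i.e. "~100 times"
print(relative_cost(2, 3))  # DFT's cubic scaling: only an 8x increase
```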

That’s where machine learning comes in. CCSD(T) calculations are first performed on conventional computers, and the results are then used to train a neural network with a novel architecture specially devised by Li and his colleagues. After training, the neural network can perform these same calculations much faster by taking advantage of approximation techniques. What’s more, their neural network model can extract much more information about a molecule than just its energy. “In previous work, people have used multiple different models to assess different properties,” says Hao Tang, an MIT PhD student in materials science and engineering. “Here we use just one model to evaluate all of these properties, which is why we call it a ‘multi-task’ approach.”

The “Multi-task Electronic Hamiltonian network,” or MEHnet, sheds light on a number of electronic properties, such as the dipole and quadrupole moments, electronic polarizability, and the optical excitation gap — the amount of energy needed to take an electron from the ground state to the lowest excited state. “The excitation gap affects the optical properties of materials,” Tang explains, “because it determines the frequency of light that can be absorbed by a molecule.” Another advantage of their CCSD-trained model is that it can reveal properties of not only ground states, but also excited states. The model can also predict the infrared absorption spectrum of a molecule related to its vibrational properties, where the vibrations of atoms within a molecule are coupled to each other, leading to various collective behaviors.

The strength of their approach owes a lot to the network architecture. Drawing on the work of MIT Assistant Professor Tess Smidt, the team is utilizing a so-called E(3)-equivariant graph neural network, says Tang, “in which the nodes represent atoms and the edges that connect the nodes represent the bonds between atoms. We also use customized algorithms that incorporate physics principles — related to how people calculate molecular properties in quantum mechanics — directly into our model.”
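As a rough illustration of the graph encoding Tang describes (a minimal sketch using a hypothetical water molecule, not the team’s actual model code), each atom becomes a node carrying its element and 3D position, and each bond becomes an edge:

```python
# Minimal sketch of a molecular graph: nodes are atoms (with 3D positions,
# so an E(3)-equivariant network can respect rotations and translations),
# and edges are bonds. Illustrative water molecule; not the paper's code.
from dataclasses import dataclass

@dataclass
class Atom:
    element: str
    position: tuple  # (x, y, z) coordinates in angstroms

atoms = [
    Atom("O", (0.000, 0.000, 0.000)),
    Atom("H", (0.757, 0.586, 0.000)),
    Atom("H", (-0.757, 0.586, 0.000)),
]
bonds = [(0, 1), (0, 2)]  # edges: the two O-H bonds

# An equivariant model consumes (elements, positions, edges) and predicts
# several properties at once -- the "multi-task" idea described above.
print(len(atoms), len(bonds))
```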

Testing, 1, 2, 3

When tested on its analysis of known hydrocarbon molecules, the model of Li et al. outperformed DFT counterparts and closely matched experimental results taken from the published literature.

Qiang Zhu — a materials discovery specialist at the University of North Carolina at Charlotte (who was not part of this study) — is impressed by what’s been accomplished so far. “Their method enables effective training with a small dataset, while achieving superior accuracy and computational efficiency compared to existing models,” he says. “This is exciting work that illustrates the powerful synergy between computational chemistry and deep learning, offering fresh ideas for developing more accurate and scalable electronic structure methods.”

The MIT-based group applied their model first to small, nonmetallic elements — hydrogen, carbon, nitrogen, oxygen, and fluorine, from which organic compounds can be made — and has since moved on to examining heavier elements: silicon, phosphorus, sulfur, chlorine, and even platinum. After being trained on small molecules, the model can be generalized to bigger and bigger molecules. “Previously, most calculations were limited to analyzing hundreds of atoms with DFT and just tens of atoms with CCSD(T) calculations,” Li says. “Now we’re talking about handling thousands of atoms and, eventually, perhaps tens of thousands.”

For now, the researchers are still evaluating known molecules, but the model can be used to characterize molecules that haven’t been seen before, as well as to predict the properties of hypothetical materials that consist of different kinds of molecules. “The idea is to use our theoretical tools to pick out promising candidates, which satisfy a particular set of criteria, before suggesting them to an experimentalist to check out,” Tang says.

It’s all about the apps

Looking ahead, Zhu is optimistic about the possible applications. “This approach holds the potential for high-throughput molecular screening,” he says. “That’s a task where achieving chemical accuracy can be essential for identifying novel molecules and materials with desirable properties.”

Once they demonstrate the ability to analyze large molecules with perhaps tens of thousands of atoms, Li says, “we should be able to invent new polymers or materials” that might be used in drug design or in semiconductor devices. The examination of heavier transition metal elements could lead to the advent of new materials for batteries — presently an area of acute need.

The future, as Li sees it, is wide open. “It’s no longer about just one area,” he says. “Our ambition, ultimately, is to cover the whole periodic table with CCSD(T)-level accuracy, but at lower computational cost than DFT. This should enable us to solve a wide range of problems in chemistry, biology, and materials science. It’s hard to know, at present, just how wide that range might be.”

This work was supported by the Honda Research Institute. Hao Tang acknowledges support from the Mathworks Engineering Fellowship. The calculations in this work were performed, in part, on the Matlantis high-speed universal atomistic simulator, the Texas Advanced Computing Center, the MIT SuperCloud, and the National Energy Research Scientific Computing Center.

For healthy hearing, timing matters

When sound waves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that enables us to follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or crying baby.

Neurons send signals by emitting spikes — brief changes in voltage that propagate along nerve fibers, also known as action potentials. Remarkably, auditory neurons can fire hundreds of spikes per second, and time their spikes with exquisite precision to match the oscillations of incoming sound waves.

With powerful new models of human hearing, scientists at MIT’s McGovern Institute for Brain Research have determined that this precise timing is vital for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.

The open-access findings, reported Dec. 4 in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. MIT professor and McGovern investigator Josh McDermott, who led the research, explains that his team’s models better equip researchers to study the consequences of different types of hearing impairment and devise more effective interventions.

Science of sound

The nervous system’s auditory signals are timed so precisely that researchers have long suspected timing is important to our perception of sound. Sound waves oscillate at rates that determine their pitch: Low-pitched sounds travel in slow waves, whereas high-pitched sound waves oscillate more frequently. The auditory nerve that relays information from sound-detecting hair cells in the ear to the brain generates electrical spikes that correspond to the frequency of these oscillations. “The action potentials in an auditory nerve get fired at very particular points in time relative to the peaks in the stimulus waveform,” explains McDermott, who is also associate head of the MIT Department of Brain and Cognitive Sciences.

This relationship, known as phase-locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven’t really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: “If you want to design a prosthesis that provides electrical signals to the brain to reproduce the function of the ear, it’s arguably pretty important to know what kinds of information in the normal ear actually matter,” he says.
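A toy simulation makes phase-locking concrete (entirely illustrative; real auditory-nerve fibers also skip cycles and jitter): each spike lands at the same phase of every stimulus cycle, so spike times directly encode the waveform’s frequency.

```python
import math

def phase_locked_spikes(freq_hz: float, duration_s: float, phase: float = 0.0):
    """Toy model: one spike per stimulus cycle, locked to a fixed phase."""
    period = 1.0 / freq_hz
    n_cycles = int(duration_s * freq_hz)
    offset = phase / (2 * math.pi) * period  # convert phase to a time offset
    return [k * period + offset for k in range(n_cycles)]

# A 200 Hz tone over 20 ms gives 4 spikes spaced one period (5 ms) apart --
# sub-millisecond regularity that a downstream decoder could read out as pitch.
spikes = phase_locked_spikes(200.0, 0.02)
print([round(t, 6) for t in spikes])  # [0.0, 0.005, 0.01, 0.015]
```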

This has been difficult to study experimentally; animal models can’t offer much insight into how the human brain extracts structure in language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler PhD ’24 turned to artificial neural networks.

Artificial hearing

Neuroscientists have long used computational models to explore how sensory information might be decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with these prior models is that they’re often way too good,” says Saddler, who is now at the Technical University of Denmark. For example, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people who are asked to do the same thing. “This is not the kind of task that we do every day in hearing,” Saddler points out. “The brain is not optimized to solve this very artificial task.” This mismatch limited the insights that could be drawn from this prior generation of models.

To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do things that people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from some 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.

The researchers showed that their model replicated human hearing well — better than any previous model of auditory behavior, McDermott says. In one test, the artificial neural network was asked to recognize words and voices within dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. Under every condition, the model performed very similarly to humans.

When the team degraded the timing of the spikes in the simulated ear, however, their model could no longer match humans’ ability to recognize voices or identify the locations of sounds. For example, while McDermott’s team had previously shown that people use pitch to help them identify people’s voices, the model revealed that this ability is lost without precisely timed signals. “You need quite precise spike timing in order to both account for human behavior and to perform well on the task,” Saddler says. That suggests that the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.

The team’s findings demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear influences our perception of the world, both when hearing is intact and when it is impaired. “The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors,” McDermott says.

“Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, ‘If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?’” McDermott says. “That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants.” For example, he says, “The cochlear implant is limited in various ways — it can do some things and not others. What’s the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that.”

Q&A: The climate impact of generative AI

Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

Q: What trends are you seeing in terms of how generative AI is being used in computing?

A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is fed into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we’ve seen an explosion in the number of projects that need access to high-performance computing for generative AI. We’re also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations can keep up.

We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science. We can’t predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

Q: What strategies is the LLSC using to mitigate this climate impact?

A: We’re always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

As one example, we’ve been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 percent to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.
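The arithmetic behind power-capping can be sketched with illustrative numbers (these are not the LLSC’s measurements): the cap cuts power draw by more than it stretches runtime, so total energy falls.

```python
def energy_kwh(power_watts: float, runtime_hours: float) -> float:
    """Energy consumed by a device drawing constant power for a given time."""
    return power_watts * runtime_hours / 1000.0

# Illustrative numbers: capping a 300 W GPU to 225 W (a 25% power cut)
# might slow a job by only ~5%, since GPUs run inefficiently near peak power.
uncapped = energy_kwh(300, 10.0)   # 3.0 kWh
capped = energy_kwh(225, 10.5)     # 2.3625 kWh
savings = 1 - capped / uncapped
print(f"{savings:.0%} energy saved")  # in the 20-30% range cited above
```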

Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

We also realized that a lot of the energy spent on computing is often wasted, like how a water leak increases your bill but without any benefits to your home. We developed some new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

Q: What’s an example of a project you’ve done that reduces the energy output of a generative AI program?

A: We recently built a climate-aware computer vision tool. Computer vision is a domain that’s focused on applying AI to images; so, differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or a much higher-fidelity version of the model in times of low carbon intensity.
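The switching logic Gadepally describes reduces, in skeleton form, to a threshold on live carbon telemetry (the model names and threshold below are hypothetical):

```python
def pick_model(grid_carbon_gco2_per_kwh: float, threshold: float = 400.0) -> str:
    """Switch model variants based on live grid carbon intensity.

    Hypothetical threshold; real telemetry would come from the local grid.
    """
    if grid_carbon_gco2_per_kwh > threshold:
        return "small-efficient-model"    # fewer parameters, less energy
    return "large-high-fidelity-model"    # better quality when the grid is clean

print(pick_model(550.0))  # dirty grid -> energy-efficient variant
print(pick_model(120.0))  # clean grid -> higher-fidelity variant
```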

By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks such as text summarization and found the same results. Interestingly, the performance sometimes improved after using our technique!

Q: What can we do as consumers of generative AI to help mitigate its climate impact?

A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight’s carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms. People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations.

There are many cases where customers would be happy to make a trade-off if they knew the trade-off’s impact.

Q: What do you see for the future?

A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We’re doing a lot of work here at Lincoln Laboratory, but it’s only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide “energy audits” to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

If you’re interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.


Physicists measure quantum geometry for the first time

MIT physicists and colleagues have for the first time measured the geometry, or shape, of electrons in solids at the quantum level. Scientists have long known how to measure the energies and velocities of electrons in crystalline materials, but until now, those systems’ quantum geometry could only be inferred theoretically, or sometimes not at all.

The work, reported in the Nov. 25 issue of Nature Physics, “opens new avenues for understanding and manipulating the quantum properties of materials,” says Riccardo Comin, MIT’s Class of 1947 Career Development Associate Professor of Physics and leader of the work.

“We’ve essentially developed a blueprint for obtaining some completely new information that couldn’t be obtained before,” says Comin, who is also affiliated with MIT’s Materials Research Laboratory and the Research Laboratory of Electronics.

The work could be applied to “any kind of quantum material, not just the one we worked with,” says Mingu Kang PhD ’23, first author of the Nature Physics paper who conducted the work as an MIT graduate student and who is now a Kavli Postdoctoral Fellow at Cornell University’s Laboratory of Atomic and Solid State Physics. 

Kang was also invited to write an accompanying research briefing on the work, including its implications, for the Nov. 25 issue of Nature Physics.

A weird world

In the weird world of quantum physics, an electron can be described as both a point in space and a wave-like shape. At the heart of the current work is a fundamental object known as a wave function that describes the latter. “You can think of it like a surface in a three-dimensional space,” says Comin.

There are different types of wave functions, ranging from the simple to the complex. Think of a ball. That is analogous to a simple, or trivial, wave function. Now picture a Möbius strip, the kind of structure explored by M.C. Escher in his art. That’s analogous to a complex, or nontrivial, wave function. And the quantum world is filled with materials composed of the latter.

But until now, the quantum geometry of wave functions could only be inferred theoretically, or sometimes not at all. And the property is becoming more and more important as physicists find more and more quantum materials with potential applications in everything from quantum computers to advanced electronic and magnetic devices.

The MIT team solved the problem using a technique called angle-resolved photoemission spectroscopy, or ARPES. Comin, Kang, and some of the same colleagues had used the technique in other research. For example, in 2022 they reported discovering the “secret sauce” behind exotic properties of a new quantum material known as a kagome metal. That work, too, appeared in Nature Physics. In the current work, the team adapted ARPES to measure the quantum geometry of a kagome metal.

Close collaborations

Kang stresses that the new ability to measure the quantum geometry of materials “comes from the close cooperation between theorists and experimentalists.”

The Covid-19 pandemic, too, had an impact. Kang, who is from South Korea, was based in that country during the pandemic. “That facilitated a collaboration with theorists in South Korea,” says Kang, an experimentalist.

The pandemic also led to an unusual opportunity for Comin. He traveled to Italy to help run the ARPES experiments at the Italian Light Source Elettra, a national laboratory. The lab was closed during the pandemic, but was starting to reopen when Comin arrived. He found himself alone, however, when Kang tested positive for Covid and couldn’t join him. So he unexpectedly ran the experiments himself, with the support of local scientists. “As a professor, I lead projects, but students and postdocs actually carry out the work. So this is basically the last study where I actually contributed to the experiments themselves,” he says with a smile.

In addition to Kang and Comin, additional authors of the Nature Physics paper are Sunje Kim of Seoul National University (Kim is a co-first author with Kang); Paul M. Neves, a graduate student in the MIT Department of Physics; Linda Ye of Stanford University; Junseo Jung of Seoul National University; Denny Puntel of the University of Trieste; Federico Mazzola of Consiglio Nazionale delle Ricerche and Ca’ Foscari University of Venice; Shiang Fang of Google DeepMind; Chris Jozwiak, Aaron Bostwick, and Eli Rotenberg of Lawrence Berkeley National Laboratory; Jun Fuji and Ivana Vobornik of Consiglio Nazionale delle Ricerche; Jae-Hoon Park of Max Planck POSTECH/Korea Research Initiative and Pohang University of Science and Technology; Joseph G. Checkelsky, associate professor of physics at MIT; and Bohm-Jung Yang of Seoul National University, who co-led the research project with Comin.

This work was funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation, the Gordon and Betty Moore Foundation, the National Research Foundation of Korea, the Samsung Science and Technology Foundation, the U.S. Army Research Office, the U.S. Department of Energy Office of Science, the Heising-Simons Physics Research Fellow Program, the Tsinghua Education Foundation, the NFFA-MUR Italy Progetti Internazionali facility, the Samsung Foundation of Culture, and the Kavli Institute at Cornell.

How AI Changed My Web Development Workflow – Speckyboy

We’re still in the early days of artificial intelligence (AI) in web development. The technology initially seemed like a gimmick (or a death sentence, depending on your point of view).

I have wrestled with what AI means for our industry during this time. There’s potential to make our jobs easier. However, there are also questions about where these large language models (LLMs) get their data and how accurate they are. Not to mention the eerie feeling of replacing humans.

Is the glass half empty or half full? Perhaps we won’t know for a few more years. New tools and technical evolutions are sure to come.

One thing I can say is that AI has changed my workflow. And it’s about more than saving time. Tools like ChatGPT and GitHub’s Copilot have also changed how I approach development projects.

Here’s a look at how AI has impacted me so far. I hope you see a thing or two that resonates.

A Solution to Coding Challenges

I’ll admit it: I’m not the world’s best coder. My experience with PHP and JavaScript is riddled with fits and starts. I tend to make some progress – only to run into a debugging nightmare. It’s those little things that get in the way, you know?

AI tools have, for better or worse, saved me from scouring Google for solutions. I can paste the offending code snippet into ChatGPT, and it will explain where I went wrong. It will even provide a corrected version.

Issues such as syntax errors aren’t easy to spot with the naked eye. Code editors can point them out but don’t often provide easy-to-understand suggestions. AI acts like an extremely patient friend who holds your hand during these tough times.

I feel guilty for contributing to the decline of traffic to Stack Overflow and similar communities. On the other hand, finding quick answers has lowered my stress levels.

This feature is invaluable for freelancers who work solo. It’s like having a colleague on hand, 24/7. And even when the answers aren’t 100% accurate, they usually lead you in the right direction.

Tools like GitHub Copilot are adept at troubleshooting code.

A More Holistic Approach to Development

Time is always of the essence when doing custom development work. Budgets are limited, and deadlines are short. Sometimes, this leads to a result that isn’t as good as it could be.

There’s also a tie-in with limited coding skills. It’s hard to build a top-notch feature when you spend most of your time debugging. I find myself rushing to get things done rather than polishing them.

AI has made a huge impact in this area. Its ability to quickly write code and solve problems allows me to approach projects differently. I can take a holistic view that encompasses how things should look and work.

The experience has been freeing. My focus has changed to implementing and then improving my code. There’s room to add details that make for a better user experience. I also have more time for quality control.

If anything, AI has brought some clarity to my usually-scattered mind. Working step-by-step through a project is easier than ever.

AI can increase efficiency, allowing you more time to improve your development projects.

A Newfound Confidence in What’s Possible

Custom development has always scared me. I often doubted something was possible until I made it work. Funny as it sounds, having AI’s help at my disposal has improved my confidence. I no longer cringe as much when a client asks for something I haven’t done before.

That doesn’t mean the build process will be worry-free. There will still be bumps in the road and unexpected challenges. However, combining AI with my existing knowledge empowers me to determine the best way forward.

I think this is another side effect of working solo. Each task feels like climbing a mountain when doing it alone. Having access to the right resources helps to ease the burden.

To be clear, I don’t equate AI with a human developer. People are much better at things like planning and critical thinking.

Machines can’t yet compete in that realm. They provide an extra depth of knowledge and boost productivity, though. That makes a big difference when you’re used to doing it all yourself.

It makes me feel like I can take on projects I might have turned down before.

ChatGPT serves as an interactive code reference.

AI Provides a Helping Hand When I Need It

Believe it or not, I don’t use AI every day. Most of my web design and development work originates from my imperfect brain. As does 100% of my writing. I’m happy to keep it that way.

However, I have experienced numerous occasions where an extra bit of help is appreciated. That led to a lot of frustration in the past. Having neither the time nor money to hire an expert for small tasks caused a lot of headaches.

That’s where AI has been most useful. Writing a WordPress function or manipulating the DOM with JavaScript can be accomplished with minimal effort. They’re no longer sticking points that hold me back.

In the end, that’s what tools are supposed to do. We use them to do more in less time. The tools may change, but the concept remains the same. Maybe AI is just the latest in a long line of them.


X-ray flashes from a nearby supermassive black hole accelerate mysteriously

One supermassive black hole has kept astronomers glued to their scopes for the last several years. First came a surprise disappearance, and now, a precarious spinning act.

The black hole in question is 1ES 1927+654, which is about as massive as a million suns and sits in a galaxy that is 270 million light-years away. In 2018, astronomers at MIT and elsewhere observed that the black hole’s corona — a cloud of whirling, white-hot plasma — suddenly disappeared, before reassembling months later. The brief though dramatic shut-off was a first in black hole astronomy.

Members of the MIT team have now caught the same black hole exhibiting more unprecedented behavior.

The astronomers have detected flashes of X-rays coming from the black hole at a steadily increasing clip. Over a period of two years, the flashes, at millihertz frequencies, increased from every 18 minutes to every seven minutes. This dramatic speed-up in X-rays has not been seen from a black hole until now.
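As a back-of-the-envelope check on the quoted rates, converting the flash periods into frequencies confirms the millihertz description (a short Python sketch; the helper name is my own):

```python
# Convert a repetition period in minutes to a frequency in millihertz.
def period_minutes_to_mhz(minutes: float) -> float:
    seconds = minutes * 60.0  # period in seconds
    return 1000.0 / seconds   # frequency = 1/period, expressed in mHz

# Periods reported for 1ES 1927+654: flashes every 18 minutes,
# speeding up to every 7 minutes over two years.
print(f"{period_minutes_to_mhz(18):.2f} mHz")  # ~0.93 mHz
print(f"{period_minutes_to_mhz(7):.2f} mHz")   # ~2.38 mHz
```

Both values sit squarely in the millihertz band, consistent with the article’s characterization.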

The researchers explored a number of scenarios for what might explain the flashes. They believe the most likely culprit is a spinning white dwarf — an extremely compact core of a dead star that is orbiting around the black hole and getting precariously closer to its event horizon, the boundary beyond which nothing can escape the black hole’s gravitational pull. If this is the case, the white dwarf must be pulling off an impressive balancing act, as it could be coming right up to the black hole’s edge without actually falling in.

“This would be the closest thing we know of around any black hole,” says Megan Masterson, a graduate student in physics at MIT, who co-led the discovery. “This tells us that objects like white dwarfs may be able to live very close to an event horizon for a relatively extended period of time.”

The researchers present their findings today at the 245th meeting of the American Astronomical Society and will publish the results in a forthcoming paper in Nature.

If a white dwarf is at the root of the black hole’s mysterious flashing, it would also give off gravitational waves, in a range that would be detectable by next-generation space-based observatories such as the Laser Interferometer Space Antenna (LISA).

“These new detectors are designed to detect oscillations on the scale of minutes, so this black hole system is in that sweet spot,” says co-author Erin Kara, associate professor of physics at MIT.

The study’s other co-authors include MIT Kavli members Christos Panagiotou, Joheen Chakraborty, Kevin Burdge, Riccardo Arcodia, Ronald Remillard, and Jingyi Wang, along with collaborators from multiple other institutions.

Nothing normal

Kara and Masterson were part of the team that observed 1ES 1927+654 in 2018, as the black hole’s corona went dark, then slowly rebuilt itself over time. For a while, the newly reformed corona — a cloud of highly energetic plasma and X-rays — was the brightest X-ray-emitting object in the sky.

“It was still extremely bright, though it wasn’t doing anything new for a couple years and was kind of gurgling along. But we felt we had to keep monitoring it because it was so beautiful,” Kara says. “Then we noticed something that has never really been seen before.”

In 2022, the team looked through observations of the black hole taken by the European Space Agency’s XMM-Newton, a space-based observatory that detects and measures X-ray emissions from black holes, neutron stars, galactic clusters, and other extreme cosmic sources. They noticed that X-rays from the black hole appeared to pulse with increasing frequency. Such “quasi-periodic oscillations” have only been observed in a handful of other supermassive black holes, where X-ray flashes appear with regular frequency.

Radio images of 1ES 1927+654 reveal emerging structures that appear to be jets of plasma erupting from both sides of the galaxy’s central black hole following a strong radio flare. The first image, taken in June 2023, shows no sign of the jet, likely because hot gas screened it from view. Then, starting in February 2024, the features emerge and expand away from the galaxy’s center, covering a total distance of about half a light-year as measured from the center of each structure.

Credit: NRAO/Meyer et al. 2025


In the case of 1ES 1927+654, the flickering seemed to steadily ramp up, from every 18 minutes to every seven minutes over the span of two years.

“We’ve never seen this dramatic variability in the rate at which it’s flashing,” Masterson says. “This looked absolutely nothing like a normal supermassive black hole.”

The fact that the flashing was detected in the X-ray band points to the strong possibility that the source is somewhere very close to the black hole. The innermost regions of a black hole are extremely high-energy environments, where X-rays are produced by fast-moving, hot plasma. X-rays are less likely to be seen at farther distances, where gas can circle more slowly in an accretion disk. The cooler environment of the disk can emit optical and ultraviolet light, but rarely gives off X-rays.

“Seeing something in the X-rays is already telling you you’re pretty close to the black hole,” Kara says. “When you see variability on the timescale of minutes, that’s close to the event horizon, and the first thing your mind goes to is circular motion, and whether something could be orbiting around the black hole.”

X-ray kick-up

Whatever was producing the X-ray flashes was doing so at an extremely close distance from the black hole, which the researchers estimate to be within a few million miles of the event horizon.

Masterson and Kara explored models for various astrophysical phenomena that could explain the X-ray patterns that they observed, including a possibility relating to the black hole’s corona.

“One idea is that this corona is oscillating, maybe blobbing back and forth, and if it starts to shrink, those oscillations get faster as the scales get smaller,” Masterson says. “But we’re in the very early stages of understanding coronal oscillations.”

Another promising scenario, and one that scientists have a better grasp on in terms of the physics involved, has to do with a daredevil of a white dwarf. According to their modeling, the researchers estimate the white dwarf could have been about one-tenth the mass of the sun. In contrast, the supermassive black hole itself is on the order of 1 million solar masses.

When any object gets this close to a supermassive black hole, gravitational waves are expected to be emitted, dragging the object closer to the black hole. As it circles closer, the white dwarf moves at a faster rate, which can explain the increasing frequency of X-ray oscillations that the team observed.

The white dwarf is practically at the precipice of no return and is estimated to be just a few million miles from the event horizon. However, the researchers predict that the star will not fall in. While the black hole’s gravity may pull the white dwarf inward, the star is also shedding part of its outer layer into the black hole. This shedding acts as a small kick-back, such that the white dwarf — an incredibly compact object itself — can resist crossing the black hole’s boundary.

“Because white dwarfs are small and compact, they’re very difficult to shred apart, so they can be very close to a black hole,” Kara says. “If this scenario is correct, this white dwarf is right at the turnaround point, and we may see it get further away.”

The team plans to continue observing the system, with existing and future telescopes, to better understand the extreme physics at work in a black hole’s innermost environments. They are particularly excited to study the system once the space-based gravitational-wave detector LISA launches — currently planned for the mid 2030s — as the gravitational waves that the system should give off will be in a sweet spot that LISA can clearly detect.

“The one thing I’ve learned with this source is to never stop looking at it because it will probably teach us something new,” Masterson says. “The next step is just to keep our eyes open.”

Study shows how households can cut energy costs

Many people around the globe are living in energy poverty, meaning they spend at least 8 percent of their annual household income on energy. Addressing this problem is not simple, but an experiment by MIT researchers shows that giving people better data about their energy use, plus some coaching on the subject, can lead them to substantially reduce their consumption and costs.

The experiment, based in Amsterdam, resulted in households cutting their energy expenses in half, on aggregate — a savings big enough to move three-quarters of them out of energy poverty.

“Our energy coaching project as a whole showed a 75 percent success rate at alleviating energy poverty,” says Joseph Llewellyn, a researcher with MIT’s Senseable City Lab and co-author of a newly published paper detailing the experiment’s results.

“Energy poverty afflicts families all over the world. With empirical evidence on which policies work, governments could focus their efforts more effectively,” says Fábio Duarte, associate director of MIT’s Senseable City Lab, and another co-author of the paper.

The paper, “Assessing the impact of energy coaching with smart technology interventions to alleviate energy poverty,” appears today in Nature Scientific Reports.

The authors are Llewellyn, who is also a researcher at the Amsterdam Institute for Advanced Metropolitan Solutions (AMS) and the KTH Royal Institute of Technology in Stockholm; Titus Venverloo, a research fellow at the MIT Senseable City Lab and AMS; Fábio Duarte, who is also a principal researcher at MIT’s Senseable City Lab; Carlo Ratti, director of the Senseable City Lab; Cecilia Katzeff; Fredrik Johansson; and Daniel Pargman of the KTH Royal Institute of Technology.

The researchers developed the study after engaging with city officials in Amsterdam. In the Netherlands, about 550,000 households, or 7 percent of the population, are considered to be in energy poverty; in the European Union, that figure is about 50 million. In the U.S., separate research has shown that about three in 10 households report trouble paying energy bills.

To conduct the experiment, the researchers ran two versions of an energy coaching intervention. In one version, 67 households received one report on their energy usage, along with coaching about how to increase energy efficiency. In the other version, 50 households received those things as well as a smart device giving them real-time updates on their energy consumption. (All households also received some modest energy-savings improvements at the outset, such as additional insulation.)

Across the two groups, homes typically reduced monthly consumption of electricity by 33 percent and gas by 42 percent. They lowered their bills by 53 percent, on aggregate, and the percentage of income they spent on energy dropped from 10.1 percent to 5.3 percent.
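Under the article’s 8 percent definition, the reported drop in income share is enough on its own to move a typical household out of energy poverty. A minimal sketch of that check (the income figure is hypothetical, chosen only for illustration):

```python
# Energy poverty, per the article: spending at least 8% of annual income on energy.
THRESHOLD = 0.08

def in_energy_poverty(annual_income: float, annual_energy_cost: float) -> bool:
    """True if energy costs consume at least the threshold share of income."""
    return annual_energy_cost / annual_income >= THRESHOLD

income = 25_000.0              # hypothetical annual household income, euros
cost_before = 0.101 * income   # 10.1% of income, the reported pre-coaching share
cost_after = 0.053 * income    # 5.3% of income, the reported post-coaching share

print(in_energy_poverty(income, cost_before))  # True  (above the 8% threshold)
print(in_energy_poverty(income, cost_after))   # False (below the 8% threshold)
```

Since the share of income, not the absolute cost, defines the threshold, the same conclusion holds for any income level with those before-and-after percentages.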

What were these households doing differently? Some of the biggest behavioral changes included things such as only heating rooms that were in use and unplugging devices not being used. Both of those changes save energy, but their benefits were not always understood by residents before they received energy coaching.

“The range of energy literacy was quite wide from one home to the next,” Llewellyn says. “And when I went somewhere as an energy coach, it was never to moralize about energy use. I never said, ‘Oh, you’re using way too much.’ It was always working on it with the households, depending on what people need for their homes.”

Intriguingly, the homes receiving the small devices that displayed real-time energy data only tended to use them for three or four weeks following a coaching visit. After that, people seemed to lose interest in very frequent monitoring of their energy use. And yet, a few weeks of consulting the devices tended to be long enough to get people to change their habits in a lasting way.

“Our research shows that smart devices need to be accompanied by a close understanding of what drives families to change their behaviors,” Venverloo says.

As the researchers acknowledge, working with consumers to reduce their energy consumption is just one way to help people escape energy poverty. Other “structural” factors that can help include lower energy prices and more energy-efficient buildings.

On the latter note, the current paper has given rise to a new experiment Llewellyn is developing with Amsterdam officials, to examine the benefits of retrofitting residential buildings to lower energy costs. In that case, local policymakers are trying to work out how to fund the retrofitting in such a way that landlords do not simply pass those costs on to tenants.

“We don’t want a household to save money on their energy bills if it also means the rent increases, because then we’ve just displaced expenses from one item to another,” Llewellyn says.

Households can also invest in products like better insulation themselves, for windows or heating components, although for low-income households, finding the money to pay for such things may not be trivial. That is especially the case, Llewellyn suggests, because energy costs can seem “invisible” and a lower priority than feeding and clothing a family.

“It’s a big upfront cost for a household that does not have 100 Euros to spend,” Llewellyn says. Compared to paying for other necessities, he notes, “Energy is often the thing that tends to fall last on their list. Energy is always going to be this invisible thing that hides behind the walls, and it’s not easy to change that.” 

Designing tiny filters to solve big problems

For many industrial processes, the typical way to separate gases, liquids, or ions is with heat, using slight differences in boiling points to purify mixtures. These thermal processes account for roughly 10 percent of the energy use in the United States.

MIT chemical engineer Zachary Smith wants to reduce costs and carbon footprints by replacing these energy-intensive processes with highly efficient filters that can separate gases, liquids, and ions at room temperature.

In his lab at MIT, Smith is designing membranes with tiny pores that can filter molecules based on their size. These membranes could be useful for purifying biogas, capturing carbon dioxide from power plant emissions, or generating hydrogen fuel.

“We’re taking materials that have unique capabilities for separating molecules and ions with precision, and applying them to applications where the current processes are not efficient, and where there’s an enormous carbon footprint,” says Smith, an associate professor of chemical engineering.

Smith and several former students have founded a company called Osmoses that is working toward developing these materials for large-scale use in gas purification. Removing the need for high temperatures in these widespread industrial processes could have a significant impact on energy consumption, potentially reducing it by as much as 90 percent.

“I would love to see a world where we could eliminate thermal separations, and where heat is no longer a problem in creating the things that we need and producing the energy that we need,” Smith says.

Hooked on research

As a high school student, Smith was drawn to engineering but didn’t have many engineering role models. Both of his parents were physicians, and they always encouraged him to work hard in school.

“I grew up without knowing many engineers, and certainly no chemical engineers. But I knew that I really liked seeing how the world worked. I was always fascinated by chemistry and seeing how mathematics helped to explain this area of science,” recalls Smith, who grew up near Harrisburg, Pennsylvania. “Chemical engineering seemed to have all those things built into it, but I really had no idea what it was.”

At Penn State University, Smith worked with a professor named Henry “Hank” Foley on a research project designing carbon-based materials to create a “molecular sieve” for gas separation. Through a time-consuming and iterative layering process, he created a sieve that could purify oxygen and nitrogen from air.

“I kept adding more and more coatings of a special material that I could subsequently carbonize, and eventually I started to get selectivity. In the end, I had made a membrane that could sieve molecules that only differed by 0.18 angstrom in size,” he says. “I got hooked on research at that point, and that’s what led me to do more things in the area of membranes.”

After graduating from college in 2008, Smith pursued graduate studies in chemical engineering at the University of Texas at Austin. There, he continued developing membranes for gas separation, this time using a different class of materials — polymers. By controlling polymer structure, he was able to create films with pores that filter out specific molecules, such as carbon dioxide or other gases.

“Polymers are a type of material that you can actually form into big devices that can integrate into world-class chemical plants. So, it was exciting to see that there was a scalable class of materials that could have a real impact on addressing questions related to CO2 and other energy-efficient separations,” Smith says.

After finishing his PhD, he decided he wanted to learn more chemistry, which led him to a postdoctoral fellowship at the University of California at Berkeley.

“I wanted to learn how to make my own molecules and materials. I wanted to run my own reactions and do it in a more systematic way,” he says.

At Berkeley, he learned how to make compounds called metal-organic frameworks (MOFs) — cage-like molecules that have potential applications in gas separation and many other fields. He also realized that while he enjoyed chemistry, he was definitely a chemical engineer at heart.

“I learned a ton when I was there, but I also learned a lot about myself,” he says. “As much as I love chemistry, work with chemists, and advise chemists in my own group, I’m definitely a chemical engineer, really focused on the process and application.”

Solving global problems

While interviewing for faculty jobs, Smith found himself drawn to MIT because of the mindset of the people he met.

“I began to realize not only how talented the faculty and the students were, but the way they thought was very different than other places I had been,” he says. “It wasn’t just about doing something that would move their field a little bit forward. They were actually creating new fields. There was something inspirational about the type of people that ended up at MIT who wanted to solve global problems.”

In his lab at MIT, Smith is now tackling some of those global problems, including water purification, critical element recovery, renewable energy, battery development, and carbon sequestration.

In a close collaboration with Yan Xia, a professor at Stanford University, Smith recently developed gas separation membranes that incorporate a novel type of polymer known as “ladder polymers,” which are currently being scaled for deployment at his startup. Historically, using polymers for gas separation has been limited by a tradeoff between permeability and selectivity — that is, membranes that permit a faster flow of gases through the membrane tend to be less selective, allowing impurities to get through.

Using ladder polymers, which consist of double strands connected by rung-like bonds, the researchers were able to create gas separation membranes that are both highly permeable and very selective. The boost in permeability — a 100- to 1,000-fold improvement over earlier materials — could enable membranes to replace some of the high-energy techniques now used to separate gases, Smith says.

“This allows you to envision large-scale industrial problems solved with miniaturized devices,” he says. “If you can really shrink down the system, then the solutions we’re developing in the lab could easily be applied to big industries like the chemicals industry.”

These developments and others have been part of a number of advancements made by collaborators, students, postdocs, and researchers who are part of Smith’s team.

“I have a great research team of talented and hard-working students and postdocs, and I get to teach on topics that have been instrumental in my own professional career,” Smith says. “MIT has been a playground to explore and learn new things. I am excited for what my team will discover next, and grateful for an opportunity to help solve many important global problems.”

Study suggests how the brain, with sleep, learns meaningful maps of spaces

On the first day of your vacation in a new city, your explorations expose you to innumerable individual places. While the memories of these spots (like a beautiful garden on a quiet side street) feel immediately indelible, it might be days before you have enough intuition about the neighborhood to direct a newer tourist to that same site and then maybe to the café you discovered nearby. A new study of mice by MIT neuroscientists at The Picower Institute for Learning and Memory provides new evidence for how the brain forms cohesive cognitive maps of whole spaces and highlights the critical importance of sleep for the process.

Scientists have known for decades that the brain devotes neurons in a region called the hippocampus to remembering specific locations. So-called “place cells” reliably activate when an animal is at the location the neuron is tuned to remember. But more useful than having markers of specific spaces is having a mental model of how they all relate in a continuous overall geography. Though such “cognitive maps” were formally theorized in 1948, neuroscientists have remained unsure of how the brain constructs them. The new study in the December edition of Cell Reports finds that the capability may depend upon subtle but meaningful changes over days in the activity of cells that are only weakly attuned to individual locations, but that increase the robustness and refinement of the hippocampus’s encoding of the whole space. With sleep, the study’s analyses indicate, these “weakly spatial” cells increasingly enrich neural network activity in the hippocampus to link together these places into a cognitive map.

“On Day 1, the brain doesn’t represent the space very well,” says lead author Wei Guo, a research scientist in the lab of senior author Matthew Wilson, the Sherman Fairchild Professor in The Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences. “Neurons represent individual locations, but together they don’t form a map. But on Day 5 they form a map. If you want a map, you need all these neurons to work together in a coordinated ensemble.”

Mice mapping mazes

To conduct the study, Guo and Wilson, along with labmates Jie “Jack” Zhang and Jonathan Newman, introduced mice to simple mazes of varying shapes and let them explore them freely for about 30 minutes a day for several days. Importantly, the mice were not directed to learn anything specific through the offer of any rewards. They just wandered. Previous studies have shown that mice naturally demonstrate “latent learning” of spaces from this kind of unrewarded experience after several days.

To understand how latent learning takes hold, Guo and his colleagues visually monitored hundreds of neurons in the CA1 area of the hippocampus by engineering cells to flash when a buildup of calcium ions made them electrically active. They not only recorded the neurons’ flashes when the mice were actively exploring, but also while they were sleeping. Wilson’s lab has shown that animals “replay” their previous journeys during sleep, essentially refining their memories by dreaming about their experiences.

Analysis of the recordings showed that the activity of the place cells developed immediately and remained strong and unchanged over several days of exploration. But this activity alone wouldn’t explain how latent learning or a cognitive map evolves over several days. So unlike in many other studies where scientists focus solely on the strong and clear activity of place cells, Guo extended his analysis to the more subtle and mysterious activity of cells that were not so strongly spatially tuned. 

Using an emerging technique called “manifold learning,” he was able to discern that many of the “weakly spatial” cells gradually correlated their activity not with locations, but with activity patterns among other neurons in the network. As this was happening, Guo’s analyses showed, the network encoded a cognitive map of the maze that increasingly resembled the literal, physical space.
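To give a flavor of what manifold learning does, here is a minimal, hypothetical sketch of one classic variant (an Isomap-style pipeline) applied to synthetic population activity; the data, parameters, and steps are illustrative and do not reproduce the study’s actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "population activity": 200 time points sampled along a
# circular track, embedded nonlinearly in a 3-D "neural" space + noise.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
activity = np.column_stack([np.cos(theta), np.sin(theta),
                            0.5 * np.cos(2 * theta)])
activity += 0.05 * rng.standard_normal(activity.shape)

# 1) Build a k-nearest-neighbor graph of pairwise distances, so that
#    only locally close activity patterns are directly connected.
k = 10
d = np.linalg.norm(activity[:, None] - activity[None, :], axis=-1)
graph = np.full_like(d, np.inf)
for i in range(len(d)):
    nbrs = np.argsort(d[i])[1:k + 1]
    graph[i, nbrs] = d[i, nbrs]
    graph[nbrs, i] = d[nbrs, i]

# 2) Geodesic distances along the graph (Floyd-Warshall shortest paths)
#    approximate distances along the underlying manifold.
g = graph.copy()
np.fill_diagonal(g, 0.0)
for m in range(len(g)):
    g = np.minimum(g, g[:, m:m + 1] + g[m:m + 1, :])

# 3) Classical MDS on the geodesic distances yields a low-dimensional
#    embedding whose geometry reflects the track's topology.
n = len(g)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (g ** 2) @ J
vals, vecs = np.linalg.eigh(B)
embedding = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))

print(embedding.shape)  # one 2-D point per time point
```

In an analysis of this kind, the recovered low-dimensional structure of the population activity can then be compared with the physical layout of the maze, which is the spirit of asking whether the network’s internal map comes to resemble the real space.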

“Although not responding to specific locations like strongly spatial cells, weakly spatial cells specialize in responding to ‘mental locations,’ i.e., specific ensemble firing patterns of other cells,” the study authors wrote. “If a weakly spatial cell’s mental field encompasses two subsets of strongly spatial cells that encode distinct locations, this weakly spatial cell can serve as a bridge between these locations.”

In other words, the activity of the weakly spatial cells likely stitches together the individual locations represented by the place cells into a mental map.

The need for sleep

Studies by Wilson’s lab and many others have shown that memories are consolidated, refined, and processed by neural activity, such as replay, that occurs during sleep and rest. Guo and Wilson’s team therefore sought to test whether sleep was necessary for the contribution of weakly spatial cells to latent learning of cognitive maps.

To do this, they let some mice explore a new maze twice during the same day with a three-hour siesta in between. Some of the mice were allowed to sleep during the break, but some were not. The mice that slept showed a significant refinement of their mental map, while those kept awake showed no such improvement. Not only did the network encoding of the map improve, but measures of the tuning of individual cells also showed that sleep helped cells become better attuned both to places and to patterns of network activity, the so-called “mental places” or “fields.”

Mental map meaning

The “cognitive maps” the mice encoded over several days were not literal, precise maps of the mazes, Guo notes. Instead they were more like schematics. Their value is that they provide the brain with a topology that can be explored mentally, without having to be in the physical space. For instance, once you’ve formed your cognitive map of the neighborhood around your hotel, you can plan the next morning’s excursion (e.g., you could imagine grabbing a croissant at the bakery you observed a few blocks west and then picture eating it on one of those benches you noticed in the park along the river).

Indeed, Wilson hypothesized that the weakly spatial cells’ activity may overlay salient non-spatial information that brings additional meaning to the maps (i.e., the idea of a bakery is not spatial, even if it’s closely linked to a specific location). The study, however, included no landmarks within the mazes and did not test any specific behaviors among the mice. But now that the study has identified that weakly spatial cells contribute meaningfully to mapping, Wilson said, future studies can investigate what kind of information they may be incorporating into the animals’ sense of their environments. After all, we seem to intuitively regard the spaces we inhabit as more than just sets of discrete locations.

“In this study we focused on animals behaving naturally and demonstrated that during freely exploratory behavior and subsequent sleep, in the absence of reinforcement, substantial neural plastic changes at the ensemble level still occur,” the authors concluded. “This form of implicit and unsupervised learning constitutes a crucial facet of human learning and intelligence, warranting further in-depth investigations.”

The Freedom Together Foundation, The Picower Institute, and the National Institutes of Health funded the study.