Transformers are Eating Quantum

DeepMind’s AlphaQubit addresses one of the main challenges in quantum computing.

Created Using Midjourney

Next Week in The Sequence:

  • Edge 451: Explores the ideas behind multi-teacher distillation including the MT-BERT paper. It also covers the Portkey framework for LLM guardrailing.

  • The Sequence Chat: We discuss the challenges of interpretability in the era of mega large models.

  • Edge 452: We explore the AI behind one of the most popular apps in the market: NotebookLM.

You can subscribe to The Sequence below:

TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

📝 Editorial: Transformers are Eating Quantum

Quantum computing is regarded by many as one of the next major technological revolutions, with the potential to transform scientific exploration and technological advancement. As in many other scientific fields, researchers are wondering what impact AI could have on quantum computing. Could the quantum revolution be powered by AI? Last week, we witnessed an intriguing example supporting this idea.

One of the biggest challenges in quantum computing lies in the inherent noise that plagues quantum processors. To unlock the full potential of quantum computing, effective error correction is paramount. Enter AlphaQubit—a cutting-edge AI system developed through a collaboration between Google DeepMind and Google Quantum AI. This innovation marks a significant leap toward achieving this goal.

At the core of AlphaQubit’s capabilities is its ability to accurately decode quantum errors. The system leverages a recurrent, transformer-based neural network architecture inspired by the successful use of Transformers in large language models (LLMs). AlphaQubit’s training involves a two-stage process: pre-training on simulated data and fine-tuning on experimental samples from Google’s Sycamore quantum processor. This strategy enables AlphaQubit to adapt and learn complex noise patterns directly from data, outperforming human-designed algorithms.
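
To make the two-stage recipe concrete, here is a minimal, hypothetical PyTorch sketch of the general pattern described above: a recurrent, transformer-style network that reads per-round syndrome measurements, is pre-trained on simulated data, and is then fine-tuned on experimental samples. The module names, shapes, and hyperparameters are illustrative assumptions, not AlphaQubit’s actual implementation.

```python
import torch
import torch.nn as nn

class RecurrentSyndromeDecoder(nn.Module):
    """Toy recurrent, transformer-based decoder for error-correction syndromes."""
    def __init__(self, n_stabilizers=48, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_stabilizers, d_model)              # embed each round's syndrome
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)   # attend across rounds
        self.recurrence = nn.GRU(d_model, d_model, batch_first=True)  # carries state over rounds
        self.head = nn.Linear(d_model, 1)                           # logical-error logit

    def forward(self, syndromes):                                   # (batch, rounds, n_stabilizers)
        x = self.transformer(self.embed(syndromes))
        _, h = self.recurrence(x)
        return self.head(h[-1]).squeeze(-1)                         # one prediction per run

def train_stage(model, loader, epochs, lr):
    """One training stage; called once per data source."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for syndromes, labels in loader:
            opt.zero_grad()
            loss_fn(model(syndromes), labels.float()).backward()
            opt.step()

decoder = RecurrentSyndromeDecoder()
dummy = torch.randn(4, 25, 48)              # 4 runs, 25 error-correction rounds
print(decoder(dummy).shape)                 # torch.Size([4])
# Stage 1: pre-train on large amounts of cheap, simulated syndrome data.
# train_stage(decoder, simulated_loader, epochs=10, lr=1e-3)
# Stage 2: fine-tune on scarcer experimental samples from real hardware.
# train_stage(decoder, experimental_loader, epochs=3, lr=1e-4)
```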

AlphaQubit’s contributions extend beyond accuracy. It can provide confidence levels for its results, enhancing quantum processor performance through more information-rich interfaces. Furthermore, its recurrent structure supports generalization to longer experiments, maintaining high performance well beyond its training data, scaling up to 100,000 rounds. These features, combined with its ability to handle soft readouts and leverage leakage information, establish AlphaQubit as a powerful tool for advancing future quantum systems.

While AlphaQubit represents a landmark achievement in applying machine learning to quantum error correction, challenges remain—particularly in speed and scalability. Overcoming these obstacles will require continued research and refinement of its architecture and training methodologies. Nevertheless, the success of AlphaQubit highlights the immense potential of AI to drive quantum computing forward, bringing us closer to a future where this revolutionary technology addresses humanity’s most complex challenges.

AI is transforming scientific fields across the board, and quantum computing is no exception. AlphaQubit has demonstrated the possibilities. Now, we await what’s next.

🔎 ML Research

AlphaQubit

Researchers from: Google DeepMind and Google Quantum AI published a paper detailing a new AI system that accurately identifies errors inside quantum computers. AlphaQubit, a neural-network based decoder drawing on Transformers, sets a new standard for accuracy when compared with the previous leading decoders and shows promise for use in larger and more advanced quantum computing systems in the future —> Read more.

Evals by Debate

Researchers from: BAAI published a paper exploring a novel way to evaluate LLMs: debate. FlagEval Debate, a multilingual platform that allows large models to compete against each other in debates, provides an in-depth evaluation framework for LLMs that goes beyond traditional static evaluations —> Read more.

OpenScholar

Researchers from: the University of Washington, the Allen Institute for AI, the University of Illinois Urbana-Champaign, Carnegie Mellon University, Meta, the University of North Carolina at Chapel Hill, and Stanford University published a paper detailing a specialized retrieval-augmented language model that answers scientific queries. OpenScholar identifies relevant passages from a datastore of 45 million open-access papers and synthesizes citation-backed responses to the queries —> Read more.
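
As a rough illustration of the retrieve-then-synthesize pattern described above, the toy sketch below pairs a TF-IDF retriever with a placeholder answer step. The corpus, document IDs, and function names are invented for the example and are not OpenScholar’s components.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in corpus; OpenScholar's real datastore holds ~45M open-access papers.
corpus = {
    "smith2023": "Surface-code decoders reduce logical error rates on quantum hardware.",
    "lee2022": "Retrieval-augmented generation grounds language model answers in documents.",
    "patel2021": "Transformer models scale favorably with data and parameters.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus.values())
doc_ids = list(corpus.keys())

def retrieve(query: str, k: int = 2):
    """Return the top-k passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [(doc_ids[i], corpus[doc_ids[i]]) for i in top]

def answer(query: str) -> str:
    passages = retrieve(query)
    # A real system would prompt an LLM with these passages; here we only cite them.
    cited = " ".join(f"{text} [{doc_id}]" for doc_id, text in passages)
    return f"Q: {query}\nA (citation-backed synthesis would go here): {cited}"

print(answer("How do decoders lower error rates in quantum computers?"))
```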

Hymba

This paper from researchers at NVIDIA introduces Hymba, a novel family of small language models. Hymba uses a hybrid architecture that blends transformer attention with state space models (SSMs), and incorporates learnable meta tokens and methods like cross-layer key-value sharing to optimize performance and reduce cache size —> Read more.
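
For intuition about what blending attention with a state-space path can look like, here is a toy PyTorch block that runs multi-head attention and a simple diagonal SSM recurrence in parallel and averages the two paths. The fusion rule, dimensions, and the omission of meta tokens and key-value sharing are simplifying assumptions, not Hymba’s actual design.

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Toy hybrid block: attention and a diagonal SSM run in parallel on the same input."""
    def __init__(self, d_model=64, n_heads=4, d_state=32):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Toy diagonal state-space recurrence: h_t = a * h_{t-1} + B x_t, y_t = C h_t
        self.log_a = nn.Parameter(torch.zeros(d_state))   # decay kept in (0, 1) via sigmoid
        self.B = nn.Linear(d_model, d_state)
        self.C = nn.Linear(d_state, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                                  # x: (batch, seq, d_model)
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        a = torch.sigmoid(self.log_a)
        bx = self.B(x)
        h = torch.zeros(x.size(0), a.numel(), device=x.device)
        ssm_states = []
        for t in range(x.size(1)):                          # sequential scan over time
            h = a * h + bx[:, t]
            ssm_states.append(self.C(h))
        ssm_out = torch.stack(ssm_states, dim=1)
        # Average the two paths; Hymba fuses parallel heads, this mean is a stand-in.
        return self.norm(x + 0.5 * (attn_out + ssm_out))

block = HybridBlock()
tokens = torch.randn(2, 16, 64)                             # (batch, seq, d_model)
print(block(tokens).shape)                                   # torch.Size([2, 16, 64])
```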

Marco-o1

Researchers from the MarcoPolo Team at Alibaba International Digital Commerce present Marco-o1, a large reasoning model inspired by OpenAI’s o1 and designed to tackle open-ended, real-world problems. The model integrates techniques like chain-of-thought fine-tuning, Monte Carlo Tree Search, and a reflection mechanism to improve its problem-solving abilities, particularly in scenarios involving complex reasoning and nuanced language translation —> Read more.

RedPajama-v2

Researchers from: Together, EleutherAI, LAION, and Ontocord published a paper detailing the process of creating RedPajama, a dataset for pre-training language models that is fully open and transparent. The RedPajama datasets comprise over 100 trillion tokens and have been used in the training of LLMs such as Snowflake Arctic, Salesforce’s XGen, and AI2’s OLMo —> Read more.

🤖 AI Tech Releases

DeepSeek-R1-Lite-Preview

DeepSeek unveiled its latest model, which excels at reasoning tasks —> Read more.

Judge Arena

Hugging Face released Judge Arena, a platform for benchmarking LLM-as-a-Judge models —> Read more.

Qwen2.5-Turbo

Alibaba unveiled Qwen2.5-Turbo with extended long context capabilities —> Read more.

Tülu 3

AI2 open sourced Tülu 3, a family of instruction-following models with fully open post-training recipes —> Read more.

Pixtral Large

Mistral open sourced Pixtral Large, a 124B multimodal model —> Read more.

Agentforce Testing Center

Salesforce released a new platform for testing AI agents —> Read more.

🛠 Real World AI

Recommendations at Meta

Meta’s engineering team discusses some of the sequence learning techniques used in their recommendation systems —> Read more.

📡 AI Radar
