The release of DeepSeek-R1 marks a before and after in the evolution of AI.
Next Week in The Sequence:
We dive into the ideas behind Speculative RAG, including some research in the space. The engineering section covers the super popular Dify AI framework. The research edition examines Large Action Models. In our opinion section, we debate a controversial question: whether AIs will be able to alter their own reward functions.
You can subscribe to The Sequence below:
📝 Editorial: The R1 Moment
In the evolution of any tech trend, there are moments that mark a before and an after. Generative AI has had a few of those, like the release of ChatGPT or the launch of Stable Diffusion. A few days ago, we witnessed the latest of these pivotal moments with the open-source release of DeepSeek-R1. The implications of this release are profound enough to divide the evolution of generative AI into a before and an after.
The release of DeepSeek-R1 has sent shockwaves through the AI community, marking a significant milestone in the development of open-source LLMs. This Chinese-developed AI has demonstrated remarkable capabilities, matching or even surpassing the performance of industry giants like OpenAI’s o1 on several key benchmarks. What makes this achievement particularly noteworthy is that DeepSeek-R1 accomplishes these feats at a fraction of the cost, operating at approximately 3% of the expenses associated with its competitors.
By achieving top-tier performance with limited resources, DeepSeek has shattered the notion that only well-funded organizations can lead in AI innovation. This breakthrough opens up new possibilities for building small models capable of advanced reasoning, even with modest budgets. The distillation of reasoning patterns from larger models into smaller ones has proven highly effective, with DeepSeek-R1-Distill-Qwen-7B outperforming much larger models on complex mathematical tasks.
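The distillation idea mentioned above, transferring a larger model's reasoning patterns into a smaller student, is commonly implemented as a soft-label objective: the student is trained to match the teacher's temperature-softened output distribution. A minimal, framework-free sketch (function names, temperature, and the toy logits are illustrative, not taken from the DeepSeek release):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened
    distributions -- the standard soft-label distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher incurs zero loss; a mismatched one does not.
loss_match = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
loss_off = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

A higher temperature softens both distributions, exposing the teacher's relative preferences among wrong answers, which is where much of the transferred "reasoning" signal lives.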
DeepSeek-R1’s innovative approach to model design and training sets it apart from traditional language models. Unlike conventional models that rely on supervised fine-tuning (SFT), DeepSeek-R1 employs a groundbreaking reinforcement learning (RL)-centric training pipeline. This unique methodology fosters advanced reasoning behaviors such as self-verification, reflection, and extended chain-of-thought (CoT) generation without the need for initial SFT. The model’s architecture, based on a Mixture of Experts (MoE) with 671 billion parameters, activates only 37 billion per forward pass, ensuring both computational efficiency and scalability. Additionally, DeepSeek-R1’s support for local execution addresses critical needs in privacy-sensitive industries and edge computing scenarios, offering a practical alternative to cloud-based deployments. These innovations collectively contribute to DeepSeek-R1’s ability to achieve state-of-the-art performance across various benchmarks while maintaining cost-effectiveness and accessibility.
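The sparse-activation property described above can be illustrated with a toy top-k gated Mixture of Experts: only the k highest-scoring experts run on a given input, so most parameters stay idle on any single forward pass. A hypothetical sketch (the expert count, gate values, and function names are invented for illustration; DeepSeek-R1's actual routing is far more elaborate):

```python
import math

def top_k_gating(gate_logits, k=2):
    """Select the k highest-scoring experts and renormalize their
    gate weights; all other experts are skipped entirely."""
    idx = sorted(range(len(gate_logits)),
                 key=lambda i: gate_logits[i], reverse=True)[:k]
    exps = [math.exp(gate_logits[i]) for i in idx]
    total = sum(exps)
    return {i: e / total for i, e in zip(idx, exps)}

def moe_forward(x, experts, gate_logits, k=2):
    """Sparse MoE layer: only the gated experts are evaluated,
    and their outputs are combined by the normalized gate weights."""
    gates = top_k_gating(gate_logits, k)
    return sum(w * experts[i](x) for i, w in gates.items())

# 8 toy experts, each a simple scalar function; only 2 run per input.
experts = [lambda x, s=s: s * x for s in range(1, 9)]
y = moe_forward(10.0, experts,
                gate_logits=[0.1, 0.3, 2.0, 0.2, 1.5, 0.0, 0.4, 0.1], k=2)
```

The same principle, scaled up, is how a 671-billion-parameter model can cost roughly as much per token as a dense 37-billion-parameter one.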
The open-source nature of DeepSeek-R1 represents a significant shift in the AI landscape. Released under the MIT license, the model’s code is freely available to developers worldwide, fostering collaboration and innovation on a global scale. This accessibility empowers smaller organizations and individual developers to leverage cutting-edge AI technology without incurring substantial costs, potentially accelerating the pace of AI advancements across various sectors.
For China’s AI aspirations, the success of DeepSeek-R1 is a powerful statement. It demonstrates that Chinese AI companies can not only compete with but potentially surpass Western counterparts, even in the face of trade restrictions and limited access to advanced hardware. The innovative approach taken by DeepSeek, focusing on algorithmic efficiency to overcome hardware limitations, showcases China’s ability to adapt and excel in challenging circumstances.
As we witness the unfolding impact of DeepSeek-R1, it’s clear that we are entering a new era of AI development characterized by greater accessibility, efficiency, and global collaboration. The model’s success challenges preconceived notions about the resources required for cutting-edge AI research and development, paving the way for a more diverse and inclusive AI ecosystem. While questions remain about the long-term implications and potential limitations of this approach, the R1 moment undoubtedly marks a turning point in the democratization of AI technology.
🔎 AI Research
DeepSeek-R1
In the paper “DeepSeek-R1: Unleashing the Power of Reasoning via RL” researchers from DeepSeek AI describe their DeepSeek-R1 model, highlighting its performance on various benchmarks including AIME 2024, Codeforces, GPQA Diamond, MATH-500, MMLU, and SWE-bench. The paper details that Reinforcement Learning (RL) is used to develop reasoning capabilities without supervised data, and that post-training methods enhance reasoning accuracy and align with social values.
CUA
In “Computer-Using Agent,” OpenAI introduces a new AI model, the Computer-Using Agent (CUA), designed to interact with the digital world through a universal interface. CUA combines GPT-4o’s vision capabilities with advanced reasoning through reinforcement learning, enabling it to interact with graphical user interfaces (GUIs) just like humans.
Private AI
In the paper “Communication Efficient Secure and Private Multi-Party Deep Learning”, the authors detail an algorithm for n-party training using secure Multi-Party Computation (MPC) primitives, focusing on privacy-accuracy trade-offs and efficiency compared to related work. The paper discusses differential privacy and secure computation techniques.
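As a concrete illustration of the privacy side of that trade-off, the classic analytic Gaussian mechanism releases a value after adding noise calibrated to an (epsilon, delta) privacy budget. A standalone sketch of that textbook mechanism (not the paper's actual MPC protocol; function names and parameter values are illustrative):

```python
import math
import random

def noise_scale(sensitivity, epsilon, delta):
    """Standard deviation of Gaussian noise for (epsilon, delta)-DP,
    per the classic analytic Gaussian mechanism bound."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def private_release(value, sensitivity, epsilon, delta, rng=random):
    """Release a query result with calibrated Gaussian noise added."""
    return value + rng.gauss(0.0, noise_scale(sensitivity, epsilon, delta))

# Tighter privacy (smaller epsilon) demands proportionally more noise.
sigma_strict = noise_scale(1.0, 1.0, 1e-5)
sigma_loose = noise_scale(1.0, 2.0, 1e-5)
```

The inverse relationship between epsilon and the noise scale is exactly the privacy-accuracy trade-off the paper studies: stronger guarantees mean noisier, less accurate outputs.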
Learn-by-interact
In the paper “Learn-by-interact: A Data-Centric Framework for Self-Adaptive Agents in Realistic Environments” from Google and The University of Hong Kong, the authors introduce the “Learn-by-interact” framework, which synthesizes agent data by leveraging interactions between Large Language Models (LLMs) and environments, using self-instruct and backward construction methods to create instruction-trajectory pairs. This framework is designed to enable LLMs to adapt to new environments and handle diverse tasks, with experiments on SWE-bench, WebArena, OSWorld and Spider2-V showing improved performance, especially with the backward construction method.
A Blueprint of Reasoning Models
In the paper “A Blueprint for Reasoning Language Models” researchers from ETH Zurich, Cledar, BASF SE, and Cyfronet AGH present a blueprint for Reasoning Language Models (RLMs), also known as Large Reasoning Models (LRMs), which extends Large Language Models (LLMs) with advanced reasoning mechanisms. It includes a modular framework, diverse reasoning structures and strategies, and Reinforcement Learning (RL) concepts, providing a modular implementation for rapid RLM prototyping and experimentation.
Agent-R
In the paper “Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training”, the authors from Fudan University and ByteDance Seed introduce Agent-R, an iterative self-training framework that allows language agents to reflect and correct their actions in interactive environments, using Monte Carlo Tree Search (MCTS) to generate revision trajectories. This enables agents to dynamically detect and correct errors, emphasizing that training with revision trajectories outperforms the use of expert trajectories.
🤖 AI Tech Releases
Operator
OpenAI launched its agent for web-based workflows.
Perplexity Assistant
Perplexity unveiled Assistant, an agent for daily tasks.
SmolVLM
HuggingFace open sourced SmolVLM, two of the smallest foundation models ever built.
🛠 Real World AI
Salesforce’s EA Agent
Salesforce discusses the implementation of their enterprise architecture agent.
📡AI Radar
- The new US administration unveiled Stargate, a new $500 billion initiative to invest in AI infrastructure.
- Anthropic announced another $1 billion investment, this time from Google.
- ElevenLabs raised a new round at a $3 billion valuation.
- Anthropic’s CEO, Dario Amodei, gave a very insightful interview to the Wall Street Journal.
- Origami, an AI agent platform for sales, raised $2 million.
- API powerhouse Postman announced a new agentic platform.
- An AI platform for clinical trials raised $55 million in new funding.
- Vertice, an AI procurement platform, announced a $50 million round.