Microsoft’s New Framework for Multi-Agent Systems

Magentic-One streamlines the implementation of multi-agent systems for solving complex tasks.


Next Week in The Sequence:

  • Edge 447: We start our series about knowledge distillation, diving into the different types of distillation and reviewing the paper that introduced the concepts behind modern distillation. We also cover the Haystack framework for RAG applications.

  • The Sequence Chat: We discuss another controversial topic in generative AI.

  • Edge 448: We review Meta AI’s method for developing thinking LLMs.

You can subscribe to The Sequence below:

TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

📝 Editorial: Magentic-One

Magentic-One: A Multi-Agent System for Complex Tasks

Multi-agent systems are one of the most fascinating areas of generative AI. We are barely getting single agents to work reliably, so building systems that combine several agents is fundamentally hard. New multi-agent frameworks are emerging everywhere, and last week it was Microsoft’s turn. After releasing frameworks such as AutoGen and TaskWeaver, Microsoft is now venturing into multi-agent systems.

Magentic-One is a new generalist multi-agent system developed by Microsoft Research for solving open-ended web and file-based tasks across various domains. This system represents a significant step towards developing agents that can complete tasks people encounter in their daily work and personal lives, moving from simple conversations to actual task completion. Imagine AI not only suggesting dinner options but autonomously ordering and arranging delivery, or actively conducting research instead of merely summarizing papers: this is the transformative potential of Magentic-One.

At its core, Magentic-One features a multi-agent architecture with a lead agent, the Orchestrator, guiding four specialized agents. The Orchestrator is responsible for planning, tracking progress, recovering from errors, and directing other agents to execute tasks. Think of it as a conductor leading an orchestra; each musician (agent) plays their part (skill) under the conductor’s guidance to achieve a harmonious outcome (task completion). The specialized agents include a WebSurfer, proficient in operating a web browser; a FileSurfer, adept at navigating and reading local files; a Coder, capable of writing and analyzing code; and a ComputerTerminal, providing a console shell for code execution.
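
To make the architecture concrete, here is a minimal, hypothetical sketch of the Orchestrator pattern in Python. The class names, the plan format, and the string-based results are illustrative assumptions, not Magentic-One’s actual API; a real agent would wrap an LLM and its tools (browser, shell, file system) rather than return a string.

```python
# Illustrative sketch of the Orchestrator pattern described above.
# Names and structure are hypothetical, not Magentic-One's real API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skill: str  # e.g. "operate a web browser", "read local files"

    def act(self, instruction: str) -> str:
        # A real agent would call an LLM plus its tools here.
        return f"[{self.name}] handled: {instruction}"

@dataclass
class Orchestrator:
    agents: dict[str, Agent]
    ledger: list[str] = field(default_factory=list)  # progress tracking

    def run(self, task: str, plan: list[tuple[str, str]]) -> list[str]:
        """Execute a plan as (agent_name, instruction) steps, recording
        progress so failed steps could be retried or replanned."""
        for agent_name, instruction in plan:
            result = self.agents[agent_name].act(instruction)
            self.ledger.append(result)
        return self.ledger

# Wire up the four specialized agents under the lead Orchestrator.
team = Orchestrator(agents={
    "WebSurfer": Agent("WebSurfer", "operate a web browser"),
    "FileSurfer": Agent("FileSurfer", "navigate and read local files"),
    "Coder": Agent("Coder", "write and analyze code"),
    "ComputerTerminal": Agent("ComputerTerminal", "execute code in a shell"),
})
print(team.run("summarize a report", [
    ("WebSurfer", "download the report"),
    ("FileSurfer", "extract the key sections"),
    ("Coder", "write a summarization script"),
]))
```

Because the Orchestrator only knows its team through a simple registry, swapping an agent in or out is a one-line change.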

This modular approach offers several advantages over traditional monolithic single-agent systems. Firstly, it simplifies development and reuse, akin to object-oriented programming, by encapsulating specific skills within individual agents. Secondly, its plug-and-play design enables easy adaptation and extensibility, allowing agents to be added or removed without affecting other agents or the overall architecture. This flexibility contrasts with the often constrained and inflexible workflows of single-agent systems.

Magentic-One is implemented using AutoGen, Microsoft’s open-source framework for multi-agent applications. While the system typically uses GPT-4o as the default language model for all agents, it is model-agnostic and can incorporate various models to support different capabilities or cost requirements. This allows for customization and optimization based on the specific task at hand. Although it demonstrates strong generalist capabilities, Magentic-One is still under development and can make errors. The team is actively working on addressing emerging risks, such as undesirable agent actions and potential malicious use cases, inviting the community to contribute towards the development of safe and helpful agentic systems.
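
In practice, model-agnosticism often comes down to a per-agent model mapping. A minimal sketch, assuming a simple name-to-model registry; the configuration format and the specific model assignments below are illustrative, not Magentic-One’s actual settings:

```python
# Hypothetical per-agent model selection; Magentic-One's real configuration
# surface may differ. The idea: bind cheaper models to simpler agents and
# reserve the strongest model for planning-heavy roles.
MODEL_CONFIG = {
    "Orchestrator": "gpt-4o",        # planning and error recovery
    "WebSurfer": "gpt-4o",           # multimodal page understanding
    "FileSurfer": "gpt-4o-mini",     # assumed: simpler navigation, cheaper model
    "Coder": "gpt-4o",
    "ComputerTerminal": "gpt-4o-mini",
}

def model_for(agent_name: str, default: str = "gpt-4o") -> str:
    """Resolve which model an agent should use, falling back to the default."""
    return MODEL_CONFIG.get(agent_name, default)

print(model_for("FileSurfer"))  # -> gpt-4o-mini
```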

🔎 ML Research

Relationships are Complicated

This paper from Google Research presents a comprehensive taxonomy of relationships between datasets on the Web and maps these relationships to user tasks during the dataset discovery process. The paper highlights methods to identify these relationships, evaluates their performance on a large dataset corpus, and discusses limitations in existing dataset semantic markup for relationship identification —> Read more.

Long Document Understanding

This paper from the University at Buffalo and Adobe Research presents LoCAL, a framework for multi-page document understanding that uses LMMs for both question-based evidence page retrieval and answer generation. The paper demonstrates LoCAL’s effectiveness on several benchmarks and introduces a new dataset, LoCAL-bench, specifically designed for document understanding tasks —> Read more.

Hunyuan-Large

This paper from Tencent presents Hunyuan-Large, an open-source Mixture-of-Experts (MoE) LLM with 389 billion total parameters and 52 billion activated parameters. The paper details the model’s pre-training and post-training stages, highlighting the data synthesis process and training techniques used to achieve its high performance across various benchmarks —> Read more.
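
To see why only 52B of the 389B parameters are “activated” per token, here is a toy top-k MoE routing sketch: a gate scores the experts and only the selected few run. The sizes and gating details are assumptions for illustration and do not reflect Hunyuan-Large’s actual configuration.

```python
# Toy MoE routing sketch: only K of E expert networks run per token,
# so activated parameters are a small fraction of total parameters.
import numpy as np

D, E, K = 8, 16, 2   # width, number of experts, experts used per token
gate_w = np.random.randn(D, E)
experts = [np.random.randn(D, D) * 0.1 for _ in range(E)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w
    top = np.argsort(scores)[-K:]                  # pick the top-K experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over selected
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_layer(np.random.randn(D)).shape)  # only K of E experts were computed
```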

AdaCache

This paper from Stony Brook University and Meta AI introduces AdaCache, a training-free inference acceleration mechanism for video diffusion transformers that dynamically allocates computational resources based on the complexity of the input prompt. The authors demonstrate that AdaCache consistently shows better generation quality compared to other acceleration methods at comparable speedups —> Read more.
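
The core idea can be illustrated with a small, hypothetical cache that recomputes an expensive block only when its input has drifted enough since the last computed step; this is a sketch of the general cache-and-reuse pattern, not the authors’ code, and the distance metric and threshold are assumptions.

```python
# Hedged sketch of a cache-and-reuse pattern in the spirit of AdaCache:
# reuse a block's cached output while its input has barely changed.
import numpy as np

class AdaptiveCache:
    def __init__(self, threshold: float):
        self.threshold = threshold  # higher -> more reuse, faster, riskier
        self.last_input = None
        self.last_output = None

    def __call__(self, block, x: np.ndarray) -> np.ndarray:
        if self.last_input is not None:
            # Relative change since the last *computed* step.
            change = np.linalg.norm(x - self.last_input) / (np.linalg.norm(self.last_input) + 1e-8)
            if change < self.threshold:
                return self.last_output  # reuse the cached result
        self.last_input, self.last_output = x, block(x)
        return self.last_output

cache = AdaptiveCache(threshold=0.05)
x = np.random.randn(16)
for step in range(10):
    y = cache(lambda v: v * 2.0, x)      # stand-in for an expensive transformer block
    x = x + 0.001 * np.random.randn(16)  # slowly changing input -> mostly cache hits
```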

BitNet

This paper from Microsoft Research and the University of Chinese Academy of Sciences presents BitNet, a transformer architecture for cost-efficient LLM inference with ternary weights represented in 1.58 bits (i.e., {-1, 0, 1}). The research shows that BitNet can match full-precision models in performance while being significantly more efficient in terms of latency, memory, and energy consumption —> Read more.
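
As a rough illustration of how ternary weights work, here is a sketch of absmean-style quantization: scale each weight by the tensor’s mean absolute value, then round to the nearest value in {-1, 0, 1}. This shows the general scheme rather than the paper’s exact kernels or training procedure.

```python
# Minimal sketch of ternary (absmean-style) weight quantization, in the
# spirit of BitNet's 1.58-bit scheme. Illustrative, not the paper's code.
import numpy as np

def quantize_ternary(w: np.ndarray) -> tuple[np.ndarray, float]:
    gamma = np.abs(w).mean() + 1e-8            # per-tensor scale
    w_q = np.clip(np.round(w / gamma), -1, 1)  # every weight becomes -1, 0, or 1
    return w_q.astype(np.int8), gamma

def dequantize(w_q: np.ndarray, gamma: float) -> np.ndarray:
    return w_q.astype(np.float32) * gamma

w = np.random.randn(4, 4).astype(np.float32)
w_q, gamma = quantize_ternary(w)
print(w_q)                                        # matrix of -1/0/1 entries
print(np.abs(w - dequantize(w_q, gamma)).mean())  # mean quantization error
```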

Mixture-of-Transformers

This paper from the Meta FAIR team proposes Mixture-of-Transformers (MoT), a sparse architecture for multi-modal generation that decouples model parameters across transformer layers based on modality. The paper demonstrates that MoT achieves competitive performance in image and text generation tasks while being more computationally efficient —> Read more.
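
A toy sketch of the decoupling idea: tokens live in one shared sequence, but each modality gets its own feed-forward weights, so a token only touches the parameters for its modality. The shapes and the ReLU feed-forward are illustrative assumptions, not the paper’s architecture details.

```python
# Toy sketch of modality-decoupled parameters: one shared sequence,
# separate feed-forward weights per modality. Illustrative only.
import numpy as np

D = 8  # model width
ffn = {  # modality-specific feed-forward weights
    "text": np.random.randn(D, D) * 0.1,
    "image": np.random.randn(D, D) * 0.1,
}

def mot_ffn(tokens: np.ndarray, modalities: list[str]) -> np.ndarray:
    """Apply the feed-forward weights matching each token's modality."""
    out = np.empty_like(tokens)
    for i, m in enumerate(modalities):
        out[i] = np.maximum(tokens[i] @ ffn[m], 0.0)  # ReLU FFN, one matrix per modality
    return out

tokens = np.random.randn(4, D)
print(mot_ffn(tokens, ["text", "image", "image", "text"]).shape)
```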

🤖 AI Tech Releases

Magentic-One

Microsoft open-sourced Magentic-One, a multi-agent framework for web and file-based tasks —> Read more.

Mistral APIs

Mistral released APIs for batch processing and content moderation.

Ollama Vision Models

Ollama integrated Llama 3.2 vision models.

MC-Bench

A new benchmark for LLM problem-solving based on Minecraft.

🛠 Real World AI

Gen AI at Slack

Slack discusses its AI efforts to augment engineering workflows —> Read more.

📡 AI Radar
