The Transformer Robots are Here, Just a Different Kind

An impressive week for robotics models from both DeepMind and Stanford University, and much more…

Created Using DALL-E

Next Week in The Sequence:

  • Edge 259: Our series about LLM reasoning dives into the fascinating tree-of-thoughts technique, including the original paper. We also review the Language Model Evaluation Harness framework for LLM evaluation.

  • Edge 260: We dive into Ghostbuster, UC Berkeley's model for detecting LLM-generated content.

You can subscribe below!

TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

📝 Editorial: The Transformer Robots are Here, Just a Different Kind

Robotics has always been one of the most fertile grounds for adopting artificial intelligence (AI) techniques. With recent advancements in computer vision, language, and audio foundation models, we can expect to see a new generation of robotic applications that dazzle us. However, the challenges of building effective robotic solutions extend beyond AI and require deep mastery of the physics of an environment and incredibly effective coordination of perception and action. Typically, collecting the training datasets for those capabilities requires massive effort, but the advent of foundation models has drastically lowered the barrier to entry.

A few months ago, Google DeepMind unveiled the Robotics Transformer 2 (RT-2) models, which use language and computer vision to translate knowledge into robotic actions. Last week, DeepMind followed this research with three notable additions:

  1. AutoRT: A system that leverages vision-language models to deploy robots in completely new environments with minimal human supervision.

  2. SARA-RT: A method that converts RT-2 into a version that is 10% more accurate and 14% faster.

  3. RT-Trajectory: A video-based model for learning control policies for physical actions in robotic applications. The method takes a video and overlays a 2D sketch of the action for the robot to follow.

These three methods combine image, language, and video foundation models to improve robotic applications. Using foundation models for perception, and for translating perception into action, can accelerate robotics to levels we haven't seen before. The robo transformers are definitely on their way!


📣 apply() Spring ‘24 Call for Speakers!

The next apply() is set for March 14 and we’re looking for speakers! apply() is the biggest virtual ML conference in the world, and is designed to bring together ML practitioners in one space to share best practices, development patterns, and emerging tooling. 

Has your team built an ML platform? Pushed ML models to production? Learned valuable lessons on how to organize an ML or data science team? If yes, we want to hear from you – submit your talk today!


🔎 ML Research

Robotics with Foundation Models

Google DeepMind published the research and code behind AutoRT, SARA-RT and RT-Trajectory, three methods that leverage foundation models in robotic scenarios. The three techniques are part of the Robotics Transformer initiative, aimed at helping robots navigate environments and make quick decisions —> Read more.

Mobile ALOHA

Researchers from Stanford University unveiled Mobile ALOHA, a very impressive robotic application for object manipulation. The robot uses imitation learning to master a series of complex tasks from specific demonstrations. Watch the videos —> Read more.
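
At its core, this kind of imitation learning is behavior cloning: a policy is trained to reproduce the actions a human demonstrator took for each observation. Here is a minimal sketch of that idea in PyTorch; the network, dimensions, and data are illustrative stand-ins, not Mobile ALOHA's actual architecture or training setup.

```python
import torch
import torch.nn as nn

# Minimal behavior-cloning sketch (generic imitation learning, not Mobile
# ALOHA's actual method): regress demonstrated actions from observations.
policy = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 14))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stand-in demonstration data: 128 (observation, action) pairs from teleoperation.
obs = torch.randn(128, 64)       # e.g. proprioception + encoded camera features
actions = torch.randn(128, 14)   # e.g. joint targets for two 7-DoF arms

for epoch in range(10):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, actions)  # match the demonstrator's actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())
```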

GPU Split

Microsoft Research published a paper detailing Splitwise, an optimization technique for GPU utilization. Splitwise works by separating the prompt computation and token generation phases of LLM inference onto different machines —> Read more.
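
The core idea is easy to sketch: the compute-heavy prefill (prompt) phase builds the KV cache on one class of machine, which then hands the cache to a different machine suited to the memory-bound decode phase. Below is a toy illustration of that split; all class and function names here are hypothetical, not Splitwise's actual API.

```python
from dataclasses import dataclass

# Toy illustration of the prefill/decode phase split; names are hypothetical.

@dataclass
class KVCache:
    # In a real system this holds per-layer key/value tensors for the prompt.
    tokens: list

class PrefillWorker:
    """Runs on a compute-optimized machine: one pass over the whole prompt."""
    def run(self, prompt_tokens: list) -> KVCache:
        # A real prefill executes the full forward pass and materializes the KV cache.
        return KVCache(tokens=list(prompt_tokens))

class DecodeWorker:
    """Runs on a memory-bandwidth-optimized machine: one token per step."""
    def run(self, cache: KVCache, max_new_tokens: int) -> list:
        generated = []
        for step in range(max_new_tokens):
            # A real decode step attends over the (transferred) KV cache
            # and appends the new token's keys/values to it.
            next_token = f"<tok{step}>"
            cache.tokens.append(next_token)
            generated.append(next_token)
        return generated

prompt = ["Splitwise", "separates", "inference", "phases"]
cache = PrefillWorker().run(prompt)          # phase 1, machine A
completion = DecodeWorker().run(cache, 4)    # phase 2, machine B (after cache transfer)
print(completion)
```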

LLM Augmented LLMs

Google DeepMind published a super interesting paper introducing Composition to Augment Language Models (CALM), a method that augments the capabilities of an LLM with other LLMs. Specifically, CALM introduces cross-attention between models so that they can reuse each other's knowledge representations —> Read more.
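
A minimal sketch of the core mechanism: cross-attention from an anchor model's hidden states to an augmenting model's hidden states, with only the composition block trained. The dimensions and projection setup below are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CrossModelAttention(nn.Module):
    """Learned cross-attention composing two frozen models' representations.

    Queries come from the anchor model's hidden states; keys/values come from
    the augmenting model's hidden states (projected into the anchor's width).
    Only this block is trained; both base models stay frozen.
    """
    def __init__(self, anchor_dim: int, aug_dim: int, num_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(aug_dim, anchor_dim)  # map augmenting states to anchor width
        self.attn = nn.MultiheadAttention(anchor_dim, num_heads, batch_first=True)

    def forward(self, anchor_h: torch.Tensor, aug_h: torch.Tensor) -> torch.Tensor:
        kv = self.proj(aug_h)
        mixed, _ = self.attn(query=anchor_h, key=kv, value=kv)
        return anchor_h + mixed  # residual: the anchor keeps its own representation

# Toy hidden states: batch=2, seq len 16; anchor width 512, augmenting width 256.
anchor_h = torch.randn(2, 16, 512)
aug_h = torch.randn(2, 16, 256)
block = CrossModelAttention(anchor_dim=512, aug_dim=256)
print(block(anchor_h, aug_h).shape)  # torch.Size([2, 16, 512])
```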

High Quality Text Embeddings Using Synthetic Data

Microsoft Research published a paper detailing a method for obtaining high-quality text embeddings using only synthetic data and LLMs. More impressively, the method seems to require only about a thousand training steps, rather than the billions of data pairs typically used to pretrain embedding models —> Read more.
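
The typical fine-tuning recipe behind methods like this is a contrastive objective over (query, passage) pairs. Here is a minimal sketch of an InfoNCE-style loss with in-batch negatives; the embeddings and dimensions are stand-ins, not Microsoft's actual setup.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor, doc_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """Contrastive loss with in-batch negatives.

    Row i of query_emb should match row i of doc_emb; every other row in the
    batch serves as a negative. This is the standard objective for training
    text embedding models on (query, passage) pairs, synthetic or not.
    """
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature       # (batch, batch) similarity matrix
    targets = torch.arange(q.size(0))    # the diagonal holds the true pairs
    return F.cross_entropy(logits, targets)

# Stand-in embeddings for a batch of 8 synthetic (query, passage) pairs.
loss = info_nce_loss(torch.randn(8, 1024), torch.randn(8, 1024))
print(loss.item())
```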

OpenVoice

Researchers from decentralized AI platform MyShell published a paper detailing OpenVoice, a voice cloning method that requires only a short audio clip as input. OpenVoice enables super granular control over voice characteristics such as accent, rhythm, emotion, intonation, and several others —> Read more.

🤖 Cool AI Tech Releases

CrewAI

A new open source framework for orchestrating autonomous agents —> Read more.
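
Based on the project's README at the time of writing, usage looks roughly like the sketch below: define role-playing agents, assign them tasks, and let the crew run them. This is a sketch, not authoritative; the API may have evolved, so check the repo (and note that an LLM API key is expected in the environment).

```python
# Rough usage sketch based on CrewAI's README; the exact API may have changed.
# Assumes an LLM is configured via environment variables (e.g. OPENAI_API_KEY).
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Summarize this week's robotics foundation-model papers",
    backstory="An analyst who tracks new ML research releases.",
)
writer = Agent(
    role="Writer",
    goal="Turn research summaries into a short newsletter blurb",
    backstory="A technical writer for an ML newsletter.",
)

research = Task(description="Collect key findings on AutoRT, SARA-RT and RT-Trajectory.",
                agent=researcher)
draft = Task(description="Write a three-sentence summary of the findings.",
             agent=writer)

# The crew runs the tasks in order, passing intermediate results along.
crew = Crew(agents=[researcher, writer], tasks=[research, draft])
result = crew.kickoff()
print(result)
```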

📡 AI Radar
