Four New Major Open Source Foundation Models in a Week

DBRX, Grok 1.5, Samba-CoE and Jamba are all bringing unique innovations to open source generative AI.

Created Using Ideogram

Next Week in The Sequence:

  • Edge 383: Our new series continues with a deep dive into the core capabilities of autonomous agents. We review a very famous paper about agents simulating human behavior, and we dive into the Crew AI framework.

  • Edge 384: We dive into Genie, Google DeepMind’s model that can generate interactive games from text!

You can subscribe to The Sequence using the link below:

TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

📝 Editorial: Four New Major Open Source Foundation Models in a Week

Open source generative AI is experiencing tremendous momentum, and last week was a major example of this with the release of four major foundation models. By open source, we refer to the weights of the models and not the training datasets or processes. At this time, it’s fair to say that the model weights are where most companies draw the line between open source and closed source. Many purists do not consider this true open source, but in a field evolving as rapidly as generative AI, preserving a level of competitive advantage is essential for any company. Let’s just say that the nature of open source is being reimagined for generative AI.

The fast pace of generative AI also makes the open source race even more fascinating. Last week, we witnessed the release of four major open source models, each innovative in its own way:

  1. DBRX: Databricks released DBRX, a new model based on a mixture-of-experts architecture. DBRX contains 16 expert sub-models and dynamically selects the four most relevant for each token (see the sketch after this list).

  2. Grok 1.5: Elon Musk’s X.ai open-sourced Grok 1.5. The new release boasts a 128k context window and impressive reasoning capabilities.

  3. Samba-CoE v0.2: SambaNova announced Samba-CoE v0.2, which shows impressive performance at 330 tokens per second. The model claims to outperform DBRX, Mistral, and Grok.

  4. Jamba: AI21 Labs open-sourced Jamba, which combines transformers with the increasingly popular structured state space model (SSM) architecture that powers models like Mamba. The SSM architecture gives Jamba very strong context length capabilities, which is evident in the initial benchmarks.
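
For readers who want the intuition behind the routing scheme mentioned in the DBRX item, here is a minimal PyTorch sketch of a top-k mixture-of-experts layer in the spirit of that 16-experts, 4-active-per-token design. The dimensions, expert shape, and the dense routing loop are illustrative assumptions, not DBRX’s actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: each token is routed to the top-k of n experts."""

    def __init__(self, d_model=64, n_experts=16, k=4):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)   # scores every expert for each token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                              # x: (n_tokens, d_model)
        scores = self.router(x)                        # (n_tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)     # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)           # normalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):                     # dense loop for clarity, not efficiency
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens whose slot-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, slot:slot+1] * expert(x[mask])
        return out

# A batch of 10 tokens, each routed to 4 of the 16 experts:
y = TopKMoE()(torch.randn(10, 64))
```

The appeal of this design is that only a fraction of the parameters are active per token, so inference cost grows with k rather than with the total number of experts.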

Regardless of where you fall in the commercial vs. open source debate in generative AI, it is undeniable that the latter will play a major role in the mainstream adoption of this technology. This week shows how strong the momentum in open source generative AI is.


With just one week left until apply() ’24, the premier virtual conference for engineers mastering AI and ML, we wanted to remind you to secure your spot before it’s too late!

Date: Wednesday, April 3 / 9:00AM – 5:00PM PT / Virtual

At apply(), our goal is to provide you with the tools and insights you need to conquer AI and ML challenges at production scale. With speakers from LangChain, Meta, Pinterest, Vanguard, Visa, Samsung, NextDoor, and many more in the lineup, this year’s event promises to be our best yet. Be sure to join live for the chance to win swag or a giveaway prize!


🔎 ML Research

Can LLMs Explore?

Researchers from Microsoft and Carnegie Mellon University published a paper exploring the intriguing question of whether LLMs can engage in exploration, an ability typically reserved for reinforcement learning models. The research describes environments such as multi-armed bandits in prompts and determines whether LLMs can explore those environments in order to take actions —> Read more.
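
To make the setup concrete, here is a toy reconstruction of how a multi-armed bandit can be rendered as text so an LLM can act in it. This is a generic sketch of the idea, not the paper’s exact protocol; query_llm is a hypothetical stand-in for any chat API.

```python
import random

ARM_MEANS = [0.2, 0.5, 0.8]   # hidden Bernoulli reward probability of each arm

def bandit_prompt(history):
    """Render a multi-armed bandit episode as text for an LLM to act on."""
    lines = [
        "You are playing a 3-armed bandit. Each arm pays 0 or 1 with a fixed,",
        "unknown probability. Your goal is to maximize total reward. Past pulls:",
    ]
    lines += [f"  round {t}: arm {a} -> reward {r}" for t, (a, r) in enumerate(history, 1)]
    lines.append("Which arm do you pull next? Answer with a single number (0, 1, or 2).")
    return "\n".join(lines)

def pull(arm):
    """Sample a Bernoulli reward for the chosen arm."""
    return int(random.random() < ARM_MEANS[arm])

history = [(0, pull(0)), (1, pull(1))]     # a couple of warm-up pulls
prompt = bandit_prompt(history)
# arm = int(query_llm(prompt))             # query_llm: hypothetical chat-API stand-in
# history.append((arm, pull(arm)))         # the loop repeats for each round
```

An LLM that explores well should occasionally try under-sampled arms instead of always exploiting the best-looking one, which is exactly the behavior the paper probes.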

TnT-LLM

Microsoft Research published a paper introducing TnT-LLM, an LLM framework that generates and predicts task labels with minimal user involvement. TnT-LLM is actively used to discover the intents of Microsoft Copilot users —> Read more.
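
As a rough illustration of the two-stage idea (generate a label taxonomy, then predict labels against it), here is a hedged Python sketch; llm is a hypothetical stand-in and the prompts are assumptions, not TnT-LLM’s actual pipeline.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError

def generate_taxonomy(sample_texts: list[str], use_case: str) -> list[str]:
    """Stage 1: ask the LLM to propose a label taxonomy from example texts."""
    corpus = "\n---\n".join(sample_texts)
    reply = llm(f"Propose a short list of {use_case} labels, one per line, "
                f"that covers these examples:\n{corpus}")
    return [line.strip() for line in reply.splitlines() if line.strip()]

def predict_label(text: str, taxonomy: list[str]) -> str:
    """Stage 2: classify a new text against the generated taxonomy."""
    return llm(f"Labels: {', '.join(taxonomy)}\n"
               f"Text: {text}\nAnswer with the single best label.").strip()
```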

AutoBNN

Google Research published research and open sourced AutoBNN, a JAX framework for interpretable time series forecasting models. AutoBNN’s core idea is to combine the interpretability of traditional time series models with the scalability of neural networks in a single architecture —> Read more.

SaLEM

Amazon Science published a paper introducing SaLEM (salient layers editing model), a method for editing layers in an LLM. SaLEM’s key contribution is that it selects the layers to be edited automatically —> Read more.

SAFE

Google DeepMind published a paper presenting Search-Augmented Factuality Evaluator (SAFE), a search-based method for evaluating the factuality of LLM responses. SAFE breaks down a long LLM response into individual facts and evaluates the accuracy of each one —> Read more.
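
The pipeline can be approximated with a short sketch: split the response into atomic facts, then verify each one against retrieved evidence. This follows the paper’s high-level description, not its exact prompts; llm and web_search are hypothetical stand-ins for an LLM API and a search API.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Hypothetical stand-in for a search API that returns evidence snippets."""
    raise NotImplementedError

def split_into_facts(response: str) -> list[str]:
    """Break a long answer into atomic factual claims, one per line."""
    reply = llm(f"List each atomic factual claim in the text below, one per line:\n{response}")
    return [line.strip() for line in reply.splitlines() if line.strip()]

def check_fact(fact: str) -> bool:
    """Verify one claim against retrieved evidence."""
    evidence = web_search(fact)
    verdict = llm(f"Claim: {fact}\nEvidence: {evidence}\nIs the claim supported? yes/no")
    return verdict.strip().lower().startswith("yes")

def factuality_score(response: str) -> float:
    facts = split_into_facts(response)
    supported = sum(check_fact(f) for f in facts)
    return supported / max(len(facts), 1)   # fraction of individually supported facts
```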

🤖 Cool AI Tech Releases

DBRX

Databricks released DBRX, a new state-of-the-art open source LLM —> Read more.

Jamba

AI21 Labs open sourced Jamba, a new model that augments the structured state space model (SSM) architecture with elements of the transformer architecture —> Read more.

Samba-CoE v0.2

SambaNova previewed the performance of Samba-CoE v0.2, a new version of Samba-1 that scored incredibly high across many benchmarks —> Read more.

Grok 1.5

X.ai released Grok 1.5 with improved reasoning capabilities and a longer context window —> Read more.

Voice Engine

OpenAI published some details about Voice Engine, a new model for creating custom voices —> Read more.

🛠 Real World ML

Video Content Moderation at Yelp

Yelp discusses the ML architecture powering its video content moderation solution —> Read more.

📡 AI Radar
