Can AI Solve Science?

A brilliant essay by Stephen Wolfram explores this challenging question.


Next Week in The Sequence:

  • Edge 377: The last issue of our series about LLM reasoning covers reinforced fine-tuning (ReFT), a technique pioneered by ByteDance. We review the ReFT paper and take another look at Microsoft’s Semantic Kernel framework.

  • Edge 378: We review Google’s recent zero-shot time-series forecasting model.


📝 Editorial: Can AI Solve Science?

Discovering new science is considered by many, including myself, to be one of the ultimate tests of AGI (Artificial General Intelligence). We are witnessing glimpses of the potential impact of ‘AI for science’ with models discovering new computer science and math algorithms, or the famous AlphaFold, which is actively used to discover new proteins.

Is AI going to discover everything?

Can AI help explain the universe?

What are the limits of AI when it comes to science?

There are many theories about the possibilities of AI in scientific domains, but no formal theory. Last week, computer scientist and physicist Stephen Wolfram published a long and detailed essay attempting to explain the potential and limits of AI in discovering new science. Wolfram’s argument relies heavily on one of his favorite ideas: the principle of computational irreducibility.

Wolfram introduced the idea of computational irreducibility in his 2002 book, ‘A New Kind of Science’. The idea rests on the premise that the universe can be modeled using formal computations. Some of these computations are reducible, meaning they admit shortcuts that speed them up, while others allow no such flexibility and require executing every computation step. Science is possible because, even though many phenomena are computationally irreducible, they contain pockets of reducibility in which patterns can be inferred.
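To make “executing every computation step” concrete, here is a minimal sketch (my own illustration, not code from the essay) of Rule 30, the cellular automaton Wolfram often cites as apparently irreducible: no known shortcut predicts its cells without simulating every intermediate row.

```python
def rule30_step(cells):
    """One step of the Rule 30 cellular automaton (zero-padded boundary)."""
    n = len(cells)
    new = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        # Rule 30 reduces to: new cell = left XOR (center OR right)
        new.append(left ^ (cells[i] | right))
    return tuple(new)

# Evolving from a single 1: each row must be computed from the previous one,
# step by step -- there is no known closed-form jump to row t.
row = tuple(1 if i == 5 else 0 for i in range(11))
for _ in range(3):
    row = rule30_step(row)
```

Despite the one-line update rule, the center column of Rule 30 looks statistically random, which is exactly why Wolfram treats it as a poster child for irreducibility.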

What does this have to do with AI? AI is a form of approximating predictions by inferring regularities in data. In that sense, AI can be applied within those pockets of reducibility. What about the rest? The irreducible parts can be processed using a formal computational language such as the Wolfram Language (coincidentally 😉).
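By contrast, here is a toy pocket of reducibility (again my own sketch): Rule 250 grows a simple checkerboard triangle from a single 1, so any step admits a closed-form shortcut; this is the kind of regularity a learned model could infer from data instead of simulating every step.

```python
def rule250_step(cells):
    """One step of Rule 250: a cell turns on if either neighbor is on."""
    n = len(cells)
    return tuple(
        (cells[i - 1] if i > 0 else 0) | (cells[i + 1] if i < n - 1 else 0)
        for i in range(n)
    )

def rule250_shortcut(width, center, t):
    """Closed-form prediction of step t from a single 1 at `center`:
    a checkerboard triangle -- no simulation required."""
    return tuple(
        1 if abs(i - center) <= t and (i - center + t) % 2 == 0 else 0
        for i in range(width)
    )
```

Running the step function and evaluating the shortcut give identical rows, which is precisely what makes the computation reducible.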

In summary, Wolfram believes that AI can help advance most scientific workflows, but it will always be limited by the nature of computational irreducibility, which will prevent AI from autonomously solving science. However, combining AI with computational languages opens the door to all sorts of possibilities for advancing science.

Whether you agree with Wolfram’s theory or not, you have to admit that it is certainly interesting. AI, by itself, cannot solve all science, but the combination of AI and computational languages could get pretty far.


📌 Exciting news! The speaker lineup for apply() 2024 is now live.

Join industry leaders from LangChain, Meta, and Visa for insights to master AI and ML in production.

Here’s a sneak peek of the agenda:

LangChain Keynote: Hear from Lance Martin, an ML leader at LangChain, a leading orchestration framework for large language models (LLMs).

Explore Semi-Supervised Learning: Aleksandr Timashov, ML Engineer at Meta, dives into practical approaches for training models with limited labeled data.

Deep Dive into Uplift Modeling: Toyosi Bamidele, Data Scientist at Visa, demystifies uplift modeling for estimating marketing interventions’ impact.

Dive deep into these topics with our expert speakers and gain actionable insights for mastering AI and ML. Stay tuned for the full agenda!


🔎 ML Research

Stable Diffusion 3

Stability AI published a paper outlining the technical details behind Stable Diffusion 3. The paper emphasizes rectified flow as a method to improve the mapping between noise and data, which is essential to diffusion models —> Read more.

Yi

The team from 01.ai published a paper detailing the architecture behind the Yi family of models. Yi is based on 6B and 34B pretrained models that are further fine-tuned for instruction and chat scenarios —> Read more.

Chatbot Arena

AI researchers from the prestigious LMSys lab at UC Berkeley published a paper detailing the popular Chatbot Arena platform. Chatbot Arena is one of the most popular tools for evaluating and benchmarking foundation models —> Read more.

Orca-Math

Microsoft Research published a technical report about Orca-Math, a version of Mistral 7B fine-tuned on mathematical problems. The model achieved a remarkable 86.8% on the GSM8K benchmark, surpassing models such as LLaMA-2 70B and GPT-3.5 —> Read more.

Human Level Forecasting with LLMs

AI researchers from UC Berkeley published a study evaluating whether LLMs can forecast events at the level of human forecasters. The evaluation relies on an LLM-RAG system that can collect information, generate forecasts, and aggregate predictions —> Read more.

AtP

Google DeepMind published a paper proposing Attribution Patching (AtP), a fast gradient-based method for causal attribution of behavior in LLMs. AtP is an approximation to activation patching that helps identify which model components (nodes) are responsible for a given behavior —> Read more.

🤖 Cool AI Tech Releases

Claude 3

Anthropic released the Claude 3 model family showcasing impressive performance —> Read more.

Inflection 2.5

Inflection AI unveiled the new version of its marquee foundation model which seems to achieve impressive performance across different benchmarks —> Read more.

Einstein 1 Studio

Salesforce released Einstein 1 Studio, a set of low-code tools for customizing Einstein CoPilot —> Read more.

TripoSR

Stability AI released TripoSR, a model that can generate 3D objects from single images —> Read more.

🛠 Real World ML

Can AI Solve Science?

Stephen Wolfram published a long and super insightful essay detailing the history, possibilities, and challenges of AI when it comes to discovering new science. The essay builds on Wolfram’s ideas of computational irreducibility and outlines a clear boundary between the areas in which “AI in science” is applicable and those in which it isn’t —> Read more.

Python Upgrades at Lyft

The Lyft engineering team discusses their processes for upgrading Python at scale —> Read more.

📡 AI Radar