Edge 392: Meet RAFT: UC Berkeley’s New Method to Improve RAG Patterns in LLMs

The method combines the best of RAG and supervised fine-tuning.

Created Using Ideogram

Pretraining Large Language Models (LLMs) on massive text datasets has become the norm. When these models are applied to specific tasks, it is often necessary to integrate additional information, such as the latest news or specialized knowledge, into the already trained model. This can be achieved either by prompting the model with the new data or by fine-tuning it on that data. Yet the best way to incorporate this new knowledge into the model is still under debate. A recent paper from UC Berkeley proposes RAFT, a new technique that addresses precisely that issue.
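As a loose illustration of those two routes (not the paper's method), the sketch below contrasts injecting new information through the prompt with baking it into the weights via fine-tuning. The functions `call_llm` and `run_fine_tuning` are hypothetical stand-ins, not a real API.

```python
# Hypothetical sketch of the two routes for adding new knowledge to a trained LLM.
# `call_llm` and `run_fine_tuning` are placeholder stubs, not a real library API.

def call_llm(prompt: str) -> str:
    """Stand-in for querying a pretrained LLM."""
    return "model output"

def run_fine_tuning(base_model: str, examples: list[dict]) -> str:
    """Stand-in for a supervised fine-tuning job; returns a tuned model id."""
    return base_model + "-tuned"

new_document = "Press release (today): the company appointed a new CEO."
question = "Who is the company's current CEO?"

# Route 1: prompt the model with the new data at inference time (RAG-style).
answer = call_llm(f"Context:\n{new_document}\n\nQuestion: {question}")

# Route 2: fine-tune, so the knowledge is absorbed into the model's weights.
tuned_model = run_fine_tuning(
    base_model="base-llm",
    examples=[{"prompt": question, "completion": "The newly appointed CEO."}],
)
```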

One of the key challenges in enhancing LLMs with new information is figuring out how to adapt these models for Retrieval Augmented Generation (RAG) in specialized domains. The two main strategies are in-context learning through RAG and supervised fine-tuning. RAG lets an LLM consult external documents when answering, but it does not exploit the learning opportunity offered by a fixed domain or by the documents that are available ahead of time. Supervised fine-tuning, on the other hand, aims to capture the broader patterns in those documents, which can improve task performance and alignment with user needs. However, it may fail to make use of retrieved documents at test time and can overlook errors in document retrieval.
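To see why retrieval quality matters here, consider a minimal, purely illustrative sketch of a lexical retriever over a small domain corpus. The helper `retrieve` and the toy documents are assumptions made for the example, not part of RAFT.

```python
# Toy illustration of retrieval in a domain-specific RAG setup.
# The retriever and the corpus below are made up for this example.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a crude lexical retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

domain_corpus = [
    "Drug X was approved in 2019 for the treatment of hypertension.",
    "Drug X should not be combined with anticoagulants.",  # off-topic for this question
    "The clinic is closed on public holidays.",  # irrelevant
]

question = "What condition was Drug X approved to treat?"
retrieved = retrieve(question, domain_corpus)

# A plain RAG model must answer from whatever lands in `retrieved`, distractors
# included, while a model that was only fine-tuned on this corpus answers from
# its weights and never consults `retrieved` at test time.
print(retrieved)
```

In this toy run the off-topic interaction note is retrieved alongside the relevant approval document, which is exactly the kind of retrieval noise the paragraph above points to, and which neither plain RAG nor plain fine-tuning is trained to handle.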