In a recent webinar hosted by SuperAnnotate and Databricks, Leo Lindén from SuperAnnotate and Prasad Kona from Databricks shared their expertise on customizing large language models (LLMs) for specific business needs. They explored practical ways to handle data and fine-tune models to enhance their performance.
Why customize LLMs?
The webinar opened with a crucial question: Why customize LLMs? While LLMs possess broad general knowledge, an off-the-shelf model may not meet the nuanced needs of every business. For example, a standard LLM can fall short on tasks like searching a company’s internal documents, assisting customers on e-commerce platforms, or accurately using specialized vocabulary from fields like medicine, law, or finance. Tailoring LLMs leads to smarter, more relevant interactions and greater effectiveness in specialized areas.
How to adapt LLMs
There are three main strategies for adapting LLMs:
- Prompt engineering: This method involves writing detailed prompts to direct the LLM's responses. It's straightforward to use but can be tricky to perfect, can fill up the context window, and may not always be consistent.
- Retrieval augmented generation (RAG): RAG enhances an LLM's responses by dynamically pulling in relevant information from a database, reducing the reliance on complex prompt engineering.
- Fine-tuning: Fine-tuning is the most involved approach: it updates the model’s weights on a task- or domain-specific dataset. It requires a training dataset but is particularly effective for crafting highly customized model behaviors.
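To make the RAG strategy concrete, here is a minimal sketch of the pattern: retrieve the most relevant documents for a query, then ground the LLM prompt in that context. The keyword-overlap retriever below is a toy stand-in; production RAG systems typically use vector embeddings and a vector database instead.

```python
# Toy RAG sketch: retrieve relevant context, then build a grounded prompt.
# The keyword-overlap scoring is illustrative only; real systems use
# embedding similarity search.

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Compose an LLM prompt that grounds the answer in retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday through Friday.",
]
prompt = build_prompt("How long do refunds take?", docs)
print(prompt)
```

Because the retrieved context is injected at query time, the model can answer from up-to-date company data without any retraining, which is what makes RAG attractive relative to fine-tuning for fast-changing knowledge.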
Building high-quality LLM training data with SuperAnnotate
SuperAnnotate provides a platform for creating the datasets used to train models. We work with many customers in this space, including major foundation model providers and companies like Databricks, Canva, IBM, and Qualcomm.
SuperAnnotate’s platform combines custom dataset creation and QA tooling, which minimizes the time you need to spend on each item in your dataset. This helps you achieve much higher-quality data. For our larger customers, we also offer services where you can outsource the entire dataset creation process to our team, which has extensive experience with big foundational models.
Our platform, called 'Fine-tune,' lets you build these custom datasets. With our 'Explore' tool, you can easily visualize and work through your entire dataset. The platform includes customizable pipelines that automate workflows and manual tasks, and you can set up an LLM as a judge or LLM agents to build these datasets more quickly. It’s a fully secure platform that we’ve used to assist companies across industries.
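The LLM-as-a-judge pattern mentioned above can be sketched as a small QA loop: ask a judge model to score each annotation against a rubric, then accept or reject it by threshold. The `call_llm` stub and the rubric wording below are hypothetical, assumed for illustration; in practice you would call a real model endpoint.

```python
# Minimal "LLM as a judge" sketch for dataset QA. call_llm() is a stub
# standing in for a real model endpoint; the rubric and score parsing
# are illustrative assumptions, not a specific platform's API.
import re

def judge_prompt(instruction, answer):
    """Build a rubric prompt asking a judge model to rate an annotation 1-5."""
    return (
        "Rate the answer to the instruction on a scale of 1-5.\n"
        f"Instruction: {instruction}\nAnswer: {answer}\n"
        "Reply in the form 'score: N'."
    )

def call_llm(prompt):
    # Stub: replace with a call to your judge model's endpoint.
    return "score: 4"

def accept(instruction, answer, threshold=3):
    """Return True if the judge's score meets the QA threshold."""
    reply = call_llm(judge_prompt(instruction, answer))
    match = re.search(r"score:\s*(\d)", reply)
    return match is not None and int(match.group(1)) >= threshold

ok = accept("Define RAG.", "RAG grounds answers in retrieved documents.")
```

Routing only low-scoring items to human reviewers is what reduces the per-item time the section describes.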
Fine-tuning LLMs with Databricks
Through Databricks' Partner Connect, you can click on the SuperAnnotate logo to set everything up quickly. You can then build custom annotation tasks in our editor, create datasets, and send them back into Databricks’ Mosaic AI for fine-tuning. This then allows you to connect directly to the fine-tuned models for LLM evaluation.
Databricks makes fine-tuning simpler and more accessible. Their platform manages the backend complexities and provides an integrated environment for training, which speeds up the development process for fine-tuning the models to meet precise operational requirements.
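Fine-tuning jobs generally consume instruction datasets as JSONL: one JSON object per line with a prompt/response pair. Here is a minimal sketch of preparing such a file; the exact field names your fine-tuning service expects may differ, so check its documentation.

```python
# Sketch of preparing an instruction fine-tuning dataset as JSONL.
# The {"prompt": ..., "response": ...} schema is an assumption for
# illustration; confirm the schema your fine-tuning service requires.
import json

examples = [
    {"prompt": "Summarize: The webinar covered LLM customization.",
     "response": "The webinar explained how to customize LLMs."},
    {"prompt": "Define RAG in one sentence.",
     "response": "RAG retrieves relevant documents to ground LLM answers."},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        # JSONL layout: one complete JSON object per line.
        f.write(json.dumps(ex) + "\n")
```

Once a file like this is uploaded to your Databricks workspace, it can be supplied as the training data for a Mosaic AI fine-tuning run.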
How SuperAnnotate helped build enterprise LLMs
We help enterprises and startups build SFT datasets, RAG pipelines, and models in the loop with RLHF, run LLM red-teaming, and provide strategy consulting on getting your datasets ready.
Here are some examples of the companies that we’ve worked with:
- Top 4 foundation model providers: We helped leading foundation model providers improve quality and cut the cost of their RLHF operations by 30%.
- Databricks: With Databricks, we helped cut evaluation costs tenfold on one of their large RAG projects and assisted them in acquiring data for their LLMs.
- Twelve Labs: With Twelve Labs, we're building models to generate natural language searches for videos. Using our platform, we’ve scaled operations rapidly, and with our service team, we’re completing projects in half the time it previously took.
- AV 2.0: We assisted a top AV company, AV 2.0, in developing AI models for better autonomous performance, significantly improving the development process.
Key takeaways
The session wrapped up with a Q&A, where Leo and Prasad addressed specific questions about deploying and enhancing LLMs in various business contexts. The main takeaway was clear: while customizing LLMs demands considerable effort in data preparation and model training, the rewards are significant, leading to more precise, dependable, and contextually appropriate AI interactions.
For a complete walkthrough on using SuperAnnotate and Databricks to fine-tune LLMs, watch our webinar.