LLMs Aligned! But to What End?
Editor’s note: Sinan Ozdemir is a speaker for ODSC East this April 23–25. Be sure to check out his talk, “Aligning Open-source LLMs Using Reinforcement Learning from Feedback,” there!
Introduction — Re-aligning Our Expectations of AI
Reinforcement learning (RL) has become one of the primary engines powering AI alignment: the process of fine-tuning an AI model (usually an LLM) to behave according to a certain set of standards and styles. RL provides a unique ability to instill dimensions of human style and ethics beyond the confines of relatively strict next-token prediction, and it offers us a chance to supplement traditional fine-tuning on prompt-response pairs with a system designed to "nudge" the AI in a direction: funnier, more neutral, more diverse, and so on.
Suppose we want to instill this kind of behavior into an LLM through training data alone, say, making the AI funnier. That would mean going over all of your training data and making sure each example is "funny enough" to be considered good training data. But who decides what is "funny enough"? And what if learning humor comes at the cost of the AI's primary objective (usually answering questions and carrying on conversations)?
My upcoming workshop at ODSC East, and this post, focus on Reinforcement Learning from Feedback (RLF), which involves giving an AI iterative feedback on a task and letting the LLM adapt its own behavior, in the hope that it acts in a more expected manner and earns better feedback over time. The most common application is training instruction-following AIs, which is exactly what I will be going over.
Case Study — Teaching a Llama to chat
In a previous post for an ODSC workshop I gave, I showed off results from one of my go-to RLF case studies — fine-tuning a FLAN-T5 model to write more neutral news summaries: https://opendatascience.com/harnessing-llm-alignment-making-ai-more-accessible.
In this post, I want to show off my second, meatier go-to case study: making a conversational chatbot from a raw pre-trained LLM. Its name is SAWYER (Sinan's Attempt at Wise Yet Engaging Responses) because I wanted a fun name for my LLM too.
"Raw pre-trained" may already sound like an oxymoron, but what I mean is that our base model will be Meta's non-chat-aligned Llama 2 model, which has no ability to answer a question when it comes to us off the shelf.
Our RLF process can be broken down into three steps, each of which is sketched in code below:
Supervised Fine-Tuning (SFT)
- Grab Meta's 7b non-chat model weights: hf.co/meta-llama/Llama-2-7b-hf
- Fine-tune the model on example conversations so it learns to convert its embedded knowledge into productive conversation
Reward Training (RT)
- Get a dataset of scored responses to conversational prompts, indicating which responses humans preferred
- Fine-tune a RoBERTa model to distinguish between preferred and non-preferred responses to a conversation
Reinforcement Learning from Feedback (RLF via PPO)
- Obtain an entirely new set of prompts, each with the final bot response missing
- Let the LLM reply to a batch of prompts at a time and use the reward model to assign a reward to each response (positive is good, negative is bad)
- Let the LLM update its parameters, taking into consideration how much reward it got and how far the updated model has deviated from the original weights
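To make the first two steps concrete, here is a minimal sketch using Hugging Face's transformers and trl libraries (the third step is sketched after Figure 1). The dataset names and column layouts are placeholders, and the exact trainer arguments vary a bit by trl version; the workshop walks through the real data and hyperparameters I used for SAWYER.

```python
# Steps 1 & 2: supervised fine-tuning, then reward model training (illustrative sketch)
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoModelForSequenceClassification,
                          AutoTokenizer, TrainingArguments)
from trl import RewardConfig, RewardTrainer, SFTTrainer

# --- Step 1: Supervised Fine-Tuning (SFT) on example conversations ---
base_model_name = "meta-llama/Llama-2-7b-hf"              # the non-chat Llama 2 weights
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
tokenizer.pad_token = tokenizer.eos_token

chat_data = load_dataset("my_org/chat_conversations")      # hypothetical dataset of formatted conversations

sft_trainer = SFTTrainer(
    model=AutoModelForCausalLM.from_pretrained(base_model_name),
    tokenizer=tokenizer,
    train_dataset=chat_data["train"],
    dataset_text_field="text",                              # assumes each row holds one formatted conversation
    max_seq_length=1024,
    args=TrainingArguments(output_dir="sawyer-sft", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-5),
)
sft_trainer.train()

# --- Step 2: Reward Training (RT) on human preference pairs ---
# Each row pairs a tokenized "chosen" and "rejected" response to the same conversation.
reward_tokenizer = AutoTokenizer.from_pretrained("roberta-base")
reward_model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)
pref_data = load_dataset("my_org/preference_pairs")         # hypothetical pairwise preference dataset

reward_trainer = RewardTrainer(
    model=reward_model,
    tokenizer=reward_tokenizer,
    train_dataset=pref_data["train"],                        # expects chosen/rejected input_ids columns
    args=RewardConfig(output_dir="sawyer-reward", per_device_train_batch_size=8),
)
reward_trainer.train()
```

The SFT step teaches the model the shape of a conversation; the reward model then learns a single scalar score reflecting which of two candidate responses a human preferred.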
The figure below shows the RLF process (the third step) at a very high level:
Figure 1. Our RL loop has SAWYER answering questions, being graded on its performance, and trying again with updated parameters (and yes, that image of a llama is AI-generated)
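In code, the loop in Figure 1 looks roughly like the sketch below, again using trl. The checkpoint names, generation settings, and hyperparameters are placeholders, and the PPOTrainer API differs somewhat across trl versions.

```python
# Step 3: RLF via PPO — generate, grade with the reward model, update (illustrative sketch)
import torch
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

sft_checkpoint = "sawyer-sft"                              # output of the SFT step above
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)
tokenizer.pad_token = tokenizer.eos_token
policy = AutoModelForCausalLMWithValueHead.from_pretrained(sft_checkpoint)      # model being optimized
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(sft_checkpoint)   # frozen copy of the starting weights

reward_pipe = pipeline("text-classification", model="sawyer-reward")  # reward model from step 2 (placeholder path)

prompts = ["User: What is the capital of France?\nAssistant:",
           "User: Tell me something interesting about llamas.\nAssistant:"]     # prompts with the bot reply missing

ppo_trainer = PPOTrainer(
    config=PPOConfig(batch_size=len(prompts), mini_batch_size=1, learning_rate=1e-5),
    model=policy,
    ref_model=ref_model,
    tokenizer=tokenizer,
)
gen_kwargs = {"max_new_tokens": 128, "do_sample": True, "top_p": 0.9,
              "pad_token_id": tokenizer.eos_token_id}

for _ in range(10):                                        # a handful of PPO iterations
    query_tensors = [tokenizer(p, return_tensors="pt").input_ids.squeeze(0) for p in prompts]

    # 1) SAWYER answers each prompt
    response_tensors = ppo_trainer.generate(query_tensors, return_prompt=False, **gen_kwargs)
    responses = [tokenizer.decode(r, skip_special_tokens=True) for r in response_tensors]

    # 2) The reward model grades each prompt + response pair (raw logit used as the reward)
    rewards = [torch.tensor(out["score"])
               for out in reward_pipe([p + r for p, r in zip(prompts, responses)],
                                      function_to_apply="none")]

    # 3) PPO updates the policy, balancing reward against drift from the reference model
    ppo_trainer.step(query_tensors, response_tensors, rewards)
```

The frozen reference model is the key piece mentioned in the outline above: the penalty for drifting from it keeps SAWYER close to its starting weights even while it chases higher rewards.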
The workshop will cover the nitty-gritty of how to code all of this, but for now let's skip to the fun part: the results!
The Results
We will see the full suite of results during our workshop, but some notable examples stand out. Let's pose the same question to our three versions of SAWYER: no alignment whatsoever (base Llama 2 7b), supervised fine-tuning only (SFT), and supervised fine-tuning plus reinforcement learning from feedback (SFT + RLF).
Figure 2. SAWYER learns to answer questions with SFT, but learns to answer them in a more conversational way with RL
We can see a notable difference across all three stages: the base non-chat Llama 2 tries to write a multiple-choice question (I guess?), while our final SAWYER model is the chattiest about the actual answer to the question. That is a relatively cherry-picked example, but even when we zoom out and score a held-out test set of conversations with the reward model (both before and after applying RL to our chatbot), our post-RL model earns higher preference rewards on average, with much lower variance:
Figure 3. We see statistically significant changes in rewards between the SFT-only model and the model aligned via RL
This means that the final SAWYER model, post-RL, seems to be getting higher rewards, more consistently.
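The before-and-after comparison behind Figure 3 boils down to a simple scoring loop: generate replies to a held-out set of conversations with each checkpoint, score every reply with the same reward model, and compare the distributions. A hedged sketch (the checkpoint names and prompts are placeholders) might look like:

```python
# Score held-out conversations with the reward model, before and after RL (illustrative sketch)
import statistics
from transformers import pipeline

reward_pipe = pipeline("text-classification", model="sawyer-reward")    # reward model from step 2 (placeholder path)

test_prompts = ["User: What is a gerund?\nAssistant:",
                "User: Can you summarize the water cycle?\nAssistant:"]  # held-out conversations

def reward_scores(checkpoint, prompts):
    """Generate a reply to each prompt with the given checkpoint and score it with the reward model."""
    chatbot = pipeline("text-generation", model=checkpoint)
    replies = [out[0]["generated_text"] for out in chatbot(prompts, max_new_tokens=128)]
    scores = reward_pipe(replies, function_to_apply="none")              # raw preference logits
    return [s["score"] for s in scores]

for checkpoint in ("sawyer-sft", "sawyer-sft-rlf"):                      # before vs. after RL (placeholder names)
    scores = reward_scores(checkpoint, test_prompts)
    print(checkpoint, "mean reward:", statistics.mean(scores),
          "stdev:", statistics.stdev(scores))
```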
Reinforcement learning doesn't help with everything, though. For example, we can't expect a model that received more rewards for answering in a way humans prefer to be "smarter". I ran these models against some well-known benchmarks (below I'm showing truthful_qa and mmlu[world_religions]) and they got basically the same accuracy score:
Figure 4. Our SFT and SFT + RLF models perform at basically the same level on tasks where the model needs to respond accurately and style is irrelevant (no chain of thought was applied here; I simply asked the model to answer each question directly)
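For the benchmark numbers in Figure 4, a harness such as EleutherAI's lm-evaluation-harness does the heavy lifting. A rough sketch is below; the checkpoint paths are placeholders and the exact task names (e.g., truthfulqa_mc2, mmlu_world_religions) depend on the harness version you install.

```python
# Compare the SFT and SFT + RLF checkpoints on accuracy-style benchmarks (illustrative sketch)
import lm_eval

for checkpoint in ("sawyer-sft", "sawyer-sft-rlf"):        # placeholder checkpoint names
    results = lm_eval.simple_evaluate(
        model="hf",                                        # Hugging Face causal-LM backend
        model_args=f"pretrained={checkpoint}",
        tasks=["truthfulqa_mc2", "mmlu_world_religions"],  # task names vary by harness version
        num_fewshot=0,                                     # answer directly, no chain of thought
    )
    print(checkpoint, results["results"])
```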
I wasn't expecting SAWYER to knock these benchmarks out of the park by any means; I just wanted to show the difference between what RL can and cannot help with. Aligning a model to chat factually and conversationally involves several steps, and each step comes with caveats and nuances. Navigating these waters is challenging without step-by-step guidance, and that is exactly what I will be providing at my upcoming workshop!
Conclusion
The exploration of Reinforcement Learning from Feedback (RLF) as a means to fine-tune Large Language Models (LLMs) towards specific behavioral goals — such as conversational attitudes, neutrality, or diversity — represents a significant leap forward in our quest to make AI more adaptable and responsive to human needs.
Through case studies like SAWYER, we can see firsthand the potential of SFT and RL to transform a pre-trained model into a more engaging and conversational agent. The process, involving a blend of supervised fine-tuning and reinforcement learning, underscores the complexity of aligning AI with nuanced human qualities. The results, while encouraging, also highlight the inherent limitations of current methodologies. While RL can guide models to interact in more human-like ways, it does not inherently increase their factual accuracy or understanding of the world.
Figure 5. An overview of the three elements of LLM alignment
The journey of aligning AI with human expectations is ongoing. The successes and limitations of RL from Feedback signal that while we can nudge AI towards more human-like interactions, the end goal of creating AI that truly understands and reflects human values, humor, and ethics remains a challenging frontier. As we move forward, it is crucial to continue refining our approaches, questioning our objectives, and asking not just whether our AI is aligned, but to what end. The future of AI alignment is promising, yet it demands our thoughtful consideration, creativity, and, most importantly, our unwavering commitment to ethical principles.
For more on:
- Why we are using PPO over DPO
- Evaluating SAWYER’s capabilities
- Tips and techniques I used to fine-tune SAWYER on a single GPU on Colab
- How to properly fine-tune a reward mechanism
- How PPO can help set us up for longer-term success than DPO can
- Why higher rewards aren’t always a good thing
- SAWYER’s opinions on poetry
And much more, come to our workshop at ODSC East in April! See you there.
About the Author:
Sinan Ozdemir is a mathematician, data scientist, NLP expert, lecturer, and accomplished author. He is currently applying his extensive knowledge and experience in AI and Large Language Models (LLMs) as the founder and CTO of LoopGenius, transforming the way entrepreneurs and startups market their products and services.
Simultaneously, he is providing advisory services in AI and LLMs to Tola Capital, an innovative investment firm. He has also worked as an AI author for Addison Wesley and Pearson, crafting comprehensive resources that help professionals navigate the complex field of AI and LLMs.
Originally posted on OpenDataScience.com