LLMOps — The Next Frontier of MLOps

ODSC - Open Data Science
3 min read · Nov 6, 2023

Recently Sahar Dolev-Blitental, VP of Marketing at Iguazio, joined us for a lightning interview on LLMOps, the next frontier of MLOps. Over the course of nearly an hour, Sahar discussed many facets of the recently emerged field, from its definition to use cases and best practices. Keep reading for key takeaways from the interview, or watch the full video here.

What Are LLMOps?

“The rapid pace of [generative AI] and the fact that everyone is talking about it, makes MLOps and LLMOps much more important than ever.”

Large Language Models present their own range of challenges and complexities. As Sahar notes, the scale of LLMs demands more GPUs and introduces different risks, and there is a stronger focus on efficiency to offset the added resources LLMs require. Nevertheless, Sahar explains, the foundations of MLOps and LLMOps are the same; what separates them is the scale of the models being taken through the lifecycle to deployment.

Use Cases of LLMOps

“Only 2% of the applications today are Gen AI. I mean 90% of the conversation is about Gen AI for sure, but in practice, only about 2% of the applications are Gen AI-driven. So, I think it’s still very early days….”

Although the field is still in its infancy, LLMOps are being used to shepherd generative AI applications into production. During the interview, Sahar explored two use cases: Subject Matter Experts and Call Center Analysis.

Subject Matter Experts are often deployed in healthcare and retail, taking the form of chatbots that are experts on a designated topic. For example, you might find them embedded on a website to help customers directly, or in a support role for customer success teams.

In the case of call center analysis, these applications can perform sentiment analysis, dig deeper into the topics discussed, and identify employees who need more support. In both cases, the applications help employees do their jobs better and increase satisfaction.

Best Practices

“The number 1 kind of tip is that you don’t need to build your own LLM.”

The last topic we'll touch on is best practices, both for smaller organizations looking to implement LLMs and for minimizing bias in models.

For smaller organizations with cost concerns, Sahar recommends building on existing LLMs rather than training your own from scratch, which can substantially reduce training costs. She also suggests keeping the scope of your LLM use case very narrow, which prevents the model from wasting resources on work that does not create value.
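As a deliberately simplified sketch of the narrow-scope advice, one common pattern is to pair an existing hosted LLM with a tightly scoped system prompt and a cheap pre-filter that rejects off-topic questions before they ever reach the model. The prompt wording and keyword list below are illustrative assumptions, not any specific vendor's implementation:

```python
import re

# Illustrative system prompt pinning the assistant to one narrow task.
SYSTEM_PROMPT = (
    "You are a support assistant for a billing product. "
    "Answer billing questions only; politely decline anything else."
)

# Hypothetical keyword list defining the use case's scope.
BILLING_KEYWORDS = {"invoice", "refund", "payment", "subscription", "charge"}

def in_scope(question: str) -> bool:
    """Cheap pre-filter: only forward questions that mention a billing
    term, so the GPU-backed LLM never spends tokens on off-topic work."""
    words = set(re.findall(r"[a-z']+", question.lower()))
    return bool(words & BILLING_KEYWORDS)
```

An in-scope question like "Where is my invoice?" passes the filter and would be sent to the model along with `SYSTEM_PROMPT`, while "What's the weather?" is rejected locally at zero cost.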

To avoid bias, Sahar highlights two important areas. First, data preparation is essential: if the data is biased, the output will be biased. There are several ways to avoid a biased data set:

  • Build a diverse team that represents a wide range of backgrounds
  • Provide a diverse data set from the start
  • Monitor constantly and commit to retraining whenever bias is found
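The monitoring point above can be made concrete. A minimal sketch of an automated bias check (the function names, the "positive" label, and the 0.2 gap threshold are all assumptions for illustration, not a specific product's method) compares a model's positive-output rate across demographic slices and flags the model for review and retraining when the gap grows too large:

```python
from collections import defaultdict

def positive_rate_by_group(predictions):
    """Compute the share of positive predictions per demographic group.

    `predictions` is a list of (group, label) pairs, where label is
    "positive" or "negative".
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in predictions:
        totals[group] += 1
        if label == "positive":
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def bias_alert(predictions, max_gap=0.2):
    """Flag the model for review/retraining when the gap between the
    highest and lowest per-group positive rates exceeds `max_gap`."""
    rates = positive_rate_by_group(predictions)
    return max(rates.values()) - min(rates.values()) > max_gap
```

Running a check like this on a schedule, over fresh production outputs, is one way to turn "constant monitoring" from a slogan into a pipeline step.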

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Interested in attending an ODSC event? Learn more about our upcoming events here.

