Highlighting More Technical AI Training Sessions for ODSC Europe 2024
ODSC Europe 2024 is coming up in a little over a month, on September 5th and 6th, and we're adding new sessions to the schedule every day. Here's a selection of a few more sessions coming to ODSC Europe, featuring topics like optimizing LLMs, evaluating generative AI, and data visualization.
How to Make LLMs Fit into Commodity Hardware Again: A Practical Guide
Oliver Zeigermann | Machine Learning Engineer | Techniker Krankenkasse
Christian Hidber, PhD | Consultant | bSquare
In this hands-on workshop, we will show different approaches to making powerful LLMs fit onto affordable GPUs (like a T4) or, in special cases, even run on a CPU. We will round this out by showing you how to evaluate and compare the performance of these small LLMs.
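To see why this matters, a quick back-of-the-envelope calculation (a simple illustration, not part of the workshop materials) shows how quantization shrinks the memory needed just to hold a model's weights — the difference between overflowing and fitting a 16 GB T4. Note that real inference also needs memory for activations and the KV cache.

```python
# Rough VRAM needed to hold a model's weights alone, by precision.
# Illustrative arithmetic only; actual memory use is higher in practice.
def weight_memory_gb(n_params_billions: float, bits_per_param: int) -> float:
    bytes_total = n_params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{weight_memory_gb(7, bits):.1f} GB")
# 7B model at 16-bit: ~14.0 GB
# 7B model at 8-bit: ~7.0 GB
# 7B model at 4-bit: ~3.5 GB
```

At 16-bit precision a 7B-parameter model's weights alone exceed what comfortably fits on a 16 GB T4 once activations are accounted for, while 4-bit quantization leaves ample headroom.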
Safety Evaluation of Generative AI
Laura Weidinger | Senior Research Scientist | Google DeepMind
How do we know when an AI system is “safe”? Several ethical and social risks from generative AI have been observed in real-world applications, and more are anticipated as these systems become more capable. In response, the research community and the public are building approaches to measuring and evaluating these risks. In this talk, Laura will canvass the current state of safety evaluation of generative AI, identify key gaps, and propose a way forward.
LLM Application from Inception to Production
Leonardo De Marchi | VP of Labs | Thomson Reuters
This workshop explores how artificial intelligence can be used to generate creative outputs and aims to inspire technical audiences to apply their skills in new and creative ways. It will also include a series of coding exercises designed to give participants hands-on experience working with AI models.
Decoding LLMs: Evaluation is all you need!
Jayeeta Putatunda | Senior Data Scientist, NLP & Gen AI Manager | Fitch Ratings
One of the primary challenges in LLM evaluation is the absence of standardized benchmarks that comprehensively capture the capabilities of these models across diverse tasks and domains. Another is the black-box nature of LLMs, which makes it difficult to understand their decision-making processes and identify biases. In this talk, we address fundamental questions such as what constitutes effective evaluation metrics in the context of LLMs and how these metrics align with real-world applications.
Multi-agent Systems in the Era of LLMs: Progress and Prospects
Michael Wooldridge, PhD | Professor of Computer Science | University of Oxford
The original metaphor for the field of multi-agent systems was that of a team of experts, each with distinct expertise, cooperating to solve a problem that was beyond the capabilities of any individual expert. “Cooperative distributed problem solving”, as it was originally called, eventually broadened to consider all issues that arise when multiple AI systems interact. The emergence and dramatic success of Large Language Models (LLMs) has given new life to the old dream. A raft of LLM-powered agent frameworks have become available, and multi-agent LLMs are increasingly being adopted. In this talk, we’ll survey the main approaches, opportunities, and outstanding challenges for multi-agent systems in the new world of LLM-based AI.
Cal Al-Dhubaib | Head of AI and Data Science | Further
With a focus on business and technical leaders responsible for bringing AI solutions to life, we will draw from best practices in designing and deploying AI solutions across mission-critical sectors such as healthcare, energy, and financial services, where trust is critical.
Participants will walk away with some practical tools to lead their organizations in developing and deploying AI solutions that are not only technically sound but also widely trusted.
AI Development Lifecycle: Learnings of What Changed with LLMs
Noé Achache | Engineering Manager & Generative AI Lead | Sicara
In this talk, we will explore the lessons learned from building products that are typical use cases of these technologies, such as financial document analysis automation and a RAG (retrieval augmented generation) tool for a medical company.
We will focus mostly on essential steps of the development workflow that are often overlooked: dataset collection, evaluation, and monitoring. New tools for monitoring and for manual/automatic evaluation (LLM as a judge) have been released to ease the implementation of these best practices in the context of LLMs and to help product experts assist the technical team in building these products.
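The LLM-as-a-judge pattern mentioned above can be sketched in a few lines: a judge model grades each answer against a rubric prompt, and you aggregate its verdicts. This is a minimal sketch, not the speaker's implementation; `judge_fn` is a hypothetical callable you would back with your LLM provider, and here we plug in a trivial keyword stub so the harness runs standalone.

```python
from typing import Callable

# Rubric prompt the judge model receives for each sample.
JUDGE_PROMPT = (
    "You are grading an answer for factual accuracy.\n"
    "Question: {question}\nAnswer: {answer}\n"
    "Reply with PASS or FAIL."
)

def evaluate(samples: list[dict], judge_fn: Callable[[str], str]) -> float:
    """Return the fraction of answers the judge marks PASS."""
    passes = 0
    for s in samples:
        prompt = JUDGE_PROMPT.format(question=s["question"], answer=s["answer"])
        verdict = judge_fn(prompt)
        if verdict.strip().upper().startswith("PASS"):
            passes += 1
    return passes / len(samples)

# Stub judge for demonstration: passes any prompt mentioning "Paris".
# In practice judge_fn would call a real LLM with the rubric prompt.
def stub_judge(prompt: str) -> str:
    return "PASS" if "Paris" in prompt else "FAIL"

samples = [
    {"question": "Capital of France?", "answer": "Paris"},
    {"question": "Capital of France?", "answer": "Lyon"},
]
print(evaluate(samples, stub_judge))  # 0.5
```

Swapping the stub for a real model call is the only change needed to turn this into automatic evaluation over a collected dataset.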
Develop LLM Powered Applications with LangChain and LangGraph
Eden Marco | LLM Specialist | Google
In this workshop, we will dive deep into the advanced capabilities of LangGraph, exploring its integration with LangChain to create robust, efficient, and versatile LLM solutions. Our agenda includes a comprehensive introduction to key components such as LCEL, multi-agents, reflection agents, Reflexion agents, and more. Participants will also get hands-on experience with advanced RAG architectures.
Reproducibility FTW: Collaboratively Solve a Hard LLM Problem, Live!
Thomas Capelle | Machine Learning Engineer | Weights & Biases
Unleash the full potential of your machine learning models and LLMs! We will explore how to do reproducible research and how to leverage tools like Weights & Biases to achieve it. We will demo some cool fine-tuning projects, along with the metrics and advanced features you can instrument to get the most out of your compute. For the hands-on part, we will collaboratively tackle a thrilling “words” problem while introducing our new LLM tracing tool: Weave. We will work together on improving a baseline solution and discover a seamless way to innovate and collaborate!
A Practical Introduction to Data Visualization for Data Scientists
Robert Kosara | Data Visualization Developer | Observable
How does data visualization work, and what can it do for you? In this workshop, data visualization researcher and developer Robert Kosara will teach you the basics of how and why to visualize data, and show you how to create interactive charts using open-source tools.
We’ll build all these visualizations using the open-source Observable Plot framework, but the concepts apply similarly to many others (such as ggplot, Vega-Lite, etc.). To follow along, you’ll need a computer with an editor (such as Visual Studio Code) as well as a download of the project we provide (see the prerequisites).
Sign me up!
That’s just ten of the dozens of sessions on the schedule! We’ll be adding more as we get closer to the conference, so be sure to keep an eye out. And remember to get your pass soon: our limited-time offer of 60% off any ODSC Europe in-person or virtual pass won’t last forever.