Mastering AI Applications: What to Expect from the AI Builders Summit Schedule
The AI Builders Summit is a four-week journey into the cutting-edge advancements in AI, designed to equip participants with practical skills and insights across four pivotal areas: Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), AI Agents, and the art of building comprehensive AI systems.
Kicking off on January 15th, Week 1 focuses on LLMs: attendees will learn how to select, optimize, and fine-tune models for specific tasks while implementing advanced techniques like memory tuning and speculative decoding. Week 2 dives into RAG, where participants will explore building scalable systems, multimodal AI assistants, and pipelines that transform unstructured data into actionable insights. Week 3 shifts to AI Agents, offering hands-on experience in designing workflows, developing multi-agent systems, and integrating agentic techniques to solve complex, multi-step tasks. Finally, Week 4 ties it all together, guiding participants through practical builder demos, from cloning compound AI architectures to building production-ready applications.
We’ve selected some of the training highlights below, with more sessions added weekly.
Selected Training Sessions for Week 1 — LLMs (Wed Jan 15 — Thu Jan 16)
Cracking the Code: How to Choose the Right LLM for Your Project
Ivan Lee, CEO and Founder of Datasaur
Selecting the right AI model is a strategic process that requires careful evaluation and optimization to ensure project success. In this hands-on session, attendees will learn practical techniques like model testing across diverse scenarios, prompt engineering, hyperparameter optimization, fine-tuning, and benchmarking models in sandbox environments.
Fine-tune Your Own Open-Source SLMs
Devvret Rishi, CEO of Predibase, and Chloe Leung, ML Solutions Architect at Predibase
Discover how to cost-effectively customize open-source small language models (SLMs) to outperform GPT-4 on various tasks. This hands-on workshop introduces efficient fine-tuning techniques using LoRA eXchange, enabling participants to fine-tune task-specific models for under $8 each. Learn best practices, serve multiple adapters dynamically on a single GPU, and accelerate inference with Turbo LoRA speculative decoding.
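To give a sense of what adapter-based fine-tuning looks like in code, here is a minimal LoRA sketch using the Hugging Face PEFT library rather than the Predibase LoRA eXchange stack covered in the session; the base model name and hyperparameters are illustrative placeholders.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face PEFT (illustrative only;
# the session itself uses Predibase's LoRA eXchange stack).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"  # placeholder base SLM
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Attach low-rank adapter weights; only these (a tiny fraction of parameters) are trained.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here, train with a standard transformers Trainer on your task-specific data,
# then save just the adapter so many adapters can share a single base model on one GPU.
# model.save_pretrained("adapters/my-task")
```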
Building High Accuracy LLMs Using a Mixture of Memory Experts
Ryan Compton, Solutions Engineer at Lamini
LLMs are powerful, but their lack of domain-specific expertise can limit their utility for businesses. Fine-tuning bridges this gap, and this workshop introduces an advanced method called Memory Tuning. By combining Low-Rank Adaptation (LoRA) and Mixture of Experts (MoE) into the Mixture of Memory Experts (MoME) model, Memory Tuning optimizes LLMs for perfect recall of specific facts while maintaining generalization capabilities. Through hands-on exercises, participants will learn to implement MoME at scale, create and evaluate datasets, and iteratively tune models to achieve high accuracy. Using the open-weight Llama 3.2 model and the Lamini platform, attendees will gain practical skills to adapt LLMs for their unique use cases.
Selected Training Sessions for Week 2 — RAG (Wed Jan 22 — Thu Jan 23)
Database Patterns for RAG: Single Collections
JP Hwang, Technical Curriculum Developer at Weaviate
Scaling RAG systems requires strategic architectural decisions to balance performance, cost, and maintainability. This workshop delves into the design and implementation of single-collection versus multi-tenant vector database architectures, focusing on their use cases and trade-offs. Participants will explore query patterns, tenant management, index configuration, and optimization techniques. Hands-on exercises, supported by Jupyter Notebooks and Docker, will guide attendees in implementing and managing these architectures effectively. By the session’s end, participants will have actionable knowledge to tailor RAG implementations to their specific requirements, ensuring scalability and efficiency.
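As a rough illustration of the two patterns, the sketch below uses the Weaviate Python client (v4) to create a single shared collection alongside a multi-tenant collection; the collection and tenant names are placeholders, and the workshop's own notebooks may structure this differently.

```python
# Sketch of both patterns with the Weaviate Python client (v4); names are placeholders.
# BM25 keyword search is used to keep the sketch free of embedding-model setup;
# a real RAG system would query with near_text / near_vector instead.
import weaviate
from weaviate.classes.config import Configure
from weaviate.classes.tenants import Tenant

client = weaviate.connect_to_local()

# Pattern 1: one shared collection; callers isolate data with metadata filters.
client.collections.create(name="DocsShared")

# Pattern 2: multi-tenancy; each tenant gets its own isolated shard and index.
client.collections.create(name="DocsPerTenant",
                          multi_tenancy_config=Configure.multi_tenancy(enabled=True))
docs = client.collections.get("DocsPerTenant")
docs.tenants.create([Tenant(name="customer-a"), Tenant(name="customer-b")])

# Reads and writes are scoped to one tenant, so indexes stay small and isolated.
tenant_a = docs.with_tenant("customer-a")
tenant_a.data.insert({"text": "Refunds are accepted within 30 days of purchase."})
print(tenant_a.query.bm25(query="refund policy", limit=3).objects)

client.close()
```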
Inside Multimodal RAG
Suman Debnath, Principal AI/ML Advocate at Amazon Web Services
This session explores the architecture of building a multimodal AI assistant using advanced RAG techniques integrated with LlamaIndex for efficient cross-modal data retrieval. Participants will address challenges like incomplete data, reasoning mismatches, and handling diverse inputs such as text, tables, and images. Through hands-on exercises, attendees will learn to structure and index datasets with LlamaIndex, leverage LangChain-based embeddings, and implement query decomposition and fusion strategies.
From Reviews to Insights: RAG and Structured Generation in Practice
Cameron Pfiffer, Developer Relations Engineer at .txt
Extracting structured data from unstructured text is a common challenge, and this workshop demonstrates how to solve it by combining RAG with structured generation. Participants will learn to build a pipeline that uses semantic search to locate relevant reviews and structured generation via the Outlines library to extract data points like categories, issues, and sentiment into well-defined schemas. By the end of the session, attendees will know how to transform unstructured data into reliable, queryable formats, optimize LLM outputs for cost-effective analysis, and apply structured data to answer natural-language queries.
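For a taste of the structured-generation half of that pipeline, here is a minimal sketch using the pre-1.0 Outlines API with a Pydantic schema; the model name and review text are placeholders, and the semantic-search step that would normally supply the review is omitted.

```python
# Sketch of the structured-generation step with Outlines (pre-1.0 API); in the
# full pipeline the review text would come from a semantic-search step.
from enum import Enum
from pydantic import BaseModel
import outlines

class Sentiment(str, Enum):
    positive = "positive"
    negative = "negative"
    neutral = "neutral"

class ReviewInsight(BaseModel):
    category: str
    issues: list[str]
    sentiment: Sentiment

model = outlines.models.transformers("Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model
extract = outlines.generate.json(model, ReviewInsight)  # output constrained to the schema

review = "The battery dies in two hours and support never answered my emails."
insight = extract(f"Extract the product category, issues, and sentiment:\n{review}")
print(insight.category, insight.issues, insight.sentiment)
```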
Evaluating Retrieval-Augmented Generation and LLM-as-a-Judge Methodologies
Stefan Webb, Developer Advocate at Zilliz
This workshop provides a comprehensive guide to designing and evaluating optimal RAG systems. Attendees will learn to build a complete RAG evaluation pipeline using open-source tools like LangChain, Milvus, HuggingFace, and RAGAS. Through hands-on modules, participants will explore task-based and introspection-based evaluation strategies, calculate performance metrics, and use advanced techniques such as LLM-as-a-Judge.
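As a small illustration of what a RAGAS evaluation pass can look like, the sketch below scores a single hand-written record; a real pipeline would populate these fields from your RAG system's traces, and several of the metrics use an LLM as the judge under the hood (so an API key is assumed). Column names follow recent RAGAS versions.

```python
# Sketch of a RAGAS evaluation pass over one made-up record.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall

records = {
    "question": ["What is the refund window?"],
    "answer": ["Purchases can be refunded within 30 days."],
    "contexts": [["Our policy allows refunds within 30 days of purchase."]],
    "ground_truth": ["Refunds are accepted within 30 days of purchase."],
}

result = evaluate(
    Dataset.from_dict(records),
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(result)  # per-metric scores; LLM-as-a-Judge metrics call out to a judge model
```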
Selected Training Sessions for Week 3 — AI Agents (Wed Jan 29 — Thu Jan 30)
LLM Engineering Masterclass: Select and Apply LLMs Using RAG, Fine-tuning, and Agentic AI
Edward Donner, Co-founder and CTO of Nebula.io
This hands-on workshop empowers participants to tackle real-world business problems using advanced LLMs and build a cutting-edge Agentic AI solution. Attendees will explore leading open-source and closed LLMs, learn to compare models via benchmarks and leaderboards, and implement techniques like multi-shot prompting, RAG, and fine-tuning. The workshop culminates in building multiple AI agents, including a QLoRA fine-tuned LLM, a RAG pipeline with Chroma, and an RSS feed analyzer, using tools like HuggingFace, Gradio, and LangChain.
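To illustrate just the Chroma retrieval step from such a pipeline, here is a minimal sketch; the documents and query are made up, and the generation step that would feed the retrieved passages to an LLM is omitted.

```python
# Sketch of Chroma-backed retrieval only; documents and query are placeholders.
import chromadb

client = chromadb.Client()  # in-memory; use PersistentClient for on-disk storage
docs = client.create_collection("company_docs")

docs.add(
    ids=["doc1", "doc2"],
    documents=["Nebula.io matches people to roles using LLMs.",
               "The RSS analyzer summarizes incoming feed items."],
)

# Retrieve the passages most similar to the user question, then pass them to the
# LLM as context (the generation step is omitted here).
hits = docs.query(query_texts=["What does the RSS analyzer do?"], n_results=2)
print(hits["documents"][0])
```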
Modern AI Agents from A-Z: Building Agentic AI to Perform Complex Tasks
Sinan Ozdemir, AI & LLM Expert, Author, and Founder + CTO of LoopGenius
This AI Builders session offers a deep dive into creating AI Agents using modern frameworks like CrewAI and LangGraph, with the goal of empowering participants to build systems capable of solving complex, multi-step tasks. Attendees will learn to design Agentic AI workflows that integrate APIs, execute code, generate images, and make advanced decisions. Over two hours, participants will gain hands-on experience with agent development, including setting up environments, creating chain-of-thought prompts, and optimizing agent performance.
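As a minimal sketch of an agent built with LangGraph, the example below wires a single placeholder tool into a prebuilt ReAct-style agent; a workshop-grade agent would add API calls, code execution, and image-generation tools.

```python
# Minimal LangGraph agent sketch; the tool and model are placeholders.
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Prebuilt ReAct-style loop: the model reasons, decides when to call tools, and answers.
agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools=[word_count])

state = agent.invoke({"messages": [("user", "How many words are in 'build agents that act'?")]})
print(state["messages"][-1].content)
```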
Building Agentic RAG with LlamaIndex Workflows
Laurie Voss, VP of Developer Relations at LlamaIndex
This tutorial focuses on using LlamaIndex, an open-source framework for building agentic RAG applications. Participants will start with the basics of working with LLMs in LlamaIndex, including data loading, parsing, embedding, and storage. They’ll then create a RAG pipeline and build an agent within LlamaIndex to perform information retrieval. Finally, the session will guide attendees through combining agents into full-scale workflows that utilize agentic techniques for reflection and error correction.
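For orientation, here is what the basic load-index-query loop looks like in LlamaIndex; the data directory is a placeholder, and the session goes further by wrapping steps like this inside agents and Workflows.

```python
# Minimal LlamaIndex RAG sketch; "data" is a placeholder directory of local files.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # load and parse local files
index = VectorStoreIndex.from_documents(documents)     # embed and store chunks

query_engine = index.as_query_engine()                 # retrieval + answer synthesis
print(query_engine.query("What are the key findings in these documents?"))
```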
Selected Training Sessions for Week 4 — AI Builders (Wed Feb 5 — Thu Feb 6)
AI Agents — A Practical Implementation
Valentina Alto, Azure Specialist — Data and Artificial Intelligence at Microsoft
This AI Builders session explores the growing trend of AI Agents in generative AI applications, focusing on the key components that make up these specialized entities: LLMs, prompts, memory, and tools. The session begins with an overview of the evolving landscape of GenAI, covering key trends from RAG to Agents and Multi-Agents. Next, it provides a deep dive into the anatomy of an AI Agent, examining its definition, role, and practical applications, including how LLMs, memory, and tools work together. Finally, participants will build their own AI Agent from scratch using Python and AI orchestrators like LangChain.
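As a rough sketch of those four building blocks wired together with LangChain, the example below pairs a chat model and a stubbed tool with a prompt template and in-memory chat history; the tool, model, and messages are illustrative placeholders rather than the session's actual code.

```python
# The four agent building blocks (LLM, prompt, memory, tools) in LangChain; all names are placeholders.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_core.chat_history import InMemoryChatMessageHistory

@tool
def get_order_status(order_id: str) -> str:
    """Look up the status of an order (stubbed for illustration)."""
    return f"Order {order_id} shipped yesterday."

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_order_status])   # LLM + tools
prompt = ChatPromptTemplate.from_messages([                            # prompt
    ("system", "You are a support agent. Use tools when helpful."),
    ("placeholder", "{history}"),
    ("user", "{question}"),
])
memory = InMemoryChatMessageHistory()                                   # memory

response = (prompt | llm).invoke({"history": memory.messages,
                                  "question": "Where is order 42?"})
memory.add_user_message("Where is order 42?")
memory.add_ai_message(response.content or "(tool call issued)")
print(response.tool_calls or response.content)
```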
Cloning NotebookLM with Open-Weight Models
Niels Bantilan, Chief ML Engineer at Union.AI
In this workshop, attendees will learn to create a minimal clone of Google’s NotebookLM, a compound AI system that integrates multiple models, such as LLMs and text-to-speech (TTS), to perform tasks like generating audio podcasts from input text documents. The session begins with an introduction to compound AI systems and the high-level architecture of NotebookLM. Participants will then use open-weight models like Llama 3 for text generation and Parler-TTS for podcast creation, all within Union Serverless — a platform for building and deploying AI products. Finally, attendees will deploy the application using Gradio on Union, ensuring it is production-ready with GitHub Actions.
AI Builder’s Toolkit — Demo Day — Multiple 15-minute sessions
Each of these 15-minute AI Builders sessions is about rapidly building innovative AI-driven solutions. These hands-on demo workshops and live demonstrations show attendees how to use cutting-edge tools to create impactful AI applications in real time. It’s not just about mastering tools — it’s about applying them to solve real-world problems and unlock new possibilities in AI.
Participants will dive into building real-world AI applications such as chatbots, AI agents, RAG systems, recommendation engines, and data pipelines. They’ll explore multimodal applications, dashboards, workflow automation, and personalized assistants.
Sign me up!
These are only the confirmed AI Builders sessions, so stay tuned for even more to be added soon. In addition to all of these hands-on workshops, we’ll also have Office Hours where you can talk one-on-one with AI gurus.
If you register for the AI Builders Summit today, you can take $500 off the full price and get your pass for only $299!
You can also register for ODSC East and attend the Summit for free while getting a huge discount on your East pass: save up to $2,500 on your ODSC East pass AND get this month-long Summit at no extra cost.