Can’t-Miss Sessions Announced for the Free Generative AI Summit on July 20
Our first-ever Generative AI Summit is this week. We couldn’t be more excited to bring together a diverse group of experts, academics, and industry leaders to discuss this watershed technology. Check out a few of the sessions you can attend during the summit.
Recent Advances in Diffusion Generative Models
Stefano Ermon PhD | Assistant Professor @ Stanford University
This session will present an alternative basis for generative models: the vector field of gradients of the data distribution (the score). This framework allows flexible architectures and requires neither sampling during training nor adversarial training methods. Additionally, score-based diffusion generative models enable exact likelihood evaluation through connections with neural ODEs, achieving state-of-the-art sample quality and excellent likelihoods on image datasets.
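To make the idea concrete, here is a minimal sketch of score-based sampling, not the session's actual method: for a 1-D Gaussian the score is known in closed form, so Langevin dynamics can draw samples using only that gradient field. All parameter values here (mean, step size, particle count) are illustrative assumptions.

```python
import math
import random

# Score-based sampling sketch: for a 1-D Gaussian N(mu, sigma^2),
# the score (gradient of the log-density) has a closed form:
# score(x) = (mu - x) / sigma^2. Langevin dynamics draws samples
# using only this score -- no adversarial training involved.
random.seed(0)
mu, sigma = 3.0, 1.0   # target distribution (illustrative values)
step = 0.1             # Langevin step size
n_steps = 500
n_particles = 2000

def score(x):
    """Gradient of log N(mu, sigma^2) at x."""
    return (mu - x) / sigma**2

# Start particles from a standard normal, then iterate:
# x <- x + step * score(x) + sqrt(2 * step) * noise
particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
for _ in range(n_steps):
    particles = [
        x + step * score(x) + math.sqrt(2 * step) * random.gauss(0.0, 1.0)
        for x in particles
    ]

sample_mean = sum(particles) / n_particles  # should approach mu = 3.0
```

In a real score-based diffusion model, the closed-form `score` above is replaced by a neural network trained with score matching across noise levels.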
On Brains, Waves, and Representations
Max Welling PhD | Distinguished Scientist @ Microsoft Research
This talk will explore how to build meaningful inductive biases into models for spatiotemporal data domains, such as video. The method discussed generalizes the idea of equivariance to a much looser, learnable constraint, and adds a prior that latent variable representations should evolve as PDEs, in particular as waves. All in all, we argue that this brain-inspired inductive bias might aid learning on sequence data.
Generative Adversarial Networks 101
Daniel Voigt Godoy | Data Scientist and Author
This session will teach you the basics of Generative Adversarial Networks, the famous GANs, from the ground up: autoencoders, latent spaces, generators, discriminators, GANs, DCGANs, WGANs, and more. The main goal is to show you how GANs work; the session will also cover latent spaces and how to use them to generate synthetic data, along with implementation and training details such as the Wasserstein distance and gradient penalty.
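As a taste of one of those training details, here is a toy sketch of the WGAN gradient penalty, not code from the session itself: the penalty pushes the norm of the critic's gradient toward 1 at points interpolated between real and generated samples. The linear critic and sample values below are assumptions chosen so the result can be checked by hand.

```python
import math
import random

# WGAN-GP sketch: penalty = (||grad D(x_hat)|| - 1)^2, where x_hat is
# a random interpolation between a real and a generated sample.
# The critic here is a fixed linear function, so its gradient is just
# its weight vector and ||grad D|| = ||w|| = 1.0 by construction.
w = [0.6, 0.8]  # toy critic weights; ||w|| = sqrt(0.36 + 0.64) = 1.0

def critic(x):
    """Toy linear critic D(x) = w . x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def grad_norm(f, x, eps=1e-5):
    """Finite-difference estimate of ||grad f(x)||."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return math.sqrt(sum(gi * gi for gi in g))

random.seed(1)
real = [1.0, 2.0]
fake = [-0.5, 0.3]
t = random.random()  # random interpolation coefficient in [0, 1)
x_hat = [t * r + (1 - t) * f for r, f in zip(real, fake)]

penalty = (grad_norm(critic, x_hat) - 1.0) ** 2
```

In practice the critic is a neural network and the gradient comes from automatic differentiation; the penalty term is added to the critic's loss with a weighting coefficient.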
Generative Large Language Models and Hallucinations
Chandra Khatri | Co-Founder @ Got It AI
This session will address one of the significant challenges faced by generative Large Language Models (LLMs): their tendency to ‘hallucinate’, confidently producing inaccurate information. The talk will delve into the intricacies of the hallucination problem in LLMs and shed light on effective strategies to overcome it.
BloombergGPT: A Large Language Model for Finance
Ozan Irsoy | Research Scientist @ Bloomberg LP
This session will examine BloombergGPT, a 50-billion parameter language model that is trained on a wide range of financial data. From data collection to evaluation, you’ll have the opportunity to learn about the process of building this LLM for finance.
Pretrain Vision and Language Foundation Models on AWS
Emily Webber | Principal ML Solutions Architect @ AWS
In this session, you’ll dive into the topic of foundation models with a focus on both beneficial and challenging aspects of this technology today. In particular, you’ll explore technologies available on AWS that help you pre-train the foundation models of the future. From distributed training to custom accelerators, reward modeling to reinforcement learning, learn how to create your own state-of-the-art models.
Matching Identities Using Large Language Models
Catherine Havasi | Chief of Innovation @ Babel Street
Many applications in the real world depend on the capacity to efficiently search through databases of personal or corporate names and determine which records likely refer to the same entity. A single individual can be referred to in a wide variety of ways: name variants, different scripts, nicknames, or aliases. This session will introduce a new name-matching method based on a byte-level large language model, fine-tuned to embed personal names in a vector space for name-retrieval tasks.
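The retrieval mechanics can be sketched in a few lines. The session's method uses a fine-tuned byte-level LLM as the embedder; here we substitute a much simpler byte-trigram hashing embedding (an assumption for illustration only) to show how names become vectors and how candidates are ranked by cosine similarity.

```python
import math

# Toy byte-level name matching: embed each name as a fixed-size
# vector of hashed byte trigrams, then rank candidates by cosine
# similarity. A real system would use a learned embedding model.
DIM = 256

def embed(name):
    """Map a name to a DIM-dim vector of hashed byte trigrams."""
    data = name.lower().encode("utf-8")
    vec = [0.0] * DIM
    for i in range(len(data) - 2):
        h = 0
        for b in data[i:i + 3]:
            h = (h * 131 + b) % DIM  # simple deterministic hash
        vec[h] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_match(query, candidates):
    """Return the candidate most similar to the query name."""
    q = embed(query)
    return max(candidates, key=lambda c: cosine(q, embed(c)))

names = ["Catherine Havasi", "Ozan Irsoy", "Maya Ackerman"]
match = best_match("Katherine Havasi", names)
```

Because the comparison happens at the byte level, spelling variants like "Katherine" vs. "Catherine" still share most of their trigrams, which is the same intuition that motivates byte-level modeling for name retrieval.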
Machines vs. Minds: Navigating the Future of Generative AI
Maya Ackerman | CEO and Co-Founder @ WaveAI
This session will explore the essence of generative AI, drawing on comparisons to our own brains. The discussion will delve into the unique strengths of humans and machines, and explore the potential for effective collaboration between us and AI systems.
Sign me up!
Join any one of these sessions, and many others, on July 20th at the free, virtual Generative AI Summit. You can check out all of our speakers and sessions here.
And don’t miss out on our upcoming conference ODSC West 2023 for training sessions, workshops, and more on generative AI and LLMs.
Originally posted on OpenDataScience.com
Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.