Google Pledges to Fix Gemini Calling Responses “Unacceptable”

ODSC - Open Data Science
2 min readFeb 29, 2024

It’s been a rough week for Google: shortly after the launch of Gemini, users found major issues with the large language model. Many of its hallucinations, in both text and image responses, showed clear bias and historical inaccuracies.

According to Reuters, CEO Sundar Pichai told employees in a note that the model was producing “biased” and “completely unacceptable” responses. On social media, particularly on platforms such as X, users mocked the model’s wildly inaccurate responses.

In short, the CEO said that the tool’s responses had offended users and showed a clear bias. Pichai continued in the note, “Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement in a wide range of prompts… And we’ll review what happened and make sure we fix it at scale.”

EVENT — ODSC East 2024

In-Person and Virtual Conference

April 23rd to 25th, 2024

Join us for a deep dive into the latest data science and AI trends, tools, and techniques, from LLMs to data analytics and from machine learning to responsible AI.

REGISTER NOW

Due to the required fixes, Google plans to relaunch Gemini in the next few weeks. In the meantime, several functions, such as image generation, have been disabled, as generated content shared online was seen as inappropriate for a model that claims to be unbiased.

This is a clear blow for a company that has spent the last year racing to catch up with OpenAI’s chatbot, ChatGPT. In other areas, Google has been more successful with its AI integrations, such as those in its Google suite of products: Gmail, Docs, and Sheets.

But Gemini’s failure this week is a nasty blow to the tech giant. The company likely needs to revisit its in-house red-teaming and other bias-monitoring SOPs, as the model found it difficult to produce historically accurate images.

Images were only one issue, though. The model was also found to have produced offensive text in response to basic questions about current and past figures. All in all, this debacle is a clear signal to other AI firms of the importance of red-teaming and engineer-focused testing to ensure models behave as expected.

If you’re interested in responsible AI, ODSC East 2024 has an entire track dedicated to providing practical and ethical frameworks for developing AI technology.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Interested in attending an ODSC event? Learn more about our upcoming events here.
