Google Addresses Backlash Over Gemini AI's Inaccurate Imagery

Google has responded to the controversy over inaccurate imagery produced by Gemini AI, rolling out updates that improve functionality while acknowledging concerns over biased historical depictions.

Enhancements to Gemini AI

Amid the controversy surrounding inaccurate imagery generated by Gemini AI, Google rolled out two updates to the model. The first, applied to Gemini Advanced, lets users edit and run Python code snippets directly within the interface, making it easier to experiment with generated code and verify that it actually works.
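To illustrate, the kind of short snippet a user might generate, tweak, and re-run inside the interface could look like the following. This is a hypothetical example, not code from Google's announcement:

```python
# Hypothetical example of a short, self-contained snippet a user might
# edit and re-run in Gemini Advanced to check its behavior.
def word_frequencies(text: str) -> dict[str, int]:
    """Count how often each word appears, ignoring case."""
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("The quick fox jumps over the lazy dog"))
```

Being able to run such a snippet in place, rather than copying it into a separate editor, is the workflow improvement the update targets.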

Business Plan Updates

Google also updated its business and enterprise plans, giving subscribers access to Gemini 1.0 Ultra, one of the company's most advanced models. These plans now include stronger data-protection measures that prevent Gemini models from training on users' conversations, addressing concerns raised by major companies such as Samsung.

Continued Image Concerns

Despite these updates, criticism persists over Gemini AI's image output, particularly its historical depictions. Jack Krawczyk, a product lead on Gemini at Google, acknowledged the issue and said the team is working on immediate fixes. Social media users have voiced frustration over the inaccuracies, with some calling Google's handling of the situation embarrassing.

Broader Industry Discussion

The debate over biased AI extends beyond Gemini: users have pointed to similar issues in other models, such as OpenAI's ChatGPT. Elon Musk weighed in on the conversation, discussing his AI model Grok, which is set to release upgraded versions soon. Grok has also drawn scrutiny for sharing its name with Groq, an AI chipmaker, leading to confusion within the industry.