Google lets AI depict people again after diversity-borked images
Google is once again allowing its artificial intelligence chatbot, Gemini, to generate images of people, following a viral incident in February in which historically inaccurate images circulated. This time, Google has added safeguards intended to improve accuracy and sensitivity.
New Image Generation Model
The latest image generation model, Imagen 3, will roll out to Google's Gemini AI in the coming days. The model will again support creating images of people, though without photorealism. Google announced the updates in a blog post on August 28: Imagen 3 will specifically block the generation of images depicting identifiable individuals or minors, as well as any content that is excessively violent, gory, or sexual.
Previous Issues
Google had previously suspended Gemini's ability to generate images of people after historically inaccurate images circulated widely, including depictions of Nazi-era German soldiers and America's Founding Fathers as people of color. Online commentators criticized the results, with some accusing Google of being overly "woke." Even Elon Musk, founder of the rival AI firm xAI, weighed in with concerns about the implications of AI diversity programming.
Google acknowledged that while the diversity of images generated by Gemini was positive for its global user base, there were clear missteps. The company pledged to listen to user feedback and continuously improve the AI model.
Introduction of "Gems" Feature
Google also introduced the "Gems" feature to its Gemini chatbot, enabling users to create custom chatbots. First unveiled at the Google I/O conference in May, Gems are similar to OpenAI's custom GPTs: they can be tailored with detailed instructions and specific prompts. Users can refine these chatbots for tasks such as reviewing software code, tutoring in a language, or editing written work.