AI has been a hot-button issue for the past few years, sitting at the center of debates about art theft and playing a major role in both the SAG and WGA strikes. In early December, Google threw its hat into the AI ring with Gemini, formerly known as Bard. The goal was to compete with OpenAI’s ChatGPT, which skyrocketed in popularity in late 2022. But in trying to address the issues its competition has faced, Google made some blunders that led to Gemini generating pictures of Black and Asian Nazi soldiers.
On February 23rd, Google apologized for the offensive images and attempted to explain what happened. “When we built this feature in Gemini, we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people,” wrote Prabhakar Raghavan, Senior Vice President. “And because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).”
Raghavan also explained that if users prompt Gemini with a specific group of people, like “a Black teacher,” the program should reflect that. The issue is that not all users will be that specific, and machines don’t really understand nuance and context. For example, the prompt that generated some of these offensive images was “illustration of a 1943 German soldier.” Gemini also spat out images of the Founding Fathers as people of color, which is obviously historically inaccurate.
“So what went wrong? In short, two things,” wrote Raghavan. “First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.”
While Google is pinning the issue on its attempt to be inclusive, conservative critics, most prominently Elon Musk, are decrying the software. Musk made one post on X (formerly Twitter) calling Gemini “racist, anti-civilizational programming” and another calling it “sexist.” For full transparency, though, he has his own competing AI program as a part of X, which he owns. So his attacks on a rival program probably aren’t purely political.
For now, Google has turned off Gemini’s ability to generate images of people. While the company gave no timeline for when the feature may be restored, it did assure users that it will only be reinstated after “extensive testing.”