
Google CEO Sundar Pichai. Photo by World Economic Forum, CC BY-NC-SA 2.0.

Sometimes even the most prominent companies make unforced errors that harm their reputations and linger in the minds of customers, consumers and employees. Such missteps can also hinder the recruitment of future talent.

A number of recent controversies have surrounded Google’s artificial intelligence products, which have served up “hallucinations,” or misleading results, to users. On social media, people shared screenshots of the search engine’s new “AI Overview” feature telling users they could use glue to stick cheese to pizza and eat one rock per day.

After the AI-generated results went viral, Google reportedly scrambled to manually remove specific searches. A company spokesperson told The Verge that the questionable answers were appearing on “generally very uncommon queries, and aren’t representative of most people’s experiences.”

Alphabet CEO Sundar Pichai said in a video interview with the media outlet last week that the issue of hallucinations remains an “unsolved problem.”

Damage Done To Google’s Brand

The responses generated by Google’s AI Overview highlight the technology’s weaknesses: biases in its training data, an inability to detect satire and the harmful misinformation it can disseminate to users conducting search queries.

This is not the first time this year that a Google AI product rollout has come under fire for erroneous outputs. Its Gemini image generation capability drew online fury in February for producing historically inaccurate and offensive images, such as depictions of Black Vikings, as well as racially diverse Nazi soldiers and American Founding Fathers.

Facing backlash, Google issued an apology and temporarily suspended Gemini’s ability to generate images of people. In the aftermath of the controversy, Pichai sent an internal memo acknowledging that Gemini’s responses “offended users and exhibited bias,” stating unequivocally that such behavior is “completely unacceptable and a mistake on our part.”

The chief executive added, “We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.”

Google’s AI blunders significantly tarnish its reputation as a technology leader, eroding public trust, sparking controversies around bias and censorship, and raising doubts about its ability to develop responsible and reliable AI that avoids unintended societal harms.

The inaccurate outputs from Gemini have fueled accusations that Google is injecting its own ideological biases into its AI tools and engaging in censorship of certain viewpoints.

An Overcorrection

Google acknowledged that Gemini was “overcompensating” for diversity by injecting people of various races into prompts where it was historically inaccurate. “Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” wrote Prabhakar Raghavan, a senior vice president at Google, in a company statement.

Raghavan admitted that “the model became way more cautious than we intended” in trying to increase representation, producing historically revisionist outputs he called “embarrassing and wrong.”

The massive datasets used to train these AI models may contain inherent biases and stereotypes present in online content and historical data. This can lead to the models producing biased or offensive outputs, like inaccurate depictions of historical figures or events.

In trying to proactively correct for these biases, Google appears to have overcorrected by implementing overly restrictive filters or prompts in its AI systems.

Current AI models lack true comprehension of complex societal contexts, nuances and implications of their outputs. Their responses can come across as tone-deaf, inconsistent or oblivious to real-world sensibilities.

As these language models become larger and more complex, it is extremely challenging to have fine-grained control over their responses while avoiding unintended consequences or controversial outputs.

This is not just a fluke, but reflects the inherent difficulties in deploying these powerful but flawed models in a safe and responsible manner. The issue Google is currently contending with exemplifies the broader dilemma facing the AI industry.

Organizational Changes Amid The AI Arms Race

Earlier this year, Google laid off hundreds of employees across various functions. Pichai told staff that “some teams will continue to make specific resource allocation decisions throughout the year where needed, and some roles may be impacted.”

The CEO told investors in an earnings call, “Teams are working to focus on key priorities and execute fast, removing layers and simplifying their organizational structures.”

The speed of execution could be part of the problem, according to a former Google employee. In a LinkedIn post, Scott Jenson, who was a senior UX designer at the company, said Google has been operating in “a stone cold panic that they are getting left behind.” Jenson added, “The fear is that they can’t afford to let someone else get there first.”

In an April blog post titled “Building for our AI future,” Pichai elaborated on the restructuring under way at the company to build responsible AI. “We need to be the best in class at deploying accurate, trustworthy, and transparent AI products for users and customers. To help do this, we’re making changes to the way our Responsible AI teams work at crucial points in the development cycle,” he wrote.

The changes included moving Responsible AI teams in Research to Google DeepMind to be closer to where the models are built and scaled, as well as pivoting other responsibility teams into its central Trust and Safety team, where Google is investing more in AI testing and evaluations.

Pichai said, “These shifts create clearer responsibility and accountability at every level as we build and deploy, and strengthen the feedback loop between models, products and users.” The chief executive called this “vital, ongoing work.”

Source: Forbes
