Google's Rush to Take Its AI Chatbot Bard Public Led to Ethical Lapses, Employees Say

One employee wrote that when they asked Bard for suggestions on how to land a plane, it regularly gave advice that would lead to a crash.

Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.

One worker's conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.”

One employee wrote that when they asked Bard for suggestions on how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg.

The Alphabet-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology's potential harms.

But the November 2022 debut of rival OpenAI's popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.