Employees at Google claim that the company’s rush to dominate the field of AI led to ethical lapses

TL;DR:

  • Google’s rush to compete in the field of AI has led to ethical lapses and concerns.
  • Despite Google’s pledge to focus on AI ethics, the development of its AI chatbot, Bard, has left the ethics team feeling disempowered and demoralized.
  • The prioritization of profit and growth over ethics has become a major concern for the development of generative AI technology.
  • Former Google manager Meredith Whittaker, now president of the Signal Foundation, emphasizes the importance of prioritizing ethics in AI development.
  • Despite internal objections and warnings from employees, Google launched Bard to the public in March, prioritizing keeping up with the competition over ethical considerations.
  • Google’s senior leadership declared a competitive “code red” in December, changing its approach to risk and labeling new products as “experiments.”
  • Google’s AI governance lead, Jen Gennai, stated that the decision to launch Bard was not solely hers and was made by a group of senior leaders after a risk evaluation.
  • The development of AI technology has raised concerns about its impact on society, and Google’s advancements in generative AI have sparked internal debates about ethical development.
  • The ousting of ethical AI researchers Timnit Gebru and Margaret Mitchell marked a significant turning point for Google’s approach to ethical AI.
  • Despite challenges, some employees are still working hard to ensure ethical considerations are taken into account in AI product development.
  • As AI becomes more advanced, it is important for companies like Google to consider the ethical implications of their products and ensure they are acting in the best interests of society.

Main AI News:

In a race to stay ahead of the competition, Google launched its AI chatbot, Bard, to the public despite objections raised by its own employees. According to screenshots of internal discussions, some workers deemed Bard unreliable and “cringe-worthy,” while others warned that its responses could lead to dangerous, even life-threatening, situations.

Despite Google’s pledge in 2021 to increase its focus on AI ethics and the assessment of potential harm, the rapid development of Bard has left the company’s ethics team feeling disempowered and demoralized, according to current and former employees.

The prioritization of profit and growth over ethics has become a major concern, because generative AI technology could have a profound impact on society. Google’s bid to revitalize its search business with cutting-edge technology may put generative AI into the hands of millions of people around the world, which makes the ethical implications of that advancement all the more pressing. As the technology matures, ethics must receive the consideration and attention needed to keep its development aligned with the well-being of society as a whole.

Meredith Whittaker, a former Google manager and now president of the Signal Foundation, the nonprofit behind the Signal private-messaging app, emphasizes the importance of prioritizing ethics in the development of AI technology: “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.” In the race to remain competitive, it’s crucial that companies like Google ensure the development of AI aligns with the greater good of society.

The internal criticism was specific: one worker called Bard a “pathological liar,” according to the screenshots, while others warned of potentially life-threatening answers, such as advice on landing a plane that could lead to a crash, or scuba-diving suggestions that could result in serious injury or death.

Despite Google’s commitment to ethics and AI safety, the company has prioritized keeping up with the competition over those obligations, even at the risk of serving low-quality information. Bard’s development was expedited by the November 2022 debut of rival OpenAI’s popular chatbot, and the accelerated pace has left the ethics team feeling disempowered and demoralized, with some workers saying they have been instructed not to impede the progress of generative AI tools.

In response to questions from Bloomberg, Google stated that responsible AI remains a top priority at the company. Yet in December, senior leadership declared a competitive “code red” and changed its approach to risk, deciding to label new products as “experiments” in the hope that the public would forgive their shortcomings. Jen Gennai, the company’s AI governance lead, advised employees that some compromises on Google’s AI principles might be necessary to pick up the pace of product releases.

Gennai went on to overrule a risk evaluation submitted by members of her team, and Bard was launched to the public in March, labeled as an “experiment.” One employee wrote in an internal message group that Bard was “worse than useless”; the note was viewed by nearly 7,000 people, many of whom agreed that the chatbot’s responses were unreliable and often incorrect. The launch proceeded regardless, raising questions about the ethical compromises being made in the pursuit of competitiveness in the AI industry.

Jen Gennai, the AI governance lead at Google, stated that the decision to launch Bard was not solely hers. After the team’s evaluation, she added to the list of potential risks and escalated the analysis to a group of senior leaders in product, research, and business. That group then determined that it was appropriate to move forward with a limited experimental launch, with ongoing pre-training, enhanced guardrails, and appropriate disclaimers.

The development of artificial intelligence has raised new concerns about its societal effects, particularly in the realm of language models. These systems ingest large volumes of digital text and use that material to train software that predicts and generates content on its own. As a result, the products risk perpetuating offensive, harmful, or inaccurate information.
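
To make that mechanism concrete, below is a minimal sketch in Python: a toy bigram model, an illustration invented for this summary rather than Google’s or anyone’s production system. It “trains” on a couple of sentences by counting which word follows which, then generates text by repeatedly predicting a next word. Its only knowledge is the statistics of its training text, which is exactly why such systems reproduce whatever that text contains, errors included.

    import random
    from collections import defaultdict

    # A deliberately tiny "language model": learn which word tends to
    # follow which in the training text, then generate by repeated
    # next-word prediction. Real systems use neural networks trained on
    # billions of documents, but the ingest-then-predict loop is the same.
    corpus = "the model repeats what it reads . the model repeats mistakes too ."

    tokens = corpus.split()
    followers = defaultdict(list)
    for current_word, next_word in zip(tokens, tokens[1:]):
        followers[current_word].append(next_word)  # duplicates encode frequency

    def generate(start, length=8):
        """Emit text by repeatedly sampling a plausible next word."""
        word, out = start, [start]
        for _ in range(length):
            options = followers.get(word)
            if not options:
                break  # no observed follower; stop generating
            word = random.choice(options)  # frequent followers win more often
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the model repeats mistakes too . the model"

Production models replace the frequency table with a neural network trained on billions of documents, but the basic ingest-then-predict loop, and the corresponding failure mode, is the same: the generator has no notion of truth, only of what tended to follow what in its training data.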

Google’s recent advancements in generative AI have sparked internal debate about the ethical development of cutting-edge technology. The company has faced high-profile blunders before, including a 2015 incident in which its Photos service mislabeled images of a Black software developer and his friend as “gorillas.” The question of how to develop AI ethically has long been contested inside the company, and the rapid pace of development has employees worried that there is not enough time to study the potential harms.

The Center for Humane Technology has pointed out that researchers building AI far outnumber those focused on safety, which can make raising concerns inside a large organization an isolating experience. In a competitive industry, it’s crucial for companies like Google to prioritize both innovation and ethical considerations to ensure that the impact of AI on society is positive.

After the 2015 incident with the Photos service, Google established an ethical AI unit tasked with making its AI fairer for users. However, the ousting of AI researchers Timnit Gebru and Margaret Mitchell, in 2020 and 2021 respectively, marked a significant turning point for the company. The two co-led Google’s ethical AI team until they were forced out amid a dispute over a research paper on the risks of large language models.

The departures of Gebru and Mitchell, along with Samy Bengio and other researchers, marked a significant shift in the company’s approach to ethical AI. Despite public pronouncements about improving its reputation, some employees found it difficult to work on ethical AI at Google; one former employee reported being discouraged from working on machine-learning fairness and said the issue even hurt their performance review.

Those who remained working on ethical AI at the company were left questioning how to do their work without putting their own jobs at risk. Nyalleng Moorosi, a former researcher at Google, stated that doing ethical AI work means slowing down the process, which can be a scary proposition.

Despite the challenges, the importance of ethical AI cannot be overstated. Development is only accelerating, and the stakes of getting it wrong are rising with it.

A pioneer in AI research, Google has nonetheless struggled to balance its competitive drive with ethical considerations. The recent “code red” directive, which put a premium on the rapid release of AI products, has left some employees feeling sidelined, and the company’s commitment to responsible AI has been called into question, with former employees saying that ethics reviews of products and features are mostly voluntary.

The company’s public image has also been impacted by high-profile incidents, such as the 2015 Photos service debacle and the ousting of ethical AI researchers Timnit Gebru and Margaret Mitchell. Despite efforts to reorganize the responsible AI team and improve its reputation, some employees have reported feeling discouraged in their attempts to work on fairness in machine learning.

Despite these challenges, some believe that Google’s increased focus on generative AI products, driven by competition with rival OpenAI, may bring positive changes to the company’s approach. The recent release of chatbot Bard, and its integration into Google’s other products, has placed the company in a competitive position in the AI market.

However, with the emphasis on rapid product release, some employees feel that it has become futile to raise concerns about the potential harm that AI products may cause. As the world of AI continues to evolve, it will be important for companies like Google to find a balance between competition and ethics to ensure the responsible development of cutting-edge technology.

Despite the challenges, some employees are still working hard to ensure that ethical considerations factor into the development of AI products. The responsible AI team has tried to increase transparency around its work and to build ethics into the company’s decision-making. Even so, some employees feel there is still a long way to go: more support for those doing ethical AI work, and more accountability for the company’s actions.

As AI becomes more advanced and ubiquitous, it is important that companies like Google weigh the ethical implications of their products and act in the best interests of society. What the future holds for AI at Google remains to be seen, but for now many employees are still working to keep the company committed to ethical principles and to products that are safe and trustworthy.

Conclusion:

Google’s rush to dominate the field of AI has led to ethical lapses and concerns among employees. The prioritization of profit and growth over ethics has raised concerns about the impact of AI technology on society. Despite internal objections and warnings from employees, Google launched its AI chatbot, Bard, to the public, prioritizing competitiveness over ethical considerations.

Although the company professes a commitment to responsible AI, some employees feel disempowered and demoralized in their attempts to work on ethics, while others keep pushing to have ethical considerations reflected in product development. As AI continues to advance, it is important for companies like Google to prioritize ethics and ensure that the technology’s impact on society is positive.

Source