Google Warns Employees About Using Its Own Chatbot

What’s new: Alphabet Inc., Google’s parent company, has warned employees against entering confidential information into chatbots, including OpenAI’s ChatGPT and Google’s own Bard, citing the risk of data leaks.

Why it matters: This caution comes as AI chatbots continue to grow more prevalent and sophisticated, raising new security and privacy concerns. Samsung, for instance, confirmed last month that its internal data was leaked after staff used ChatGPT.

Who’s involved: Google isn’t the only major tech player issuing such warnings. Amazon previously urged employees not to share code or other confidential information with ChatGPT, while Apple has barred its staff from using ChatGPT and GitHub Copilot, an AI code-writing tool owned by Microsoft.

Details: Google released Bard in March as a competitor to ChatGPT, building it on LaMDA, the company’s in-house large language model. Sundar Pichai, Google’s CEO, encouraged employees to test Bard. However, Bard’s launch in the European Union was recently delayed over privacy concerns raised by Irish regulators.

What’s next: As chatbots become more ingrained in everyday tech usage, big tech companies like Alphabet, Amazon, and Apple will need to balance the benefits of these tools with the potential risks to data security and privacy.
