As Google doubles down on developing Artificial Intelligence (AI) products, it has made an announcement raising such concern about data privacy that the reaction from consumers and policymakers could have wide-reaching effects on the future development and use of AI tools.
The tech giant is planning to integrate Bard, its ChatGPT-style AI assistant, into the Google Messages app, raising privacy concerns among security experts.
While an official release date has not been announced, when prompted with the question “When is Google Bard coming to Google Messages?” Bard replies, “Google Bard is expected to come to Google Messages sometime in early 2024. The exact date has not been announced yet, but there have been several beta versions of the app that have included Bard integration. The latest beta version was released on January 18, 2024, so it is possible that Bard could be released to the public in the next few months.”
Forbes reports that there were hints last year that the tech giant would bring Bard into Messages, but nothing substantive materialized. However, the publication 9to5Google recently added credibility to the rumors when it dug into lines of pre-release code, leading the site to report, “Bard is coming to Google Messages to ‘help you write messages, translate languages, identify images, and explore interests.’”
Bard is one of Google’s major AI responses to the popular ChatGPT program, which has captured a massive user base. Less than a year after its launch, ChatGPT had garnered over 100 million weekly users, according to OpenAI CEO Sam Altman.
According to a report published by Forbes, Google’s strategy with Bard is to enhance its messaging app by using AI to understand conversational context, tone, and user interests, and to personalize responses based on relationship dynamics with different contacts. That goal is precisely why security experts are so worried about user privacy: the feature implies that Bard may analyze users’ private message history within the Messages app, raising questions about potential leaks and hidden data-sharing practices.
To address these concerns, Google has asserted that Bard’s analysis will happen on-device only, avoiding sending data back to servers for cloud processing, Analytics Vidhya reports. Google also says Bard will request user consent before accessing personal data, and any collected data will be stored for 18 months, with an option for manual deletion.
Despite this, worries about potential data misuse remain. Apple is also working to integrate similar generative AI into its iOS devices, focusing on on-device processing in an effort to address the same privacy concerns.
As AI assistants become more prevalent in the workforce, the technology raises questions for organizations. While AI and related software have been found to improve productivity, the scarcity of policies and guardrails around AI is leading executives to question how they should approach the technology. The tech enables more efficient and personalized communication, but it also demands careful consideration of privacy and data security to protect increasingly digital workplaces.