What’s going on:
Many U.S. workers are using OpenAI’s ChatGPT to assist with routine tasks such as drafting emails and conducting preliminary research, despite concerns from employers such as Microsoft and Google about potential leaks of intellectual property and strategy, according to Reuters.
A Reuters/Ipsos poll found that 28% of respondents use ChatGPT regularly at work, while only 22% said their employers explicitly permit such tools. Tech companies such as Tinder and Samsung Electronics have policies restricting or banning ChatGPT over security concerns.
Why it matters:
The widespread use of ChatGPT and similar AI tools in the workplace underscores the efficiency gains they offer employees. However, the potential risks, such as exposing proprietary information, create a dilemma for corporations. Security experts warn that generative AI models could unintentionally reproduce data absorbed during training, putting corporate intellectual property at risk. This tug-of-war between productivity and security is reshaping corporate attitudes and policies toward AI tools.
How it’ll impact the future:
As AI tools become more widely adopted by the workforce, employers will likely rethink how they approach data security and employee productivity. Corporate versions of AI tools with enhanced security features, tailored to an organization’s needs, may become more common. Still, the balance between innovation and security will be challenging to navigate: employees may come to expect AI assistance in everyday tasks to stay competitive, adding to the security challenges companies must address to guard against data leaks, lawsuits, and privacy violations.