Technology Giants Struggle to Regulate Widespread Workplace Use of ChatGPT
Major players in the tech industry, including Microsoft and Google, are grappling with the challenges posed by the widespread adoption of ChatGPT, a chatbot powered by generative artificial intelligence (AI). The tool holds conversations with users and answers a broad range of prompts, and because employees may paste confidential material into those prompts, its workplace use raises concerns about leaks of intellectual property and sensitive strategic data.
In practice, many people have folded ChatGPT into their daily work routines, using it for tasks such as drafting emails, summarizing documents, and conducting preliminary research.
An online survey on artificial intelligence conducted from July 11 to 17 found that 28% of respondents regularly use ChatGPT at work. Notably, only 22% said their employers explicitly permitted the use of such external tools.
The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.
Among the respondents, 10% said their employers explicitly prohibited external AI tools, while roughly 25% were unsure of their company’s stance on the technology.
Since its launch in November, ChatGPT has grown at a record pace, generating both excitement and alarm and bringing its developer, OpenAI, into conflict with regulators, particularly in Europe, where privacy watchdogs have criticized the company’s mass collection of data.
Human reviewers at the companies behind these tools may read the chats users generate, and researchers have found that similar AI systems can reproduce data absorbed during training, creating a risk that proprietary information could be exposed.
Users of generative AI services often have little understanding of how their data is used. Ben King, Vice President of Customer Trust at corporate security firm Okta, said closing that knowledge gap is critical for businesses: because many AI tools are free services, no formal contracts exist, and the risk never passes through a company’s usual assessment process.
OpenAI declined to comment on the implications of individual employees using ChatGPT, but the company has assured corporate partners in a recent blog post that their data will not be used to further train the chatbot without explicit permission.
Google’s Bard collects user data, including conversation text, location, and usage patterns. Google lets users delete past activity and request the removal of content fed into the AI. Alphabet, Google’s parent company, declined to provide further details when asked.
Microsoft did not respond to requests for comment.
Even at companies that do not officially allow it, ChatGPT has found its way into the workplace. A U.S.-based employee at Tinder said colleagues use the chatbot for seemingly harmless tasks such as composing emails and writing light-hearted calendar invites, despite the company’s explicit prohibition on its use.
The employee said workers phrase their prompts generically so as not to reveal their affiliation with the company. Reuters could not independently verify how Tinder employees were using ChatGPT.
Samsung Electronics went further, banning employees worldwide from using ChatGPT and similar AI tools after discovering that an employee had uploaded sensitive code to the platform. The company says it is working on measures to create a secure environment for generative AI that can enhance productivity and efficiency.
Other companies are embracing ChatGPT and similar platforms while building in safeguards. Coca-Cola, for instance, has begun testing how AI can improve operational effectiveness and recently launched an enterprise version, Coca-Cola ChatGPT, for internal productivity.
Similarly, Tate & Lyle, a global ingredients manufacturer, is experimenting with ChatGPT in a secure manner across various departments. Questions are being raised about how it can be best utilized, whether in investor relations, knowledge management, or task efficiency enhancement.
Conversely, some employees face blanket bans on accessing the platform from company computers. A Procter & Gamble employee said ChatGPT is blocked entirely on the office network.
Paul Lewis, Chief Information Security Officer at Nominet, urged caution, pointing to the security risk posed by “malicious prompts” that can coerce AI chatbots into divulging sensitive information. While a blanket ban may not be warranted, he said, careful consideration is essential.