
ChatGPT Conversations Online: Data Erasure Efforts Fail – The Irish Times

AI Conversations Exposed: Google Search Reveals Sensitive Data Shared with ChatGPT

The ease with which sensitive facts shared with ChatGPT can become publicly accessible has been laid bare by a recent report, highlighting the risks for individuals and organisations alike. The Digital Digger inquiry revealed examples ranging from a multinational discussing building a hydroelectric facility, to an Egyptian using AI to critique their government, and a researcher documenting academic fraud – all potentially exposed through a flaw in ChatGPT's settings.

ChatGPT’s Search Engine Visibility Issue

At the end of last month, it emerged that thousands of ChatGPT users had inadvertently made their conversations visible to search engines like Google. This occurred because users were ticking a box without fully understanding the implications. OpenAI quickly disabled the feature, described as a “short-lived experiment” by a senior executive, and “scrubbed” approximately 50,000 conversations.

Experts emphasise the difficulty of truly erasing information once it has been made public online. The incident also underscores a broader issue: a lack of AI literacy among the growing number of people using these tools.

The Potential Consequences of Exposed AI Conversations

The consequences of this exposure extend far beyond mere embarrassment. Pradeep Sanyal, a San Francisco-based AI strategy advisor to company boards and CEOs, warns of significant risks.

“Some posts reveal commercially sensitive information, legal strategies, political dissent in authoritarian contexts and confidential personal matters such as health conditions,” he said. “These could lead to reputational damage, competitive disadvantage, regulatory scrutiny or even personal safety risks depending on jurisdiction.”

Sanyal points to specific examples, such as a lawyer discussing strategies to displace indigenous communities at the lowest possible cost. “This is not merely embarrassing; it could have legal and ethical implications.”

AI and Data Privacy: Lessons from the Early Days of Email

Barry Scannell, an AI law and policy partner at William Fry solicitors and a member of the Government-appointed AI Advisory Council, expressed shock at the material made accessible through routine Google searches.

He noted the incident highlights the types of information people are sharing with AI tools – ranging from commercially sensitive business strategies to deeply personal disclosures, with some users treating AI as a therapist.

“Some of this reminds me of the early days of electronic interaction, when some people were very flippant about what they put in emails and that was sometimes shown up in discovery processes,” Scannell said. “It is certainly another reminder of the importance of companies having very clear processes and policies when it comes to their people using this technology.”

The incident serves as a crucial warning about the need for greater awareness and caution when interacting with AI platforms, and about the importance of robust data privacy policies for organisations using these technologies.
