Italy is the First Nation to Ban ChatGPT: What Happens Next?

By Alex Passett April 03, 2023

On April 1, users of ChatGPT in Italy were struck with very-much-not-an-April-Fools’-Day-joke kind of news: Italy’s privacy regulator had called for an immediate temporary ban on the OpenAI-developed language model. Many sources (e.g. Reuters, AP News, The Wall Street Journal, TechCrunch+ and BBC News; just to name the small handful I happened to comb through) believe the catalyst for this block was a data breach that is currently being investigated; according to the Italian Data Protection Authority (IDPA), the European Union’s (EU) data protection rules may have been violated.




The order from Italy’s privacy regulator, known as “the Garante,” references several areas of serious concern. Citing the “absence of [any] legal basis that justifies such a massive collection and storage of personal data,” the Garante gave OpenAI a total of 20 days to communicate the measures it will take to address the alleged violations. (Note: If OpenAI were to neglect this order, it could face a fine of up to 20 million euros (i.e. roughly $21.68 million) or 4% of its annual worldwide turnover.)

Amidst a burgeoning “anti-AI” push (including the recent open letter to AI labs calling for a six-month pause on the training of advanced AI systems), this Garante directive is the first nation-scale measure from any country restricting access to an artificial intelligence tool. The Garante also pointed to “the lack of any age verification system,” raising child safety flags.

To that point, the proliferation of AI models has brought calls for transparency. On the HBO show “Last Week Tonight,” host John Oliver and his team put it this way: “AI systems need to be explainable, meaning that we should be able to understand exactly how and why an AI came up with its answers.” Though that segment largely referred to the “black box” problem in AI, it speaks to the thorough and transparent efforts many believe should accompany the regulation of tools like ChatGPT as they evolve, both for the safety of all users and to lay the groundwork for responsible future progress.

In response, OpenAI said it believes ChatGPT and its newest GPT-4 developments do, in fact, comply with the EU’s privacy laws.

“We will work closely with Italy’s privacy regulator with the hope of making ChatGPT available again there very soon,” an OpenAI representative wrote.

“We, too, believe in the regulation of AI. It’s necessary,” they added, “so we look forward to educating on how our systems are built and used.”

Let’s go back to that potential fine, though; it matters less for the monetary amount than for its source. It is the maximum penalty under the EU’s General Data Protection Regulation (GDPR). The GDPR is a cornerstone of EU privacy and human rights law, and this ChatGPT restriction demonstrates how the GDPR equips regulators like the Garante with the tools (and the willingness) to be actively involved in shaping AI’s place in humanity’s future.

And in that vein, privacy regulators from France, Ireland, Sweden and Germany have already reached out to their Italian counterparts to learn more about the ban’s basis and what may happen next.

Lastly, regarding the data breach that catalyzed all of this: last month, an Italian investigation was launched after ChatGPT users were allegedly “being shown excerpts of other users’ ChatGPT conversations, as well as their financial information.” And those user chat logs (i.e. conversation histories) and credit card details may not have been the only leaks; some claim that first and last names and email addresses may also have been exposed.

Even if that affects, say, only 1% of ChatGPT users, it still raises eyebrows and concerns, per the Garante.

So while OpenAI works with privacy regulators and simultaneously works to rebuild trust with individual users and organizations alike, there are questions aplenty to consider: Was the Garante’s action on the mark, or was it excessive? Per the aforementioned open letter discourse, would an “AI pause” legitimately resolve what people claim could herald profound societal risks, or should other regulations be considered instead? And is there a way to draw a concrete line between next-gen leniency and wholesale obstruction when it comes to generative technologies?

While such questions certainly aren’t rhetorical, their answers are still afloat in the digital ether.

More coverage will be released as this probe into AI continues.




Edited by Greg Tavarez