
On April 1, users of ChatGPT in Italy were struck with very-much-not-an-April-Fools’-Day-joke news: Italy’s privacy regulator had ordered an immediate temporary ban on the OpenAI-developed language model. Many sources (e.g. Reuters, AP News, The Wall Street Journal, TechCrunch+ and BBC News, to name the small handful I happened to comb through) believe the catalyst for this block was a data breach that is currently being investigated; according to the Italian Data Protection Authority (IDPA), the European Union’s (EU) data protection rules may have been violated.
The order from Italy’s privacy regulator, known as “the Garante” (found here), references several areas of serious concern. Citing the “absence of [any] legal basis that justifies such a massive collection and storage of personal data,” the Garante gave OpenAI 20 days to communicate the measures it will take to address the alleged violation. (Note: Neglecting this order could cost OpenAI a fine of up to 20 million euros, roughly $21.68 million, or 4% of its annual worldwide turnover.)
Amidst a burgeoning “anti-AI” push (including the recent open letter to AI labs asking for a six-month pause on the training of AI systems), this Garante directive is the first nation-scale measure from any country restricting access to artificial intelligence. The Garante also pointed to “the lack of any age verification system,” raising child safety flags.
To that point, the proliferation of AI models has brought calls for transparency. On the HBO show “Last Week Tonight,” host John Oliver and his team wrote, “AI systems need to be explainable, meaning that we should be able to understand exactly how and why an AI came up with its answers.” Though this largely referred to the “black box” problem in AI, it speaks to the thorough and transparent efforts many believe should be taken to securely regulate tools like ChatGPT as they evolve, both for the safety of all users and to lay the groundwork for responsible future progress.
In response, OpenAI said it believes ChatGPT and its newest GPT-4 developments do, in fact, comply with the EU’s privacy laws.
“We will work closely with Italy’s privacy regulator with the hope of making ChatGPT available again there very soon,” an OpenAI representative wrote.
“We, too, believe in the regulation of AI. It’s necessary,” they added, “so we look forward to educating on how our systems are built and used.”
Let’s go back to that potential fine, though; it is notable less for the monetary amount than for its source, as this is the maximum fine under the EU’s General Data Protection Regulation (GDPR). The GDPR is an established component of EU privacy and human rights law, and this ChatGPT restriction shows how the GDPR gives regulators like the Garante the tools to be actively involved in shaping AI’s place in humanity’s future.
And in that vein, privacy regulators from France, Ireland, Sweden and Germany have already reached out to their Italian counterparts to learn more about the ban’s basis and what may happen next.
Lastly, regarding the data breach that catalyzed all of this: last month, an Italian investigation was launched after ChatGPT users were allegedly “being shown excerpts of other users’ ChatGPT conversations, as well as their financial information.” And these user chatlogs (i.e. conversation histories) and credit card details may not have been the only leaks; some claim that first and last names and email addresses may have been exposed as well.
Even if that covers, say, 1% of ChatGPT users, it still raises eyebrows and concerns, per the Garante.
So while OpenAI works with privacy regulators and simultaneously works to rebuild trust with individual users and organizations alike, there are questions aplenty to consider: Was the Garante’s action on the mark, or was it excessive? Given the aforementioned open-letter discourse, would an “AI pause” legitimately resolve what people claim could herald profound societal risks, or should other regulations be considered instead? And is there a way to draw a concrete line between next-gen leniency and wholesale obstruction when it comes to generative technologies?
While such questions certainly aren’t rhetorical, their answers are still afloat in the digital ether.
More coverage will be released as this probe into AI continues.
Edited by Greg Tavarez