A data rights advocacy group in Austria is filing a new privacy complaint against the prominent artificial intelligence (AI) developer OpenAI.
On April 29, Noyb filed a complaint alleging that OpenAI has failed to correct false information generated by the company's AI chatbot, ChatGPT. The group said this inaction could breach privacy rules in the European Union.
The complaint alleges that a public figure, whose name was not disclosed, repeatedly received false information when asking ChatGPT about himself.
OpenAI reportedly declined the public figure's request to correct or delete the data, saying it was not possible. The company also declined to disclose its training data or where it was sourced.
Maartje de Graaf, a Noyb data protection lawyer, commented on the case in a statement:
"If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. Technology has to follow the legal requirements, not the other way around."
Noyb took the complaint to the Austrian Data Protection Authority, asking it to investigate OpenAI's data processing. In particular, Noyb asked to verify how the organization is ensuring the correctness of personal data with its large language models.
"It is very clear: Currently, companies would be unable to make ChatGPT comply with EU law when processing data about individuals," De Graaf said.
The European Center for Digital Rights, known as Noyb, is based in Vienna, Austria, and aims to file "strategic court cases and media initiatives" in defense of the European Union's General Data Protection Regulation (GDPR).
This is not the first time activists or researchers have called out chatbots in Europe.
In December 2023, research conducted by two European non-governmental organizations revealed that Microsoft's Bing AI chatbot, since rebranded as Copilot, provided misleading or false information about local elections in Germany and Switzerland.
The chatbot gave wrong answers on candidate information, polls, scandals, and voting while misquoting its sources.
Another example, although not in the EU, is Google's Gemini AI chatbot producing "woke" and historically inaccurate imagery in its image generator. Google apologized for the incident and promised to update its model.