Governments’ efforts to regulate AI tools

Italy’s data protection agency said on Wednesday it would lift its temporary ban on OpenAI’s ChatGPT artificial intelligence (AI) technology if the US company complied with data protection and privacy demands by end-April.

Rapid advances in AI such as Microsoft-backed OpenAI’s ChatGPT are complicating governments’ efforts to agree on laws governing the use of the technology.

Here are the latest steps national and international governing bodies are taking to regulate AI tools:

Australia

The government requested advice on how to respond to AI from Australia’s main science advisory body and is considering next steps, a spokesperson for the industry and science minister said on April 12.


Britain

Britain said in March it plans to split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than creating a new body.


China

China’s cyberspace regulator on April 11 unveiled draft measures to manage generative AI services, saying it wants firms to submit security assessments to authorities before they launch offerings to the public.

China’s capital Beijing will support leading enterprises in building AI models that can challenge ChatGPT, its economy and information technology bureau said in February.


European Union

The European Data Protection Board, which unites Europe’s national privacy watchdogs, said on April 13 it had set up a task force on ChatGPT, a potentially important first step toward a common policy on setting privacy rules on AI.

EU lawmakers are discussing the introduction of the European Union AI Act, which will govern anyone who provides a product or service that uses AI. The act will cover systems that can generate output such as content, predictions, recommendations, or decisions that influence the environments they interact with.

Lawmakers have proposed classifying different AI tools according to their perceived level of risk, from low to unacceptable.


France

The country’s privacy watchdog, CNIL, said on April 11 it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules.

France’s National Assembly approved in March the use of AI video surveillance during the 2024 Paris Olympics, disregarding warnings from civil rights groups that the technology posed a threat to civil liberties.


Italy

Italy imposed a temporary ban on OpenAI’s ChatGPT on March 31 after the national data protection agency raised concerns over possible privacy violations and over OpenAI’s failure to verify that users were aged 13 or above, as it had requested.

On Wednesday, its data protection agency set an end-April deadline for OpenAI to meet its demands on data protection and privacy before the service can be resumed in the country.


Japan

Digital transformation minister Taro Kono said on April 10 he wants the upcoming G7 digital ministers’ meeting, set for April 29-30, to discuss AI technologies including ChatGPT and issue a unified G7 message.


Spain

Spain’s data protection agency AEPD said on April 13 it was launching a preliminary investigation into potential data breaches by ChatGPT.

The AEPD has also asked the EU’s privacy watchdog to evaluate privacy concerns surrounding ChatGPT, the agency told Reuters on April 11.


United States

The Biden administration said on April 11 it was seeking public comments on potential accountability measures for AI systems. President Joe Biden had earlier told science and technology advisers that AI could help address disease and climate change, but that it was also important to address the technology's potential risks to society, national security and the economy.

