Korean Police Warn Against Using ChatGPT for Sensitive Case Information
The Korean National Police Agency (KNPA) has issued a firm warning to all police stations nationwide, advising against entering investigative details or work-related data into generative AI tools such as ChatGPT.
This directive aims to prevent the risk of sensitive information leaks, particularly concerning personal data and confidential case matters.
Are Police Using ChatGPT Despite Risks?
Despite the caution, generative AI tools such as ChatGPT are reportedly in widespread use among younger officers.
An investigative official, surnamed Kim, told The Korea Herald,
“Using generative AI like ChatGPT has become common among many young police officials in recent years.”
He added that tools like ChatGPT have been helpful for reviewing laws and past investigations, sometimes shedding light on cases officers might not be fully familiar with.
What Does the KNPA’s Notice Say About AI Usage?
The agency’s official notice, titled “Precautions when using generative AI tools such as ChatGPT,” stresses that officers must not input any investigative information, work data, or personal details into such AI platforms.
It also advises police to avoid answering security-related questions posed by generative AI and to apply strict review processes when considering AI for IT-related investigative work.
Which Other Countries Are Restricting ChatGPT Use in Official Work?
South Korea is not alone in limiting the use of generative AI tools like ChatGPT for official purposes.
Several governments worldwide have introduced similar restrictions to protect sensitive data.
For instance, Italy temporarily banned ChatGPT in 2023 over privacy concerns, while France has also issued guidelines restricting its use within government agencies.
In January 2025, India’s Finance Ministry warned employees against using AI tools like ChatGPT on official devices due to concerns over government data confidentiality.
In the United States, various federal departments have issued warnings or limited use of public AI tools when handling classified or sensitive information.
These measures reflect a growing global caution about data security risks linked to public AI platforms in official environments.
How Has the KNPA Previously Used AI?
Back in March 2023, the KNPA revealed plans to use ChatGPT to assist in drafting English documents for handling cross-border crime.
At the time, the agency assured that no confidential information or personal data would be entered into the system, limiting AI use strictly to language assistance.
Why Is There Growing Concern Over Data Security?
Senior police officials have voiced worries over the potential exposure of sensitive investigation data when transferred outside secure police networks.
One source told JoongAng Ilbo,
“If investigation records from an internal police network are transferred to external AI-generated websites, the risk of personal information leakage or the exposure of confidential investigation details increases.”
The source likened polishing reports through ChatGPT to letting critical investigative details slip beyond police control.
What Is the KNPA Doing to Develop Safer AI Solutions?
To address these concerns, the KNPA has been working with LG CNS since early 2025 to develop a dedicated generative AI model, called Exaone, tailored specifically for police use.
This internal AI system is designed to securely assist officers by summarising witness statements, identifying similar past cases, and analysing investigations to highlight key issues.
It will also support drafting official documents, all within a protected police network to avoid external data leaks.
The development of this AI-backed investigative tool aims to combine efficiency with tighter control over sensitive information, offering a safer alternative to public AI services like ChatGPT.
Once fully implemented, it will be integrated directly into the police force’s internal systems to help officers manage cases with improved focus and security.