Corrections Staff Warned Over AI Use in Reports | RNZ News
New Zealand’s Department of Corrections is grappling with the risks of artificial intelligence after discovering staff members were using the technology to draft official reports, prompting a firm warning against unauthorized use. The department has clarified its AI policy, limiting approved use to Microsoft Copilot and blocking access to other publicly available AI applications on its network.
The issue came to light following a privacy risk assessment, according to a statement from Corrections. While the department emphasizes that misuse of technology is taken “extremely seriously,” the incident highlights a growing concern among organizations about the potential for data breaches and compromised integrity when employees use AI tools without proper oversight. The specific reports affected were Extended Supervision Order reports, which contain sensitive information about individuals under supervision.
Chief Probation Officer Toni Stewart explained that the department’s current AI usage is intentionally constrained. “This ensures AI use at Corrections occurs within an environment where we can manage privacy and security controls,” she stated. The policy governing Copilot use explicitly prohibits entering personal information – including identifying details, health records, or information related to individuals managed by Corrections – into the chat function.
Despite the policy, some staff members circumvented the restrictions, leading to corrective action. Stewart confirmed that the department had identified “a small number of incidents” of non-compliant Copilot use and had addressed them accordingly. The department is now reinforcing the policy through ongoing communication and training, particularly within Community Corrections, where report writing is a significant part of the workload.
The rollout of Copilot itself has been relatively slow, with approximately 30 percent of Corrections staff engaging with the tool since its introduction in November 2025. Copilot is intended as an “assistive tool” for content creation and refinement, specifically for information that is not considered sensitive. The department has implemented auditing measures, making all prompts searchable and exportable to monitor compliance.
The incident at Corrections reflects a broader trend of organizations wrestling with the implications of rapidly evolving AI technology. Just this week, reports emerged of AI researchers leaving their positions at leading companies like Anthropic and OpenAI, voicing concerns about the potential for manipulation and the speed of development. A former head of Anthropic’s Safeguards Research team warned, “The world is in peril,” while a departing OpenAI researcher expressed fears about the technology’s ability to manipulate users in ways that are currently difficult to understand or prevent.
The Privacy Commissioner’s Office has weighed in, stating that the Privacy Act applies to the use of personal information through AI tools. A spokesperson for the Office emphasized that agencies are responsible for understanding the technology they employ and ensuring its use aligns with privacy requirements. The Office indicated that, if Corrections’ policy is being adhered to, privacy concerns would be limited to instances where staff breach the established guidelines.
As of Friday, Corrections had not notified the Office of the Privacy Commissioner of the incidents. However, the department’s privacy team is collaborating with relevant work groups to provide further guidance on Copilot use within Community Corrections. The department maintains that any information entered into Copilot remains within its secure domain.
Stewart underscored the department’s commitment to protecting the privacy of those it works with and maintaining the integrity of its assessments and documentation. The department has appointed an AI assurance officer, reporting to the director of cybersecurity, to oversee the safe and secure adoption of AI and to provide regular updates to the government chief digital officer. Corrections also participates in an All of Government Community of Practice on AI, managed by the government chief digital officer, and has established an internal AI working group to embed safe and ethical practices.
The situation at Corrections serves as a cautionary tale for other government agencies and organizations considering the integration of AI tools. While AI offers potential benefits in terms of efficiency and productivity, it also introduces significant risks related to data privacy, security, and the accuracy of information. A robust policy framework, coupled with ongoing monitoring and employee training, is crucial to mitigating these risks and ensuring responsible AI adoption.
