Privacy in the age of intelligent machines: OPC’s new guidance on safeguarding privacy when using generative AI

  • Legal update

    06 June 2023


The Office of the Privacy Commissioner (OPC) has recently released guidance on its expectations around New Zealand agencies using generative AI — a type of AI capable of using large amounts of information to create and transform various forms of content (such as text, images and videos). Popular examples of generative AI tools include OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing search and Copilot products.

Privacy risks

Given the rapid evolution and accessibility of these AI tools, the OPC emphasised a need to exercise caution and carefully consider the privacy risks and consequences before deciding to use generative AI tools. Some of the key privacy risks associated with generative AI include:

Training data: Generative AI tools are trained on large amounts of information, and this may include personal information. Processing personal information in such a manner raises concerns about the transparency of how this information is collected, and whether the information is accurate or impartial. The OPC advised against using sensitive or confidential data in training generative AI models.

Confidentiality: Agencies should consider the risk of entering personal or confidential information as prompts into generative AI tools. There is a risk that this information could be used, retained or disclosed by the tool provider without first obtaining proper consent (or otherwise in an unauthorised manner).

Accuracy of output: Generative AI tools can produce, often in a confident manner, content that contains errors or biases. Fact-checking the tool’s output is therefore crucial.

Access and correction: Generative AI tools may pose challenges to individuals’ rights under the Privacy Act 2020 (Privacy Act) to access and correct their personal information.

OPC’s position and expectations 

As the Privacy Act is technology neutral and takes a principles-based approach, it is important to note that generative AI receives no special treatment under the Privacy Act. Any personal information collected, used, disclosed or otherwise processed in connection with the use of generative AI tools will be covered by, and must comply with, the Privacy Act.

The guidance sets out the OPC’s expectations of organisations considering implementing or using a generative AI tool in their business. These seven points of advice are designed to help organisations engage with the potential of AI in a way that respects individuals’ privacy rights, and are as follows:

  • have senior leadership approval;
  • consider the necessity and proportionality of using the tool;
  • conduct a Privacy Impact Assessment prior to using the tool;
  • be transparent about use;
  • have the right procedures in place to enable access and correction of information;
  • perform human reviews of AI output; and
  • ensure personal information is not retained or disclosed by the tool.

Our view

We see the OPC’s guidance as an important first step in monitoring and potentially regulating what is a highly complex and developing area of technology, and we agree with the OPC’s sentiment that it will need to continually review the risks and developments surrounding generative AI as they arise. We are also encouraged by the OPC’s call for New Zealand regulators to come together to determine how best to protect New Zealanders’ rights when it comes to the use of generative AI. It will be interesting to see if the New Zealand Government seeks to follow other jurisdictions, like the EU and Australia, in proposing legislation or rules to regulate the use of AI to ensure it is safe, secure and reliable. 

What next? 

Navigating the various legal and ethical complexities associated with the use of generative AI tools will be important for organisations seeking to harness the benefits of these technologies. In practice, this will require organisations to:

  • undertake thorough privacy impact assessments;
  • conduct suitable due diligence on service providers;
  • ensure contractual terms with service providers are fit for purpose and appropriately address security and privacy legal risks;
  • update privacy policies (and obtain authorisation where necessary) to address any new uses of personal information; and
  • ensure adequate training is provided to staff on the acceptable use of these tools.


If you would like assistance with managing the legal risks and ensuring compliance with the OPC’s guidance or have any other concerns or queries relating to the use of AI tools, please contact one of our experts. 


This article was co-authored by Luke Han, a solicitor in the Technology, Media and Telecommunications team.