UK – The Information Commissioner’s Office has published guidance on artificial intelligence (AI) and data protection, with the aim of establishing a framework to audit AI.
The guidance includes recommendations on best practice for organisations to mitigate the risks of using AI technology, with a focus on data protection compliance.
The guidance will also serve as a framework for the ICO to audit AI applications’ processing of personal data, and to ensure the regulator has measures in place to manage the ‘risks to rights and freedoms that arise from AI’, according to the document.
It is aimed both at compliance-focused audiences, including data protection officers, and at technology specialists, such as data scientists.
Simon McDougall, deputy commissioner – regulatory innovation and technology at the ICO, wrote in a blog: “Understanding how to assess compliance with data protection principles can be challenging in the context of AI.
“From the exacerbated, and sometimes novel, security risks that come from the use of AI systems, to the potential for discrimination and bias in the data, it is hard for technology specialists and compliance experts to navigate their way to compliant and workable AI systems.”
The guidance has been produced following two years of research and consultation by Reuben Binns, postdoctoral research fellow at the ICO, and the organisation’s AI team.
McDougall added: “It is my hope this guidance will answer some of the questions I know organisations have about the relationship between AI and data protection, and will act as a roadmap to compliance for those individuals designing, building and implementing AI systems.”