28 Jan 2025
Blog by Sophia Ignatidou, Group Manager for AI Policy at the Information Commissioner's Office
From unlocking medical breakthroughs to revolutionising public services in the UK, AI has huge potential to transform our lives for the better. Across the economy, businesses are excited about investing in AI to drive growth and innovation, while the tech is inspiring vibrant discussions amongst policymakers and the public alike. Earlier this month, the Government committed to fast-tracking AI adoption in an ambitious new action plan. AI is at the top of everyone’s agenda this year – including ours.
My team has been working on AI for a while now, striving to empower and inform people about their rights while supporting organisations to use personal data responsibly and confidently.
It comes as no surprise that the opportunities and challenges of AI are also the theme for Data Protection Day today. As the UK regulator, we are here to help organisations embrace the opportunities of AI in a way that keeps people safe.
We also have a responsibility to steer organisations through the wave of AI challenges, tackling emerging issues and risks at speed. For example, we published our policy positions on generative AI at the end of 2024, following a series of consultations throughout the year.
But as with any novel technology, the excitement and enthusiasm about AI have been accompanied by many myths and misconceptions. While our consultation series highlighted several prevalent misunderstandings among AI developers, we also know that people want more clarity about their own rights when it comes to AI and its impact. To make sure we’re all on the same page, I’ve tackled some of these fundamental misconceptions below.
MYTH #1: People have no control over their personal data being used to train AI
FACT: People’s rights over their own personal data haven't changed just because technology is evolving. We’ve heard from people who are worried they don’t have a choice over whether their data is used to train AI, or who think a company might be using their data but don’t understand how.
Everybody has rights over how their own personal data is used to develop or deploy AI. In many circumstances, people have the right to object if their data is being used in a way they are not comfortable with, or if they simply don’t want it to be processed.
We’re also here to step in if we see an approach from an organisation that doesn’t look right – for example, we have previously raised our concerns about how LinkedIn and Meta were seeking to train AI models. We engage with many firms to ensure there are proper safeguards in place to protect people’s information – including making it as simple as possible to object.
MYTH #2: AI developers can use people’s data without being transparent about it
FACT: Being open and honest with people is not optional or an afterthought. We’ve been clear that any organisation seeking to use people’s data to train their AI models must be transparent about their activities from the start.
Our consultation series revealed a lack of transparency across the industry, which needs to change. People must be able to trust that their rights will be respected if they are to support the use of AI in new products and services. Firms should ensure that people are given clear information before their personal data is used to train AI models, with ample time and a simple process to object if they wish.
MYTH #3: Data protection doesn’t apply if AI developers did not intend to process people’s personal data
FACT: Some AI developers told us they “did not mean to” process people’s data, or that they were using this data purely incidentally. Our position is that an organisation’s intention has no bearing on its legal obligation to protect people’s personal data. What happens to that data in practice is what matters. For example, if an organisation wants to train an AI model on people’s social media posts, it must use any personal data lawfully, even if its end goal is to produce something innovative such as a large language model.
MYTH #4: AI models themselves do not come with data protection risks
FACT: Some developers argued that AI models do not “store” personal data, implying that data protection does not apply. Our initial AI and data protection guidance, published in 2020, has been clear that some AI models can contain the personal data they were trained on in a form that allows people to be identified. We welcomed a recent Opinion from the European Data Protection Board on this issue, and we will continue to improve our understanding of this complex area.
MYTH #5: AI development should be exempt from the law
FACT: There are no sweeping exemptions in the law for AI. Some of the respondents to our generative AI consultation argued that data protection law should not stand in the way of developing AI, and that regulators must show some leniency for innovation to be possible.
As our Commissioner has said, data protection and AI go hand in hand – firms can’t expect to use AI in their products or services without first considering how they will make sure people's rights and freedoms are protected. People can be reassured that this is not voluntary ethics, or an optional afterthought, but a legal obligation for anyone developing or using AI.
I would argue data protection is actually paving the way for developing AI responsibly. Ultimately, development that does not reflect the complexity of the technology or its implications does nothing to help good players succeed, or bad players reform, in a competitive ecosystem that drives innovation and growth.
We play our part in supporting this ecosystem every day. Instead of giving organisations permission to cut corners, we support them with plenty of practical advice so they can get this right. Our Innovation advice service can answer any burning questions, while our Regulatory Sandbox means we can be there to support the development of new products. We also collaborate closely with our fellow regulators in the Digital Regulation Cooperation Forum (DRCF), making sure firms have access to joined-up, cross-regulatory support via the AI & Digital Hub.
MYTH #6: Existing regulation is not fit for cutting-edge tech like AI
FACT: We are one of several UK regulators with oversight of AI, but perhaps one of the few that can look all the way back to the design stage, making sure systems are not just used but also developed responsibly and safely.
Regardless of whether they are using the latest AI technology, organisations need to follow the same rules if they are using people’s personal data. Data protection is powerful because, no matter the technology, it can be adapted to help firms carefully assess and mitigate any risks from their own use or development of AI. A fast-evolving technology like AI needs flexible, principles-based legislation rather than more prescriptive approaches that may leave gaps or create barriers.
But we recognise we must respond quickly to provide certainty about exactly how the law applies. As AI is deployed in new ways or introduced into new sectors, it raises more questions for us to address. That’s why we’re regularly scanning the horizon and updating our position on emerging AI issues to plug these gaps.
Read more about our work prioritising AI this year here, and the misconceptions arising from our generative AI consultations here.
ICO Press Office
Information Commissioner's Office
pressoffice@ico.org.uk