Simon Case, Head of Data

Data, Gen AI | Mon 5th August 2024

3 AI regulation questions you need to address in 2024

AI might promise your organization the ability to cut costs, drive insight and boost innovation, but at what cost?

Poorly conceived AI projects carry enormous risk – whether that’s introducing bias into recruitment processes, giving business leaders incorrect advice, or compromising the security and confidentiality of valuable business data.

Governments are starting to introduce legislation to ensure AI is used safely and responsibly. And not before time – spending on AI software is expected to grow 50% faster than the wider software market, and one recent survey found that 71% of companies are looking to expand investment in AI and ML tools this year.

What legislation is on the horizon? 

The EU AI Act is the world’s first major AI legislation, and was approved by the European Parliament in March 2024. In the US, meanwhile, several states are introducing legislation to govern the use and deployment of AI applications. 

The EU law will create new mandates for organizations to validate, monitor and audit their entire AI process, with fines of up to €35 million or 7% of global annual turnover, whichever is greater.

What does AI legislation mean for my organization? 

Legislation isn’t necessarily bad news. There have been numerous examples of AI being deployed in potentially harmful ways – whether that’s chatbots providing poor mental health advice, or a recruitment prototype that identified potential job candidates who were almost exclusively white men. Effective legislation should make AI less likely to cause harm.

The issue facing CIOs isn’t whether AI legislation will happen – it’s a question of when. By taking a proactive approach to upcoming legislation and building compliance and governance into AI systems from day one, CIOs can avoid costly and potentially complex changes in the future. Here are three AI issues we think should be at the top of your to-do list.

1. Transparency 

Transparency is the practice and principle of making AI understandable and clear to humans. People should understand what information AI is using, and how it’s making decisions. The goal is to understand the ‘how’ and ‘why’ behind AI outputs, not just the outputs themselves. The key aims of future regulation are to:

  • Eliminate bias in AI systems 
  • Provide clear user information 
  • Clearly label ‘deepfake’ content 
  • Ensure there are checks and balances in AI 

Legislation will require general-purpose AI systems (and the models they’re based on) to comply with transparency requirements, such as publishing detailed summaries of the content used for training. The most powerful AIs will face additional obligations, including carrying out model evaluations, assessing and mitigating risks, and reporting incidents.

Organizations that don’t have extensive governance around their AI tools, or a good handle on their internal data, could find these rules difficult to comply with.
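A practical starting point is to keep a structured, auditable record for each model of what it was trained on, what it’s for, and how it performed in evaluations. The sketch below shows one minimal way to do that in Python; the field names, structure and example values are our own illustration, not a format prescribed by the EU AI Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class ModelTransparencyRecord:
    # Hypothetical record structure: the Act points towards training-data
    # summaries, evaluations and risk information, but does not prescribe fields.
    model_name: str
    owner: str                      # team accountable for the model
    training_data_summary: str      # plain-language description of training sources
    intended_use: str               # what the model is (and is not) for
    known_limitations: list[str] = field(default_factory=list)
    evaluation_results: dict[str, float] = field(default_factory=dict)
    last_reviewed: str = date.today().isoformat()

    def to_json(self) -> str:
        """Serialise the record so it can be published, shared or audited."""
        return json.dumps(asdict(self), indent=2)


# Example usage with made-up values.
record = ModelTransparencyRecord(
    model_name="cv-screening-assistant",
    owner="talent-analytics",
    training_data_summary="Anonymised internal CVs (2019-2023) plus public job descriptions.",
    intended_use="Rank applications for human review; never auto-reject candidates.",
    known_limitations=["Under-represents candidates with career breaks"],
    evaluation_results={"selection_rate_gap": 0.04},
)
print(record.to_json())
```

Even a lightweight record like this makes it far easier to answer auditors’ questions about what a model does and what it was trained on.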

2. Content creation 

In August 2023, a US federal court issued a landmark ruling on AI and copyright, finding that purely AI-generated content cannot be protected by copyright. Additionally, the new EU AI Act will require AI-created content to be labeled.

Whether your organization is using AI to develop internal policies or external-facing marketing content, the risks are similar. Do you own the content being used to generate AI outputs, and if not, do you have the necessary rights to use that content for this specific purpose? Could the output infringe anyone else’s copyright or likeness?

Our belief is that this issue could become extremely contentious, and that business leaders should be setting clear policies around content creation now.
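One practical step is to make sure any AI-generated content carries a clear, machine-readable label before it is published. The snippet below is a minimal illustration of attaching provenance metadata to generated text; the schema, field names and example values are our own assumptions rather than anything mandated by the EU AI Act.

```python
from datetime import datetime, timezone


def label_ai_content(text: str, model: str, prompt_ref: str) -> dict:
    """Bundle generated text with provenance metadata for downstream systems."""
    return {
        "content": text,
        "ai_generated": True,                # explicit disclosure flag
        "model": model,                      # which system produced the content
        "prompt_reference": prompt_ref,      # hypothetical internal prompt ID
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was generated with the assistance of AI.",
    }


# Example usage with made-up values.
draft = label_ai_content(
    text="Introducing our new product line...",
    model="internal-marketing-llm-v2",
    prompt_ref="PROMPT-0042",
)
print(draft["disclosure"])
```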

3. High risk AI implementations 

Organizations that provide or deploy high-risk AI systems will be subject to significant regulatory obligations under the EU AI Act. 

These systems will be expected to meet a higher threshold of safety, with enhanced requirements around things like due diligence, risk assessment and transparency. Examples of high-risk AI systems include those used in critical infrastructure, education, healthcare and banking, as well as systems that could influence elections. Some uses of AI by law enforcement and border control agencies will also be regulated.

For CIOs, it’s important to consider whether any AI implementations might be classified as ‘high risk’ – for example, does the AI make a decision that has the potential to impact a consumer or service user in a significant way? If a system is high risk, are the appropriate controls and governance measures in place to meet future legislative requirements? 

Organizations using AI tools that pose a significant risk of harm to health, safety, human rights, the environment, democracy or the rule of law will also be regulated. In this case, organizations must conduct risk assessments, take steps to reduce risk, maintain use logs, comply with transparency requirements, and ensure human oversight. EU residents will have the right to submit complaints about high-risk AI systems and to receive explanations about decisions that affect them.
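For high-risk use cases, two of the most concrete obligations are keeping use logs and ensuring human oversight. The sketch below illustrates one simple way to record each AI recommendation alongside the human reviewer’s final decision; the log fields, file name and the example case are hypothetical, not a prescribed logging format.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_use_log.csv")
FIELDS = ["timestamp", "case_id", "ai_recommendation", "human_decision", "reviewer"]


def log_decision(case_id: str, ai_recommendation: str,
                 human_decision: str, reviewer: str) -> None:
    """Append one reviewed decision to the audit log, creating the file if needed."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "case_id": case_id,
            "ai_recommendation": ai_recommendation,
            "human_decision": human_decision,   # the human reviewer has the final say
            "reviewer": reviewer,
        })


# Example usage with made-up values.
log_decision("LOAN-2024-0187", "decline", "refer to underwriter", "j.smith")
```

Keeping the human decision alongside the AI recommendation also gives you the evidence you need to show that oversight is real, not just a policy on paper.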

The bottom line 

Governance is about far more than just documentation – you need to demonstrate that your organization has taken steps to understand and mitigate the challenges AI presents. But taking a proactive approach to safe and responsible AI deployment is certainly less of a headache than trying to apply governance principles after the fact.

This blog was written by Simon Case (Head of Data), Adam Fletcher (Data Scientist), and Lewis Crawford (Data and AI Service Lead).