To date, the UK government has adopted a “pro innovation” approach to AI regulation, refraining from legislation with a view to enabling the UK to keep pace with rapid developments in AI. However, this looks set to change with the recent publication of a first draft Artificial Intelligence (Regulation and Employment Rights) Bill (“the Bill”), potentially marking the starting point for more formal regulation, particularly in relation to workplace decision making by AI. This blog explores what the Bill proposes by way of regulation, and offers some practical tips on what employers can be doing now.
What is the Artificial Intelligence (Regulation and Employment Rights) Bill?
Concerned that “employment law is failing to keep pace with the rapid speed of technological change”, the Trades Union Congress (“TUC”) commissioned a taskforce, made up of members from a variety of sectors (law, technology, politics, HR, trade unions and the voluntary sector), to prepare draft legislation proposing what a regulatory framework could look like. The taskforce was prompted in part by an April 2024 survey suggesting that over 70 per cent of working adults oppose AI being used to make decisions relating to hiring, firing, performance management or bonuses, as well as by widespread concerns about inherent bias and the risk of discrimination in AI decision making.
The Bill is the outcome of that process and seeks to regulate employers’ use of AI systems as well as promoting the development of safe, secure and fair AI systems in the employment field.
What are the key features of the Bill?
1. It applies to “high risk” decision making: Most obligations under the Bill are engaged when an employer makes a “high-risk” decision using AI. “High-risk” means any decision that has the capacity or potential to produce (a) legal effects concerning jobseekers, workers or employees, or (b) other similarly significant effects. The Bill lists a range of activities that are deemed “high-risk”, including but not limited to:
- The process of recruitment;
- The setting of wages and other terms and conditions of employment;
- Steps in relation to disciplinary matters; and
- Steps in relation to the termination of employment.
2. Protection against discrimination: The Bill amends and extends existing rights under the Equality Act 2010, and extends the restrictions on automated decision-making under the UK General Data Protection Regulation (“UK GDPR”), to cover the use of AI systems by employers in relation to jobseekers, workers and employees. Under the amendments, the burden of proof is on the employer to demonstrate that its systems are not discriminatory in order to avoid liability. Employers will be able to escape liability for discriminatory AI-powered decisions if they can show that they (i) did not create or modify the AI system; (ii) audited the AI system for discrimination at each stage; and (iii) put procedural safeguards in place to remove the risk of discrimination.
3. Prohibition on the detrimental use of emotion recognition technology: This is technology that deploys biometric data, such as facial expressions or tone of voice, for the purpose of identifying or inferring the emotions or intentions of natural persons.
4. Obligation to perform workplace AI risk assessments before making any “high-risk” decisions. The assessment would need to address risks relating to human rights, health and safety and data protection.
5. Obligation to consult with employees, workers and trade unions before “high-risk” AI systems are introduced, and thereafter at 12-month intervals. This right would mirror the existing collective redundancy consultation obligations, giving all parties a right to human review of AI decision-making and access to information about how the AI system works.
6. Creating a record of information relating to AI systems used when making “high-risk” decisions: Employers would need to establish and maintain a register of information about the AI systems used in high-risk decision-making, with individuals having a right to a personalised statement explaining how AI was used to make high-risk decisions about them.
7. Importance of human review: Employees, workers or jobseekers would be entitled to a right to human reconsideration of any high-risk decision.
8. Right to disconnect: The draft contains a statutory right for workers to disconnect from emails outside their contracted hours, as a way to mitigate AI-related work intensification.
9. Automatic unfair dismissal: A dismissal would be automatically unfair where it is linked to an unfair reliance on “high-risk” decision-making, or where it is used as a punishment because an employee has exercised their right to disconnect.
10. Guidance and access to data: The Bill contains a right for trade unions to be given data about union members that is being used in relation to AI decision-making in the workplace. In addition, Acas would be obliged to prepare guidance on AI and data at work.
What does the future hold for regulation of AI in the UK?
The Bill is not yet law, and it is currently unclear whether it will be progressed. Much will depend on the outcome of the general election on 4 July 2024. The Bill also competes with a separate UK Artificial Intelligence (Regulation) Bill, a private members’ bill that received a third reading in the House of Lords. That bill contains principles-based regulation of AI to ensure transparency, equality and fairness, but takes a proportionality approach to the risks and benefits of AI systems so as to allow the UK to remain competitive in the international market.
Whilst the current Conservative government initially ruled out any imminent plans to legislate to regulate AI in the workplace, on questioning in April 2024 it said it would “legislate in due course once the risks were fully understood”.
To date, the Labour Party has revealed little on its position on AI regulation, saying it would “work with workers and their trade unions, employers and experts to examine what AI and new technologies mean for work, jobs and skills and how to promote best practice in safeguarding against the invasion of privacy through surveillance technology, spyware and discriminatory algorithmic decision making.”
Even if the government decided to progress legislation in this area after the election, the Bill is unlikely to be enacted as currently drafted. For example, on the right to disconnect (which is not AI specific), the Labour Party proposes to enact a ‘right to switch off’, but this does not go as far as the Bill envisages, instead enabling workers and employers to agree bespoke, mutually beneficial contractual terms or policies. If you’d like to read more about the Labour Party’s recently proposed employment reforms, please see our in-depth publication on this topic.
However, the Bill does serve as a framework from which discussions and subsequent policy direction can be developed.
What should employers be doing now?
Despite the lack of AI regulation in the UK at the moment, employers should nevertheless start preparing for the increasing use of AI in the workplace. Our top tips are:
- Monitor developments in sector specific and general AI regulation to understand the direction of travel.
- Establish an AI committee to take ownership of AI in the workplace, including setting policy and monitoring compliance.
- Have a policy on the use of AI in the workplace, covering what is and is not permitted use for workers, who infringements should be reported to, and the consequences of non-compliance.
- Carry out risk assessments/audits of the AI systems used, so that there is a proper understanding of how they have been put together and how they work, particularly what is taken into account when AI is making decisions. This is already a legal requirement under the UK GDPR, and auditing is likely to feature in any future regulation in this area (the Bill proposes a statutory defence for employers that have carried out a proper audit).
- Keep a human in the loop: decision making should not be entirely automated, so ensure any AI-assisted decision is subject to human review and intervention, with a human available to whom individuals can raise a challenge.