AI in University HR: Key legal, ethical and practical considerations

23 April 2026      Emma Walton-Pond, Communications Officer

Artificial intelligence is rapidly transforming the way we do many things, and HR is no exception. While AI can offer significant efficiencies, for example in recruitment, staff management and workforce planning, it also creates legal and ethical risks that universities must manage carefully.

In this blog, we provide HR leaders with a practical overview of how AI can be used, the emerging risks, and the steps institutions can take to ensure responsible and compliant implementation.


Where can AI be used in university HR?

There are many ways that AI can be used in HR management, including:

Recruitment and selection

Many employers now use AI-driven tools to streamline recruitment, such as drafting job descriptions; screening CVs and online profiles; analysing video interviews for communication patterns, tone and content; and running automated background checks or salary benchmarking.

Performance, conduct and productivity

Employers are also increasingly using AI to measure performance and productivity; support decisions on promotion; improve performance; monitor attendance; analyse data for workforce planning, cost efficiencies, or pay equity; and to identify skill gaps, recommend training and understand workforce trends.

Although these tools can increase efficiency and reduce administrative burdens, they also raise questions about transparency, bias and the fairness of decisions influenced by algorithms.


Legal risks for universities

There is currently no dedicated domestic legislation governing the use of AI by employers and its potential impacts on staff. However, there are existing laws that will be highly relevant.

Discrimination and bias

Under the Equality Act 2010, AI can produce discriminatory outcomes if, for example:

  • Data used to train the AI model reflects historic or societal bias (e.g. gender or racial imbalance in past recruitment).
  • Algorithms embed the designer’s unconscious bias.
  • Disabled staff are disadvantaged by automated systems that do not accommodate reasonable adjustments.

Unfair dismissal

AI‑influenced decisions may be difficult to justify in an Employment Tribunal if managers cannot explain how an algorithm reached a particular conclusion. Note also that Article 22 of the UK GDPR gives individuals the right not to be subject to decisions based solely on automated processing where those decisions have legal or similarly significant effects, and to request human review.

Data protection

Universities must meet UK GDPR requirements when using AI tools, including:

  • Identifying a lawful basis for processing.
  • Ensuring fairness, transparency and data minimisation.
  • Completing data protection impact assessments.
  • Applying strict security controls, particularly where special category data is used.

Human rights considerations

The use of AI tools may also interfere with employees' Article 8 rights under the European Convention on Human Rights (ECHR), which protects privacy. Any such interference must be lawful, necessary, and proportionate, with safeguards in place to protect employee rights.


Ethical considerations for universities

In addition to the legal issues, there are ethical considerations, and universities will need to develop their own principles and policies governing how and when they use AI.

Appropriate use - Universities must consider not only whether AI can be used, but whether it should be used, particularly in sensitive areas like disciplinaries, grievances or performance management.

Transparency - Staff and job applicants should be told how and when AI is being used, especially where it influences decisions about them.

Reliability - AI can “hallucinate” incorrect information. Over‑reliance on AI outputs can undermine decision quality and expose institutions to legal risk.

Accountability - AI can obscure responsibility. Universities must be able to explain and justify decisions influenced by algorithms.

Workforce impact - AI may automate tasks previously done by junior staff, affecting career pipelines. HR will therefore need to plan for upskilling and new training pathways.


Managing AI risk in university HR

AI has many potential benefits, but universities need to stay abreast of developments in this fast-moving area, be aware of the potential risks and take steps to mitigate them. Steps that universities can take include:

Testing and monitoring – Test AI tools by running them alongside human decision-making to identify discrepancies or discriminatory patterns.
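As a purely illustrative sketch of what such testing can look like, the snippet below compares selection rates between two groups of applicants using the "four-fifths rule", a heuristic drawn from US adverse-impact practice (not a UK legal test, and the figures are hypothetical). A ratio well below 0.8 would not establish discrimination, but it would flag the tool's outputs for closer human review.

```python
# Hypothetical illustration: flagging possible adverse impact in an
# AI screening tool's outcomes. All figures are invented for the example.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants the tool recommended for progression."""
    return selected / applicants

def impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the reference group's rate.
    Under the 'four-fifths' heuristic, a ratio below 0.8 is commonly
    treated as a prompt for further investigation."""
    return rate_group / rate_reference

# Hypothetical outcomes from a trial run alongside human shortlisting
rate_ref = selection_rate(45, 100)   # reference group: 45% progressed
rate_cmp = selection_rate(30, 100)   # comparison group: 30% progressed

ratio = impact_ratio(rate_cmp, rate_ref)
if ratio < 0.8:
    print(f"Impact ratio {ratio:.2f} is below 0.8 - review the tool")
```

In practice this kind of check would be run across each protected characteristic and repeated over time, since a tool that appears neutral at launch can drift as the data it sees changes.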

Human oversight - Always retain a human decision‑maker, especially for high‑risk matters such as hiring, promotion, or dismissal.

Impact assessments - Carry out Equality Impact Assessments to assess discriminatory risks and Data Protection Impact Assessments to identify privacy risks.

Clear policies - Introduce dedicated AI policies and update other internal policies and procedures, such as recruitment policies, data protection policies, equality, diversity and inclusion frameworks and disciplinary and grievance procedures.

Training - Managers and HR teams will need training on how AI tools work and when human judgement must be prioritised.

Workforce consultation - Although not legally required, engaging with staff or unions can build trust and strengthen implementation.


AI in employee grievances

One area where we are seeing an increase in the use of AI by employees is in the drafting of grievances, which can cause significant problems for employers. Documents are often long, unclear and overly legalistic. Further, AI may introduce incorrect case law, staff may inadvertently feed confidential or sensitive data into public AI tools, and AI can give employees overly optimistic litigation prospects, complicating resolution.

In this situation, our advice would be to: encourage informal resolution early; hold meetings to clarify concerns, rather than relying on AI‑generated text; summarise issues rather than responding point‑by‑point; and train HR teams to recognise and manage AI‑generated content.


Conclusion

AI presents significant opportunities for universities but also substantial risks. By combining robust governance, human oversight, clear policies and ongoing staff training, universities can harness AI effectively while protecting employee rights and institutional integrity.



Shakespeare Martineau

Tom Long, Partner, Shakespeare Martineau

Tom.long@shma.co.uk

Susannah Nicholas, Professional Support Lawyer, Shakespeare Martineau

Susannah.nicholas@shma.co.uk



Important information

This article provides general guidance only and should not be treated as legal advice.






