
AI isn’t just supporting HR decisions. It’s challenging them.

22 April 2026      Emma Walton-Pond, Communications Officer

Using technology wisely and responsibly in job evaluation

AI is already embedded in HR and wider workplace systems, from workforce planning and performance management to decision support and documentation. These governed tools are designed to improve consistency, strengthen documentation and support structured decision-making.

But AI isn’t staying neatly within those systems. Employees are now using open-access AI tools to challenge decisions more quickly, more persuasively and often in more structured ways.

Grievances, appeals and pay-related cases are increasingly structured, policy-aligned and harder to dismiss. That doesn't mean they're right, but it does mean they're more persuasive and more demanding to deal with.

With easy access to tools that can quickly sharpen and iterate arguments, HR teams are already seeing a rise in formal challenges and tribunal claims, driven by greater awareness of rights and confidence in pursuing them (HR Connect: Employment Tribunals: Rising Claims).

The question for HR isn’t whether AI is being used. It’s whether organisations are prepared for how it is reshaping the nature of challenge.

Job evaluation is where this shift becomes most visible. Because it underpins pay, grading and equality, even small inconsistencies in reasoning or documentation are now easier to surface and scrutinise. Where explanations are unclear or comparisons are weak, decisions are more likely to be challenged effectively.


Not all AI is equal

Generative AI can produce fluent, confident and highly persuasive arguments. But in job evaluation, that presentation can be misleading.

Job evaluation frameworks such as HERA and FEDRA rely on structured scoring rules built on extensive research, experience and consistent internal interpretation. These are precisely the things open-access AI tools don't have access to.

So while an AI-generated argument might sound plausible, it isn’t grounded in the actual methodology of the scheme.

That distinction matters: clarity of language is not the same as validity of outcome.

This is why governed, internal AI tools are now being explored, not as decision-makers, but as structured quality checks within the evaluation process.


A more careful approach: AI as a second pair of eyes

This is where ECC’s approach is deliberately cautious.

Rather than handing decision-making to AI, ECC is introducing AI into its trusted HERA and FEDRA job evaluation schemes as a structured second pass.

With HERA-AI and FEDRA-AI (currently in testing), the process crucially remains human-led:

  • A trained analyst completes the evaluation first
  • AI then reviews the evidence and produces indicative scoring and a structured rationale for consideration by the analyst
  • The human analyst remains accountable for the final decision

Used in a governed way, AI can support human judgement by acting as a sense-check on how decisions are evidenced. It provides a structured comparative layer within the job evaluation process: a consistent reference point for how the framework has been applied, helping make decisions easier to compare, document and explain when scrutinised.

The aim is not to prevent challenges, but to reduce the risk that they succeed because of weaknesses in process or reasoning.


This is really about governance, not technology

It is tempting to treat AI as a tool problem, as something to adopt or not adopt. But the real shift is in governance.

The role of governed AI in job evaluation is not to automate judgement. It is to strengthen the integrity of decision-making before it is exposed to external scrutiny.

As employees gain access to tools that can sharpen and structure workplace challenges, organisations need processes that are resilient under that pressure:

  • clearer reasoning → ensuring evidence is clearly linked to outcomes
  • stronger documentation → creating records that can withstand scrutiny
  • consistency checks → identifying variation before it becomes dispute
  • explainability → ensuring decisions can be clearly understood and defended

The organisations that adapt fastest will not necessarily face fewer challenges. But they will be better equipped to respond to them.


The bottom line

AI is influencing both sides of the employment relationship.

The shift is not just in how decisions are made, but in how they are examined, compared and challenged.

Employees do not need perfect arguments to create pressure. They need structured ones.

Organisations that treat AI as an efficiency tool risk falling behind, not in capability but in defensibility.

Because as the quality of challenge rises, the cost of weak reasoning rises with it.


ECC




