
Responsible AI

Epitome's responsible AI position is built around a simple operating rule: AI may assist with interpretation, analysis, matching, and prioritisation, but consequential workforce decisions should remain subject to human review, documented reasoning, and organisational accountability.

How Epitome uses AI

Epitome's current AI governance material describes AI-supported use cases such as skills inference, competency mapping, gap analysis, candidate ranking, career pathing, and workforce analytics (Available now). These capabilities are intended to help users see patterns, understand fit, prioritise action, and interrogate the basis for recommendations.

Epitome uses AI to support clearer, more consistent, and more explainable decision support across workforce workflows. It does not present AI as a replacement for human judgement.

AI principles

Epitome's current governance draft is built around six principles (Available now):

  • Fairness
  • Transparency
  • Human oversight
  • Accountability
  • Privacy by design
  • Continuous improvement

These principles matter because they translate AI trust into operating expectations. Epitome's position is that AI outputs should be understandable, challengeable, reviewable, and bounded by clear governance rather than treated as opaque authority.

EU AI Act and employment use cases

The EU AI Act uses a risk-based model that is especially relevant in employment and workforce settings. The Act entered into force on 1 August 2024. Prohibited AI practices have applied since 2 February 2025, and high-risk obligations for employment-related systems are currently expected from 2 August 2026, although the timetable may still be affected by subsequent EU legislative developments (Available now).

For Epitome, the practical point is that employment-related AI must be assessed not only as a technical feature, but in the context in which it is used. The relevant scope is broad and can extend beyond employees to candidates, contractors, trainees, and other work-related relationships (Available now).

Prohibited uses

Epitome's trust position is that certain AI uses should not form part of the platform's employment or workforce workflows. Epitome does not permit the use of its platform for:

  • biometric emotion recognition of employees, candidates, contractors, or other workers (Available now)
  • webcam, video, voice, or similar biometric analysis used to infer emotions or intentions in the workplace (Available now)
  • biometric categorisation used to infer race, religion, political views, trade union membership, sex life, or sexual orientation (Available now)
  • social scoring in employment or workforce decisions (Available now)
  • manipulative, deceptive, or exploitative AI techniques designed to distort behaviour or impair informed decision-making in employment contexts (Available now)

These exclusions are intended to align Epitome's product posture with prohibited-practice concerns under the EU AI Act and related fundamental-rights expectations (Available now).

High-risk employment uses

Many employment-related AI systems are likely to be treated as high-risk under the EU AI Act when they are used for recruitment, selection, decisions affecting work-related relationships, task allocation based on individual traits or behaviour, or monitoring and evaluation of workers (Available now).

For Epitome, this means that the legal classification of a feature may depend not only on what the feature does technically, but also on how the client deploys it in practice. Examples of uses that may fall into the high-risk category include:

  • candidate ranking or filtering used in recruitment or selection (Supported with configuration)
  • skills or competency outputs used to make promotion, progression, reward, or termination decisions (Supported with configuration)
  • workforce analytics or behavioural insights used to monitor or evaluate individual workers (Supported with configuration)
  • role-fit, matching, or allocation outputs used to materially influence access to work opportunities or task allocation (Supported with configuration)

High-risk does not mean prohibited. It does mean that the feature requires tighter governance, stronger documentation, and careful deployment controls.

What data Epitome uses

Current material states that Epitome's AI operates on:

  • direct candidate or user input, such as assessment responses, profile information, or uploaded CVs (Available now)
  • client-provided data, such as employee records, role definitions, or organisational structures (Available now)
  • public reference frameworks, such as ESCO, O*NET, SkillsFuture, and similar classification or skills frameworks (Available now)
  • aggregated market or benchmark inputs where they are anonymised and used at a summary level (Available now)

The important trust-centre point is that Epitome positions these sources as explicit and attributable. Buyers should be able to understand the general categories of data used by the system rather than being asked to trust an undefined external model.

What data Epitome does not use

Epitome is equally clear about what it does not do. Epitome states that it does not:

  • scrape social media platforms for candidate or employee data (Available now)
  • purchase candidate data from third-party brokers or aggregators (Available now)
  • use browsing history or web activity as an assessment input (Available now)
  • infer personal characteristics from IP or location data (Available now)
  • collect biometric data such as facial recognition or voice analysis for these workflows (Available now)
  • infer protected characteristics from names, photos, or similar proxies (Available now)

This boundary is central to Epitome's trust position and is made explicit here so buyers can understand the design choices behind the platform.

Explainability and confidence

Epitome's current draft positions explainability as a baseline requirement rather than an optional extra (Available now). The platform is intended to show the reasoning behind outputs such as skills gaps, rankings, and recommendations, including the factors considered, their relative contribution, and the relevant data sources or benchmarks.

The draft also refers to confidence levels and limitations being shown where applicable (Available now). In trust-centre language, that means Epitome should explain not only why a result was produced, but also where uncertainty remains and where a user should apply additional judgement.
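As an illustration of what such an explanation could carry in practice, the sketch below models a record with factors, relative contributions, data sources, a confidence level, and known limitations. All class, field, and factor names here are hypothetical and illustrative; this is not Epitome's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str           # what was considered
    contribution: float # relative weight of this factor (0..1)
    source: str         # data source or benchmark it came from

@dataclass
class Explanation:
    output: str                     # the recommendation being explained
    factors: list                   # list of Factor
    confidence: float               # model confidence, 0..1
    limitations: list = field(default_factory=list)

    def top_factors(self, n=3):
        """Factors ranked by contribution, so users can see what drove the output."""
        return sorted(self.factors, key=lambda f: f.contribution, reverse=True)[:n]

# Illustrative example only, not real output.
explanation = Explanation(
    output="skills gap: data engineering",
    factors=[
        Factor("assessment responses", 0.5, "direct user input"),
        Factor("role definition", 0.3, "client-provided data"),
        Factor("ESCO skills framework", 0.2, "public reference framework"),
    ],
    confidence=0.72,
    limitations=["no recent project history available"],
)
```

Surfacing limitations alongside the confidence figure is what lets a reviewer know where to apply additional judgement rather than treating the score as authoritative.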

Human-in-the-loop controls

The governance draft clearly states that Epitome is designed as a decision-support system, not a decision-automation system (Available now). In practice:

  • hiring recommendations are intended to support recruiters or hiring managers, not replace them (Available now)
  • development and skills-gap outputs are meant to inform planning, not dictate action (Available now)
  • authorised users are expected to be able to override or adjust AI recommendations, with rationale captured through an audit trail (Available now)

Where clients require specific escalation, approval, or override flows, those can be implemented within the product and process design (Supported with configuration).
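A minimal sketch of how an override with a captured rationale might be recorded in an append-only audit trail. Function and field names are hypothetical, not Epitome's implementation; the point is that an override without a documented rationale is rejected rather than silently logged.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    user: str            # who overrode the recommendation
    recommendation: str  # what the system suggested
    decision: str        # what the human decided instead
    rationale: str       # documented reasoning, required
    timestamp: str       # UTC time of the override

audit_trail = []  # append-only log reviewed during audits

def override(user, recommendation, decision, rationale):
    """Record a human override; a rationale is mandatory."""
    if not rationale.strip():
        raise ValueError("an override must include a documented rationale")
    record = OverrideRecord(
        user, recommendation, decision, rationale,
        datetime.now(timezone.utc).isoformat(),
    )
    audit_trail.append(record)
    return record

# Illustrative usage only.
record = override(
    user="recruiter_1",
    recommendation="shortlist candidate A",
    decision="shortlist candidate B instead",
    rationale="stronger domain experience demonstrated at interview",
)
```

Making the record immutable (`frozen=True`) and the log append-only reflects the document's emphasis on reviewable, accountable decisions.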

When a feature may not be high-risk

Not every AI-enabled workflow in employment will automatically be high-risk. Under the EU AI Act, some systems may fall outside the high-risk category where they perform a narrow procedural or preparatory task and do not materially influence the outcome of decision-making (Available now).

In Epitome's context, examples could include administrative support features that help organise workflows, surface information, or improve the consistency of a previously completed human activity without ranking, filtering, replacing, or materially influencing the underlying employment decision (Supported with configuration).

That boundary matters. A scheduling or workflow assistant may be lower risk, while a ranking or scoring tool used in selection may be high-risk. Epitome therefore expects features to be assessed according to both technical function and actual use.

Fairness and bias monitoring

Epitome's current AI governance material states that protected characteristics are excluded from decision factors and that fairness monitoring can be performed where the relevant demographic data is available (Available now). The draft specifically references adverse-impact monitoring, including 4/5ths-rule analysis, statistical testing, and feedback channels for potential concerns (Available now).
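The 4/5ths-rule analysis referenced above is a simple calculation: the selection rate of each group is compared, and if the lowest rate falls below four-fifths (80%) of the highest, the result is flagged for adverse-impact review. A minimal sketch, using purely illustrative group names and numbers:

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / total

def four_fifths_check(rates: dict):
    """Compare group selection rates; a ratio below 0.8 flags potential adverse impact."""
    highest = max(rates.values())
    lowest = min(rates.values())
    ratio = lowest / highest
    return ratio, ratio < 0.8

# Illustrative numbers only, not real data.
rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(42, 100),  # 0.42
}
ratio, flagged = four_fifths_check(rates)
# ratio = 0.42 / 0.60 = 0.7, below the 0.8 threshold, so the result is flagged
```

A flag is a trigger for human review and statistical follow-up, not an automatic finding of bias; the draft pairs this check with statistical testing for that reason.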

Epitome's governance position is not limited to fairness alone. In employment contexts, AI review should also take into account fundamental rights, health, safety, dignity, autonomy, and the risk of materially influencing work-related outcomes without appropriate oversight (Available now).

Epitome's fairness and rights position is designed to be clear but disciplined:

  • Epitome aims to design AI outputs so that protected characteristics do not drive recommendations (Available now)
  • fairness analysis depends in part on the lawful collection and availability of demographic data (Supported with configuration)
  • independent bias-audit support, data exports, and audit-ready documentation (Available on request)
  • more advanced or intersectional analysis (In progress in the current roadmap)

Model governance and review

The AI draft describes a lifecycle covering design, development, validation, deployment, and operations (Available now). It also refers to model documentation, review requirements, and a model inventory covering use case, inputs, oversight requirements, and limitations (Available now).

Epitome intends to document what each model or AI capability is for, what inputs it relies on, how it is reviewed, what limitations are known, and who owns it. More detailed model cards and supporting documentation are Available on request.
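The inventory described above could be represented as structured records so that ownership and review cadence are trackable. The field names, the example entry, and the review function below are assumptions for illustration, not Epitome's actual inventory schema:

```python
from datetime import date, timedelta

# Hypothetical inventory entry; all values are illustrative.
MODEL_INVENTORY = [
    {
        "name": "skills-inference",
        "use_case": "skills inference from CVs and assessment responses",
        "inputs": ["direct user input", "client-provided data", "ESCO"],
        "oversight": "human review required before any employment decision",
        "limitations": ["coverage varies by role family"],
        "owner": "AI governance lead",
        "last_review": "2025-01-15",
    },
]

def needs_review(entry: dict, today: date, max_age_days: int = 365) -> bool:
    """Flag an inventory entry whose last review is older than the allowed window."""
    last = date.fromisoformat(entry["last_review"])
    return today - last > timedelta(days=max_age_days)
```

Keeping use case, inputs, oversight requirements, limitations, and an owner in one record is what makes "audit-ready documentation" a query rather than a scramble.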

Provider and deployer responsibilities

Responsible AI in employment and workforce contexts is shared work. Epitome can document intended purpose, feature boundaries, explanations, technical controls, and audit support. Clients remain responsible for how they deploy the system in practice, including lawful basis, notices, final employment decisions, and the collection of demographic data where fairness analysis is required.

This distinction matters under the EU AI Act. A provider may classify and document a system for a given intended purpose, but a deployer remains responsible for how that system is used in a real employment context (Available now). If a feature is used outside its intended purpose, or in a way that materially influences employment outcomes beyond the documented design, the client's obligations and risk profile may change (Available now).

Epitome therefore provides explainable tools, governance controls, intended-purpose documentation, and compliance-support features, while the client remains responsible for its own legal obligations, policy choices, and employment decisions (Available now).