Esther Boffey, Diversity & Inclusion Leader, Stanton House
Diversity, Equity and Inclusion (DEI) is facing a challenging moment in the UK workplace. Recent research shows that 35% of organisations have reduced their DEI budgets in 2024, while over a third admit they have no plans to prioritise DEI over the next five years. Even where programmes exist, a quarter of employers describe their DEI efforts as entirely reactive rather than part of a clear, strategic plan. I’ve personally seen this decline reflected in the way organisations marked events like International Women’s Day, Pride Month and last year’s Black History Month.
There are many reasons behind this erosion, from political and economic pressures to fatigue with “performative” activity. On the one hand, the reduced volume of superficial social posts could mean that companies are moving away from virtue signalling.
On the other hand, the lack of sustained investment and strategy raises concerns about whether we are still making progress in DEI, or indeed going backwards.
At Stanton House, we believe it is important to continue engaging with DEI in a purposeful and strategic way, connected to the challenges of today’s workplace. We’ve recently been looking into one of the most pressing challenges, and biggest opportunities, of our time: the rise of AI. As businesses work rapidly to embed emerging technologies, questions of fairness, transparency and bias are unavoidable.
That’s why this Black History Month, we’re looking specifically at AI through the lens of inclusion for Black British employees in office roles. By exploring how AI can both support and undermine equity in hiring, we aim to spark practical conversations about what inclusion means in a tech-driven workplace.
Why is this important?
Black British candidates continue to face challenges in today’s labour market.
· Black British people still face a lower employment rate (69%)
· Black British graduates have one of the lowest high-skilled employment rates at 58.7%, compared to 69.5% for White graduates
· Numerous studies show persistent name- and ethnicity-based discrimination in recruitment.
· A survey found that 88% of Black British respondents had experienced racism in the workplace
AI is now widely used in hiring processes, including CV screening, automated job-ad targeting, chatbots that handle initial scheduling and Q&A, and analytics that scrutinise video interviews.
Research has shown that AI poses a risk when it comes to fairness and bias in the hiring process. There are a number of risks that are important to note:
1. Proxy Discrimination – even if an AI system doesn’t look at race directly, it can still pick up on clues that are linked to race, such as where someone lives, what school they went to, or the hobbies they list. These “proxies” can influence the system’s decisions and end up unfairly pushing some CVs down the ranking.
2. Amplification of historical bias – models trained on past hiring data learn historical patterns. Because Black candidates have historically been under-hired, a model can simply ‘prefer’ CVs that resemble past hires, repeating and reinforcing old disparities.
3. Gaps in transparency – there is currently a lack of transparency when it comes to how AI tools make decisions. This means candidates may be rejected, but recruiters can’t see the reasons why. This makes it very hard to spot or fix unfair outcomes.
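To make the proxy-discrimination risk above concrete, here is a minimal, entirely synthetic sketch. Nothing in it reflects any real tool or dataset: the candidate pool, the postcode correlation and the scoring rule are all invented for illustration. The “model” never sees the protected attribute, yet because it rewards a postcode feature that happens to correlate with group membership, the two groups end up shortlisted at different rates.

```python
import random

random.seed(42)

# Synthetic candidate pool. Group membership is the protected attribute;
# it is generated here but never passed to the scoring function.
candidates = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Invented correlation: group B candidates more often live in postcode area 2.
    weights = [0.8, 0.2] if group == "A" else [0.3, 0.7]
    postcode_area = random.choices([1, 2], weights=weights)[0]
    skill = random.gauss(50, 10)  # skill is identically distributed across groups
    candidates.append({"group": group, "postcode": postcode_area, "skill": skill})

def screening_score(c):
    # The score uses only skill and postcode -- ethnicity is never an input,
    # yet postcode acts as a proxy for it.
    return c["skill"] + (5 if c["postcode"] == 1 else 0)

# Shortlist the top 20% by score.
shortlist = sorted(candidates, key=screening_score, reverse=True)[:2_000]

def shortlist_rate(group):
    total = sum(1 for c in candidates if c["group"] == group)
    picked = sum(1 for c in shortlist if c["group"] == group)
    return picked / total

print(f"Group A shortlist rate: {shortlist_rate('A'):.1%}")
print(f"Group B shortlist rate: {shortlist_rate('B'):.1%}")
```

Despite equal skill distributions, group A is shortlisted at a visibly higher rate — which is exactly why monitoring outcomes, not just inputs, matters.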
What can we do to mitigate these risks?
One of the most important steps is to track conversion rates at every stage of the hiring process: from application, to shortlist, to interview, to offer, to acceptance, and finally retention. We’ve kept track of this at Stanton House, and it has really helped us identify where Black or minority ethnic candidates may be dropping out of the funnel, and what action needs to be taken.
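The stage-by-stage tracking described above can be sketched in a few lines. This is illustrative only — the counts are made up, and real numbers would come from your applicant tracking system — but it shows the core calculation: compare each stage’s conversion rate for a given group against the overall rate, and the stage where the gap opens up is where to focus.

```python
# Hiring-funnel stages, in order.
STAGES = ["application", "shortlist", "interview", "offer", "acceptance"]

# Hypothetical counts of candidates reaching each stage, split by group.
# These figures are invented for illustration.
funnel = {
    "All candidates":          [1000, 300, 120, 40, 35],
    "Black / minority ethnic": [250,   55,  20,  6,  5],
}

for group, counts in funnel.items():
    print(group)
    # Conversion rate from each stage to the next.
    for prev, nxt, reached, advanced in zip(STAGES, STAGES[1:], counts, counts[1:]):
        print(f"  {prev} -> {nxt}: {advanced / reached:.0%}")
```

In this invented example the application-to-shortlist rate for Black and minority ethnic candidates (22%) lags the overall rate (30%), which would point to the screening stage as the place to investigate first.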
When using AI tools, it is important to check your external providers carefully. They should be transparent about how their tools work, and they should be able to show you what they’ve done to ensure fairness and reduce bias. You should be equally open with your candidates about how you’re using AI in your hiring process.
Some firms experiment with anonymised or “name-blind” CV screening, where personal details like names are hidden. This can reduce certain types of bias, but it isn’t perfect and needs to be monitored rather than introduced as a standalone solution.
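A minimal sketch of what name-blind screening can look like, assuming CVs arrive as structured records (the field names here are hypothetical, not any particular system’s schema). Identifying fields are stripped before the CV reaches a reviewer or a model — but note that proxies buried in free text, such as school names or postcodes, survive this step, which is precisely why the technique needs ongoing monitoring rather than being treated as a complete fix.

```python
# Hypothetical field names that can reveal a candidate's identity.
REDACTED_FIELDS = {"name", "email", "phone", "address", "photo_url", "date_of_birth"}

def anonymise(cv: dict) -> dict:
    """Return a copy of the CV record with identifying fields removed.

    Note: this only removes top-level structured fields; proxies inside
    free-text fields (schools, postcodes, club memberships) remain.
    """
    return {key: value for key, value in cv.items() if key not in REDACTED_FIELDS}

cv = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "skills": ["Python", "SQL"],
    "experience_years": 6,
}
print(anonymise(cv))  # identifying fields are gone; skills and experience remain
```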
Finally, as our Co-Founder Nick Eaves describes in our most recent Outspoken article, AI is nothing without the human process. It should be used to support decisions, not to replace decision-making. Every automated rejection should be reviewed by a person, to ensure that no promising candidates are being unfairly excluded.
Black History Month reminds us that inclusion cannot be left to chance. As AI reshapes recruitment, we have a choice: allow it to repeat the biases of the past, or use it to build fairer pathways for Black British talent. At Stanton House, we are committed to the latter: using data, transparency and human judgement to make hiring both smarter and more inclusive.