Artificial intelligence has arrived, transforming industries and economies around the world. But as models begin to equalise and the debate moves from adoption to integration and scaling across organisations, concerns around bias and fairness risk being compounded.
To mark the launch of the firm’s first-ever Hackathon, HSF Kramer joined forces with FFinc, an organisation dedicated to advancing equality in the workplace, to bring together HR professionals, Diversity and Inclusion practitioners, employment law experts, leading technologists, and thought leaders to discuss what AI means for the future of work, how it might recast the modern economy, and how to prevent fairness being undercut by innovation.
Human advantage in the age of AI
Our forum opened by challenging the perception of AI as something alien, with speakers emphasising the technology is a human construct built on decades of human data, decisions and values.
Our panellists stressed that opportunities to use AI to drive efficiency and raise standards of excellence must be balanced with efforts to spot and remedy biased outcomes, which can stem from social inequities inseparable from the data driving the technology.
As one speaker noted, artificial intelligence and human intelligence are fundamentally different, but the technology remains a reflection of ourselves.
AI and the future of work
The conversation moved to how technology has transformed work throughout history, with AI the latest innovation to reshape the economy. But, unlike previous technological leaps which have disrupted traditional blue-collar work, AI will also change the nature of white-collar roles.
However, while much attention is given to the technology's potential to displace entire jobs, AI is currently taking on specific tasks within roles. So while all roles will likely be affected by automation to some degree, only a small number may be completely replaced. That task-level automation is beginning to extend to non-routine work which has traditionally required creativity, empathy or judgement.
As huge resources continue to be poured into AL development, the best response for workplaces remains education, preventing workers from feeling alienated by a technology whose development has no clear finish line.
Digital literacy: Leaving no one behind
A major concern raised by panellists was digital exclusion. As our speaker warned, technology can be a bridge to opportunity, but not everyone can currently cross it.
Around the world, entire communities risk being excluded from a technology set to fundamentally alter the global economy. As one example of how groups may be left out, languages spoken by small numbers of people are underrepresented in the data used to train AI systems, limiting how those communities can access and use the technology. Speakers urged leaders to be trained on how to introduce AI to younger generations and to focus on real-world challenges rather than abstract fears or optimism.
Digital inclusion, the panel concluded, must be intentional and seen as an opportunity, not a risk.
Bias, governance and the human impact of AI
Our discussion returned to the topic of embedded bias in AI systems. Examples included how algorithms can replicate gender inequality by making salary recommendations based on biased historical data, or by overlooking the quality of in-person interactions in performance reviews or when making decisions about layoffs.
But the problem is wider than just the outcomes of AI. One speaker noted that AI‑supported performance reviews can amplify existing bias if poorly designed. For example, where self‑assessments are weighted early in the review process, women’s tendency for lower self‑ratings can shape managers’ scores and be compounded throughout AI‑driven processes, embedding inequality through flawed design rather than intent.
Without understanding these pre-existing inequities, there is a risk of increased AI use exacerbating existing workplace disparities. Meanwhile, the panel also emphasised the need for organisations to develop frameworks that value soft skills and human context – areas where AI often falls short.
Robust governance emerged as the best response, with a focus on fixing problems in advance rather than after they've already caused harm. This is particularly true where organisations begin using AI more widely without a clear roadmap, meaning gaps can emerge between application and governance.
Human-in-the-loop: How people and culture decide whether AI succeeds
Our next discussion explored how responsible AI is as much about people and culture as about the technology itself. Even individuals with strong AI literacy can overlook risks if the organisation's culture does not support responsible use.
AI transformation, one speaker argued, is at its heart a human and people-centric process which must include attitudes and beliefs as well as technical competency.
Organisations must craft AI strategies that support both capability and confidence, which means understanding the attitudes of employees. Tools such as persona builders, used to map differences in AI confidence and capability across the workforce, were highlighted as effective because they can help tailor training during AI adoption.
AI and sexism
One of the starkest warnings at the forum focused on the urgent challenges AI poses for women and girls. Despite promises of inclusion, women remain underrepresented among AI researchers and educators and receive disproportionately low venture capital funding.
Our speaker also discussed the rise of deepfake imagery, which overwhelmingly targets women. Thoughtful regulation, designed to create needed guardrails without stymieing AI development, is essential to developing a technology which is not just powerful and profitable, but also ethical and inclusive.
What comes next
As organisations move from experimenting with AI to deploying it across core functions, the questions raised at the forum will only become more urgent. AI is no longer a future risk or opportunity to be managed in theory; it is already shaping decisions, assessments and outcomes in today's workplaces. How decisions are made, how performance is measured, and whose values are embedded in systems will increasingly determine whether AI delivers progress or entrenches inequality.
The discussion underscored that responsible AI is not a technical challenge alone. It is a legal, cultural and leadership issue — and one that demands action now. The choices organisations make at this stage will shape not only how AI is used, but who ultimately benefits from it.
The AI & DEI forum was a collaboration between HSF Kramer and FFinc, an association of women and businesses committed to creating opportunity: for underrepresented groups to thrive, business to change and everyone to move Forward, Faster. Through diverse programmes – including leadership accelerators, networking events, FFinc Tanks, hackathons, investment pitch days, and strategic partnerships – FFinc actively bridges the equality gaps that people face in business.
What does this mean for employers?
- Governance needs to move ahead of adoption: As AI tools begin to influence recruitment, pay, performance and redundancy decisions, employers need clarity on ownership, oversight and escalation. Governance frameworks should be in place before systems are deployed widely — including clear accountability for testing, monitoring and challenging outputs — rather than retrofitted after issues emerge.
- AI literacy is a leadership issue, and it starts with knowing where your people are: The discussion underscored that responsible use of AI depends on managers and decision‑makers understanding when to rely on tools, when to question them, and how to explain outcomes to employees. That requires leaders to have visibility over how confident, sceptical or enthusiastic different parts of the workforce are about using AI. Without that insight, organisations risk either pushing too fast and losing trust or moving so cautiously that opportunities are missed.
- Fairness has to be designed into workplace systems: Bias in AI does not arise only from data or code, but from the choices organisations make about how tools are used, what they measure, and what they overlook. Employers should be asking whether AI systems reinforce existing inequalities — for example in pay, progression or perceptions of performance — and whether human oversight is genuinely equipped to intervene when outcomes do not align with stated values.
Our line-up of speakers included:
- Economist and writer Daniel Susskind
- Former Executive Director of UN Women & Deputy President of South Africa Phumzile Mlambo-Ngcuka
- Sky's Group Director of Diversity & Inclusion David Carrigan
- Entrepreneur and advocate for workplace equality Dr Zara Nanu MBE
- AXA UK's Chief People Officer Amanda Vaughan
- Business psychologist and male allyship specialist Lee Chambers
- HSF Kramer employment law partner Christine Young
- Activist, writer and speaker Laura Bates
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.