Carolyn Moir-Grant
With over 30 years of experience at Allstaff, Carolyn has been a guiding force in shaping the agency’s reputation as a trusted recruitment partner.
There is a conversation happening in boardrooms and HR departments across Scotland right now. If AI can screen hundreds of CVs in seconds, shortlist candidates by matching keywords, and score applicants against predetermined criteria, why do we still need human recruiters at all?
It is a fair question. And it deserves a considered answer.
After more than 40 years placing people into roles across manufacturing, logistics, engineering, finance, HR and marketing, my view is this: AI has a legitimate role in the hiring process, but it is a supporting role. It can handle volume and administration effectively. What it cannot do is evaluate a human being – and the moment organisations hand it that responsibility, the quality of their hiring decisions declines. This article explains why, and what a more balanced, human-led approach looks like in practice.
Most AI recruitment platforms are designed to solve an administrative problem: volume. When a job advert attracts 300 applications, a hiring manager cannot give each one meaningful attention. AI tools step in to filter, sort and rank – reducing that field to a manageable shortlist based on keyword matching, skills scoring, and pattern recognition drawn from previous hiring data. That is a legitimate use of automation, and we work with employers who use these tools sensibly as a first filter.
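To make that mechanism concrete, here is a deliberately minimal Python sketch of what keyword-based shortlisting amounts to underneath. Everything in it – the keywords, the weights, the CVs – is invented for illustration, not taken from any real platform.

```python
# A minimal sketch of keyword-based CV screening. All keywords, weights and
# CV text are hypothetical. The point is the reduction: the scorer sees
# matched strings, not people.

# Hypothetical scoring criteria: keyword -> weight
CRITERIA = {"logistics": 2, "sap": 2, "forklift": 3, "supervisor": 1}

def score_cv(cv_text: str) -> int:
    """Sum the weights of every criterion keyword found in the CV text."""
    text = cv_text.lower()
    return sum(weight for keyword, weight in CRITERIA.items() if keyword in text)

def shortlist(applications: dict[str, str], top_n: int) -> list[str]:
    """Rank applicants by keyword score and keep only the top N."""
    ranked = sorted(applications, key=lambda name: score_cv(applications[name]),
                    reverse=True)
    return ranked[:top_n]

applications = {
    "Candidate A": "Logistics supervisor, SAP and forklift certified.",
    "Candidate B": "Ran warehouse operations for a decade; rebuilt a failing team.",
}
print(shortlist(applications, top_n=1))  # ['Candidate A']
```

Candidate B's decade of highly relevant experience scores zero, simply because it is described in words the filter was never told to look for – which is exactly the distinction between a filter and a decision.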
The problem comes when organisations mistake that filter for a decision. Shortlisting is not the same as evaluation. The criteria an algorithm uses to rank candidates are not neutral – they are a reflection of past decisions, encoded into a system that will keep making the same choices at scale, including the flawed ones.
The deeper limitation is structural. AI systems can only measure what can be quantified, which means they work by reducing the full complexity of a human being – their potential, their character, their way of working with others – into data points that can be scored and ranked. What gets left out in that reduction is often exactly what determines whether a hire succeeds or fails.
We hear about it repeatedly. A candidate with an exceptional CV joins a team and is gone within six months because the cultural fit was not there. Meanwhile, a candidate who would have been screened out by an algorithm – perhaps because their career path was unconventional – would have thrived.
The qualities that most reliably predict long-term success are, almost by definition, the hardest to quantify.
These qualities only become visible in conversation – in the moments of candour that emerge when a recruiter has built enough trust for a candidate to be genuinely honest rather than simply impressive. No AI system can create that environment, and none can read what is not said.
One of the most common arguments for AI in recruitment is that it removes human bias. The appeal is understandable. The reality is considerably more complicated.
AI systems are trained on historical data – typically the hiring decisions made by the same human beings whose bias we are trying to eliminate. If an organisation has historically hired a particular type of person for a particular role, the AI learns from that pattern and replicates it. It does not question it. It optimises for it.
The most documented example is Amazon’s AI recruitment tool, quietly shelved in 2018 after it emerged the system had taught itself to penalise CVs containing the word “women’s” – because the training data reflected a decade of male-dominated tech hiring. The bias was not introduced by the AI. It was inherited from the humans who came before it, then systematised and scaled. While tools have become more sophisticated since then, the underlying risk remains: AI is a mirror, not a crystal ball.
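For readers who want to see that inheritance rather than take it on trust, here is a toy Python sketch. The data, the groups and the scoring rule are all invented for illustration – this is not any vendor's model – but the dynamic is the one described above: a scorer trained on historical outcomes reproduces the skew of those outcomes.

```python
# Toy illustration (hypothetical data, not any vendor's system) of how a
# scorer inherits bias from historical hiring decisions.
import random

random.seed(1)

# Invented history: most past hires came from group "x" simply because
# that is who was hired before - not because of ability.
history = [{"group": "x" if random.random() < 0.9 else "y", "hired": True}
           for _ in range(200)]
history += [{"group": "x" if random.random() < 0.5 else "y", "hired": False}
            for _ in range(200)]

def hire_rate(group: str) -> float:
    """Fraction of historical applicants from this group who were hired."""
    rows = [r for r in history if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

# A naive "learned" score: rank new candidates by their group's past hire rate.
for group in ("x", "y"):
    print(f"score for group {group}: {hire_rate(group):.2f}")

# Group x scores several times higher than group y - yet "group" says nothing
# about ability. The criterion is applied consistently, but it is not fair:
# it encodes who was hired historically, not who can do the job.
```

The scorer never questions the pattern; it optimises for it, which is precisely why consistency and fairness are not the same thing.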
There is an important distinction between consistency and fairness that gets lost in this conversation. An AI system can be entirely consistent – applying the same criteria to every candidate every time – while still being deeply unfair, because the criteria themselves encode historical inequity. The subjectivity does not disappear when humans hand decisions to algorithms. It simply becomes invisible, embedded in a system that appears neutral because it does not visibly deliberate. That invisibility is arguably more dangerous than overt human bias, because it is harder to challenge.
It is also worth noting that employers retain full legal responsibility for hiring outcomes under the Equality Act 2010, regardless of whether decisions were made by a person or a platform. Human oversight is not optional — it is a legal and ethical requirement.
Experienced recruiters often describe a quality of knowing – a read of a candidate that goes beyond what is on the CV. What they are doing is processing a vast range of signals simultaneously: the consistency between what a candidate says and how they say it, subtle shifts in energy when certain topics arise, the quality of questions a candidate asks, the way they talk about previous colleagues and managers. These are not random impressions. They are data — just data that cannot be captured in a scoring matrix.
Culture fit is one of the most important factors in whether a hire succeeds or fails, and one of the most poorly served by technology. Understanding whether a candidate will genuinely work well within a specific team, with a specific manager, in a specific organisational culture, requires deep familiarity with both the person and the environment. None of that lives in a dataset. It lives in the relationships and conversations that good recruiters build with the employers they work with over time.
This is why we invest as much in understanding our clients’ businesses as we do in assessing candidates. Our guide on bad hire warning signs covers many of the early indicators that a hire is not working — most of which come back to culture and fit rather than skills or qualifications.
Research consistently shows that these harder-to-quantify qualities are stronger predictors of long-term performance than technical qualifications alone.
These are also precisely the qualities AI systems are least equipped to assess. The long-term cost of underweighting them is visible in turnover, in team performance, and in the cultural damage a wrong hire can cause.
Recruitment asks people to make themselves vulnerable – to present themselves honestly in the hope of being chosen. That process requires trust. Trust is built through human connection: through a recruiter who takes time to understand a candidate’s circumstances, through an employer who explains honestly what a role really involves, through the small moments of genuine engagement that make a stressful process feel manageable.
At Allstaff, we believe that while AI is built to find patterns, only people can find potential. That is why we focus on the human story behind the CV, to ensure every placement is not just a match on paper, but a success in practice.
Automating these touchpoints does not just degrade the candidate experience. It sends a signal about how an organisation values people before they have even joined. In a competitive talent market, the strongest candidates – those with options – make choices based on how they feel about an organisation from the very first interaction.
There is also the question of authenticity. AI-driven interview platforms reward optimised responses. Candidates quickly learn to perform for the algorithm – using certain language, maintaining certain eye contact patterns, projecting certain emotional signals. What gets measured is not the candidate. It is their ability to present themselves in a way the algorithm rewards. Effective candidate evaluation requires the opposite: seeing behind the preparation, not rewarding it. At Allstaff, we look for the substance, not just the performance.
None of this is an argument against using technology. AI has a genuinely useful role in recruitment – managing application volumes, automating scheduling, surfacing candidate profiles, analysing labour market data to inform workforce planning. The principle we work to at Allstaff is straightforward: AI handles efficiency. Humans make decisions.
A human-led recruitment process means ensuring human insight is present at every genuinely consequential decision point. In practice, that means:
- Using AI where it earns its place – managing application volumes, scheduling, and initial screening against clearly defined minimum criteria.
- Having a person meaningfully review candidates before any shortlist is treated as final, rather than acting on system-generated scores alone.
- Keeping interviews and evaluation in human hands, where soft skills, culture fit and potential can actually be assessed.
- Auditing any automated tools for discriminatory outcomes, in line with employers' obligations under the Equality Act 2010.
- Reserving the final hiring decision for people: AI handles efficiency; humans make decisions.
As AI capabilities advance, there is a widespread assumption that the role of human judgement in recruitment will shrink. Our view is the opposite. The more sophisticated AI becomes at processing measurable criteria, the more valuable the ability to assess what those tools cannot measure will become. Experienced recruiters, working in genuine partnership with the businesses they support, will become more strategically important as the administrative layers of the process are automated — not less. It is, in many ways, what we have always believed good recruitment should be.
If you are reviewing your approach to hiring and wondering where to draw the line between technology and human judgement, we would welcome a conversation. We work with employers across Glasgow, Paisley and the wider Central Belt to build recruitment processes that find the right people – not just the most algorithm-friendly ones.
Can AI be used responsibly in recruitment? Yes — when used appropriately. AI is well-suited to administrative tasks such as managing application volumes, scheduling, and initial screening for clearly defined minimum criteria. The key is ensuring that evaluative decisions — those that genuinely determine who gets hired — remain in human hands. AI should support the process, not determine its outcomes.
What are the biggest risks of relying too heavily on AI in hiring? The three most significant risks are: reduced quality of hire due to the inability of AI to assess soft skills, culture fit and human potential; algorithmic bias that encodes and scales historical discrimination; and a candidate experience that disengages the strongest applicants before they reach interview stage.
Does using a recruitment agency mean less reliance on AI? Not automatically — it depends on the agency. At Allstaff, our approach is built around human assessment. We meet every candidate we represent. Our consultants develop genuine knowledge of the businesses we work with. The judgements we make are informed by experience and direct conversation, not algorithmic scoring.
How do I know if my current hiring process relies too much on AI? Some useful questions to ask: At what point does a human first meaningfully engage with a candidate’s application? Are hiring decisions ever made based primarily on system-generated scores? Do candidates routinely report that the recruitment process felt impersonal? If the answer to any of these gives you pause, it may be worth reviewing where human judgement sits in your process.
What is the legal position on AI in recruitment in the UK? Employers remain legally responsible for the outcomes of their hiring processes under the Equality Act 2010, regardless of whether those decisions were made by a person or an automated system. The use of AI does not transfer legal liability — it places additional responsibility on employers to audit their systems for discriminatory outcomes and ensure meaningful human oversight at decision points.
Will AI replace recruitment consultants? Our view – based on experience rather than optimism – is that it will not. The qualities that make recruitment effective – human insight, contextual judgement, the ability to build genuine relationships with both employers and candidates, and the capacity to make nuanced decisions about fit and potential – are not things AI systems are close to replicating. What AI will do is change how recruiters spend their time, taking more of the administrative load and freeing up space for the work that genuinely requires human capability.