Written By: Carolyn Moir-Grant

With over 30 years of experience at Allstaff, Carolyn has been a guiding force in shaping the agency’s reputation as a trusted recruitment partner.

AI experience has quietly become one of the most contested criteria in modern hiring. Some employers are adding it to every job description regardless of the role. Others are ignoring it entirely, unsure whether it genuinely matters or whether they would simply be chasing a trend. The honest answer sits somewhere in the middle. In 2026, we are seeing a shift away from chasing ‘AI experts’ and toward building AI-literate teams, where the focus is on the role, the organisation, and what you actually need from the person you hire. This article sets out a practical framework for making that judgement call well.

Is AI Experience Actually Relevant to the Role You Are Hiring For?

The first question to ask is not whether AI is important in general. It is whether AI competence will meaningfully affect how well someone does this specific job.

For some roles, the answer is clearly yes. A marketing manager who cannot work with AI content or analytics tools, a finance analyst with no familiarity with AI-assisted modelling, or an operations coordinator in a logistics environment where AI-driven scheduling is standard — in these cases, assessing AI experience is a legitimate and necessary part of the hiring process.

For many other roles, the answer is more nuanced. General technology adaptability — the ability and willingness to learn new tools — matters far more than current familiarity with any specific platform. Asking detailed AI experience questions for roles where the technology is peripheral may introduce unnecessary barriers and narrow your candidate pool without improving your hiring outcomes.

The practical test is straightforward: map the AI tools and capabilities that are genuinely central to the role’s core responsibilities. If the list is substantial and the learning curve would be steep, AI experience warrants specific assessment. If the tools involved are limited or straightforward to learn, technology adaptability is the more useful quality to evaluate.

Avoid the trap of asking about AI experience simply because it sounds forward-thinking. It should serve the role — not signal that the organisation is keeping up with the times.

How Do You Ask About AI Experience Without Creating Unfair Barriers?

If you have established that AI experience is genuinely relevant, the next challenge is assessing it fairly — and this matters more than it might initially appear.

AI adoption has not been uniform across industries, regions, or types of organisation. A candidate from a large corporate environment may have had daily exposure to sophisticated AI tools. A candidate from a smaller business, a public sector role, or a sector with lower AI penetration may have had very little — not because they lack curiosity or capability, but because the opportunity was not there. Filtering purely on current AI proficiency risks systematically disadvantaging strong candidates from these backgrounds.

The more useful questions to ask — both in the application process and at interview — are:

  • Not just ‘What AI tools have you used?’ but ‘How do you approach learning new technology, and can you give me an example?’
  • Not just ‘Are you familiar with this platform?’ but ‘How have you evaluated whether an AI tool was right for a particular task?’
  • Not just ‘Do you use AI in your work?’ but ‘Where have you found AI most useful, and where has it fallen short?’

This approach surfaces the learning agility that will actually determine how quickly someone becomes effective, rather than simply reflecting the environments they happened to work in previously.

For roles where specific tools are central, a practical task-based assessment is more reliable than a self-reported answer. Asking a candidate to work through a realistic scenario using a relevant tool tells you far more than a CV that lists it as a skill. Self-reported AI experience has become increasingly difficult to calibrate — it ranges from candidates who use AI daily in sophisticated ways to those who have opened ChatGPT twice and consider themselves proficient.


What Does Genuine AI Competence Actually Look Like?

One of the most useful skills a hiring manager can develop right now is the ability to distinguish between candidates who genuinely understand AI tools and those who have absorbed the language without the substance.

Strong AI competence typically looks like this:

  • The candidate can name specific tools they have used and explain why they chose them for particular tasks
  • They speak clearly about limitations they encountered and how they worked around them
  • They understand that AI output requires human judgement — they are not passive consumers of whatever the tool produces
  • They demonstrate curiosity about how the technology is developing and can connect that to their own professional context
  • They can describe concrete outcomes — time saved, quality improved, problems solved — rather than speaking in generalities

Surface-level familiarity looks quite different:

  • The ‘Magic Button’ Fallacy: A belief that AI is a shortcut to avoid deep thinking, rather than a tool to facilitate it.
  • Broad claims (‘I use AI extensively in my work’) without specificity about which tools, in what context, or to what effect
  • Enthusiasm without critical awareness, and an absence of any acknowledgement that AI tools are imperfect
  • An inability to describe a situation where AI did not work as expected, or where they chose not to use it

The highest-value candidates are those who combine genuine AI competence with strong domain expertise. Someone who understands both the tools and the professional context in which they are applied will outperform someone who is technically proficient but lacks the judgement to know when to trust AI output and when to override it. These candidates understand that they are the ‘human in the loop’: they use AI to enhance their work, but they never abdicate responsibility for the final output. This is the same principle we explored in our piece on why human judgement remains irreplaceable in hiring: it applies as much to the people you hire as it does to the process you use to find them.

How Do You Weigh AI Experience Against Everything Else?

Even when AI experience is genuinely relevant, it should sit as one dimension of a candidate’s overall profile — not the defining criterion.

The risk of over-indexing on AI experience is real. Hiring managers who weight it too heavily can filter out candidates with exceptional domain expertise, strong cultural fit, and high long-term potential, in favour of candidates whose AI proficiency is current but whose broader capability is limited. A candidate who scores well on AI competence but poorly on communication, judgement and team fit will, in most roles, underperform relative to someone whose AI knowledge is still developing but who brings everything else the role requires.

The more productive frame is to ask two questions about every candidate:

  • Is their current AI experience sufficient for the role as it exists today?
  • Do they demonstrate the curiosity and adaptability to develop further as the tools evolve?

For senior hires, current proficiency matters more. For early-career candidates, learning agility and genuine curiosity are stronger and more durable signals than familiarity with any specific tool that may be superseded within two years.

It is also worth building AI development into your onboarding and ongoing professional development framework rather than treating it purely as a pre-hire filter. Candidates who have proactively pursued AI knowledge outside formal employment — through self-directed learning, personal projects, or professional development — often demonstrate higher initiative and adaptability than those who simply encountered the tools in a previous role. That kind of self-driven curiosity is worth recognising in the hiring process.

What Do You Risk by Not Asking at All?

In roles where AI competence genuinely matters, omitting it from the hiring process creates risks that compound over time:

  • An immediate skills gap that only becomes visible after the hire is made — when a new team member struggles with tools the rest of the team relies on, or requires a level of training that was not anticipated
  • A long-term workforce planning problem — organisations that consistently fail to factor AI awareness into hiring decisions will find their teams falling progressively further behind in adoption, not because the tools are unavailable but because the people hired are not equipped or motivated to use them. This is particularly relevant given Scotland’s AI Strategy 2026-2031, which focuses on ‘trustworthy and inclusive’ growth. For businesses in the Central Belt, staying competitive means hiring people who can navigate these tools responsibly.
  • A signal to strong candidates — those who are actively developing their AI capabilities will notice when an employer shows no interest in those skills, and some will draw conclusions about the organisation’s ambition and direction from that absence

The goal is not to make AI experience a prerequisite for every hire. It is to approach it with the same clarity and intentionality you bring to every other hiring criterion — asking whether it matters, how much it matters, and how to assess it in a way that is fair, relevant, and genuinely predictive of success in the role.

If you would like to talk through how to approach AI competence in your recruitment process, we work with employers across Glasgow, Paisley and the wider Central Belt and would welcome a conversation.

Frequently Asked Questions

Should every job description mention AI experience? No — and adding it indiscriminately can do more harm than good. AI experience should appear in a job description when it is genuinely central to the role’s core responsibilities. Including it as a default signal of modernity narrows your candidate pool without improving hiring outcomes, and may deter strong candidates who have the capability to develop AI skills quickly but lack current exposure.

What is the best way to assess AI competence at interview? Behavioural questions that require candidates to draw on real experience are more reliable than theoretical questions. Ask candidates to describe a specific situation where they used an AI tool, what it achieved, and where its limitations became apparent. For roles where specific tools are central, a practical task-based assessment will tell you more than any self-reported answer.

How do AI experience expectations differ between junior and senior candidates? For senior hires, current AI proficiency in role-relevant tools is a reasonable expectation. For early-career candidates, learning agility, genuine curiosity about AI, and evidence of self-directed learning are more meaningful signals than current tool familiarity. The specific tools a junior hire uses today may be superseded before they reach mid-career.

What are the red flags when a candidate claims strong AI experience? Watch for broad claims without specificity — candidates who describe using AI extensively but cannot name specific tools or describe concrete outcomes. Also watch for an absence of critical awareness: candidates who present AI as a seamless solution without acknowledging its limitations or the judgement required to use it well are often overstating their competence.

Is it fair to screen out candidates who have no AI experience? It depends entirely on the role. For positions where AI tools are central and the learning curve is steep, it may be a legitimate screening criterion. For most roles, screening out candidates purely on current AI experience — without assessing their capacity to develop it — risks losing strong candidates and introducing bias against those from industries with lower AI adoption.

How do we avoid AI experience becoming an arbitrary filter? By treating it the same way you would treat any other hiring criterion — grounding it in the actual requirements of the role, assessing it consistently across all candidates, and weighting it in proportion to how central it genuinely is to doing the job well. Document your reasoning and build in review points to ensure that AI experience requirements remain calibrated to what the role actually demands as the tools and the market evolve.