
"Would You ‘Cheat’ on an Interview? There’s an AI for That"

*Image: person sitting in front of a computer with a robot on screen*

In the age of AI, the concept of “cheating” on a job interview is being reinvented. A startup (which I won’t name, as I don’t personally condone this type of assistance) provides real-time AI help during interviews, generating responses for users on the fly. Since its release on Product Hunt, it has attracted tens of thousands of users and considerable venture capital, all while sparking fierce debate over whether this is innovation or deception.


Candidate Perspective: Resourceful or unethical?

Candidates are increasingly deploying AI tools to ease their stress, sound more polished, or even answer questions in real time. Some recruiters report candidates glancing at their phones during Zoom interviews, repeating prompts out loud before feeding them into LLMs.


Others in tech hiring say up to 80% of candidates used AI during take‑home coding tests, even when explicitly told not to (Pragmatic Engineer Newsletter; Karat; The Times of India).


This leaves candidates in an ambiguous position: are they undermining their integrity or simply behaving resourcefully? The ethical boundaries blur further when recruitment leaders ask: “If I can’t tell the difference between a chatbot-generated answer and a genuine one, do I even know what I’m hiring for?” (Corporate Compliance Insights; Karat)


Agency Perspective: Loss of control, demand for quality


For years, recruitment agencies have optimised their process for speed over substance. The commoditisation of agency services means job specs get lazily crafted and presented to candidates (e.g. the “Trading Systems Product Manager” JD sent to me today). AI‑enabled cheating only accelerates this decline: agencies risk presenting candidates whose skills are assembled by bots, not lived experience.


Agencies must reclaim value by insisting on higher standards and transparency about AI usage. They should demand face-to-face interactions where authenticity and culture fit can be properly assessed (and sell that value to clients). The rise of AI cheating tools is a call to action:


What agencies should do: evolve from transactional recruiters into trusted outcome advisors.


Hiring Organisation Perspective: Trust, bias, and the return of in‑person


Companies such as Meta are now exploring alternative approaches, letting candidates use AI assistants in coding interviews—arguably in recognition that AI literacy is a skill in itself (The Times of India; TechRadar; Yahoo News).

Meanwhile, firms like OptimHire are automating job‑search agents and recruiter bots, reinforcing a new “AI‑to‑AI” hiring ecosystem (Business Insider).


None of this absolves companies from oversight. Agencies and clients need transparency about tools, regular audits for bias, and real human oversight in decision-making to preserve equity and quality.


Why face‑to‑face interviews are making a comeback


Because AI coaching and deception happen off‑camera, in‑person interviews must once again become the gold standard for authenticity. When interviews are conducted live and face to face, it’s much harder to hide behind prepared scripts or AI-generated answers.


Three core shifts are essential:


  • Clients must emphasise face‑to‑face gut checks, signalling they care about cultural fit, not just output.

  • Recruiters must reposition: instead of processing transactional volume, they must curate interviews that surface real depth.

  • Candidates must insist on in‑person opportunities—and demand that AI assistance, if used, is transparent and complementary, not deceptive.


Connecting to the “Future of Work” framework


In the Future of Work framework, successful AI change isn’t just about deploying tools—it’s about changing culture. Remote-first culture, shaped in pandemic-era haste, needs recalibration. Agencies and clients alike must shift from speed‑driven metrics to trust‑based outcomes. It’s time to rebuild human-centric recruitment practices around face‑to‑face connection, guided by accountability and ethical clarity.


“When a candidate suddenly shifts gaze off‑screen mid‑interview, it raises flags. It’s not necessarily cheating per se, but it is a lack of respect for the process. We need interviews where the candidate shows up as themselves.” - Recruitment Leader, London agency


Three things you could be doing today!


  1. Audit your interview process: Assess where AI‑assisted deception might be skewing candidate evaluations and reintroduce in-person assessments early in the process.

  2. Redefine recruiter value: Agencies should shift messaging from “fast placement” to “verified assessment,” insisting on human‑led interviews with stakeholders.

  3. Adopt transparent AI policies: Inform candidates and clients clearly about where AI is allowed—prepare for disputes or reputational risk if usage is covert.


Are face-to-face interviews the answer? 👇🏻 Let me know your thoughts!





 
 
 
