
How to Protect an IT Business from Fake Candidates
You’ve found a Senior Backend Engineer with a flawless CV. They have 8 years of experience, including roles at two well-known FinTech startups. During the tech interview, the candidate confidently breaks down a case for building a highload API, explains their choice of technologies, and outlines a clear logic for service distribution.
You extend an offer and expect the specialist to quickly start tackling critical tasks… However, three weeks later, the first red flags begin to show up: trivial coding mistakes and a lack of understanding of basic architectural patterns, accompanied by constant “technical issues” with the camera. Later, it turns out that the specialist you hired can’t even write a simple unit test. They are unqualified, and a proxy specialist from another country actually conducted their “perfect” interview.
Fake candidates are a harsh new reality. Deepfakes and AI avatars make it possible to mask identity during video calls, while digital platforms allow fraudsters to easily find CVs of real people. We explain how to identify fake candidates in recruitment and, more importantly, how to make the recruitment process not only protected but also resilient to deception at a systemic level.
How the Trend of Fake Candidates Emerged
Deepfake avatars today are capable of imitating faces and voices, while AI text generators can create flawless resumes in a matter of minutes (or find such CVs created by real candidates). The phenomenon of fake candidates in IT has grown from a combination of several factors:
Globalization + Remote Work
In the past, a candidate would come into the office, meet with several managers, shake hands, and leave a “physical impression.” Nowadays, all interaction has been reduced to a few clicks in Zoom. Online interviews, which have become the new norm, have opened the door to manipulation. A camera no longer guarantees authenticity: backgrounds can be altered, faces can be tweaked, and voices can be artificially generated.
According to the HireRight 2025 Global Benchmark Report, 9 out of 10 companies in Europe, the Middle East, and Africa (EMEA) encountered discrepancies in candidate data before official employment. The largest number of inconsistencies were found in previous work experience (64%) and education (47%). More than 22% of companies discovered undisclosed criminal records among candidates.
Rapid Development of AI and Deepfakes
Five years ago, creating a convincing video of another person was technically challenging. Today, it can be done in minutes using free software. AI tools can now mimic photos, voices, facial expressions, accents, and even emotional reactions during a conversation.
Commercialization of Fraud
There are already gray-market platforms offering fake staffing, where fraudsters can "rent" a stand-in who will pass the technical interview for them.
4 Typical Fake Candidate Schemes in IT
Fake candidates seem abstract until a company encounters them firsthand. Below are examples showing how inventively fraudsters adapt to modern recruitment methods.
1. A “Senior Developer” Who Isn’t Senior at All
Imagine hiring a seemingly perfect candidate who aced the technical interview, only to discover after onboarding that they can’t write a single line of code. This is a textbook case of proxy fraud, where someone else takes the interview on the candidate’s behalf.
These schemes are meant to secure access to high-paying projects without the required skills. The company wastes time and resources on the hire and is left with an employee who adds no value and needs to be replaced almost immediately.
2. Deepfake During the Interview
In 2023, at the Microsoft Ignite event, the company presented a tool capable of creating photorealistic avatars and synthesizing voices that imitate a specific person’s speech. This technology allows the generation of videos where an avatar speaks scripted text, including statements the real person never said. Although Microsoft did not report specific cases of using this technology for interviews, the ability to create such avatars increases the risk of their use in hiring.
During an interview, a realistic face may appear on screen, with voice and facial expressions synchronized to speech. Such avatars help “send” someone else to the interview or hide one’s real identity.
3. “Rent-a-Candidate”
A company or group of intermediaries creates dozens of profiles for supposedly experienced engineers, gets them through screening, and then sells a “specialist” to a client company. Once the contract starts, a completely different person shows up to do the work.
These agencies charge a flat fee per profile, add extra fees for preparation and interview screening, sell subscription access to large databases of ready-made CVs, and may even take a cut of the first salary or contract — sometimes reselling the same profile to multiple people at the same time.
4. AI Prompting Tool
This is the latest tool in the fake candidates’ toolbox, made possible by large language models (LLMs). Specialists use dedicated AI tools (Interview Copilots) that work in real time.
The software “listens” to the interviewer’s questions during the call, instantly processes them using AI, and displays a ready-made answer on the screen. The candidate only needs to read the text (often from a phone screen placed near the camera). This tactic is especially common in tech interviews, where AI can generate deep answers to algorithmic questions.

“I’ve had several cases of communicating with fake candidates. In particular, a case where a candidate joined an interview and used an AI tool during a video call.
Most often, such specialists can be identified by the following signs:
- They type extremely fast in messengers.
- Candidates from Asia often use translators, which is noticeable from message errors. If you add a bit of slang, their ‘inauthenticity’ becomes apparent even faster.”

“At one point, I had an ‘influx’ of candidates for a Senior C++ Developer position. They were Asians posing as Poles (hiring was conducted specifically in this location). In the middle of summer, over a few weeks, I had about five calls with such specialists. I started suspecting fakery already when reviewing their CVs where companies with very atypical names were listed, not typical for Poland or Europe in general. The developers’ names were also not very ‘Polish.’
Final confirmation came during an English testing call. Some used AI effects that ‘Europeanized’ their appearance and voice. I asked them to say something in Polish. Obviously, no one could 🙂
Later, I read an article stating that this was a scam attack by North Korean pseudo-developers on European and American companies.”
Should You Be Worried: Risks for a Company If You Hire a “Fake”
Below, we highlight why an undetected fake candidate can cause not only financial losses but even greater damage.
Data Leaks and Reputational Risks
The biggest threat is when a fraudster gains access to internal resources under the guise of an employee. This opens the door to:
- leakage of commercial and confidential information;
- unauthorized access to contracts, financial, or legal documents;
- insight into internal processes, standards, and workflows;
- planning fraudulent activities, theft of intellectual property, or unfair competition.
These risks are especially high in international and remote teams, where system access is automatically granted after signing a contract and NDA. In April 2025, hundreds of specially trained hackers from North Korea infiltrated American companies — including Fortune 500 firms — posing as developers.
US intelligence agencies estimate their “earnings” at tens of millions of dollars, likely directed to sanctioned North Korean weapons programs. But money wasn’t the only goal: some attackers sought access to confidential codebases and client systems to steal intellectual property and trade secrets, while others collected consumer data for future attacks.
For cover, they posed as freelancers from South Korea, Japan, or Latin America, using fake LinkedIn profiles, stolen or borrowed documents, and AI-generated resumes. They also involved US citizens who opened bank accounts to receive salaries and transfer money abroad.
Additional Consequences
Catching a fake candidate too late can derail project timelines and drag down team productivity. These “employees” often lack the skills they claim to have, leading to delays, bugs in code or documentation, and extra work for teammates who have to clean up the mess.
There are also serious legal and financial implications. If a fake candidate handles sensitive data, client databases, or works in a regulated industry such as finance, healthcare, or energy, the company may face fines, lawsuits, or regulatory penalties for breaching security and compliance requirements.
How Not to Fall for a Fake: A Guide for Recruiters
To keep risks in check, recruiters and hiring managers should use a mix of technical checks, interview behavior analysis, document verification, and other assessment methods. This checklist lays out practical steps to quickly weed out unreliable profiles:
✅ Check live video. Conduct a short interview with the camera on, where the candidate has to demonstrate skills live. You can give a small code snippet, SQL query, logic task, or case. Or ask the candidate to write or edit code, a document, or a diagram in real time.
Tip: An AI avatar will struggle to confidently respond to spontaneous follow-ups, such as: “Why did you choose this approach?” or “How would you optimize this code if the input data changed?” If answers sound too abstract, without examples or details, this may indicate a scripted scenario or prompts via an earpiece.
Add improvisation: "Imagine this API returns an error — what would you do?", "How would you explain this solution to a trainee?", "What would you change if you had only 2 hours before release?" Delays, irrelevant answers, or attempts to change the topic can signal inauthenticity.
✅ Check audio/video for anomalies. Deepfake technologies are becoming more realistic, but a careful eye (and ear) can still detect them. A “plastic” face, lip-sync issues, unnatural shadows, inconsistent gaze, voice tremors, odd pauses, or strange compression may indicate synthetic media. Ask the candidate to perform a real-time action, such as leaning closer to the camera, changing lighting, or showing an object nearby.
✅ Pay attention to time zones. Fake candidates in IT often claim to live in Europe or the US but respond only during Asian working hours.
✅ Review career trajectories. Frequent job changes, short contracts, abrupt transitions, or inconsistent dates and role descriptions may signal hidden gaps. Vague reasons for leaving roles can also be a red flag.
Analyze the reasons behind resignations: if they’re often explained in vague or unclear terms, it may be a sign of fabricated experience. Job hopping isn’t necessarily a red flag on its own, but combined with other warning signs, it’s worth taking a closer look at the candidate.
✅ Try a different communication format. If the candidate's results look good but intuition says otherwise, switch formats. Move from video to audio-only if you suspect a deepfake or proxy candidate. AI avatars often "freeze" without video, while real people relax and speak more naturally.
If possible, invite the candidate for a short in-office or coworking meeting, or add another technical specialist or HR partner. Multiple perspectives help catch inconsistencies.
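To make the live coding check concrete: it can be as small as sharing a trivial function on screen and asking the candidate to write a unit test for it on the spot. The function and test below are hypothetical illustrations (not from any specific interview playbook); the point is that a genuine engineer produces something like this in a minute or two, while a proxy or AI-prompted candidate stalls on the follow-up questions.

```python
# A deliberately simple function an interviewer might share on screen.
# The candidate is asked, live, to cover the edge cases with a unit test.
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# The kind of minimal test a genuine candidate should write unassisted:
# a normal case, an empty input, and the invalid-argument path.
def test_chunk():
    assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
    assert chunk([], 3) == []
    try:
        chunk([1], 0)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError for size=0")

test_chunk()
```

The task itself matters less than the spontaneous follow-ups it enables ("Why test the empty list?", "What happens if `size` is larger than the list?"), which are hard to answer from a script or an earpiece.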
“When analyzing resumes, I immediately look for these red flags:
- The photo looks like a stock image or AI-generated (it has overly smooth textures and bright colors).
- When checking candidates’ social media, it turns out there is no activity at all (responses to posts, comments, etc.).
- There may be strange experience dates: for instance, working full-time in several places at the same time.
- The CV is too perfect: only top technologies are listed, but there are no project descriptions.
- Role descriptions are too generic: there are no specific tasks, achievements, numbers, or details.
- There are no reviews or recommendations on the profile from previous colleagues or managers.
- Certificates that are difficult to verify (or do not match the candidate’s level).
- Technologies and tools are listed chaotically, with only the most “top” technologies included, or technologies that do not go together at all.
- It is also worth paying attention to the education section: for example, listing the National University of Singapore or universities that do not exist.
As an option, fake profiles can be blocked on LinkedIn and messengers to avoid repeated contact. Reports can also be filed so such profiles are removed or frozen.”
Inna Poremska, Technical Recruiter at ITExpert

Fake candidates applying to jobs didn't show up because the market went south, but because digital processes left the door wide open. As hiring moves to video interviews, online tests, and automated checks, fraudsters find more room to slip through the cracks. Recruiters will have to be the first to adjust — learning how to identify fake candidates in a video interview and constantly tightening their vetting playbook.


