The fake document sitting in your inbox right now was generated in about 90 seconds. It cost almost nothing to produce. The forger doesn't need Photoshop skills, a print shop, or a working knowledge of the document they're copying. They just need a prompt.
That's the shift of 2026. Document forgery has been democratized. Synthetic document fraud jumped 311% between Q1 2024 and Q1 2025. The monthly volume of AI-generated document fraud grew roughly fivefold in just eight months of 2025. In a recent industry survey, 97.8% of fraud and risk leaders said they were worried about AI-enabled document fraud.
Here is the uncomfortable truth: the attacker does not need to fool your security team. They only need to fool the person in your organization who opens the PDF. That person is usually in HR, operations, or finance — and they were not hired to spot forgeries.
This post fixes that. Below are the seven red flags that catch most AI-generated fakes, the 60-second verification ritual any team can run, and a short list of what to do when something looks off. Print it, laminate it, tape it to the wall near the person who opens your mail — whatever works.
Why AI document fraud exploded in 2025
Two things changed at once.
First, generative AI collapsed the cost of forgery. A convincing W-2, diploma, pay stub, or certificate of insurance now takes minutes to generate, and the output is visually indistinguishable from a real one to an untrained eye.
Second, attackers shifted their targets downstream. Enterprise banks have fraud teams. A 40-person staffing agency, a boutique law firm, or a property manager with three leasing agents does not. According to Cyble's 2025 executive-threat monitoring report, AI-powered deepfakes were involved in more than 30% of high-impact corporate impersonation attacks. The return on effort for attackers is highest where defenses are lowest. That's you.
The good news: most AI-generated fakes still fail basic forensic checks — if you know what to look for.
Red flag #1: Font and kerning drift
This is the single most common AI-forgery tell. Language models that generate images of documents — or hybrid pipelines that generate text and then rasterize it into a PDF — frequently mis-render fonts between paragraphs. A letter might use Helvetica in one line and Arial in the next. Kerning (the spacing between letters) can shift subtly from one paragraph to another. Numbers often come out in a slightly different weight than the surrounding text.
How to spot it: zoom to 300% or higher on any suspect document. Compare the shape of the letter "a" at the top of the page to the letter "a" at the bottom. Compare numerals in a header to numerals in a footer. If they don't match perfectly, the document was probably assembled by a model, not typed by a human.
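This check is easy to sketch in code. Given the font names a PDF library reports per page (PyMuPDF's `page.get_fonts()` is one real source of such lists), the helper below — a hypothetical illustration, not a library API — flags any page that introduces a font no other page uses:

```python
from collections import Counter

def font_consistency_report(fonts_per_page):
    """Given one list of font names per page, flag pages that use a
    font appearing nowhere else in the document (a drift signal).
    The per-page lists could come from a PDF library such as
    PyMuPDF's page.get_fonts(); this helper itself is illustrative."""
    # Count how many pages each font appears on.
    counts = Counter(f for page in fonts_per_page for f in set(page))
    suspects = []
    for page_num, page in enumerate(fonts_per_page, start=1):
        # A font used on exactly one page of a multi-page document is suspicious.
        rare = [f for f in set(page) if counts[f] == 1 and len(fonts_per_page) > 1]
        if rare:
            suspects.append((page_num, sorted(rare)))
    return suspects

# A clean document uses the same fonts throughout...
clean = [["Helvetica", "Helvetica-Bold"]] * 3
# ...while a model-assembled one drifts: page 2 sneaks in Arial.
drifted = [["Helvetica"], ["Helvetica", "ArialMT"], ["Helvetica"]]
print(font_consistency_report(clean))    # []
print(font_consistency_report(drifted))  # [(2, ['ArialMT'])]
```

The same idea extends to font sizes and weights: any attribute that should be uniform across a document is a drift signal when it isn't.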
Red flag #2: Metadata mismatch
Every PDF carries metadata: the software that created it, the author name, the creation date, and sometimes the machine it came from. AI-generated PDFs almost always have metadata that contradicts the document's claimed origin.
A "medical report from Johns Hopkins" with creation software listed as macOS Quartz PDFContext and an author field of User is suspicious. A "vendor invoice from 2023" with a creation date of last Tuesday is suspicious. A "signed contract" with no author field at all is suspicious.
How to spot it: right-click the PDF and open Properties → Details (Windows), or select the file and press Command-I (Mac). In Adobe Acrobat, use File → Properties. The metadata is two clicks away. Spend thirty seconds there before trusting anything important.
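The date half of this check can be automated. The sketch below assumes metadata in the shape pypdf exposes via `PdfReader(...).metadata` — keys like `/Author` and `/CreationDate`, with dates in the standard PDF `D:YYYYMMDD...` form; the helper names are invented for illustration:

```python
from datetime import date

def parse_pdf_date(raw):
    """Parse the common PDF date form 'D:YYYYMMDDHHmmSS...' into a date."""
    s = raw[2:] if raw.startswith("D:") else raw
    return date(int(s[0:4]), int(s[4:6]), int(s[6:8]))

def metadata_red_flags(meta, claimed_year):
    """meta is a dict shaped like pypdf's PdfReader.metadata
    (keys such as '/Producer', '/Author', '/CreationDate');
    any extractor that yields the same fields works."""
    flags = []
    created = meta.get("/CreationDate")
    if created and parse_pdf_date(created).year > claimed_year:
        flags.append("creation date is later than the document's claimed year")
    if not meta.get("/Author"):
        flags.append("author field is empty")
    return flags

# A 'vendor invoice from 2023' created in 2026, with no author:
meta = {"/Producer": "macOS Quartz PDFContext",
        "/CreationDate": "D:20260203120000Z"}
print(metadata_red_flags(meta, claimed_year=2023))
# both flags fire: late creation date and missing author
```

Metadata can of course be forged too, so a clean result is weak evidence; a contradictory result, however, is strong evidence.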
Red flag #3: Missing or broken audit trail
A genuinely e-signed document has a paper trail: a certificate block, a timestamp, a signer identity, and usually an audit page at the end of the PDF showing exactly who did what and when. AI-generated "e-signed" documents almost never reproduce this correctly. The attacker can fake the visible signature. Faking the audit trail — with valid cryptographic timestamps and signer certificates — is far harder.
How to spot it: scroll to the last page of any document claiming to be e-signed. Look for the audit trail. Is there a list of signer actions (viewed, signed, declined)? A timestamp tied to a trusted time authority? An IP address and device fingerprint for each signer? If any of that is missing — or if the whole audit page is absent — treat the document as unverified.
Red flag #4: Page-level visual drift
This one catches sophisticated forgeries. AI-assisted fraud often involves page swapping — taking a real signed document and replacing one or two pages with modified versions. The forgery can look perfect on any given page, but patterns shift subtly across pages.
How to spot it: quickly flip through all pages side-by-side. Are the margins consistent? Do page numbers align? Are headers and footers positioned identically on every page? Does the background texture (if any) match across pages? Even one page out of pattern — shifted by a few pixels, with a slightly different header color — is usually a sign of page insertion. This is why QR-verified documents embed a hash of the entire file, not just a signature block; swap one page and the hash changes, and the verification fails loudly.
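The whole-file hashing idea mentioned above takes only a few lines. In the sketch below the byte strings stand in for real PDF files, and in practice you would compare the result against the hash stored in the issuer's verification record rather than against a second local file:

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """SHA-256 over the entire file: any change, even a single
    swapped page, produces a completely different fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for a real PDF and a page-swapped copy of it.
original = b"%PDF-1.7 page1 page2 page3"
tampered = b"%PDF-1.7 page1 PAGE2-swapped page3"

print(file_fingerprint(original)[:16])
print(file_fingerprint(original) == file_fingerprint(tampered))  # False
```

This is why page-swap attacks fail loudly against hash-bound verification: the forger would need a modified file that hashes to the same value, which SHA-256 is designed to make computationally infeasible.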
Red flag #5: Suspicious QR code routing
Not every QR code on a document is a good thing. Attackers have caught on to verification patterns and now slap fake QR codes onto fake documents to create false reassurance. The tell is in where the QR code leads.
How to spot it: scan the QR code with any phone camera, but before tapping, read the URL. A legitimate verification QR code points to the issuer's real domain — verifydoc.ai/verify/[document-id], for example — and the verification page cites the specific document ID visible on the document itself. Red flags include:
A URL on a domain you've never heard of or that looks like a lookalike (verifydoc-ai.com, verify-doc.net, etc.)
A URL that uses a URL shortener (bit.ly, tinyurl) — legitimate issuers don't obscure their verification endpoints
A verification page that doesn't reference the specific document ID or recipient
A verification page that returns "Verified" no matter what document you upload
A real QR code binds you to a real verification record. A fake QR code is just a shortcut to a Potemkin webpage.
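The routing checks above can be expressed as a short script. Everything here is illustrative — the trusted domain, the shortener list, and the URL patterns are assumptions for the example, not VerifyDoc.ai's actual endpoints:

```python
from urllib.parse import urlparse

TRUSTED_ISSUER = "verifydoc.ai"  # the issuer's real domain (example value)
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}

def qr_url_red_flags(url: str, expected_doc_id: str) -> list[str]:
    """Apply the routing checks to a URL read off a scanned QR code."""
    host = (urlparse(url).hostname or "").lower()
    flags = []
    if host in SHORTENERS:
        # Legitimate issuers don't obscure their verification endpoints.
        flags.append("URL shortener hides the real destination")
    elif host != TRUSTED_ISSUER and not host.endswith("." + TRUSTED_ISSUER):
        # Catches lookalikes such as verifydoc-ai.com or verify-doc.net.
        flags.append(f"domain {host!r} is not the issuer's domain")
    if expected_doc_id not in url:
        flags.append("URL does not reference the document ID")
    return flags

print(qr_url_red_flags("https://verifydoc.ai/verify/DOC-4821", "DOC-4821"))  # []
print(qr_url_red_flags("https://verifydoc-ai.com/verify/DOC-4821", "DOC-4821"))  # lookalike domain
print(qr_url_red_flags("https://bit.ly/3xYz", "DOC-4821"))  # shortener, no document ID
```

A phone can run none of this at the moment of scanning, which is exactly why the manual habit — read the domain before you tap — matters.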
Red flag #6: Manufactured urgency
This is the social-engineering layer, and it's now the single most reliable fraud tell in 2026.
Deepfake-assisted fraud leans heavily on urgency to prevent verification. The forger knows that if the target has an hour, they'll spot inconsistencies. If they have ten minutes, they probably won't. So the pressure comes baked in: "we need this signed before close of business," "the wire has to go out today," "the CEO is on the road and needs this approved now," "there's a waiting list, first to sign gets the deal."
How to spot it: any time you feel pressured to process a document before you can verify it, treat that pressure itself as evidence. A legitimate counterparty will almost always wait five minutes for verification. A fraudster cannot afford the delay because verification is exactly what they're trying to prevent.
This is so reliable that it's worth building into policy: documents tied to artificial deadlines must be verified before they're actioned, without exception.
Red flag #7: Off-channel delivery
The last red flag is about how the document arrived, not what's in it.
Every business has a normal channel for each document type. Vendor invoices come from a known email domain. Offer letters come through the HR platform. Insurance certificates arrive via the broker portal. When a document suddenly shows up through an unusual channel — a personal Gmail, a WhatsApp attachment, a DM on LinkedIn, a DocuSign-clone domain — that is meaningful signal, even if the document itself looks perfect.
How to spot it: ask yourself: is this the channel this document type usually comes through? If the answer is no, verify out-of-band before doing anything else. Call the counterparty on a phone number you already have (not one on the document itself). Re-request the document through the expected channel. Two minutes of friction eliminates most fraud.
The 60-second verification ritual
You can run this on any incoming document, in under a minute, without any special tools. Teach it to anyone who opens documents on your team.
Channel check (5 seconds). Is this the channel I normally receive this document type through?
Metadata peek (15 seconds). Right-click → Properties. Do the creation software, author, and date match the claimed origin?
Font and kerning scan (10 seconds). Zoom to 300%. Do the fonts look consistent across the document?
Audit trail check (15 seconds). Scroll to the last page. Is there a signer audit trail with timestamps, IPs, and signer certificates?
QR verification (15 seconds). Scan any QR code. Does it lead to the issuer's real domain? Does the verification page cite the specific document ID?
Five checks. Under a minute. Catches the overwhelming majority of AI-generated fakes.
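For teams that want a record of the ritual rather than a mental checklist, here is a minimal sketch of one way to log the five outcomes; the class and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RitualResult:
    """One boolean per check in the 60-second ritual (True = passed)."""
    channel_ok: bool
    metadata_ok: bool
    fonts_ok: bool
    audit_trail_ok: bool
    qr_ok: bool

    def verdict(self) -> str:
        # Any single failed check sends the document to out-of-band verification.
        failed = [name for name, ok in vars(self).items() if not ok]
        if not failed:
            return "proceed"
        return "verify out-of-band: " + ", ".join(failed) + " failed"

print(RitualResult(True, True, True, True, True).verdict())   # proceed
print(RitualResult(True, False, True, False, True).verdict())
```

Logging even this much creates an audit trail of your own: when something slips through, you can see which check should have caught it.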
For a deeper, more technical walkthrough, see our guide on how to verify a signed PDF.
What to do when you spot a fake
Stop. Do not action the document. Do not reply directly to the sender using the thread the document arrived on — if the sender is compromised or spoofed, you'll be replying to the attacker.
Then, in order:
Contact the purported sender through a known, separate channel — a phone number from your records, not the document. Ask whether they sent it. If they didn't, you've caught the fraud.
Preserve the evidence. Do not delete the email or document. Save the original file with headers intact. Take screenshots of the delivery context (subject line, sender, timestamp, any suspicious URLs).
Report it internally. Flag it to whoever owns security, IT, or operations. If the attack used your CEO's or CFO's identity (as deepfake attacks often do), flag it to that person directly through a trusted channel.
Report it externally if warranted. In the U.S., the FBI's IC3 takes reports on business email compromise and document fraud. If the fraud involves a wire transfer, your bank has a 24-to-72-hour window to claw back funds — speed matters.
Close the vector. If the fraudster impersonated a specific vendor or partner, assume other employees may receive the same attack. Send a brief internal note describing what was attempted.
Build verification into the culture, not the heroics
Most fraud that gets through an SMB doesn't get through because the forgery was flawless. It gets through because verification was nobody's specific job. A single person opens the PDF, doesn't want to slow things down, and hits approve.
Two small cultural shifts eliminate most of this risk. First: make verification a five-second default, not a ten-minute project. The ritual above is designed for this — under a minute, on every document that matters. Second: make it safe to pause. Nobody should feel their job is at risk for taking 90 seconds to verify a document, even when the request comes from someone senior. Especially then.
The teams that handle 2026 well don't have fancier tools. They have faster reflexes and clearer norms.
Frequently asked questions
Can AI generate a perfect fake PDF?
Visually, often yes. But AI-generated PDFs still fail on metadata, audit trails, and cryptographic hashes. That's why modern verification leans on signatures and QR-linked records, not on visual inspection alone.
Are scanned fake documents harder to detect than AI-generated ones?
Different failure modes. Scanned fakes often have compression artifacts and alignment issues. AI-generated fakes often have font drift and metadata mismatches. Both fail a QR or hash verification — which is why that check is the ultimate backstop.
How do I verify a document that doesn't have a QR code?
Ask the issuer to re-send through their official verification portal, or confirm directly with a known contact on a separate channel. For documents you issue, adding a QR code and certificate of authenticity removes this friction entirely for your recipients — see our pillar guide on how to verify document authenticity.
Is this just a problem for big companies?
No. The attack has shifted toward SMBs specifically because they're less defended. Cyble's 2025 data showed AI-powered deepfakes in over 30% of corporate impersonation attacks, and most victims were mid-market and below.
Should I use an AI detector tool on suspicious documents?
AI-content detection tools can be useful as a sanity check, but they're noisy — false positives are common, and false negatives even more so. Treat them as one input, not a verdict. The more reliable signal is the five-step verification ritual above, combined with cryptographic verification when the document supports it.
Where to go from here
The core shift of 2026 is that you cannot trust the look of a document — only its verifiable provenance. Visual inspection alone is now a bronze-age defense against a laser-age attack.
The defensive playbook is two-sided. Inbound: train whoever opens your documents to run the 60-second ritual on anything that matters. Outbound: issue your own documents with QR codes and certificates of authenticity so your recipients can verify them without calling you — and so your good documents are never confused with fakes.
For the full issuer playbook, read our pillar guide: How to Verify Document Authenticity in 2026. If you want to dig into the cryptography behind signatures, read Electronic Signature vs. Digital Signature.
Want to make every document your business issues independently verifiable? Try VerifyDoc.ai free and attach a certificate of authenticity to your first document in under five minutes.