What Makes Text Feel Human? 7 Signals AI Still Can’t Fake
You can spot AI text in seconds. Not because of what it says—but because of what it doesn’t do wrong.
Read any paragraph generated by a large language model and something feels off. The grammar is flawless. The punctuation is perfect. Every sentence follows a predictable rhythm. There is not a single typo, no stray comma, no run-on sentence, no moment where the writing stumbles or corrects itself mid-thought. The text is polished to an uncanny degree, and that very polish is what gives it away.
Human writing is defined as much by its imperfections as by its content. We drop commas. We overuse certain words. We start sentences with “And” or “But.” We make physics-based typing errors that follow the geometry of our keyboards. These are not flaws to be corrected. They are the fingerprint of a human mind working through language in real time, and their absence is the clearest marker that a machine produced the text.
This article examines the seven signals that make text feel human—and explains why one of them, typing errors, is both the strongest signal and the hardest for AI to convincingly replicate.
The 7 Signals of Human Writing
1. Typos and Keyboard Errors
This is the most powerful signal of human authorship, and the one that AI gets most consistently wrong. When humans type, their fingers interact with physical keys on physical devices. A slight drift to the right turns “the” into “thr” because “r” sits beside “e” on a QWERTY keyboard. A hurried thumb on a phone touchscreen misses the spacebar and produces “thequick.” A moment of fatigue doubles a keystroke: “keyboaard.”
These errors are not random. They follow the laws of physics—key proximity, finger reach, device touch radius, typing speed. Real typos are predictable in their distribution even when they are unpredictable in their exact position. This makes them extremely difficult to fake convincingly. Random character insertion produces errors that no human hand would ever make, and readers detect the difference subconsciously. We will return to this signal in depth later in the article, because it is the one that matters most for anyone trying to make text feel authentically human.
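The adjacency constraint described above can be sketched as a simple lookup table: each key maps to the keys a drifting finger could plausibly hit, so a simulated typo can only land where a real hand could. The neighbor lists below are hand-written approximations of a QWERTY layout, not data from any published model.

```python
import random

# Hand-written QWERTY neighbor lists (illustrative, letters only).
QWERTY_NEIGHBORS = {
    "q": "wa", "w": "qeas", "e": "wrds", "r": "etdf", "t": "ryfg",
    "y": "tugh", "u": "yihj", "i": "uojk", "o": "ipkl", "p": "ol",
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "f": "drtgvc",
    "g": "ftyhbv", "h": "gyujnb", "j": "huikmn", "k": "jiolm",
    "l": "kop", "z": "asx", "x": "zsdc", "c": "xdfv", "v": "cfgb",
    "b": "vghn", "n": "bhjm", "m": "njk",
}

def drift(char: str, rng: random.Random) -> str:
    """Replace a character with one of its physical neighbors."""
    neighbors = QWERTY_NEIGHBORS.get(char.lower())
    if not neighbors:
        return char  # no adjacency data (digits, punctuation): leave it
    return rng.choice(neighbors)

rng = random.Random(0)
# Typing "e" can only drift to w, r, d, or s, never to x or q.
print(drift("e", rng))
```

Constraining substitutions to this map is already enough to rule out the “keybxard” class of error that no human hand produces.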
2. Inconsistent Formatting
Humans are terrible at consistency. We forget periods at the end of list items. We capitalize inconsistently. We use an em dash in one paragraph and a hyphen surrounded by spaces in the next. We mix “don’t” with “do not” within the same paragraph, not for stylistic effect but because we are thinking about what we want to say, not how we are formatting it.
AI-generated text almost never makes formatting errors. Every list is styled identically. Every quotation mark is a smart quote. Every sentence ends with a period. This mechanical consistency is one of the first things that triggers the “this was written by a machine” response in experienced readers.
3. Sentence Length Variation
Read any AI-generated paragraph and measure the sentence lengths. They tend to cluster around a narrow band—typically between 15 and 25 words, with remarkably little variation. The rhythm is metronomic.
Human writing swings wildly. A long, winding sentence that explores multiple ideas in a single breath is followed by a fragment. Then a question. Then something medium-length. Then another long one. The variation is not random—it reflects the writer’s thought process, their emphasis, their breath.
Gary Provost demonstrated this famously: “This sentence has five words. Here are five more words. Five-word sentences are fine. But several together become monotonous. Listen to what is happening. The writing is getting boring. The sound of it drones. It’s like a stuck record.” Then he breaks the pattern with a long, flowing sentence, and the text comes alive. AI tends to stay stuck in the middle range, producing text that reads smoothly but never breathes.
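One quick way to check this claim against your own drafts is to measure the spread of sentence lengths. A minimal sketch, using a crude regex split as an approximation of sentence boundaries:

```python
import re
import statistics

def sentence_length_stats(text: str) -> dict:
    """Word counts per sentence, plus their mean and standard deviation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "lengths": lengths,
        "mean": statistics.mean(lengths),
        "stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
    }

human = ("This sentence has five words. Then a fragment. A question? "
         "And now a much longer sentence that wanders through several "
         "ideas before finally coming to rest.")
print(sentence_length_stats(human)["lengths"])  # wide spread: [5, 3, 2, 16]
```

A low standard deviation with a mean stuck in the 15–25 word band is the metronomic rhythm described above; human text usually shows a much wider spread.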
4. Filler Words and Hedging
Humans hedge constantly. We write “I think,” “sort of,” “probably,” “kind of,” “it seems like,” and “in a way.” We use discourse markers like “well,” “so,” “anyway,” and “actually.” These words carry little semantic content but enormous social information. They signal that the writer is a person with uncertainty, with a perspective, with a relationship to the reader.
AI text tends either to be confidently declarative or to hedge in a formulaic way (“It’s important to note that...”). The organic hedging that pervades real human communication—where someone writes “I guess” not because they are uncertain but because they want to soften a strong claim—is something language models struggle to reproduce authentically.
5. Context-Specific Slang and Idiom Misuse
Humans use slang imprecisely. We mangle idioms. We say “I could care less” when we mean the opposite. We write “for all intensive purposes” instead of “for all intents and purposes.” We use regional expressions that only work in specific communities, and we sometimes deploy them slightly wrong, which paradoxically makes them feel more genuine.
AI models have learned the correct forms of idioms from enormous training corpora. They rarely get an idiom wrong, which is ironically what makes their usage feel sterile. A human who writes “it’s a doggy dog world” sounds more authentically human than an AI that correctly writes “it’s a dog-eat-dog world.”
6. Emotional Tone Shifts
Human writing changes emotional register within a single piece. A professional email might start formally, warm up in the middle with a joke or personal aside, then close with a different tone entirely. A blog post might swing from analytical to frustrated to amused within a few paragraphs. These shifts reflect genuine emotional states that change as the writer works through their ideas.
AI tends to maintain a consistent emotional tone throughout. If it starts professional, it stays professional. If it starts casual, every paragraph stays equally casual. The emotional flatline is subtle but noticeable, especially in longer pieces where a real human writer would inevitably let their mood color the prose.
7. Imperfect Structure
AI text loves structure. Introduction, three body paragraphs with topic sentences, conclusion with a restatement of the thesis. Every section is roughly the same length. Every argument follows a logical progression.
Humans write messily. We go on tangents. We circle back to a point we made three paragraphs ago. We devote half the article to the part that interests us and rush through the rest. We sometimes end abruptly because we have said what we wanted to say. We break our own outlines. The messiness is not a bug—it is a signal that a human mind was navigating ideas in real time, not executing a pre-computed template.
Why Typos Are the Hardest Signal to Fake
Of the seven signals above, typos occupy a unique position. They are the strongest marker of human authorship and the most difficult to replicate convincingly. Here is why.
Most approaches to adding errors to text use random character mutation: pick a position, pick a replacement character from the alphabet, and swap. The result is text with errors, but not text with human errors. “keyboard” becomes “keybxard” or “keyb*ard.” No human types like that. A human would produce “keybiard” (adjacent key hit on a QWERTY layout), “keybard” (skipped character), or “keybaord” (transposed pair).
The difference is physics. When your finger drifts, it drifts to a neighboring key, not to a random location on the keyboard. The probability of hitting any given wrong key is a function of physical distance from the intended target. This means real typing errors carry a signature—a statistical pattern that readers have internalized from a lifetime of seeing (and making) typos. When the pattern is wrong, the errors feel synthetic even if the reader cannot articulate why.
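One way to picture that distance dependence: treat each keystroke as the intended key position plus Gaussian noise, so the probability of landing on a wrong key decays with its distance from the target. This is only a sketch; the key coordinates and noise scale below are illustrative assumptions, not measured values.

```python
import math
import random

# Approximate key centers on a staggered QWERTY grid, in key-width units.
# Each row is shifted right, mimicking the stagger of physical keyboards.
KEY_POS = {
    ch: (float(row), col + 0.3 * row)
    for row, keys in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
    for col, ch in enumerate(keys)
}

def mis_hit(target: str, sigma: float, rng: random.Random) -> str:
    """Sample a landing key; weight falls off as a Gaussian of distance."""
    tx, ty = KEY_POS[target]
    keys, weights = [], []
    for key, (x, y) in KEY_POS.items():
        d2 = (x - tx) ** 2 + (y - ty) ** 2
        keys.append(key)
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))
    return rng.choices(keys, weights=weights, k=1)[0]

rng = random.Random(1)
# With a tight noise scale, taps land on the target or an adjacent key.
samples = [mis_hit("e", 0.5, rng) for _ in range(20)]
print("".join(samples))
```

Because the weights fall off exponentially with squared distance, far-away keys are effectively unreachable, which is exactly the statistical signature readers have internalized.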
The Research Behind Physics-Based Errors
This is not speculation. Research from the CHI 2025 conference (Shi et al., “Simulating Errors in Touchscreen Typing”) developed computational models of touchscreen typing errors based on motor control noise. Their findings confirm that typing errors are governed by physical factors: finger position noise, key proximity, device touch target size, and the biomechanical constraints of hand movement.
Earlier work by Dhakal et al. (2018), analyzing over 136 million keystrokes from 168,000 volunteers, identified distinct clusters of typists with different error distributions. Fast typists use more fingers and make different kinds of mistakes than slow typists. The errors are not random—they are systematic, and the system is physics.
Device type matters enormously. Phone touchscreens have a wider adjacent-key hit radius than physical keyboards because a thumb covers more key area than a fingertip. Tablet keyboards produce a middle ground of errors. Swipe typing introduces entirely different failure modes. The same typist makes fundamentally different errors on different devices, and those device-specific patterns are something readers recognize instinctively.
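The device effect can be captured with a single noise parameter per device. A hypothetical sketch: model each tap as isotropic 2D Gaussian noise around the key center, so the chance of missing a circular key of radius R is exp(−R²/2σ²), and give each device its own σ. The numbers here are illustrative assumptions, not measurements.

```python
import math

# Hypothetical drift scales, in key-width units: a thumb on a phone
# covers more key area than a fingertip on a physical keyboard.
DEVICE_SIGMA = {
    "physical_keyboard": 0.25,  # tactile feedback, precise fingertips
    "tablet": 0.40,             # larger targets but no tactile feedback
    "phone": 0.55,              # thumb typing on the smallest keys
}

def miss_probability(sigma: float, key_radius: float = 0.5) -> float:
    """P(tap lands outside a circular key) under isotropic Gaussian noise."""
    # For 2D Gaussian noise the tap radius is Rayleigh-distributed:
    # P(|r| > R) = exp(-R^2 / (2 * sigma^2)).
    return math.exp(-key_radius ** 2 / (2 * sigma ** 2))

for device, sigma in DEVICE_SIGMA.items():
    print(f"{device}: {miss_probability(sigma):.0%} chance of a stray tap")
```

Swapping a single σ value reproduces the observation above: the same typist, the same text, but a very different error rate per device.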
Why Random Noise Fails
Adding random character errors to AI text actually makes it more detectable, not less. Autocorrect algorithms, spell checkers, and even casual readers are tuned to expect physics-based error patterns. When errors don’t follow those patterns, the text feels wrong in a way that is worse than having no errors at all. You have moved from “too perfect” to “wrong kind of imperfect,” and the second is more jarring.
To convincingly simulate human typing errors, you need a model that understands keyboard geometry, key adjacency, device-specific touch targets, and the probability distributions of different error types (adjacent key hits, transpositions, omissions, doubled characters, spacing errors, punctuation slips). You need errors that are grounded in how fingers actually interact with input devices.
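Putting those pieces together, a generator of this kind might walk the text one character at a time and roll for one of several physically motivated error types. This is a minimal sketch; the per-keystroke rates and the tiny neighbor map are illustrative assumptions, not LikelyTypo’s actual model.

```python
import random

ERROR_RATES = {            # probability per keystroke (hypothetical)
    "adjacent": 0.010,     # finger drifts to a neighboring key
    "transpose": 0.006,    # two keystrokes land out of order
    "omit": 0.005,         # key press misses entirely
    "double": 0.004,       # key registers twice
}

NEIGHBORS = {"e": "wrds", "o": "ipkl", "t": "ryfg"}  # tiny sample map

def corrupt(text: str, rng: random.Random) -> str:
    """Apply at most one error type per character, chosen by ERROR_RATES."""
    out, i = [], 0
    while i < len(text):
        ch = text[i]
        roll, acc, error = rng.random(), 0.0, None
        for kind, p in ERROR_RATES.items():
            acc += p
            if roll < acc:
                error = kind
                break
        if error == "adjacent" and ch in NEIGHBORS:
            out.append(rng.choice(NEIGHBORS[ch]))
        elif error == "transpose" and i + 1 < len(text):
            out.append(text[i + 1]); out.append(ch); i += 1
        elif error == "omit":
            pass  # the keystroke never registers
        elif error == "double":
            out.append(ch); out.append(ch)
        else:
            out.append(ch)  # clean keystroke
        i += 1
    return "".join(out)

rng = random.Random(7)
print(corrupt("the quick brown fox jumps over the lazy dog", rng))
```

Every error the sketch can produce is one a hand could make; tuning the rates per device and typing profile is where a real model does its work.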
What This Means for Content Creators
If you use AI to assist with writing—and most content creators now do—the challenge is not generating good text. The challenge is making that text feel like it came from a person. The seven signals above are a roadmap for understanding what “human” means in the context of written text.
Some of these signals are matters of craft. You can train yourself to vary sentence length, use hedging language naturally, break structure intentionally, and shift tone. These are writing skills that complement AI assistance.
But typos are different. They are not a writing skill—they are a physical artifact of the typing process. You cannot simply scatter random errors into polished text and expect it to feel authentic. The errors need to follow the same physics-based patterns that real typing produces. They need to respect keyboard layout, device type, and the biomechanical constraints of human hands.
This is exactly what LikelyTypo does. It is a web-based tool that generates realistic typing errors by modeling the physical act of pressing keys. Instead of random character mutation, it uses keyboard adjacency maps, device-specific touch models, and typing profiles to produce errors that look like they came from a real person on a real device. You can paste any text, select a device and typing profile, and instantly see what that text would look like with authentic human typing errors—the kind that follow physics, not dice rolls.
Authenticity, Not Deception
The goal is not to trick anyone. The goal is authenticity. When you write a chatbot response that includes a subtle typo, it feels more human—not because you are deceiving the user, but because you are acknowledging that real communication is imperfect. When you populate a UI demo with text that includes realistic errors, the demo feels more like a real application. When you test your autocorrect system with physics-based typos, your tests reflect what actual users will type.
Imperfection is not a flaw to add cynically. It is a dimension of human communication that has been accidentally erased by AI, and restoring it is a matter of honest craft.
Try It Yourself
The fastest way to understand what makes text feel human is to see the difference between random errors and physics-based errors. Open the LikelyTypo interactive showcase, paste a paragraph of AI-generated text, and generate typos with the default settings. Look at the errors that appear. They will be adjacent key hits, transpositions, skipped characters, spacing slips—the same kinds of mistakes you make every day when typing quickly. Now imagine the same text with random character substitutions: “thx quicj broen fox.” One feels human. The other feels like data corruption.
Try switching between device types to see how errors change. Phone tap produces different patterns than a physical keyboard. Try different typing profiles—subtle for professional content, typing-fast for casual messages. Each combination produces a different but always plausible set of errors, because each is grounded in the physics of how people actually type on that device.
See what realistic typos look like
Paste any text and instantly see physics-based typing errors. Switch between devices, profiles, and layouts to see how error patterns change.
Try the interactive showcase

AI writes perfectly. Humans don’t. The difference is not a problem to solve—it is a signal to understand. And of all the signals that make text feel human, typos are the most powerful, the most physics-grounded, and the most tractable to restore authentically.