I put one of my blog posts that I wrote entirely myself into GPTZero and the similarly named ZeroGPT. GPTZero said it was a mixed bag. ZeroGPT said that 4.11% was AI. 🤔

That was the post that kicked off this whole thing in my head. Because here's the uncomfortable truth that nobody seems to be talking about: AI detection tools are ableist. Not accidentally, not as an unfortunate side effect, but fundamentally and systematically ableist in their design, implementation, and deployment.

They're profiling neurodivergent writing as machine-generated. And if you're autistic, have ADHD, or otherwise think and communicate in ways that diverge from neurotypical patterns, congratulations – you're now statistically more likely to be flagged as a language model by systems that were built without considering your existence.

Which is brilliant, isn't it? After spending your entire life being told your communication style is "wrong" or "unnatural," you can now add "indistinguishable from a robot" to the list of charming feedback about your authentic self-expression. We've automated ableism and packaged it as a security feature.

The Ableism Isn't Subtle

Let's be absolutely clear about what's happening here: AI detection tools are encoding neurotypical communication patterns as the definition of "human" and flagging anything that deviates as suspicious. This is textbook ableism – taking the dominant neurotype as the baseline and treating neurodivergence as aberrant.

These tools look for things like:

  • Consistent vocabulary patterns

  • Precise, structured sentences

  • Formal grammar and punctuation

  • Lack of colloquialisms or "messy" language

  • Topic consistency without tangents

  • Predictable paragraph structures

Notice anything? These are exactly the traits that many autistic and ADHD writers naturally exhibit. We're methodical. We like structure. We often have hyperlexic tendencies that mean our vocabulary is expansive and precise. We learned to write from books rather than playground conversations.
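To caricature it in code (and this is very much a caricature: a toy Python rubric I've invented to make the point, not how any real tool actually scores text), imagine a "detector" that treats consistent sentence lengths and an absence of filler words as evidence of a machine:

```python
# A deliberately crude caricature of surface-feature "AI detection".
# Real tools use trained statistical models, not hand-written rules like
# these -- but the traits they reward and punish rhyme with this.

import re
import statistics


def ai_likeness_score(text: str) -> float:
    """Score 0..1 where higher = 'more machine-like' under this toy rubric."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]

    # "Predictable structure": low variation in sentence length scores higher.
    burstiness = statistics.pstdev(lengths) / max(statistics.mean(lengths), 1)
    structure_score = max(0.0, 1.0 - burstiness)

    # "Consistent vocabulary": fewer informal fillers scores higher.
    fillers = {"lol", "tbh", "kinda", "sorta", "anyway", "literally", "um"}
    words = re.findall(r"[a-z']+", text.lower())
    filler_rate = sum(w in fillers for w in words) / max(len(words), 1)
    vocab_score = max(0.0, 1.0 - 20 * filler_rate)

    return round((structure_score + vocab_score) / 2, 2)


# Structured, precise, methodical -- i.e. how a lot of autistic people write.
print(ai_likeness_score(
    "The outage had three causes. Each cause is described below. "
    "Each description includes the fix that was applied."
))  # scores high under this rubric

print(ai_likeness_score(
    "so yeah lol the server just died?? kinda my fault tbh, anyway "
    "i rebooted it and honestly that mostly sorted it"
))  # scores lower
```

Run that over a tidy incident write-up and it screams "machine"; run it over a stream-of-consciousness ramble and it purrs "human". The real tools are statistical rather than hand-written rules, but the traits they reward and punish rhyme with exactly this.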

This isn't a bug in the system. It's the system working exactly as designed – designed, that is, by and for neurotypical people who never considered that "human writing" might look different for a significant portion of the actual human population.

And now we're being told that our authentic writing – the writing that comes most naturally to us, the writing we produce when we're being most genuinely ourselves – looks like it was generated by a machine.

The message is clear: write like a neurotypical person, or be suspected of not being a person at all.

A History of Being Told We're Not Human Enough

This isn't new territory for neurodivergent people. We have a long, exhausting history of being told that our natural ways of being are "robotic," "mechanical," "lacking in human warmth," or just generally "wrong."

Autistic people, in particular, have been fighting against the narrative that we're somehow less human for... well, for as long as autism has been recognised as a thing. We've been described as "lacking empathy" (we don't; we just express it differently). We've been told we're "missing social instincts" (we're not; the instincts are just different). We've been characterised as "emotionless" (absolute bollocks, as anyone who's experienced autistic meltdown can attest).

And now, in 2025, we've got AI detection tools that have essentially automated that same dehumanisation. The algorithm has decided that precise language, structured thinking, and consistent communication patterns are markers of non-human intelligence. Never mind that these are also markers of autistic intelligence, ADHD hyperfocus, or just being a careful writer who gives a shit about clarity.

We spent decades fighting to be recognised as fully human despite communicating differently. And just when that battle was starting to make progress, technology companies have built tools that encode the same prejudices into software, giving them the veneer of objectivity that algorithms always seem to acquire despite being built by humans with human biases.

The Technical Ableism (Because Yes, Code Can Be Ableist)

Here's the thing about AI detection tools: they're trained on datasets. Those datasets contain examples of "human writing" and examples of "AI writing," and the model learns to distinguish between them by finding patterns.

But who decided what counts as "human writing" in those training sets?

If you're training your model primarily on neurotypical writing samples, you're teaching it that neurotypical communication patterns are the definition of human. Everything else becomes suspicious by default. This is exactly how algorithmic bias works – you encode the prejudices of your training data into your model, then pretend the model is objective because it's just "following the math."

Except the math is ableist from the ground up.
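If you want to see how little it takes, here's a minimal sketch (Python with scikit-learn, and a handful of sentences I've made up; nothing resembling any real detector's training pipeline). Build the "human" class entirely from one style of writing and watch what the model concludes about everyone else:

```python
# A minimal sketch of the training problem, not any real detector.
# Assumes scikit-learn is installed; all sentences are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Every "human" example is loose and conversational; every "ai" example is
# structured and precise. That labelling decision IS the bias.
human_samples = [
    "tbh i just winged it lol, kinda rambled but it worked out in the end",
    "so anyway, long story short, the thing sort of fixed itself??",
    "not gonna lie, i lost the plot halfway through writing this one",
]
ai_samples = [
    "The process consists of three clearly defined stages, described below.",
    "Each component is evaluated against consistent, measurable criteria.",
    "In summary, the structured approach yields precise, repeatable results.",
]

texts = human_samples + ai_samples
labels = ["human"] * len(human_samples) + ["ai"] * len(ai_samples)

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# A precise, structured sentence written by an actual (autistic) human:
mine = "Each section is organised methodically, with consistent terminology throughout."
print(detector.predict([mine]))        # most likely ['ai']
print(detector.predict_proba([mine]))  # probabilities nudged towards the 'ai' class
```

The model isn't detecting machines. It's measuring distance from whatever its builders decided "human" sounds like, which is the entire problem.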

These tools are essentially doing what psychologists and educators have been doing to neurodivergent people for decades: establishing a "normal" baseline, measuring everyone against it, and flagging anyone who deviates as problematic. The only difference is that now it's automated, which makes it faster, more widespread, and harder to challenge because "the algorithm said so."

And make no mistake: this is a feature, not a bug. When you don't include neurodivergent writing in your training data (or include so little that it becomes statistical noise), you're making a choice. When you define "human-like" writing based exclusively on neurotypical patterns, you're making a choice. When you deploy these tools without considering their impact on disabled users, you're making a choice.

These are choices rooted in ableism. The assumption that neurotypical communication is the default, the standard, the only valid form of "human" expression – that's ableism. And it's baked into the fundamental architecture of these detection systems.

When Automated Ableism Calls You Artificial

Let me return to that original post, because it's not just a funny anecdote about a wonky algorithm. It's a perfect encapsulation of how these systems fail neurodivergent people.

I Accidentally Did a DoS Attack On My PDS - Ewan’s Blog
How not to make a first impression.
https://blog.ewancroft.uk/3m5mzwbrisc2v

I wrote a blog post. Entirely myself. No AI assistance. Just me, my thoughts, my keyboard, and my autistic brain organising information in the way that feels natural to me – which happens to be structured, precise, and methodical.

I ran it through GPTZero out of curiosity. "Mixed bag," it said. Part human, part AI. ZeroGPT was more specific: 4.11% AI-generated.

Now, I can tell you with absolute certainty that 0% of that post was AI-generated. So where did that 4.11% come from?

From the bits where I was being most authentically myself. The sections where my autistic brain was doing its thing – organising complex information into clear structures, using precise vocabulary, maintaining consistent tone. The parts where my writing was most clear, most organised, most me.

Those were the bits that looked like a robot.

Because apparently, when autistic people write in ways that are natural to us, we're "not human enough" for the algorithm. Our authentic communication patterns have been defined out of humanity by systems that never considered us in the first place.

That's not a technical failure. That's ableism, automated and deployed at scale.

The Educational Catch-22

Here's where the ableism gets particularly insidious: many neurodivergent people have spent years in education being told to "fix" our writing. Too informal? Make it more structured. Too tangential? Stay on topic. Too casual? Use proper grammar and punctuation. Too personal? Write more objectively.

We were taught to mask our natural communication patterns and adopt the "correct" way to write. For some of us, this meant suppressing special interest infodumps. For others, it meant learning to structure our thoughts in ways that don't come naturally but are acceptable in academic contexts. For many of us, it meant years of being told our authentic voice was wrong.

And we did it. We learned the rules. We code-switched. We masked our natural patterns and adopted neurotypical conventions because that's what was required to succeed in educational systems that were never designed with us in mind.

And now? Now that we've spent years learning to write the way we were told was right, AI detection tools are telling us we write like robots.

So what are we supposed to do? Write badly on purpose to prove we're human? Deliberately make grammatical errors? Throw in some inconsistent punctuation and tangential rambling to show we're fallible?

This is the ableist double bind made manifest: mask to be acceptable, then be accused of being artificial because you masked successfully. Be yourself and get punished for being "wrong." Adapt to neurotypical standards and get punished for being "too perfect." There's no winning condition here, which is rather the point of systemic ableism – it's designed so that disabled people can't win no matter what we do.

These tools are essentially incentivising bad writing. They're pushing writers to be less precise, less structured, less clear in order to pass as "authentically human." Which is a brilliant state of affairs, really – penalising clarity and precision in favour of mess, then calling it a security feature.

I'm being sarcastic, in case that wasn't clear. Which it probably was, because I'm precise about my language even when I'm being sarcastic. See the problem? I literally can't win here.

The Violence of Being Called Artificial

I want to sit with this for a moment because it's important: being told your natural communication is "robotic" or "artificial" is a form of dehumanisation. And dehumanisation is violence.

We can dress it up in technical language about false positives and algorithmic limitations, but what's actually happening is that disabled people are being told – by systems that will increasingly gate access to education, employment, and publishing – that our authentic expression is not recognisably human.

Do you understand how fucked up that is?

Autistic people have been fighting against dehumanisation for decades. We've been called "less human," "lacking in human qualities," "robotic," "emotionless" – all because we communicate and process the world differently. These narratives have been used to justify everything from exclusion from education to denial of autonomy to outright abuse.

And now we've got tools that encode that same dehumanisation into software, giving it the authority that comes with "objective" algorithmic assessment. When GPTZero tells a teacher that a student's essay is "possibly AI-generated," what it's really saying is "this doesn't match our narrow definition of human writing." And for neurodivergent students, that narrow definition excludes us by design.

This isn't just about hurt feelings or inconvenience. This is about systems that have the power to determine whether we're believed, whether we're trusted, whether we're given opportunities – systems that are fundamentally ableist in their construction and deployment.

The Practical Consequences (Because This Isn't Theoretical)

Let's talk about what this actually means in practice, because the consequences are real and they're happening now:

In academic contexts, neurodivergent students are already more likely to be accused of plagiarism because our writing doesn't match expected patterns. We're more likely to be questioned, more likely to face scrutiny, more likely to have our work doubted. Add AI detection tools to this mix, and we're facing another layer of suspicion baked into the system.

Take the case of Moira Olmsted, a college student with autism at Central Methodist University, who was falsely accused of using AI and received a zero on her assignment. Despite explaining her communication style, which is shaped by her neurodivergence, she received a disciplinary warning that if her work was flagged again, she would be treated as having committed plagiarism. Her professor told her that her writing "had certain features that made it sound like AI." Those features? The precise, structured way her autistic brain naturally organises language.

A 2023 Stanford study found that AI detectors were "near-perfect" when checking essays written by US-born eighth grade students, yet they flagged more than half of essays written by non-native English speakers as AI-generated. The researchers noted that the tools are biased against writing that deviates from "standard" patterns – which includes not just ESL students but neurodivergent writers who learned language differently.

How many autistic students are going to be falsely accused of using AI to write their essays because their natural writing style happens to be precise and structured? How many will be denied grades, denied degrees, denied opportunities because an algorithm decided their authentic voice looks artificial?

In employment contexts, if you're applying for jobs and your cover letter gets flagged as AI-generated, that's your application in the bin. Never mind that you wrote every word yourself, that you agonised over the phrasing, that you carefully structured your experience to tell a coherent story. The algorithm has decided you're probably a chatbot, and good luck challenging that when the rejection is automated.

With publishing and content creation, writers who depend on platforms that use AI detection to filter content risk having their work rejected or demonetised because their authentic voice reads as artificial. This is already happening. Neurodivergent writers are finding their work flagged, their income affected, their careers impacted – not because they did anything wrong, but because they communicate in ways that don't match neurotypical patterns.

And perhaps most insidiously, there's the pressure to self-censor, to write "badly" on purpose, to deliberately introduce errors and inconsistencies to prove you're human. To add another layer to the exhausting performance of masking that many neurodivergent people already do every day. We're not just modulating our behaviour, our speech patterns, our interests – now we have to modulate our writing style to prove we're not robots.

This is gatekeeping, pure and simple. And like all gatekeeping, it disproportionately affects people who are already marginalised.

The Technical Reality

Let's talk numbers for a moment, because the statistics are damning. Turnitin originally claimed a false positive rate of less than 1% for documents with 20% or more AI writing. However, they later admitted to a "higher incidence of false positives" for documents where less than 20% AI writing is detected, though they never disclosed the exact rate. A Washington Post study found a false positive rate of 50% for Turnitin, albeit with a smaller sample size.

The sentence-level false positive rate is approximately 4%, and more than half (54%) of falsely flagged sentences sit directly next to actual AI writing. Which means if you use AI to help you understand a concept and then write about it in your own words, the sentences you genuinely wrote yourself are the ones most likely to get flagged, simply because they appear next to content the tool has already marked as AI.

When Turnitin launched its detector, that 1% false positive rate would have meant around 750 student papers incorrectly labelled as AI-generated out of Vanderbilt's 75,000 annual submissions alone. Vanderbilt recognised the problem and discontinued use of Turnitin's AI detection feature entirely, as did several other major universities, including Michigan State University and the University of Texas at Austin.

GPTZero claims better numbers – a 99% accuracy rate with a 1% false positive rate. But independent peer-reviewed studies found that GPTZero produces a 10% false-positive rate (wrongly labelling one in every ten human-written texts as AI-generated) and a 35% false-negative rate (failing to detect more than a third of AI-written material). So not only is it flagging innocent students, it's also missing actual AI use at a staggering rate. It's essentially useless, except for the harm it causes.

Tech website Futurism tested GPTZero and noted that based on its error rate, teachers relying on the tool would end up "falsely accusing nearly 20 percent of innocent students of academic misconduct."

And here's the kicker: a survey by the Center for Democracy & Technology found that about two-thirds of teachers use AI detection tools regularly. At that scale, even a 1-2% false positive rate adds up to thousands of students being falsely accused. But we're not looking at 1-2%. We're looking at 10-50% depending on the tool and the context.
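To put very rough numbers on that (back-of-the-envelope figures using the rates quoted above plus an assumed share of genuinely AI-written work, not data from any real institution), here's what a 10% false positive rate does at scale:

```python
# Back-of-the-envelope arithmetic using the rates quoted above.
# The share of genuinely AI-written submissions is an ASSUMPTION for
# illustration, not a measured figure.

submissions = 75_000          # e.g. Vanderbilt's annual volume, mentioned earlier
false_positive_rate = 0.10    # GPTZero's rate per the independent studies above
false_negative_rate = 0.35    # ditto
assumed_ai_share = 0.10       # assumption: 10% of submissions genuinely AI-written

human_written = submissions * (1 - assumed_ai_share)
ai_written = submissions * assumed_ai_share

falsely_accused = human_written * false_positive_rate       # 6,750
actually_caught = ai_written * (1 - false_negative_rate)    # 4,875

innocent_share = falsely_accused / (falsely_accused + actually_caught)

print(f"Innocent writers flagged:  {falsely_accused:,.0f}")
print(f"Actual AI use caught:      {actually_caught:,.0f}")
print(f"Share of flags pointing at an innocent writer: {innocent_share:.0%}")  # ~58%
```

Under those assumptions, well over half of the flags point at something a human wrote themselves. And that's before you ask which humans are most likely to trip the detector in the first place.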

That's not a margin of error. That's a systemic failure that's ruining lives.

What Actually Needs to Happen (Spoiler: It Won't)

The fundamental problem is that AI detection tools are built on ableist assumptions about what "human" writing looks like. And those assumptions are based on neurotypical patterns, with neurodivergence either ignored entirely or treated as statistical noise.

These tools aren't just failing to account for neurodiversity – they're actively pathologising it. They've taken the old prejudice that autistic people are "robotic" and encoded it into software that will be used to make decisions about access, opportunity, and trust.

What needs to change?

Detection tools need to account for neurodivergent writing patterns.

This means training data that includes autistic writers, ADHD writers, dyslexic writers, and all the other flavours of human neurodiversity. If your "human" baseline is exclusively neurotypical, your tool is ableist by design. But let's be honest: this won't happen, because that would require tech companies to care about disabled users beyond lip service.

We need to stop treating "different" as "suspicious."

Just because someone writes with precision and structure doesn't mean they're a language model. It might mean they're autistic. Or it might mean they're just a careful writer. Neither of these things is a problem. But again, this requires people to interrogate their assumptions about what "human" looks like, and that's apparently too much to ask.

Educational and professional contexts need to stop using these tools entirely.

Several major universities have already discontinued use of Turnitin's AI detection feature, citing concerns over accuracy and the risk of harming students through false accusations. If you genuinely need to verify that a student or applicant wrote something themselves, AI detection tools aren't the answer – they're a discriminatory shortcut that will disproportionately harm disabled people. Oral exams, portfolio reviews, process documentation – these are all more reliable and more equitable approaches. But they require effort, which apparently is more than most institutions are willing to invest in fairness.

We need to interrogate what we mean by "authentic" writing.

If authenticity requires mess, inconsistency, and imprecision, we're defining it in a way that excludes a significant portion of the human population. Autistic people aren't being inauthentic when we write with structure and precision. That is our authentic voice. But this requires challenging deeply held assumptions about what "natural" human communication looks like, and that's uncomfortable work that most people would rather avoid.

The brutal truth? None of this is likely to happen. Because ableism is baked so deeply into how we think about communication, intelligence, and "normalcy" that most people don't even recognise it as ableism. They genuinely believe that neurotypical communication patterns are just "how humans write," and everything else is deviation that needs to be corrected or at least scrutinised.

AI detection tools didn't create this problem. They just automated it and gave it the false authority of algorithmic objectivity.

The Personal Bit (Because This Is, After All, Personal)

I've been told for as long as I can remember that I write "like a robot." I like structure. I like explaining things thoroughly. I like precision in language. I enjoy the rhythm of well-constructed sentences and the satisfaction of finding exactly the right word for what I'm trying to express. These aren't affectations or performative choices; they're how my autistic brain works.

And yet, the more I lean into that natural style, the more likely I am to be marked as "possibly AI-generated" by detection tools that have decided there's a correct way to be human.

It's exhausting, honestly. I spent years learning to mask other aspects of my autism – the stimming, the social difficulties, the sensory overwhelm. I got good at passing as neurotypical in many contexts, at least for short periods. But writing was always the one place where I could just be. Where my natural communication patterns were seen as a strength rather than a deficit. Where precision and structure were valued rather than pathologised.

And now that's being taken away too. Now even my writing – the most authentic, most genuinely me form of expression I have – is being flagged as insufficiently human.

This isn't just frustrating. It's genuinely upsetting in ways that are hard to articulate. Because what these tools are essentially saying is that there's no version of me that will be accepted as "naturally human." Mask, and you're too perfect. Don't mask, and you're too weird. There's no winning condition, which is rather the fucking point.

Where Does This Leave Us?

I don't have a neat conclusion for this one, because there isn't one. The problem is systemic, the solutions require people to confront deeply held ableist assumptions, and frankly, I'm not optimistic about any of this changing.

What I do know is this: neurodivergent people have always had to navigate a world that treats our natural ways of being as aberrant. We've developed elaborate masking strategies, learned to code-switch, figured out how to make our authentic selves palatable to systems that weren't designed with us in mind. We've fought for recognition, for acceptance, for the basic acknowledgement that different ways of being human are still ways of being human.

AI detection tools are just the latest iteration of that same fight. Another set of rules that exclude us by design. Another system that treats neurotypical experience as universal and everything else as suspicious. Another way in which our authentic expression gets flagged as "not quite right."

But here's the thing: I'm not a robot. Neither are you, if you're reading this and recognising yourself in these patterns. We're human – just human in ways that don't fit neatly into statistical models trained on neurotypical patterns. Human in ways that tech companies apparently couldn't be bothered to consider when building their detection systems.

And maybe, just maybe, instead of asking neurodivergent writers to change how we write to pass these ableist tests, we should be asking why the tests are failing to recognise such a significant portion of human writing in the first place.

Because if your AI detection tool can't tell the difference between a language model and an autistic person writing about their special interest, that's not actually a very good detection tool, is it?

That's a tool with an ableism problem. And ableism, unlike autism, can actually be fixed – if people cared enough to try.

But they won't. Because ableism is comfortable for the people who benefit from it. Because interrogating these systems requires acknowledging that maybe, just maybe, the problem isn't with neurodivergent people who write "too precisely." Maybe the problem is with systems that define "human" so narrowly that they exclude huge swathes of actual humans.

And that's an uncomfortable truth that most people would rather not face.

So we'll keep being flagged as artificial for being authentically ourselves. We'll keep fighting to be recognised as human in systems that were built to exclude us. We'll keep masking and code-switching and performing "human enough" for algorithms that will never see us as we are.

And tech companies will keep profiting from ableist tools that they'll never fix, because fixing them would require admitting they were broken in the first place.

Welcome to 2025. We've automated dehumanisation and called it progress.