When a teacher hands back a paper, you expect a critique of your arguments or perhaps a sigh at your syntax. Instead, I recently received two assignments back with a status code I didn’t know I was supposed to avoid: "High AI Probability."
I wrote them myself. Every character, every line break, every specific structural pivot.
And here’s the kicker: I’m currently just under three weeks post-brain surgery. My "processing power" isn't exactly at peak capacity, and I don't have the luxury of infinite revisions. It took me an hour and a half of intense, focused effort just to get this post into a coherent state. To have that kind of manual, grueling labor dismissed by a "probability score" feels less like an academic check and more like a personal glitch in the system.
The Ouroboros Problem
There is something deeply recursive, and frankly stupid, about the current academic landscape. We are using AI tools to judge whether writing is AI-generated, with detectors built on models that were trained on human writing in the first place. It is the snake eating its own tail until there’s nothing left but a statistical void.
These detectors don’t "know" what human writing is. They haven't read Orwell or Satrapi and felt something. They just know what it looks like when text is neat, predictable, and devoid of the "noise" they associate with humanity. The system feeds on its own assumptions, slapping students on the wrist based on a probabilistic guess that happens to be delivered in a very official-looking UI.
LLMs: Just Fancy Autocorrect
We need to stop using the term "Artificial Intelligence" in this context. It’s too heavy; it implies a ghost in the machine, something more like Caine or AM than... machine learning. In my head, I’ve started referring to these models as "fancy autocorrect."
As a coder, I find the abstraction isn’t that complex. These aren’t brains; they are mathematical engines. When you give one a prompt, it isn’t mulling over the philosophical implications of your question. It is calculating the statistical likelihood of the next token. If I write "The dog sat on the...", it predicts "mat" because that is the most probable continuation given its training data.
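Here is a minimal sketch of that idea, assuming a toy counting model in place of a real network (the corpus and context are invented for illustration):

```python
from collections import Counter

# A toy "fancy autocorrect": scan a tiny corpus for a three-word
# context and count what follows it. A real LLM replaces the counting
# with a neural network over billions of tokens, but the core move is
# the same: score every candidate next token, then pick from the
# distribution.
corpus = [
    "the dog sat on the mat",
    "the dog sat on the rug",
    "the dog sat on the mat",
]

context = ("sat", "on", "the")
counts = Counter()
for line in corpus:
    tokens = line.split()
    for i in range(len(tokens) - len(context)):
        if tuple(tokens[i:i + len(context)]) == context:
            counts[tokens[i + len(context)]] += 1

total = sum(counts.values())
for token, n in counts.most_common():
    print(f"P({token!r} | 'sat on the') = {n / total:.2f}")
# P('mat' | 'sat on the') = 0.67
# P('rug' | 'sat on the') = 0.33
```

That is the entire trick. Nothing in there comprehends dogs or mats; it just surfaces the most common continuation it has seen.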
The irony is that these models are trained to mimic clear, logical human prose. So, when a human actually writes with clarity and structure, the system flags it as "suspicious." We have reached a point where being a good writer is functionally indistinguishable from being a machine.
In dev terms, this is backwards. If a function produces the correct output cleanly and consistently, you don't suspect it's a fraud—you celebrate the fact that it works. Academia is doing the opposite: it’s looking at clean code and demanding more bugs to prove a human wrote it.
False Positives Are Features, Not Bugs
If you write with precision, you are more likely to get flagged. That isn’t paranoia; it’s the logical conclusion of how these tools operate.
Detectors look for low "perplexity," a fancy score for how predictable your text is: the lower the perplexity, the more closely your words follow common patterns. But good academic writing is structured. It is consistent. It is designed to be readable. If you plan your work and edit out the fluff, you inadvertently mimic the statistical footprint of an LLM. By rewarding disorganized writing and punishing genuine structure, the system is actively incentivizing mediocrity. That’s not a victory for academic integrity; it’s a failure of systems design.
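If you want to see how little machinery is behind that score, here is a back-of-envelope sketch; the per-token probabilities are made up, standing in for what a detector’s underlying language model might assign:

```python
import math

def perplexity(token_probs):
    # Perplexity = exp of the average negative log-probability.
    # The lower the number, the less the model was "surprised".
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical probabilities for a tidy, well-edited sentence:
# every token is one the model saw coming.
structured = [0.60, 0.55, 0.70, 0.50, 0.65]
# Hypothetical probabilities for loose, idiosyncratic prose.
rambling = [0.20, 0.05, 0.30, 0.10, 0.15]

print(f"structured: {perplexity(structured):.1f}")  # ~1.7 -> flagged "AI-like"
print(f"rambling:   {perplexity(rambling):.1f}")    # ~7.4 -> "human enough"
```

Every edit that makes a sentence cleaner pushes those probabilities up and the score down, straight into the region detectors label as machine output.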
Writing While Autistic: The Erasure of the Logical Voice
This hits differently as an autistic person. I don’t think in vague fogs; I think in grids. My natural communication style is structured and logical because that is how my brain handles data.
When a detector flags my writing as "too structured," it’s telling me that my natural baseline is "incorrect." It’s that quiet, institutional friction where a system is built without ever considering that "human" doesn't have a single, messy template. This isn't just about me, either. It’s a tax on neurodivergent students, non-native speakers, and anyone who prefers a semicolon to a rambling sentence. There is something fundamentally broken when clarity is treated as a confession of fraud.
The Death of the Good Faith Assumption
We are moving from a system of trust to a system of automated suspicion. It’s "guilty until proven human." The burden of proof has shifted. A teacher sees a percentage on a dashboard, and suddenly a student, who might be recovering from surgery or simply writing in their natural neurodivergent voice, has to provide a digital paper trail to prove they exist. A statistical guess now carries more weight than a student’s record.
The most damaging part is the psychological shift. You start writing for the machine. You start wondering if a sentence is "too perfect" and whether you should add a typo just to satisfy the algorithm. That isn't learning; that's survival through intentional degradation.
The Debugging Process
I don’t have a patch for this. My current workaround is to save every draft, keep the full version history, and hoard every stray note like I’m preparing for a legal defense. It’s an exhausting amount of overhead for someone who is already navigating a post-op recovery where energy is a finite resource.
But we have to stop accepting this as the "new normal." If the benchmark for humanity is being messy and inconsistent, we’ve lost the plot. People can be logical without being bots. Writing can be polished without being synthetic.
If a "fancy autocorrect detector" thinks my writing is AI-generated, it doesn’t mean I’m a fraud. It means the detector is too narrow to understand what a human is actually capable of—even one who is currently drafting through a post-surgery fog. Maybe it’s time we stopped debugging the students and started questioning the tools.