Equity in AI Detection

An Open Letter to EdTech Companies on AI Detection Bias

January 2026

To the leadership of Turnitin, GPTZero, Copyleaks, and the broader AI detection industry:

We are teachers. We work in classrooms across the country, and we see firsthand how your products affect our students. We're writing because we're concerned — not about AI detection in principle, but about who your tools are disproportionately flagging.

The Problem

Multiple studies have documented what we see in our classrooms: AI detection tools have higher false positive rates for English language learners and students who speak African American Vernacular English (AAVE) or other non-dominant dialects.

What the Research Shows

  • Stanford researchers found GPT detectors flagged writing by non-native English speakers as AI-generated at significantly higher rates than writing by native speakers
  • Studies have shown that "simpler" writing patterns — more common in ESL writing — trigger false positives more frequently
  • Teachers report that students with certain writing styles are flagged repeatedly for work they genuinely produced

This isn't a theoretical concern. These are real students — students who already face barriers in our education system — being accused of cheating for writing that is authentically their own.

What We're Asking

1. Publish Your Bias Audits

You've conducted internal testing. Make it public. Share false positive rates broken down by demographic group, English proficiency, and writing style. Let us see the data.

2. Include Affected Communities in Development

Your training data and testing protocols should include diverse writing samples from ESL students, students who use AAVE, and students from varied socioeconomic backgrounds. If your tools don't work fairly for everyone, they shouldn't be deployed.

3. Strengthen Your Warnings

Current disclaimers about accuracy are buried in documentation. Make them prominent. Every detection result should clearly state that the score is probabilistic, not conclusive, and that false positives occur.

4. Create Accountability Mechanisms

When your tool flags a student incorrectly and they face consequences, what happens? There should be a process for reporting false positives, tracking patterns, and improving the system.

The Stakes

AI detection accusations can derail academic careers. Students have been failed, suspended, and expelled based on detection scores. For students who are already marginalized — immigrants, students of color, students from low-income backgrounds — a false accusation adds one more barrier to an already difficult path.

We understand you're trying to solve a real problem. Academic integrity matters. But the solution cannot be tools that disproportionately harm the students who already face the most obstacles.

"The question isn't whether AI detection is useful. It's whether it's worth the cost — and right now, the cost is being paid by the students who can least afford it."

A Path Forward

We're not calling for the elimination of AI detection tools. We're calling for accountability, transparency, and a commitment to equity. The same rigor you apply to detecting AI should be applied to ensuring your tools don't discriminate.

Teachers are your customers, and students live with the consequences of your products. We're asking you to take our concerns seriously.

We welcome dialogue. We're ready to share what we see in our classrooms, connect you with affected students and families, and work toward better solutions. But we need you to engage.

Signed,
The Working Educators Team
and educators across the country

Add Your Voice

Are you an educator who has witnessed detection bias in your classroom? We're collecting stories and will share aggregated findings with detection companies.