Working Educators cut its teeth on standardized testing opt-out campaigns. We learned that when systems are flawed, educators have both the right and the responsibility to push back. That experience shapes how we approach AI detection today: not as opponents of accountability, but as advocates for fair processes.
Every AI detection tool generates false positives. Even a tool with a 1 percent false positive rate, run across a few hundred student submissions, can be expected to wrongly flag several honest writers. When it happens to your student, here's what to do.
Before the Flag: Build Your Foundation
The best time to prepare for a false positive is before it happens. These practices protect both you and your students:
Collect baseline writing samples
Have students write in class during the first week of the term, without AI access. Keep these samples. They're your evidence of what each student's voice actually sounds like.
Require process documentation
Ask students to submit notes, outlines, or drafts with major assignments. A student who can show their thinking process has evidence of authentic work.
Know your tool's limitations
Read the fine print on whatever detection tool your school uses. Most vendors explicitly state that their tools should not be used as sole evidence of misconduct. That statement matters if you need to appeal.
When a Student Is Flagged
Step 1: Don't Jump to Conclusions
A high AI detection score is not proof of cheating. It's a signal that the tool's algorithm detected patterns it associates with AI generation. Those patterns can appear in:
- Writing by non-native English speakers
- Heavily edited or polished work
- Writing that follows academic formulas closely
- Work produced with accessibility tools
- Content on topics heavily covered in AI training data
Before taking any action, ask yourself: What do I actually know about how this student writes?
Step 2: Have a Conversation First
Talk to the student before making any accusations. This is both good pedagogy and legal protection.
Questions to Ask
- "Walk me through how you wrote this paper."
- "What was your thesis and how did you arrive at it?"
- "Tell me about this section — why did you structure it this way?"
- "What sources did you find most useful? What did you learn from them?"
- "Was there anything about this assignment that was particularly hard?"
A student who wrote their own work can discuss it. They remember their struggles and choices. A student who submitted AI output often can't explain the thinking behind specific decisions.
Step 3: Gather Corroborating Evidence
If you're convinced the student did write their own work:
- Compare the flagged paper to their baseline writing samples
- Note consistencies in voice, errors, or writing habits
- Document the conversation you had with the student
- Check if they submitted process documentation (drafts, notes)
- Consider their classroom participation — does this paper reflect ideas they've expressed verbally?
If you're convinced the student did use AI inappropriately:
- Document the detection results (screenshot, save report)
- Document the conversation and what raised concerns
- Follow your school's academic integrity process
- Ensure the student has the opportunity to respond
Appealing a False Positive
If your school or district has reached a finding based on AI detection that you believe is wrong, here's how to push back:
Know the Policy
Most schools haven't updated academic integrity policies to address AI detection specifically. Look for language about:
- What counts as sufficient evidence for academic integrity violations
- Student rights to appeal or respond to accusations
- Whether detection tools are explicitly addressed (and how)
If the policy is vague, that ambiguity can work in the student's favor.
Document Everything
Your documentation should include:
- The detection score and full report
- Baseline writing samples for comparison
- Any process documentation (drafts, notes, outlines)
- Summary of your conversation with the student
- Your professional assessment of the student's work and capabilities
- Research on false positive rates for the tool used
- If applicable: the student's ESL status or use of accessibility tools
Use the Vendor's Own Language
Every major AI detection tool includes disclaimers. Turnitin's documentation states that scores should "inform," not "replace," educator judgment. GPTZero says its results are "not definitive." These statements matter in appeals.
Quote the tool's own limitations. Ask whether the school is using the tool in ways the vendor itself says are inappropriate.
Escalate If Necessary
If internal appeals fail and you believe a student is being treated unjustly:
- Contact your union representative (if applicable)
- Request that the appeal be heard by someone outside the department
- Ask whether the school has consulted legal counsel on AI detection standards
- For serious cases involving ESL students or students of color, civil rights implications may apply
The Bigger Picture
Pushing back on false positives isn't just about individual cases. It's about:
- Establishing precedent. When teachers successfully challenge false positives, it creates institutional knowledge about detection limitations.
- Protecting vulnerable students. ESL students and students from marginalized communities are disproportionately flagged. Challenging errors is equity work.
- Improving policy. Schools that see repeated false positives may revise their approach to detection.
- Maintaining trust. Students who see teachers defending them against algorithmic errors learn that adults will fight for fairness.
Working Educators has a long history of pushing back against flawed systems. We did it with standardized testing. We're doing it now with AI detection. The tools change; the work of advocating for students doesn't.