In January 2025, when most school districts were still scrambling to write their first AI policies, the School District of Philadelphia released a comprehensive 42-page framework that would become a model for urban districts nationwide. A year later, we went back to Philadelphia to see how the policy is working in practice — what's succeeding, what needs adjustment, and what lessons other districts can learn.
The verdict? Philadelphia got more right than wrong, particularly in areas where many districts are still struggling. But implementation has revealed gaps that the original policy didn't anticipate, and some teachers feel the district moved too slowly in certain areas while moving too fast in others.
The Background: A City Divided
Philadelphia's school district serves over 200,000 students across 326 schools. It's one of the largest and most diverse districts in the country, with students from vastly different socioeconomic backgrounds. When ChatGPT hit mainstream awareness in late 2022, the district faced a familiar problem at an unfamiliar scale.
"We had schools where every student had a laptop and teachers were already experimenting with AI tools, and schools where half the kids didn't have reliable internet at home," recalls Dr. Angela Thompson, who led the district's AI policy task force. "We couldn't write a policy that only worked for one Philadelphia."
The district's approach was deliberate and, by tech-sector standards, slow. While some districts rushed out policies within weeks of ChatGPT's release — often with disastrous results — Philadelphia spent 18 months consulting with teachers, parents, students, technology experts, and academic integrity professionals before releasing their framework.
"Speed wasn't the goal. Getting it right was the goal. We knew whatever we released would affect hundreds of thousands of students and would be hard to walk back."
— Dr. Angela Thompson, Policy Task Force Lead

The Policy Framework: Three Pillars
Philadelphia's policy rests on three interconnected pillars: teacher autonomy, student support, and transparent detection. Understanding how these work together is key to understanding why the policy has been relatively successful.
Pillar 1: Teacher Autonomy
Unlike districts that mandate specific AI policies at the classroom level, Philadelphia gives teachers significant discretion. The district provides a framework and resources, but individual teachers decide:
- Whether to allow AI use on specific assignments
- How to teach AI literacy in their subject area
- What role AI detection plays in their assessment
- How to handle suspected AI use in their classroom
This autonomy comes with accountability — teachers must communicate their policies clearly to students and parents, and must align with certain district minimums (like teaching at least one lesson on AI literacy per semester). But within those boundaries, they have professional freedom.
Pillar 2: Student Support
The policy frames AI as an educational challenge to navigate, not a disciplinary problem to punish. When students are found to have used AI inappropriately, the default response is educational, not punitive:
- First incidents typically result in a conversation and assignment revision
- Students must complete an AI literacy module before resubmitting
- Repeated incidents escalate, but the focus remains on learning
- Cases involving seniors applying to college are treated more seriously, but still begin with support
Pillar 3: Transparent Detection
Philadelphia uses Turnitin's AI detection district-wide but treats detection scores as starting points for inquiry, not proof of misconduct. The policy explicitly states that no student will face academic consequences based solely on an AI detection score.
"We've seen too many false positives to treat these tools as infallible," Dr. Thompson explains. "A high detection score opens a conversation. It doesn't close one."
Teacher Training: The Backbone of Success
Perhaps the most significant — and most resource-intensive — component of Philadelphia's approach is its teacher training program. The district has invested heavily in preparing educators to handle AI in their classrooms.
Required Training
All teachers complete a 6-hour foundational course covering:
- How generative AI works (at a practical level)
- Capabilities and limitations of AI detection tools
- Designing AI-resistant assignments
- Having conversations with students about AI and integrity
- The district's policy framework and their autonomy within it
Ongoing Support
Beyond initial training, the district provides:
- Monthly optional workshops on AI developments and teaching strategies
- A digital resource library with sample policies, assignments, and lesson plans
- School-based AI liaisons who can help individual teachers
- A hotline for questions about specific situations
"The training was actually useful," says Marcus Williams, an 11th-grade English teacher at Overbrook High School. "I've been to a lot of PD that felt like a waste of time. This one gave me tools I actually use."

The Detection Approach: Cautious by Design
Philadelphia's approach to AI detection is notably more cautious than that of many districts. While they've licensed Turnitin's AI detection for all schools, the policy includes significant guardrails:
Detection Score Thresholds
The district provides guidance on interpreting scores:
- Below 20%: Generally treated as human-written unless other evidence suggests otherwise
- 20-50%: Warrants a conversation with the student but not automatic escalation
- 50-80%: Requires documentation and typically involves a more formal review
- Above 80%: Strong evidence, but still requires additional verification
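The tiered guidance above amounts to a simple lookup, sketched below for clarity. The function name and tier descriptions are illustrative, not part of the district's actual tooling, and the written ranges share their endpoints (50% and 80%), so this sketch assigns shared endpoints to the lower tier.

```python
def triage_detection_score(score: float) -> str:
    """Map an AI detection score (0-100) to the district's guidance tier.

    Illustrative only: under the actual policy, every tier requires
    human judgment, and no score alone justifies consequences.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 20:
        return "treat as human-written absent other evidence"
    if score <= 50:  # shared endpoint resolved to the lower tier
        return "conversation with student; no automatic escalation"
    if score <= 80:
        return "document and conduct formal review"
    return "strong signal; additional verification still required"
```

A teacher-facing tool built on this logic would surface the tier as a prompt for next steps, never as a verdict.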
Mandatory Human Review
No student can face consequences for AI use without human review of the specific case. This review must consider:
- The student's writing history and development
- The assignment context and instructions
- Any process documentation (drafts, outlines, etc.)
- The student's explanation
- Known false positive patterns (ESL students, formulaic writing, etc.)
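A district building tooling around this review might capture the checklist as a structured record so that no factor can be skipped; the class and field names below are assumptions for illustration, not Philadelphia's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseReview:
    """One mandatory human review of a flagged submission.

    Field names are illustrative, not the district's actual schema.
    """
    detection_score: float            # detection tool score, 0-100
    writing_consistent: bool          # matches student's prior work?
    process_docs: list = field(default_factory=list)  # drafts, outlines, etc.
    student_explanation: str = ""
    false_positive_risk: bool = False # e.g., ESL or formulaic writing

    def score_alone_sufficient(self) -> bool:
        # Encodes the policy's core rule: a detection score by itself
        # never justifies academic consequences.
        return False

review = AIUseReview(detection_score=85.0, writing_consistent=True,
                     process_docs=["outline.docx", "draft1.docx"])
```

The point of the structure is procedural: a reviewer must fill in every field before a case can move forward, which is what distinguishes this from an automated flag.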
What's Working: Successes After One Year
After a year of implementation, several aspects of Philadelphia's approach stand out as particularly successful:
Teacher Buy-In
Surveys show 73% of teachers rate the policy positively, an unusually high number for any district initiative. Teachers cite autonomy and training as the main reasons.
Reduced Conflict
Parent complaints about AI-related discipline have been relatively low. The educational approach to first incidents seems to defuse situations that might otherwise escalate.
False Positive Protection
The district's appeals process has reviewed 847 contested AI detection cases. Of those, 312 (37%) were determined to be false positives or inconclusive — students who would have faced consequences under stricter automated systems.
Adaptive Teaching
Teachers report redesigning assignments at higher rates than before the policy. Many say the training gave them ideas for making their assessments more AI-resistant and, incidentally, more engaging.
Ongoing Challenges
The policy isn't perfect. Teachers and administrators identified several areas needing attention:
Inconsistency Between Schools
Teacher autonomy has a downside: students in different classrooms can face very different expectations. Some teachers embrace AI as a learning tool; others prohibit it entirely. This inconsistency frustrates some students and parents.
Equity Gaps in Detection
Despite safeguards, ESL students are still flagged at higher rates than native speakers. The district is working on additional guidance for these cases, but the underlying technology limitation remains.
Resource Strain
The policy's emphasis on human review and appeals takes time — time that teachers and administrators don't always have. Some schools report backlogs in processing contested cases.
Keeping Up with Technology
AI capabilities continue to evolve faster than policy can adapt. Some teachers feel the training is already becoming outdated.
Lessons for Other Districts
What can other districts learn from Philadelphia's experience? We asked Dr. Thompson and several teachers what they'd tell districts just starting their AI policy journey:
- Invest in training first. "You can have the best policy in the world, but if teachers don't understand AI and don't feel prepared, it won't work," says Dr. Thompson.
- Trust your teachers. Top-down mandates without room for professional judgment breed resentment and workarounds. Give teachers a framework, then let them teach.
- Build in appeals from day one. You will have false positives. Having a clear, fair process for handling them protects students and the district.
- Frame it educationally, not punitively. Students are learning to navigate a new technology. Treating AI use as a learning opportunity rather than a crime creates better outcomes.
- Plan for iteration. Your first policy won't be perfect. Build in review cycles and be willing to adapt.
Conclusion
Philadelphia's AI policy isn't perfect — no policy could be, given how rapidly this technology is evolving. But by centering teacher autonomy, investing in training, and treating detection scores as starting points rather than verdicts, the district has created a framework that's working reasonably well for a remarkably complex system.
The next year will bring new challenges: more sophisticated AI tools, evolving detection technology, and continued debates about the role of AI in education. Philadelphia's willingness to adapt, combined with its commitment to protecting both academic integrity and student welfare, positions it well to meet those challenges.
Other districts would do well to study Philadelphia's approach — not to copy it exactly, but to understand the principles that make it work: trust in teachers, support for students, and healthy skepticism toward any technology that claims to solve complex human problems with simple scores.
Is your district developing an AI policy? We'd like to hear about it. Contact us through our contact page.
