Why AI Is a Big Problem for School Cybersecurity
Artificial intelligence is transforming education technology and expanding cybersecurity risks along with it. In an Education Week article, experts examine how AI adoption is creating new challenges for schools. Read the article to learn how AI tools are expanding the education sector's attack surface, why automated threats and phishing attacks are becoming more sophisticated, and what security considerations districts should weigh as AI adoption grows.
Frequently Asked Questions
Why are AI-powered cyberattacks such a concern for K-12 schools now?
AI is reshaping the cyber risk landscape for K-12 because it makes attacks faster, more convincing, and easier to launch—even for less-skilled criminals.
Schools were already attractive targets before AI:
- They hold large volumes of sensitive student data (including Social Security numbers) that can be sold at a premium on the dark web because children typically have clean credit histories and there are few safeguards to alert parents when a child’s identity is misused.
- They manage substantial financial transactions and store staff personal data, making them appealing for both data theft and ransomware.
- They often have fewer resources and smaller cybersecurity budgets than sectors like banking or healthcare, which makes them comparatively easier to breach.
AI is now amplifying these existing risks:
- Generative AI tools can write highly polished phishing emails in fluent American English, removing the spelling and grammar red flags staff used to rely on.
- AI can mimic writing styles, so an email can convincingly appear to come from a superintendent or principal.
- Deepfake tools can clone voices and appearances, enabling fake phone or video calls that pressure staff into making urgent payments or sharing credentials.
- AI systems can quickly scan public information—such as board minutes, contracts, and staff directories—to map out who controls budgets, which vendors are used, and how to tailor attacks.
- Emerging “agentic” AI tools can automate complex tasks, lowering the barrier for a single attacker to execute what previously required an organized ransomware group.
At the same time, federal support for school cybersecurity has been reduced, including cuts to programs like MS-ISAC funding and the closure of the U.S. Department of Education’s Office of Educational Technology. This combination—stronger AI-enabled threats and weaker centralized support—explains why many district technology leaders see AI-driven cyberattacks as a growing concern rather than a distant risk.
How exactly are attackers using AI against school districts?
Attackers are using widely available AI tools to make their tactics more targeted, believable, and scalable. District leaders and staff should be prepared for several specific patterns:
1. More convincing phishing emails
- AI can generate emails in natural, idiomatic American English, removing the obvious spelling and grammar errors that used to signal a scam.
- Attackers can prompt AI to “write like” a specific person (for example, the superintendent), making messages feel familiar and trustworthy.
- These emails often ask staff to click a link, open an attachment, or log in to a fake portal, which can install malware or capture credentials.
2. Impersonation through deepfakes
- AI voice cloning can produce phone calls that sound like a superintendent, principal, or business office leader.
- Attackers may demand an “urgent” payment to a vendor or request sensitive information, routing money or data to their own accounts.
- Video deepfakes are becoming easier to create, meaning staff may eventually see realistic but fake video messages asking them to act quickly.
3. Targeted social engineering
- AI can quickly scan public records—budgets, board minutes, staff directories, and even public email archives—to map out how a district operates.
- This allows attackers to:
  - Identify who approves payments and signs contracts.
  - Learn which vendors the district uses.
  - Tailor messages to local initiatives, timelines, and terminology.
4. Automated, scalable attacks
- Newer AI tools with “agentic” capabilities can perform tasks autonomously online.
- Instead of a skilled team manually crafting and sending attacks, a single person can:
  - Generate targeted phishing campaigns.
  - Test different messages and refine them.
  - Automate parts of the intrusion process.
Because of these tactics, staff can no longer rely on simple cues like poor grammar or unfamiliar senders. Verification processes, training, and clear internal protocols become essential to counter AI-enhanced social engineering.
What can school districts realistically do to defend against AI-driven cyber threats?
Even with tight budgets, districts can take practical, staged steps to strengthen defenses against AI-enabled attacks. The focus is on getting the basics right, building habits, and using available partnerships.
1. Double down on cybersecurity fundamentals
- Multi-factor authentication (MFA): Require MFA for staff email, student information systems (SIS), finance systems, and any remote access.
- Strong passwords: Enforce length and complexity requirements and discourage password reuse across systems.
- Regular updates: Keep operating systems, browsers, and key applications patched and current.
These “blocking and tackling” basics significantly reduce the impact of many attacks, including those powered by AI.
2. Train staff to recognize and respond to modern threats
- Phishing simulations: Use software that sends fake phishing emails to staff. When someone clicks, direct them to a short training video explaining what to look for next time.
- Scenario-based training: Emphasize that no legitimate financial transaction should be completed solely on the basis of an urgent email or call—even if it appears to come from the superintendent.
- Clear escalation paths: Make it easy and safe for staff to pause, verify, and report suspicious messages without fear of blame.
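For district technology teams curious about the mechanics behind phishing-simulation tools, the click-then-redirect step described above can be sketched in a few lines of Python. This is a minimal illustration using only the standard library; the endpoint path, user parameter, and training URL are hypothetical, not any real product's API.

```python
# Minimal sketch of the click-tracking step in a phishing simulation:
# the simulated phishing email links to this server; a click is logged
# and the user is redirected to a short training page.
import http.server
import urllib.parse

CLICK_LOG = []  # in practice, record clicks in a database keyed by campaign/user
TRAINING_URL = "https://example.org/phishing-awareness"  # hypothetical training page

class SimulationHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The emailed link encodes which staff member clicked, e.g. /click?user=jsmith
        query = urllib.parse.urlparse(self.path).query
        user = urllib.parse.parse_qs(query).get("user", ["unknown"])[0]
        CLICK_LOG.append(user)
        # Redirect the clicker to the training video/page
        self.send_response(302)
        self.send_header("Location", TRAINING_URL)
        self.end_headers()

    def log_message(self, *args):
        # Silence the default per-request console logging
        pass
```

Commercial simulation platforms layer campaign management and reporting on top, but the underlying mechanism is this simple: a tracked link that logs who clicked and turns the click into an immediate teaching moment.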
3. Introduce verification processes for high-risk actions
- Code words or secondary checks: Implement simple verification steps (for example, a shared code word or a required call-back to a known number) for:
  - Large or unusual payments
  - Changes to vendor banking details
  - Requests for sensitive data
- “No immediate payment” rule: Establish a policy that no payment is processed solely on the basis of a single urgent communication.
4. Leverage collaboration and shared services
- Join information-sharing networks: Explore membership in MS-ISAC, which now operates on a sliding fee scale. In some states (including Alaska, Connecticut, Kansas, Maine, Mississippi, New Jersey, Oregon, Texas, and Vermont), districts can access MS-ISAC services at no additional cost.
- Regional collaboration: Partner with neighboring districts or state-level CoSN chapters to share best practices, templates, and vendor evaluations.
- Tabletop exercises: Run low-cost tabletop drills with district leadership to walk through how you would detect, respond to, and recover from an attack.
5. Prioritize cybersecurity in budget and risk discussions
- Treat cybersecurity as an ongoing operational need, not a one-time project.
- When budgets are tight, evaluate which tools and services directly reduce risk (for example, MFA, phishing training, endpoint protection) and protect those lines where possible.
By combining strong fundamentals, consistent staff training, clear verification processes, and collaboration with external partners, districts can meaningfully reduce their exposure to AI-powered cyber threats—even without large new investments.