
Fast, Safe and Accountable: AI in Education Cannot Wait for Perfect Conditions


The DfE’s latest guidance officially opens the door to using artificial intelligence (AI) to support tasks such as low-stakes test marking and writing routine letters to parents. It’s a practical step in the right direction, but one that comes with very real risks if we aren’t equipped to move both fast and safely in the FE sector.

The pace of change in AI is relentless. Most of our learners are already using AI tools to summarise notes, draft assignments, and prepare for assessments. Recent surveys paint a stark picture: 92% of students have used generative AI tools, 88% have used them for assessments, and 18% have copied AI-generated text directly into their work. Among 16-18 year olds specifically, usage rates climb even higher, with many treating AI as just another study tool. Some use it responsibly; others misuse it.

FE educators, meanwhile, are playing catch-up, experimenting without proper frameworks and sometimes unknowingly breaching GDPR. The numbers are sobering: 74% of UK teachers have received no formal AI training, and 75% of leaders say they lack AI expertise. The government’s training materials encourage educators to use AI to automate routine tasks and reclaim valuable face-to-face time with students. This is undoubtedly positive: who would argue against reducing administrative overload?

But the truth is that many FE practitioners aren’t ready to make these decisions safely.

The Hidden Risk: Data Protection and AI

Let’s be clear: AI tools don’t operate in a vacuum. When an FE lecturer pastes student coursework, apprenticeship assessments, or pastoral notes into a generative AI tool, they risk committing a data breach. Just this week, a US court ordered OpenAI to preserve all chats, including those that would otherwise have been deleted. This is a nightmare for anyone who relies on AI tools for their work: it is a GDPR breach waiting to happen. ChatGPT used to inform users that it would retain their conversations by default for training purposes, although users could opt out. As this week’s court ruling shows, however, data that users thought had been deleted will now be preserved indefinitely. The DfE guidance explicitly warns that personal data shouldn’t be used in AI tools unless absolutely necessary, and only with proper safeguards in place. Are we 100% sure this message has reached FE classrooms and workshops?

The GDPR implications are serious: colleges can be fined up to 4% of their annual turnover. Education was the second-worst-hit sector for data breaches in 2021, with incidents often involving student data. DfE research found that parents and adult learners are deeply concerned about data security: they want clear information about what data is collected, who can access it, and how long it is stored. If FE providers don’t implement robust policies now, well-meaning staff will use AI tools in unlawful ways. The consequences could be severe: reputational damage, legal exposure, and broken trust with learners and their families. The Information Commissioner’s Office (ICO) has already reprimanded educational institutions over data protection failures involving new technologies.

Guidance Is Welcome, But FE Implementation Gaps Remain

The DfE’s guidance is comprehensive: four detailed modules plus a leadership toolkit covering everything from AI basics to complex data protection requirements. It includes frameworks for better outputs, educator case studies, and clear protocols. However, a significant gap remains between comprehensive national guidance and day-to-day implementation in FE. A recent EEF study shows that AI can reduce educator workload by over a third, yet many practitioners lack the confidence to implement it safely.

In FE, we’re dealing with adult learners, apprentices, and mature students who are often more tech-savvy and autonomous than younger pupils. They’re already using AI tools extensively, making the need for clear institutional policies even more urgent. The government’s research reveals the challenge: there are concerns about teaching quality, about overreliance leading to skill loss, and about whether AI suits both vocational and academic pathways. Even with excellent guidance, cultural and practical barriers in FE remain significant. The ICO advises organisations to conduct Data Protection Impact Assessments and to clarify with vendors how personal data will be used for model training. This underlines the need for granular institutional approaches that go beyond national guidance, which is especially crucial for FE providers managing diverse student populations and employer partnerships.

A Call to FE Leadership

What’s needed now is bold, local leadership that translates comprehensive guidance into actionable institutional policies.

Every FE college and sixth form should urgently develop internal AI policies covering:

  • Acceptable use: Which tools are approved for staff and student use, and which are not?
  • Data handling: What learner information can and cannot be shared with external platforms?
  • Transparency: How will students and employers know when AI is used in assessments or communication?
  • Oversight: Who audits and ensures quality AI use across vocational and academic programmes?

The DfE framework provides the foundation: FE institutions must comply with age restrictions (though most learners are adults), integrate AI considerations into safeguarding policies, and meet product safety expectations. Colleges can choose their own AI use cases, but they must comply with statutory obligations and maintain employer confidence in qualifications.

This can’t be left to individual lecturers or department heads. The stakes are too high, the pressure on staff too great, and employer expectations too important. Institutional policies, shaped by digital leads, safeguarding teams, curriculum managers, and legal experts, are essential.

AI Is Here in FE, Whether We’re Ready or Not

Let’s not wait for perfect conditions. AI is reshaping post-16 education inside and outside our campuses. The DfE research found that “opinions on AI tools are not yet fixed” and that trust levels fluctuate as people encounter new information. This suggests that proactive engagement and transparent implementation will be crucial for building confidence among learners, staff, and employer partners.

FE leaders can’t sit back while individual practitioners navigate this alone. Nine times out of ten, the pace of change outstrips policy updates. This time, we must build the plane while flying it, ensuring the wings are secure and using the comprehensive guidance now available as our engineering manual. Getting this right won’t just protect FE institutions from risk: it will model digital responsibility for learners entering an AI-transformed workforce through our vocational programmes and apprenticeships. Teaching them wise tool use starts with using tools wisely ourselves. The time for caution has passed.

But the time for care, informed by comprehensive guidance and robust institutional frameworks tailored to the unique needs of FE, is right now!

By Danny O’Meara, Operations Manager, FE News

