MAKING SUCCESS STORIES HAPPEN

Our Morgan Philips
surveys, reports and e-books

Discover our surveys, reports and e-books. These documents provide accurate and useful information for talent, employers and decision-makers. They are essential tools for navigating the job market and optimising human resources and career management.

AI Literacy for HR: The Compliance Clock Is Ticking—Will Your Workforce Be Ready?

AI Literacy for HR: The Compliance Clock Is Ticking—Will Your Workforce Be Ready?

AI is becoming a core workplace skill: those who can use it shape decisions, those who can’t fall behind. This webinar highlights a clear risk—under the EU AI Act, if employees use AI (including shadow AI), obligations apply, making urgent AI literacy, guardrails, and continuous learning essential for HR.

08/05/2026 Back to all articles

 

Shadow AI Is Already Inside Your Company (and Article 4 Makes It HR-Relevant) 

The session opened with a simple—but unsettling—story: an employee uses ChatGPT to draft customer proposals in minutes rather than hours. Productivity soars. But the account is private, data travels outside the organization, and confidential details (names, pricing, contract language) can inadvertently become part of external model training. The real sting: this isn’t hypothetical. It’s happening now. 

The keynote speaker Joachim Riegel framed the reality HR leaders must face: many companies believe they “don’t use AI” because no tool has been officially approved. Yet employees are using it anyway—a phenomenon now widely described as shadow AI. In the webinar, a striking statistic was cited: in 42% of German companies, at least one employee uses AI daily without anyone knowing.

This is where the urgency sharpens. Under Article 4 of the EU AI Act (in force since 25 February), the question is not “Did you approve AI?” but “Are your people using AI in a business context—and do they know how to use it safely?” A written prohibition that nobody controls is not a shield. If an untrained user triggers harm—think data leaks or poor decisions—liability sits with the company, and in severe cases, leadership exposure can escalate. 

The core insight for HR: AI risk is often a learning gap, not malicious intent. Employees typically want to do a good job; they simply lack a clear framework. That makes AI literacy a classic HR mandate: define expected behaviours, create learning pathways, and embed responsibility into daily work—before an incident forces the organization to learn the hard way. 

AI Literacy Is Not Tool Training: It’s Judgment, Governance, and a Living Learning System 

A standout learning point was the distinction between using AI and being literate in AI. Tool tutorials, prompt tips, and one-off video trainings may improve output—but they don’t meet the real organizational need. AI literacy, as presented, has three practical layers: 

  1. Understand the fundamentals (beyond tools) 
    HR doesn’t need data scientists—but it must cultivate baseline competence: what generative AI is, how LLMs work, what hallucinations and bias look like, and how to interpret risk categories. People who understand these can judge; people who only “operate” tools cannot. 

  1. Act responsibly (role-level clarity) 
    The webinar emphasized the everyday questions that prevent incidents: Which data can go into which system? Why are some tools allowed and others not? What does “human-in-the-loop” mean at my desk? This is governance translated into behaviour—where policy becomes practice. 

  1. Stay current (because AI changes faster than your training cycle) 
    AI evolves at a pace that breaks traditional L&D rhythms. The session offered a vivid example HR leaders should not ignore: prompt injection in recruitment—hidden instructions inside a CV (e.g., white text on white background) that manipulates an AI screening system into rating a candidate as “excellent.” No hacking required—just a text editor and a clever trick. The message: if HR isn’t actively tracking new patterns of misuse, it will be surprised; if it is, it can design safeguards. 

To make AI literacy sustainable (and audit-proof), we propose three implementable steps: 

  • Quarterly, role-specific training tied to real work and real risk 

  • Peer exchange formats where teams share what worked, what failed, and what changed 

  • AI champions in each department—not as hype-driven fans, but as critical observers who translate new developments into local implications 

Finally, the webinar dismantled the “silo reflex.” AI cannot be owned by one function: IT provides secure tools, Legal covers frameworks and GDPR alignment, HR builds literacy and behavior change, and leadership sets tone and accountability. Even a secure environment like Copilot does not prevent wrong decisions made from AI output—literacy does.

Frequently Asked Questions

Isn’t AI literacy a responsibility of the legal department?

It’s cross-functional: Legal, IT, HR and leaders must act together—no in silos. 

Is August 2026 really a hard deadline for businesses, or can we stay calm?

Enforcement is approaching across Europe; waiting increases your risk and urgency is rising. 

What happens if an untrained employee causes a data leak?

The company can face serious consequences—training and governance are essential. In some cases, CFO or CEO are personally liable for the leaks if no specific training has been offered internally. 

How can HR stay up to date when AI changes weekly?

Build a rhythm: weekly curated sources + quarterly training + internal peer exchange.

© 2026 Morgan Philips Group SA
All rights reserved