BUT WHAT IF I ONLY USE CHATGPT? Update On New AI Literacy Guidelines by EU AI Office

As part of ongoing developments in the EU’s regulation of artificial intelligence, we would like to bring to your attention an update that may affect how your company uses AI tools, including off-the-shelf platforms such as ChatGPT, Gemini, and Claude.
 

A SHORT RECAP [2024]

As part of a broader AI strategy, in 2024 the EU adopted the first-of-its-kind Artificial Intelligence Act (“AIA”), introducing a comprehensive risk-based framework for AI regulation. The AIA imposes varying degrees of legal obligations along the AI supply chain, on natural persons and companies, according to the classification of their engagement with AI systems and tools.

The AIA distinguishes between Providers of general-purpose AI models (e.g., OpenAI, Anthropic, and Google, developers of foundation models such as ChatGPT, Claude, and Gemini) and Deployers, i.e., companies or individuals using those models (see the definitions in Art. 3 of the AIA). While most obligations are imposed on Providers, the recent publication by the newly established EU AI Office also addresses end-users of AI tools, marking a clear shift toward defining expectations for responsible use by non-providers as well.

To support this goal, the AI Office has issued non-binding Guidelines on AI Literacy. These guidelines signal what may become future baseline expectations for responsible AI use—even by those “just using ChatGPT.”

 

RECENT DEVELOPMENT [APRIL 2025]

The guidelines emphasize that AI literacy encompasses not only knowledge but also skills and attitudes, placing importance on ethical use, critical thinking, and transparency in deployment contexts. They provide detailed expectations, clarifying that:

  • AI literacy is expected at all organizational levels, not just among developers; it extends to both casual and professional users of AI tools (e.g., ChatGPT or GitHub Copilot).
  • Competence encompasses understanding AI functionality, recognizing its limitations, and critically evaluating its outputs.
  • Deployers are expected to promote internal awareness of AI-related risks, such as bias, lack of explainability, and data privacy concerns.
  • AI literacy obligations apply to all AI systems, regardless of their risk classification, meaning even low-risk tools like generative AI chatbots are within scope.
  • Training should be tailored to the technical knowledge, experience, and roles of staff, as well as the context in which AI systems are used.
  • Organizations should document their AI literacy initiatives, maintaining records of training activities to demonstrate compliance (the AI Office has published examples from various organizations that can guide the development of effective AI literacy programs).

     

WHAT DOES THIS MEAN FOR YOU? 

The guidelines serve as a proactive measure, encouraging organizations to integrate AI literacy into their governance frameworks (e.g., through an internal AI governance policy) and to prepare for future enforcement actions.

Even if your company does not develop AI models, these guidelines reflect rising expectations of companies operating in or with the EU. Accordingly, companies should consider:

  • Implementing training and ensuring that employees, contractors, and service providers have the technical skills, knowledge, and abilities to use AI. Where AI tools are used to provide services, such third parties should be contractually obligated to meet AI literacy requirements. All employees and contractors should be made aware of the company’s AI governance policy, complete any required training, and understand the risks of using AI tools. The EU Commission offers various learning kits and training sessions.
  • Analyzing the company’s role as a provider or deployer of AI systems, as well as the risk classification of the systems it provides and/or deploys, and adapting its AI literacy approach accordingly. For example, if an organization’s AI systems are high-risk under Chapter III of the AIA, additional measures may be needed to ensure employees know how to handle those systems and how to avoid and/or mitigate their risks.
  • Ensuring a general understanding of AI within the organization: what AI is, how it works, how it is used in the organization, and its opportunities versus its risks. Employees should know what to consider when working with AI, which risks to be aware of, and which mitigation measures apply.
  • Updating internal policies to allocate AI-related responsibilities by user type (developer, deployer, general user) and creating an AI governance policy that reflects the requirements of the AIA.
  • Reviewing use cases of tools like ChatGPT to ensure staff understand their capabilities, limitations, and compliance implications, and confirming that the team using each AI tool has the education and technical skills to manage it, in line with the tool’s risk classification.


This development is part of a broader regulatory shift where AI use — not just development — is increasingly subject to scrutiny, risk management, and accountability standards. Aligning early with the AI Literacy Guidelines can support compliance, risk strategy, and stakeholder trust.

Our AI and Tech Regulation team is available to assist on the full range of issues addressed by the EU AI Office, including AI readiness assessments, compliance roadmaps, training program design, governance policy drafting, and regulatory engagement.
