    New AI Code: Voluntary Guidelines or a Breakthrough Test for EU Law?

    In early August 2025, the EU introduced its first voluntary Code of Practice for general-purpose AI (the “AI Code”), intended as a bridge until the formal AI Act takes full effect. According to the European Commission, the Code is a voluntary set of principles for AI developers – especially makers of advanced models such as ChatGPT or image generators – designed to ensure that AI is developed “in a safe, responsible way and in line with European values”. In practice, the Code lays out guidelines on transparency, copyright and safety to help companies prepare for the new AI Act. (The AI Act itself – the EU’s binding AI law – will impose formal rules and fines for high-risk AI, with key provisions for general-purpose AI coming into force in August 2025.)

    Key Provisions of the AI Code

    The voluntary AI Code focuses on three main areas – transparency, copyright respect and safety – with concrete commitments for model developers. Its requirements include:

    • Transparency and documentation: Companies must fully document the data and design choices used to train and build their AI models, and even report metrics like energy consumption during training and operation.
    • Copyright compliance: Developers commit not to train AI on illegally obtained or pirated content. They must respect copyright restrictions – for example, honoring paywalls and robots.txt exclusions – so that copyrighted works are not used without permission (a minimal robots.txt check is sketched after this list).
    • Safety measures and incident reporting: Firms must establish processes for handling cases where an AI system produces harmful or infringing content. In particular, major incident reports (such as security or bias issues) must be passed on promptly to the new EU AI regulator – within about 5–10 days of discovery (a deadline sketch follows the list). Regular independent audits by external experts are also encouraged to verify model safety and compliance.
    • Identification of AI-generated content: The Code calls for mechanisms that help users recognize AI-generated text, images or video. For example, outputs could carry digital watermarks or clear labels indicating they were produced or significantly altered by AI (see the labeling sketch after this list). This “traceability” measure aims to fight disinformation by making automated content transparent to the public.
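
    To make the copyright point concrete, here is a minimal Python sketch of the kind of robots.txt check the Code envisions, using only the standard library. The crawler name and URL are hypothetical, not taken from the Code itself.

        from urllib.parse import urlsplit
        from urllib.robotparser import RobotFileParser

        def may_crawl(url: str, user_agent: str = "example-training-crawler") -> bool:
            """Return True only if the site's robots.txt permits fetching this URL."""
            parts = urlsplit(url)
            rp = RobotFileParser()
            rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
            rp.read()  # fetch and parse the site's robots.txt
            return rp.can_fetch(user_agent, url)

        # Skip any page whose publisher has opted out of crawling.
        if may_crawl("https://example.com/articles/some-page"):
            pass  # fetch the page and add it to the training corpus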
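
    The reporting window itself is simple date arithmetic. A small sketch, assuming the stricter five-day bound of the roughly 5–10 day range cited above (the constant and function names are illustrative):

        from datetime import date, timedelta

        REPORTING_WINDOW_DAYS = 5  # stricter end of the roughly 5-10 day range cited above

        def reporting_deadline(discovered_on: date) -> date:
            """Latest date by which a serious incident report should reach the regulator."""
            return discovered_on + timedelta(days=REPORTING_WINDOW_DAYS)

        print(reporting_deadline(date(2025, 9, 1)))  # -> 2025-09-06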
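
    On labeling, the article does not specify a technical format. As one illustration, a machine-readable disclosure can be embedded in a PNG’s text metadata with the Pillow library; the key names and values below are assumptions, not a mandated scheme.

        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        def label_as_ai_generated(src_path: str, dst_path: str) -> None:
            """Embed a simple machine-readable disclosure in a PNG's metadata."""
            meta = PngInfo()
            meta.add_text("ai_generated", "true")           # hypothetical key
            meta.add_text("generator", "example-model-v1")  # illustrative value
            with Image.open(src_path) as img:
                img.save(dst_path, pnginfo=meta)

        label_as_ai_generated("output.png", "output_labeled.png")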

    Each signatory pledges to adopt these practices even though the Code itself has no legal force. In effect, it serves as a preparatory framework so that when the AI Act’s rules apply, companies will already have internal compliance mechanisms in place.

    Industry Response and Signatories

    Reactions to the Code have been mixed. On one hand, dozens of prominent tech firms have signed on. As Spider’s Web reports, about 26 companies – including U.S. giants like Google, Microsoft, Amazon and IBM – committed to the Code’s three pillars (transparency, copyright and safety). AI specialists such as OpenAI and Anthropic (backed by Amazon) also joined, as did European startups like Mistral AI and Aleph Alpha, signaling that EU and U.S. players alike are participating. By signing, these organizations agree to be more transparent about their models, use only legally obtained training data, and conduct robust safety testing. EU officials note that firms that sign gain an “administrative advantage” – the Commission will treat them as cooperative partners rather than launching enforcement actions – which in practice can mean more legal certainty and faster market access for new AI features.

    On the other hand, some of the biggest tech players have balked. Meta’s leadership publicly refused to sign: its global policy chief Joel Kaplan called the EU’s approach “impractical and infeasible,” arguing that the Code creates legal uncertainties by going beyond even the scope of the AI Act. Google representatives have expressed similar concerns, characterizing the Code as a “step in the wrong direction” for Europe’s competitiveness. Apple has delayed its own European AI rollout over separate market issues (Digital Markets Act concerns), and Elon Musk’s xAI (maker of the Grok model) signed only the safety chapter of the Code, rejecting the transparency and IP commitments. Industry lobby groups like the CCIA (whose members include Google and Meta) have also openly criticized the voluntary guidelines.

    This split is being watched closely. Some commentators call it a “moment of truth” for the tech industry: signing the Code signals a willingness to play by EU rules, while refusal could invite stricter scrutiny. In fact, media analyses note that even though the Code is non-binding, declining to sign may carry consequences – regulators have hinted that companies that opt out might face more rigorous oversight and enforcement under the forthcoming AI Act. Thus far, major American AI firms like OpenAI have indicated they will comply with the Code, while Chinese AI companies (e.g. Huawei) have largely been absent from the list of signatories.

    Implications for EU AI Law and Policy

    The EU’s new AI Code functions as an early test of how EU law can influence AI development. As Press.pl observes, it is explicitly a “voluntary initiative” within the broader EU AI strategy, meant to set de facto standards until the AI Act is fully enforced. In practice, the Code’s guidelines mirror key elements of the AI Act: transparency about model data, strict conditions on training data, and obligations to ensure safety. By aligning industry practices ahead of time, the Commission hopes to smooth the transition to the binding AI Act regime.

    Crucially, the Code’s approach reflects a phased regulatory philosophy. The formal AI Act (a landmark EU regulation proposed in 2021 and adopted in 2024) will impose mandatory rules and hefty fines (up to 7% of global annual turnover for the most serious violations) for high-risk AI uses starting in 2026, with general-purpose AI covered from August 2025. The Code, taking effect in August 2025, is essentially a run-up to that moment. Business Insider emphasizes that the Code “will help companies comply with the groundbreaking AI Act,” especially on issues like copyright and transparency. In fact, general-purpose AI models (ChatGPT-style systems) already fell under new EU requirements as of August 2025. The voluntary Code was intended to give firms a head start: it helps them build compliance processes for the AI Act’s eventual obligations.

    Summary

    In summary, while the Code has no teeth of its own, it acts as a test of EU regulatory influence. If companies largely adopt its measures, it could validate the EU’s collaborative strategy. If resistance prevails, it could signal tensions ahead – as one trade publication puts it, this is the moment when companies must “play by European rules or risk confrontation with regulators”. The Commission has framed this as necessary for EU values: ensuring that generative AI is transparent, respects copyright, and keeps society safe during this transitional period.