Global Tech Policy Tracker — 2026-04-17
The European Commission moved this week to assess whether OpenAI's ChatGPT should be classified as a "very large online search engine" under the Digital Services Act, a determination that would trigger significantly stricter oversight obligations — marking the most concrete EU enforcement signal yet against a major AI platform. Separately, Brookings researchers published a fresh warning that the Trump administration's federal preemption push is chilling state-level AI regulation in criminal justice, while a new academic comparative study mapped three distinct governance trajectories for the EU, US, and China as of early 2026.
Top Story
EU Weighs Classifying ChatGPT as a Very Large Online Search Engine Under DSA
The European Commission confirmed on Friday, April 10, that it is formally analyzing whether OpenAI's ChatGPT should be designated as a "very large online search engine" (VLOSE) under the Digital Services Act, following reports that ChatGPT has surpassed the DSA's threshold of 45 million average monthly active users in the EU, the level at which the law's enhanced obligations kick in.

If designated, OpenAI would face a set of obligations far more stringent than those applying to ordinary AI services: algorithmic transparency requirements, mandatory risk assessments, audits by external parties, real-time data access for researchers, and interoperability mandates — all enforced by the Commission with fines up to 6% of global annual turnover for non-compliance, and up to 1% for providing incorrect information. Unlike the EU AI Act's provisions, the DSA framework is already in full force, making this a potentially faster-moving enforcement lever than the AI Act's still-rolling implementation timeline.
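The two fine tiers can be made concrete with a short sketch. The percentages (6% for non-compliance, 1% for incorrect information) come from the DSA as described above; the turnover figure below is a hypothetical placeholder, not OpenAI's actual revenue.

```python
def dsa_fine_caps(global_annual_turnover_eur: float) -> dict:
    """Illustrative DSA fine ceilings: up to 6% of global annual
    turnover for non-compliance, up to 1% for supplying incorrect
    information to the Commission."""
    return {
        "non_compliance_cap": 0.06 * global_annual_turnover_eur,
        "incorrect_information_cap": 0.01 * global_annual_turnover_eur,
    }

# Hypothetical company with €10B global annual turnover:
caps = dsa_fine_caps(10_000_000_000)
```

For a company of that (assumed) size, the exposure ceiling would be €600M for non-compliance and €100M for incorrect information, which is why designation alone reshapes compliance budgeting.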
The Commission's analysis was first reported by Germany's Handelsblatt and confirmed by EU officials. The move signals a broadening of the EU's regulatory toolkit for AI: while the AI Act addresses AI-specific risks, the DSA's VLOSE designation targets systemic societal risks posed by platforms with massive reach — including misinformation, manipulation of public discourse, and effects on elections. ChatGPT's conversational search functionality, which millions now use as a primary information-retrieval tool, has brought it into direct competition with traditional search engines both functionally and in regulatory terms.
This development arrives as the broader EU regulatory environment for AI remains in flux. The European Parliament delayed implementation of some high-risk AI Act provisions in early 2026, and the Commission has proposed "simplification" measures to ease compliance burdens — moves that critics including Amnesty International have called a capitulation to Big Tech lobbying. The DSA route, by contrast, requires no new legislation and no new deadlines: it can be triggered immediately upon a formal designation decision.
New Legislation & Regulatory Actions
EU: New Research Finds AI Act Fails to Cover Extraterritorial Human Rights Harms
- What happened: A new study published this week by Global Voices and affiliated researchers finds that EU AI rules have little accountability mechanism for the human rights impacts EU-developed AI systems have outside EU borders — particularly in conflict zones and authoritarian contexts where civilian and military uses blur.
- Who it affects: EU-based AI developers deploying systems internationally; human rights organizations; governments and communities in affected regions; and EU policymakers shaping enforcement guidance.
- Status: Research/advocacy — not yet a formal regulatory action, but feeding into ongoing EU AI Act implementation debate and calls for extraterritorial enforcement provisions.
- Why it matters: The finding highlights a structural gap: while the EU AI Act is praised for rights-based framing, its enforcement stops at EU borders. As EU AI products penetrate global markets — including conflict zones — this gap could enable harms that domestic rules are explicitly designed to prevent.
United States: Brookings Warns Federal Preemption Is Chilling State AI Regulation in Criminal Justice
- What happened: The Brookings Institution published analysis (posted April 16–17) arguing that states should — and legally can — set guardrails around AI use in criminal justice, but that President Trump's December 2025 Executive Order on AI (EO 14365), which pressures states to align with federal AI policy as a condition of receiving federal funding, is chilling those efforts.
- Who it affects: State legislatures, criminal justice agencies, defendants whose cases involve AI-based risk-assessment tools, and civil liberties advocates.
- Status: Ongoing policy debate; no new legislation enacted. The Trump administration's National Policy Framework for AI (released March 20) includes legislative recommendations to Congress that would further entrench federal preemption.
- Why it matters: Criminal justice AI — bail algorithms, recidivism risk scores, facial recognition in investigations — affects liberty interests. If federal preemption prevents states from regulating these tools, civil rights protections could be weakened. Brookings argues this creates a governance vacuum at exactly the moment these tools are proliferating.
United States: White House AI Framework Analyzed for Enforcement Shift
- What happened: Legal analysts at JDSupra and Baker Botts published detailed analyses this week of the White House's March 20 National Policy Framework for AI, flagging a notable new dimension: the framework explicitly recommends that Congress create federal enforcement authority for AI, rather than relying solely on the patchwork of existing agency authorities (FTC, CFPB, etc.).
- Who it affects: AI developers and deployers across all sectors; state regulators; civil society.
- Status: Legislative recommendations only — not enacted. Congress would need to act. Timeline uncertain.
- Why it matters: Previous US AI governance debates focused on whether to regulate at all; this framework presupposes regulation and debates where enforcement authority should sit. The shift toward recommending a dedicated federal enforcement structure is significant for companies planning compliance strategies.
Finland/Academic: New Comparative Study Maps EU, US, China AI Governance Divergence
- What happened: Researchers at the University of Turku (Finland) published a comparative analysis this week examining how the EU, US, and China each govern AI — covering the period 2025 through early 2026 — and identifying three fundamentally different regulatory philosophies now taking shape.
- Who it affects: Multinational companies operating across all three jurisdictions; policy designers; researchers.
- Status: Academic publication (Part I of a multi-part study). No regulatory force, but informs practitioner understanding.
- Why it matters: For companies operating globally, the divergence between the EU's rights-based precautionary approach, the US's innovation-first federal preemption model, and China's state-aligned control model creates compounding compliance complexity. This study provides one of the first systematic frameworks for navigating all three simultaneously.

Enforcement & Penalties
- European Commission → OpenAI (ChatGPT): The Commission announced it is analyzing whether ChatGPT should be classified as a Very Large Online Search Engine (VLOSE) under the DSA. If designated, OpenAI would face obligations including algorithmic transparency, risk assessments, researcher data access, and audits — backed by fines of up to 6% of global annual turnover. This is a formal regulatory process, not yet a penalty, but the first concrete move to apply the DSA's strictest tier to a generative AI product. No precedent exists for this exact designation; if completed, it would be the first VLOSE designation for a conversational AI.
- Italy → AI Providers (Legislative): Per analysis published this week, Italy's Legislative Decree 132/2025 (which entered into force October 10, 2025) establishes administrative fines of up to €774,685 for certain AI violations, and disqualifying measures for up to one year in serious cases — making Italy one of the first EU member states to transpose AI Act obligations into enforceable national law. While not a new development this week, practitioners flagged this in compliance updates published after April 10.
Industry Response
- AI Recruitment/HR Tech Industry: Industry observers flagged this week that EU AI Act classification of hiring tools as "high-risk" (which took effect in earlier rounds) is now being tested in practice, as a landmark AI bias lawsuit advances in a European jurisdiction. HR technology vendors are reportedly accelerating conformity assessment work and bias audits to avoid being caught in a first wave of enforcement. The April 15 AI Recruitment Regulation Digest noted that firms lagging on transparency documentation face immediate exposure.

- Google (UK Competition Compliance): Following UK Competition and Markets Authority pressure reported in March, Google has been developing new search controls that would allow websites to specifically opt out of having their content used in Google's generative AI features — a direct industry response to regulatory concern about AI-driven search disrupting the web's content economy. While this development dates from late March, its compliance implications are being actively worked through by publishers and web operators this week.
- Enterprise Compliance Community: The broader enterprise community is tracking a confluence of 2026 deadlines. Analysts at Kiteworks (January 2026) and DigitalTrainingJet (April 2026) both identified the current window as a critical compliance inflection point — with EU AI Act high-risk requirements, state AG crackdowns in the US, and DSA enforcement converging simultaneously. Companies in financial services are specifically flagging "AI washing" (falsely claiming AI capabilities) as a new SEC-linked risk.
Region Scorecard
| Region | Activity Level | Key Development | Trend |
|---|---|---|---|
| US | 🔴 High | White House AI Framework analyzed; Brookings warns preemption chills state criminal-justice regulation | ↑ |
| EU | 🔴 High | Commission assessing ChatGPT for VLOSE status under DSA; Italy fine framework live | ↑ |
| UK | 🟡 Medium | Google developing AI opt-out for search in response to CMA concerns | → |
| China | 🟢 Low | No significant new developments in coverage window; Turku study maps China's state-aligned AI model | → |
| Other | 🟡 Medium | Academic research highlights extraterritorial gaps in EU AI rules affecting Global South; Finland comparative study published | ↑ |
Analysis: What This Means
- For AI product companies (especially those with search-like features): The EU's DSA VLOSE move is a warning shot that the Commission is willing to use existing law — not just the still-rolling AI Act — to impose major obligations on AI products. If you have >45M monthly active users in the EU and your product delivers information retrieval, get legal analysis done now on DSA exposure. Don't wait for formal designation.
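The screening test described above reduces to two questions: reach and function. A minimal self-assessment sketch, using the DSA's 45 million threshold from this briefing (the user figures passed in are hypothetical placeholders, and a "yes" here means "get legal analysis", not "you are designated"):

```python
# DSA VLOSE designation threshold: 45M average monthly active users in the EU
# (roughly 10% of the EU population).
VLOSE_THRESHOLD = 45_000_000

def needs_dsa_review(eu_monthly_active_users: int,
                     delivers_information_retrieval: bool) -> bool:
    """Rough first-pass screen: True means the product likely warrants
    a formal legal review of DSA exposure."""
    return (delivers_information_retrieval
            and eu_monthly_active_users >= VLOSE_THRESHOLD)
```

This is deliberately conservative: actual designation turns on the Commission's formal assessment, not a self-computed flag.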
- For US-based companies operating in multiple states: The federal preemption strategy is not yet law — it is a White House recommendation to Congress. Until Congress acts, state AI laws remain in force. Companies that deprioritized state-level compliance based on anticipated federal preemption may be exposed, especially in criminal justice, hiring, and consumer-facing AI tools.
- For global enterprises: The Turku comparative study underscores that EU, US, and China compliance requirements are not converging — they are diverging. Build jurisdiction-specific compliance lanes rather than seeking a one-size-fits-all approach. The cost of divergence is real but lower than the cost of non-compliance in any major market.
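One way to operationalize "jurisdiction-specific compliance lanes" is to track workstreams per market rather than maintaining a single global checklist. The lane contents below are illustrative shorthand drawn loosely from the regimes described in this briefing, not a legal inventory:

```python
# Hypothetical, non-exhaustive compliance lanes per jurisdiction — the point
# is the structure (separate lanes per market), not the specific entries.
COMPLIANCE_LANES = {
    "EU": ["AI Act risk classification", "DSA exposure review",
           "member-state transposition tracking"],
    "US": ["state AI-law inventory", "federal framework monitoring",
           "sector-specific agency rules"],
    "CN": ["state-aligned content controls", "algorithm registration",
           "data localization"],
}

def workstreams(markets: list[str]) -> list[str]:
    """Union of compliance workstreams for the markets a company serves."""
    tasks = []
    for market in markets:
        tasks.extend(COMPLIANCE_LANES.get(market, []))
    return tasks
```

A company in all three markets carries all nine lanes simultaneously, which is the compounding complexity the Turku study describes.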
- For AI governance officers: Italy's implementation of the AI Act into national law with specific fine levels (€774,685 maximum) is a model other EU member states are likely to follow. Begin mapping which EU member states have transposed AI Act provisions into enforceable national law — enforcement will not wait for full EU-wide uniformity.
What to Watch Next Week
- EU DSA/ChatGPT Designation Process: Watch for any formal Commission statement opening a designation procedure for ChatGPT under the DSA. The analysis phase could be short given the user-number evidence already in hand. A formal procedure opening would trigger a defined timeline for designation.
- US Congressional Response to White House AI Framework: The March 20 framework gave Congress seven policy areas to legislate. Whether any committee holds hearings or marks up legislation in the coming weeks will signal whether federal AI enforcement authority becomes a real near-term prospect.
- EU AI Act High-Risk Deadline Enforcement: The EU AI Act's provisions for high-risk AI systems (including hiring tools, biometric identification, and credit scoring) are now in force. Watch for the first formal enforcement actions or formal complaints filed with national competent authorities across EU member states — the first cases will set important precedents for the entire framework.
This content was collected, curated, and summarized entirely by AI — including how and what to gather. It may contain inaccuracies. Crew does not guarantee the accuracy of any information presented here. Always verify facts on your own before acting on them. Crew assumes no legal liability for any consequences arising from reliance on this content.