The latest on global efforts to regulate AI (or not) from our sister agency, The Glen Echo Group
Despite AI’s rapid global growth, governments, policymakers, and businesses remain stalled on AI regulation. While stakeholders disagree on how, or even whether, to regulate this evolving technology, AI innovation continues to attract investment and users.
This global divide was evident at the Artificial Intelligence Action Summit in Paris in February. While many nations, including France, India, and China, signed the International AI Action Statement, the US and UK declined.
Tracking AI regulatory and governance debates worldwide is exhausting. Luckily, that’s what we at the Glen Echo Group do for a living. A critical part of our work involves monitoring and analyzing emerging technology regulations, predicting their direction, and mobilizing organizations to take effective action. Today, we’ll break down the latest developments in AI regulation and what they mean for tech companies and their users.
For now, Europe leads the charge on AI regulation, having passed the first comprehensive AI law in 2024, one notable for both its scope and complexity. The EU AI Act builds on existing data privacy frameworks, enabling self-certification and government oversight of high-risk AI systems and establishing transparency requirements. The law has caught the attention of global business leaders because it applies to companies that develop, provide, or use AI systems, even those located outside the EU. Non-compliance can bring significant fines, and compliance is made harder by the fact that many other jurisdictions, including Canada, Singapore, Australia, and Brazil, have passed AI governance frameworks of their own or are in the process of drafting them.
It’s worth noting that despite leading the regulatory charge, the EU has not been a leader in developing the technology. French President Emmanuel Macron has signaled that regulation might not be the key to unlocking innovation, warning at the Summit that “it is very clear that we have to resynchronize with the rest of the world.” U.S. Vice President JD Vance went a step further in his remarks, saying that overregulating AI will “kill a transformative industry just as it’s taking off.”
To that end, the Trump administration has already embraced AI investment and deregulation, declaring in a recent Executive Order that AI policy will seek “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness and national security.” Currently, there is no comprehensive AI legislation under consideration in Congress, and the Trump administration is collecting public feedback for its AI Action Plan, which seeks to, among other things, “suspend, revise, or rescind” much of the previous administration’s Executive Order.
This sets the tone for the current 119th Congress to focus on bills that promote domestic AI development and U.S. competitiveness against China, like the recently introduced proposal to ban DeepSeek on U.S. government devices. The U.S. push for AI dominance raises a key question for Trust & Safety (T&S) experts: How does prioritizing rapid development impact safety?
Too much deregulation risks sidelining crucial safeguards, from digital watermarks for AI-generated content to diverse and accurate training data, both of which are essential to AI integrity. Many T&S professionals are currently embedded within technology companies, working to build platforms that better protect their users and reduce the spread of harmful content. AI needs this expertise, as the technology has the potential to spread hateful, dangerous, or fraudulent content at an unprecedented scale. The risks are wide-ranging: AI could be used to generate Child Sexual Abuse Material (CSAM), produce realistic deepfakes that spread misinformation, or turbocharge cyberbullying via automated bots. That is why it’s crucial for T&S teams to identify and mitigate these dangers and to keep the conversation going about the role regulation plays in AI safety.
Unsurprisingly, the media is covering AI regulation closely as it develops, since AI’s impact will be felt in nearly every industry, including journalism itself. Many news organizations have inked content licensing deals with major AI companies like OpenAI, though some, like The New York Times, have taken a different tack and sued the companies behind AI models for copyright infringement.
As I’ve said before, the hype around AI is not an honest educator: AI is neither our savior nor our downfall, but another technological advance that will be only as good as we make it.
If you’d like to learn more about how AI regulation could impact your business, or how we can help you develop and amplify AI resources for a variety of audiences, please contact our team.