AI firms boosted their spending on federal lobbying in 2024 amid regulatory ambiguity.

Amid regulatory ambiguity, corporate spending on artificial intelligence lobbying at the U.S. federal level rose sharply in 2024 compared to 2023.

According to data gathered by OpenSecrets, the number of companies lobbying on AI rose from 458 in 2023 to 648 in 2024, a roughly 41% year-over-year increase.

Tech giants like Microsoft endorsed bills such as the CREATE AI Act, which would support the evaluation of AI systems developed in the U.S. Other organizations like OpenAI backed the Advancement and Reliability Act, which would establish a dedicated government center for AI research.

The data shows that most AI labs, companies focused primarily on commercializing AI technologies, spent more supporting legislative agendas in 2024 than in the previous year.

OpenAI boosted its lobbying budget from $260,000 in 2023 to $1.76 million in 2024. Its competitor Anthropic also increased its spending, from $280,000 in 2023 to $720,000 in 2024. Similarly, the enterprise-focused startup Cohere raised its expenditure to $230,000 in 2024 from just $70,000 in 2023.

In the past year, both OpenAI and Anthropic made strategic hires to streamline their policymaker outreach. Anthropic recruited its first internal lobbyist, Department of Justice veteran Rachel Appleton, while OpenAI appointed experienced political strategist Chris Lehane as its new VP of policy.

Collectively, OpenAI, Anthropic, and Cohere allocated $2.71 million to federal lobbying in 2024. That's a small sum compared to the $61.5 million spent by the broader tech industry, but still more than a fourfold increase over the $610,000 the three labs spent in 2023.

TechCrunch contacted OpenAI, Anthropic, and Cohere for comment but received no responses by publication time.

2024 was a year of upheaval in domestic AI policy. The Brennan Center reports that congressional lawmakers considered more than 90 AI-related pieces of legislation in the first six months alone, while more than 700 bills were proposed at the state level.

Despite little progress in Congress, state lawmakers moved forward. Tennessee became the first state to protect voice artists from unauthorized AI cloning. Colorado adopted a tiered, risk-based approach to AI policy. California's governor, Gavin Newsom, signed multiple AI-related safety bills, several of which require AI firms to disclose details about their training.

However, no state enacted AI regulations as comprehensive as the EU's AI Act.

After a prolonged struggle with vested interests, Governor Newsom vetoed SB 1047, a bill that would have imposed extensive safety and transparency requirements on AI developers. The Texas Responsible AI Governance Act (TRAIGA), an even broader bill, may meet the same fate as it moves through the statehouse.

Whether the federal government will make more progress on AI legislation this year, or whether it has the will to codify rules at all, remains uncertain. President Donald Trump has signaled an intention to deregulate the industry broadly, removing what he sees as obstacles to U.S. supremacy in AI.

On his first day in office, Trump revoked an executive order by former President Joe Biden that sought to mitigate the risks AI poses to consumers, workers, and national security. More recently, Trump signed an executive order instructing federal agencies to halt certain Biden-era AI policies and programs, possibly including export rules on AI models.

In November, Anthropic urged the federal government to implement “targeted” AI regulation within the next 18 months, cautioning that the opportunity for “proactive risk prevention is closing rapidly.” OpenAI, on the other hand, recently encouraged the U.S. government to take more concrete action on AI and support infrastructure development for the technology.
