The global struggle over how to regulate AI

In March 2024, a Brazilian senator traveled on a commercial flight to Washington, D.C. Marcos Pontes, a 61-year-old former astronaut, had become a central figure in the effort to regulate artificial intelligence in Brazil — where a draft bill had proposed serious restrictions on the developing technology. Confident, loquacious, and a former minister of science, technology and innovation, Pontes felt he was uniquely qualified among his colleagues to understand the complicated issues surrounding AI.

Pontes had worked with NASA, attended the Naval Postgraduate School in California, and was less skeptical than many other senators about the major U.S. companies that dominate the AI race. “We cannot restrict technology,” he’d said at one of the AI bill’s early hearings, expressing caution about legislating AI tools that are still developing. Joining him in D.C. was Laércio Oliveira, a fellow conservative member of the Senate committee drafting the AI bill. The two were part of a delegation organized by a Brazilian congressional initiative that engages with the private sector. Its purpose: to have a series of meetings about the drafted AI bill with representatives of the U.S. government and Silicon Valley companies.

The drafted bill was one of the most comprehensive to date outside the West. It proposed a new oversight authority on AI, copyright protections for content used to train AI, and protections of individual rights, with anti-discriminatory checks in biometric systems and the right to contest AI decisions with significant human impact. It banned autonomous weapons and tools that could facilitate the creation and distribution of child sexual abuse material, and put stricter oversight on social media algorithms that can amplify disinformation. Global advocates for AI regulation saw Brazil as a potential model for other countries. But Pontes believed the bill could stifle investment and innovation — and saw it, he later told Rest of World, as “based on fear.”

Pontes didn’t name the Americans he met, but social media posts showed the delegation visiting members of the U.S. government, think-tank staffers, and executives from three major AI companies: Amazon, Google, and Microsoft. Pontes said the bill was a focus of the discussions. “We asked them to analyze our legislation,” he said, “and give us some feedback, tell us what they think.”

Three months later, on the day when the bill was expected to be put to a vote, Pontes submitted 12 amendments, and Oliveira another 20 — helping to trigger a delay. Pontes then opened a series of hearings on the bill, saying it needed more public debate. Regulation advocates claimed that Big Tech representatives were allowed undue time and influence on the discussions that followed. Critiques of the bill came in from domestic industry groups, with one warning that it would lead the country into “technological isolation.” A weakened version of the bill ultimately passed the Senate this past December.

Brazil’s AI bill is one window into a global effort to define the role that artificial intelligence will play in democratic societies. Large Silicon Valley companies involved in AI software — including Google, Microsoft, Meta, Amazon Web Services, and OpenAI — have mounted pushback against proposals for comprehensive AI regulation in the EU, Canada, and California.

Hany Farid, former dean of the UC Berkeley School of Information and a prominent regulation advocate who often testifies at government hearings on the tech sector, told Rest of World that lobbying by big U.S. companies over AI in Western nations has been intense. “They are trying to kill every [piece of] legislation or write it in their favor,” he said. “It’s fierce.”

Meanwhile, outside the West, where AI regulations are often more nascent, these same companies have received a red-carpet welcome from many politicians eager for investment. As Aakrit Vaish, an adviser to the Indian government’s AI initiative, told Rest of World: “Regulation is actually not even a conversation.”


A first wave of regulation

“New technology often brings new challenges, and it’s on companies to make sure we build and deploy products responsibly,” Meta CEO Mark Zuckerberg told the U.S. Senate in 2023, making the case for self-regulation around AI. “We’re able to build safeguards into these systems.”

“Self-regulation is important,” OpenAI CEO Sam Altman said during a visit to New Delhi that same year, as hype over ChatGPT was building globally. Though he also cautioned that “the world should not be left entirely in the hands of the companies either, given what we think is the power of this technology.”

AI’s proselytizers say it will revolutionize industries, supercharge scientific research, and make many aspects of life and work more efficient. On the other side of the debate, politicians, tech experts, representatives of industries affected by AI, and civil society advocates argue forcefully for far stricter regulations. They want early implementation of rules around copyright, data protections, and fair labor practices, as well as restrictions on uses that threaten public safety, such as generative deepfakes, the creation of chemical and biological weapons, and cyberattacks.

The most ambitious AI law passed to date, the EU’s 2024 AI Act, offers a template for more restrictive regulation. It bans the use of AI for social scoring purposes, imposes restrictions on the use of AI in criminal profiling, and requires labels on content generated by AI — a move aimed at enhancing transparency and fighting disinformation. It also creates a range of special requirements for developers of AI systems that are categorized as posing a high risk to health, safety, or fundamental rights.

The U.S. has no proposed law for comprehensive AI rules under serious consideration in Congress. At the state level, California has been the most forward-leaning on AI regulation, with its governor recently signing 17 AI bills into law. Their provisions range from protections of the digital likenesses of performers to prohibitions on election-related deepfakes. Canada, meanwhile, is attempting to take a similar approach to the EU, with its governing party proposing to standardize the design, development, and use of AI through the Artificial Intelligence and Data Act (AIDA), which has yet to pass. 

Regulatory efforts in the U.S. and the EU have triggered pushback from tech companies as lobbying activity over AI has spiked. In Canada, executives from Amazon and Microsoft have publicly condemned the legislative effort, calling it vague and burdensome. Meta has suggested it might withhold the launch of certain AI products in the country.

A landmark California bill that would have required safety measures on AI software to guard against rogue applications was vetoed by the governor last year after a campaign by venture capitalists and AI developers that included buying ads and writing newspaper op-eds. OpenAI’s chief strategy officer wrote a public letter warning the bill could “slow the pace of innovation” and lead entrepreneurs to “leave the state in search of greater opportunity elsewhere.”

Tech giants also lobbied aggressively against the EU bill. OpenAI was reportedly successful in arguing to reduce the law’s regulatory burden on the company, and Altman has threatened that OpenAI could leave Europe if he deems regulations too restrictive. Zuckerberg co-authored an op-ed in August describing the EU’s regulatory approach as “complex and incoherent,” warning it could derail both a “once-in-a-generation opportunity” for innovation and the chance to capitalize on the “economic-growth opportunities” of AI.

“We have been clear that we support effective, risk-based regulatory frameworks and guardrails for AI,” an Amazon spokesperson told Rest of World, noting the company has signed voluntary pledges with the White House and the EU for responsible AI development, and has said it welcomes “the overarching goal” of the EU’s AI Act. OpenAI and Google did not respond to multiple requests for comment for this article. A Microsoft spokesperson shared a company publication calling for “balanced efforts to develop laws and regulations” to encourage public trust and AI adoption, as well as “interoperability and consistency” of AI rules across countries. A Meta spokesperson sent a previous company statement on the EU’s AI Act: “It is our priority to ensure that AI is developed and deployed responsibly — with transparency, safety, and accountability at the forefront. We welcome harmonised EU rules. … We also shouldn’t lose sight of AI’s huge potential to foster European innovation and enable competition.”

In many nations outside the West, policy discussions are still developing. Chile is one of the few countries attempting to enact a comprehensive AI law. Its landmark draft bill imitates components of the EU’s risk-focused approach and vows to boost the development of AI while safeguarding democratic principles and human rights. South Korea’s AI Basic Act, passed in December, promotes AI’s role in economic growth while also echoing some of the EU’s ethics, safety, and transparency guardrails.

Other governments have set out policy frameworks that prioritize commercial interests over stringent regulation. Taiwan’s AI Basic Act, for example, states that future legal interpretations concerning AI regulation “should not hinder the development of new AI technologies or the provision of services embedded with AI technology,” according to an analysis by the IAPP, a nonprofit that has tracked AI laws and regulations worldwide. Japan has touted its light approach to AI regulation, prioritizing attracting business and investments. Singapore, another global economic powerhouse striving to become an AI hub, has yet to enact any AI policies, but the government has indicated a preference for targeted, sector-specific rules rather than a more sweeping approach.


“Innovation will happen. Regulation will follow.”

In some non-Western countries, the priority isn’t regulation at all, but rather courting large AI companies to make massive investments. 

India, for example, is home to the world’s largest internet base outside of China and has an extensive history of regulating Big Tech companies. Prime Minister Narendra Modi’s administration has taken Apple, Google, Amazon, and Meta to task over anti-competitive practices, and, in a 2021 Digital Media Ethics Code, laid out several controversial mandates for social media platforms, such as adding traceability to encrypted messages and honoring government takedown requests. With AI, though, the administration has signaled a different attitude, positioning India as a magnet for Silicon Valley funds.

“This is the era of AI, and the future of the world is linked with it,” Modi declared in the fall of 2024, embracing a Big Tech state of mind. When he joined Silicon Valley CEOs at a roundtable in September, he urged them to “co-develop, co-design, and co-produce in India for the world,” according to a government press release. The government has devoted $1.2 billion to an initiative called IndiaAI, designed to build out the country’s AI capabilities. 

Industry sources involved in policy discussions with the government told Rest of World the current climate for Big Tech on AI in India is warm and regulation-free. Large U.S. firms are making their influence felt by channeling AI investment into domestic companies and government projects while deploying AI products around the country. OpenAI has promised to support the IndiaAI initiative by heavily investing in the developer community. Meta has vowed to partner with it to “empower the next generation of innovators … ultimately propelling India to be at the forefront of global AI advancement.” Amazon has earmarked millions of dollars to back Indian AI startups, committed billions more to grow its data center footprint, and forged a multiyear AI collaboration with the government-run Indian Institute of Technology (IIT) Bombay. Microsoft recently committed $3 billion to AI training and to cloud and AI infrastructure. Google is “robustly investing in AI in India,” its CEO has said, “and we look forward to doing more.”

Aakrit Vaish, the adviser to the IndiaAI initiative, told Rest of World regulation isn’t seriously being discussed at the moment in government and industry circles. “A lot of the conversation just in the building is about building AI for India, and everybody wants to be involved in that.”

India’s own AI industry relies on Silicon Valley companies, noted Sangeeta Gupta, who leads strategy at Nasscom, the country’s top IT lobby. Most homegrown AI startups “are building AI models on top of [U.S.-made] platforms rather than building something that is totally, totally ground-up,” Gupta told Rest of World. One of India’s biggest AI startups, Sarvam, for example, builds on Meta’s set of large language models, Llama, and has partnered with Microsoft’s Azure. Google, Amazon, and Microsoft are pouring millions into young Indian AI companies like Sarvam.

“Some of these Big Tech players do influence government policy. And it’s not just here in India. It’s almost across the globe,” Salman Waris, a lawyer focused on regulation and AI who advises the Indian government’s IT ministry, told Rest of World. “Just because they have access, and their sheer size … and the fact that when they try and make an investment, it’s a big advantage to the government.”

The government so far has been open to “a soft-touch approach to AI,” Waris added, but a clearer sense of its mindset should come with a draft of the Digital India Bill expected this year. There has been dissonance in the past, Waris said: the Modi government has publicly signaled a pro-business mentality, but when regulatory proposals are ultimately released, “they seem to be having a very different approach.”

India’s minister of electronics and information technology did not respond to a request for comment. He had said in December that while the government was open to regulating AI, it would take a “lot of consensus,” stressing that India should remain “at the forefront of ethical AI development.”
