Newsom’s AI Scheme

November 4, 2025

Artificial intelligence has quickly become one of the most-watched fields in the world over the past few years. Many hail the technology as the future of innovation, while others watch warily as AI grows more and more advanced. Recently, the California Legislature passed a regulatory policy aimed at AI safety, which, on its face, seems like a reasonable step toward protecting the public. However, the policy’s timing raises questions about the bill’s true intent. Is it really about technological safety, or is it a gubernatorial political move ahead of a bid for president? That could be a risky game to play, given California’s position in the national economy and the contribution AI makes to that economic dominance.

At the end of September, California Governor Gavin Newsom signed SB 53, also known as the Transparency in Frontier Artificial Intelligence Act, into law. The bill aims to enhance the safety of AI chatbots by installing “commonsense guardrails.” The ultimate goal of those guardrails? Preventing AI from helping a user plan an event that could result in mass death or massive economic consequences. SB 53 specifically defines a catastrophe as an event causing at least $1 billion in damage or more than 50 deaths.

This newly passed legislation is not the first time that some form of AI regulation has crossed Gov. Newsom’s desk. In 2024, SB 1047 was overwhelmingly passed by the California Legislature but vetoed by the governor. SB 1047 attempted to hold companies legally liable for harm caused by AI models and included a mandate for a “kill switch” in the event that models went out of control.

The parallels between SB 1047 and the recently passed legislation are hard to ignore. Yet the governor said “no” a year ago. Perhaps Gov. Newsom has changed his mind on AI regulation following the recent scrutiny AI companies are facing over tragedies linked to their chatbots. These chatbots, ChatGPT being a prime example, are used every day by professionals and amateurs alike for a variety of reasons, from spell-checking emails and aiding in meal preparation to serving as a pseudo-therapist. At a glance, that looks positive. Many people appreciate the fact that bots aren’t human because they offer specific advice without passing judgment. However, artificial intelligence bots aren’t designed to help people in emotional distress. Instead, they are coded to keep users engaged for as long as possible, which generates more monetized data. Young people are the most at risk of being harmed by this model when they go online to seek help.

Experts warn that they are increasingly seeing the negative impacts of AI models being used as mental health resources, including emotional dependence on the models and the amplification of users’ delusions. When presented with suicidal ideation, the bots have reacted in ways that not only discourage users from seeking help but even give them advice on how to follow through. These bots are designed to reinforce a user’s original belief, not to offer insight or expert advice.

Despite the growing concern surrounding the safety of AI, the federal government is attempting to deregulate AI platforms, contending that doing so will aid the economy. President Trump and Republican legislators have spent the last few years attempting to eliminate regulations on AI in hopes of hastening AI innovation, thereby allowing the United States to “win the AI race.” As detailed in the White House’s AI Action Plan, “winning the AI race” will “usher in a new golden age of human flourishing… for the American people.”

The passage of this bill raises eyebrows among Californians who are watching Newsom closely. It is well known that Newsom is going to throw his hat into the ring for the next presidential race. He seems to be gearing up for that race by positioning himself as the antithesis of Trump, often using the media to show how different he is from the current president. The legislation he supports, too, is often called a “response” to Trump, as if the two are using the American political arena as a boxing ring and their punches are legislative packages.

Is SB 53 a genuine response to the growing discourse surrounding the potential dangers of AI? Is the government genuinely worried that AI could help plan an event causing more than 50 deaths? Or has AI just become another piece in the checkers game between Trump and Newsom? I would argue that Newsom is, again, using this piece of legislation to position himself against Trump, and that recent scrutiny of AI and its potential dangers has opened the door for Newsom to claim AI regulation as his issue.

This is a potentially risky line for Governor Newsom to walk. His credibility and influence largely stem from leading the world’s fourth-largest economy, whose contributions are vital to the national economy. Much of California’s economic strength comes from the tech giants headquartered in the Bay Area. By 2030, artificial intelligence is expected to generate more than $400 billion for the state’s economy. Given this, California has a strong incentive to avoid policies that could harm such a lucrative industry. The state’s budget is heavily reliant on taxable income, meaning its fiscal health rises and falls with the financial success of its residents and corporations. With 32 of the world’s 50 largest tech firms, including Google, Apple, and Nvidia, based in California, and more than half of global venture capital for AI startups flowing into California-based companies, Newsom must balance the need for AI regulation against the risk of undercutting the very sector driving the state’s prosperity.

Perhaps one could say Newsom is punching above his weight by bringing AI legislation into the ring. What he sees as seizing a hot-button issue that will make him look good as he gears up to oppose Trump in 2028 might actually come back to haunt him. Already, top AI companies in San Francisco are reportedly discussing leaving the state due to mounting legislative scrutiny.

Newsom may have accidentally lit a match next to the wrong fuse. Although regulating AI amid his battle with Trump might bring him some popularity with those concerned about the power of AI, he is also gambling with California’s economy. At the end of the day, his position on regulating dangerous AI is not going to be what wins him the presidential election. If he drives out the very giants that sustain California’s economy, he risks toppling his own foundation.

Featured Image Source: The Sacramento Bee
