California's Trojan Horse AI Kill Bill Must Be Stopped
A Dangerously Delusional Bill, Crafted by an Anti-AI Doomsday Cult, Advances in the California Senate and Must Be Stopped Before It's Too Late
I call on the California Assembly to vote no on SB1047, and on Governor Newsom to veto this insidious and pernicious bill if it does pass, because it is all too likely to destroy California's fantastic history of technological innovation.
This is not an AI safety bill. It is a Trojan horse.
It's designed to look simple and measured, but its real purpose is to give a small, fringe group of anti-AI extremists a kill switch over advanced AI, while throwing a monkey wrench into California's tech industry.
The reason is clear. It was designed by a small group of non-profits who believe AI will cause the end of the world. Dan Hendrycks, co-founder of the Center for AI Safety, one of the sponsors of the bill, believes with 80% certainty that AI will end all life on the planet. If you start with that dangerously delusional premise, you can't possibly craft a sound, sane, innovation-protecting bill. You can only craft a bill designed to dramatically slow down or destroy the growth of AI altogether.
California is faced with a clear choice:
Either accept the dystopian sci-fi hallucinations of this small fringe movement, some of whose members have begun openly calling for violent terrorism against AI labs, or bet on California and its unique ability to bring together the best and brightest in the tech world to build the next generation of software.
It's really as simple as that.
Since any bill that starts from the delusional premise of AI doomsday is fatally flawed by design, I urge lawmakers to start from scratch with a clean, clear bill designed to protect California's prosperity and its world-leading tech industry. Consider starting with this draft, SB1048, written by pro-California and pro-innovation leaders in the community.
Passing the fatally flawed SB1047 means passing the torch of the future to other states and other countries.
This is not a prediction. It's just basic cause and effect.
If a model company can choose between two states, one with a high and unreasonable legal burden and one without, which will it choose?
Governor Newsom has outright said, "I don't want to cede this space to other states or other countries."
That is exactly what the bill will do, Governor, unless you veto it or push back on it swiftly.
It's worse than that, too. Not only will you cede this territory to other states, you risk ceding it to non-western dictatorships that do not share our values and that are racing ahead with AI.
China already has advanced video generation models to rival OpenAI's, a self-driving train and robo-taxis in production, Alibaba City Brain deployed to dozens of cities to control traffic and prevent accidents, and advanced military AI.
We cannot afford to cede the future of AI to anti-democratic regimes.
Unfortunately, the bill has continued to move forward, despite the best efforts of the AI business community and more rational voices to push for much-needed changes. Senator Wiener says he is open to changes, but the changes his team made are insubstantial.
The core of the bill is still utterly broken, and when the core is broken it doesn't matter what else you do. Minor changes to the rest of it don't affect the essence. You can put lipstick on a pig, but it's still a pig.
The bill makes model manufacturers certify that nothing bad can ever be done with their models in the future. It holds them responsible for everything that goes wrong forever. This is like making Ford responsible for every drunk driver.
Regulate driving. Hold the drunk driver responsible, not the car maker.
Anti-AI advocates like to say that opponents of this bill hate any and all legislation, but that's utter nonsense and they know it. Senator Wiener even said, "Silicon Valley doesn't like regulation." No, we don't like this bill in particular. We are all for speed limits on the road. That regulates the user of the technology, the car, not its maker.
We are saying very clearly:
Regulate the use case.
Don't hold the models responsible for everyone else's crimes. Don't punish Microsoft because someone at Enron used Excel to defraud investors. Punish the cheats at Enron.
This is not very complicated. It works in all aspects of law and it has for thousands of years.
So how did we get here? Why has this bill gotten so far?
One of the worst reasons that I've heard from advocates of premature legislation is this persistent catchphrase:
"Look at social media. It's out of control. We should have gotten ahead of it. We've got to get ahead of AI. We've got to do something now."
This is broken reasoning. No regulator could have predicted social media's impact on the world three decades ago. When Representatives Cox and Wyden were crafting the Section 230 amendment to the Communications Decency Act in 1996, the number one question from many representatives at the time was "What is the Internet?"
This idea that we can "get ahead of it" is an insidiously seductive message. We all like to believe in our own ability to perfectly predict the future. The problem is we're all just hopelessly bad at it. And the farther out we go, the more impossible it gets.
With all due respect to the Assembly members and Governor Newsom and their respective accomplishments, there is no chance that you or anyone else on the planet can accurately predict what will happen in ten or twenty or thirty years. This is not an indictment of your intelligence or your skills in other areas of life. It's just a simple fact.
The future is trillions and trillions of variables all changing at the same time. We might be able to predict small, close up events but the farther you get into the future, the more things change.
It's an illusion to believe that the Senators and Congresspeople of the 1990s could have foreseen remote teleworking, trillions of dollars in e-commerce transactions, the end of bookstores, virtual reality, cryptocurrency, the darknet, massively multiplayer games, and social media for all its goods and ills. In the Wright Brothers' era, nobody on Earth, in Congress or anywhere else, was capable of writing the safety requirements for the Boeing 787 Dreamliner. You cannot design meaningful safety standards in 1904, when Wilbur was flying a glorified kite of fabric and wood for a few circles around Kitty Hawk, for how a commercial jet will function in 2024.
If we can't predict the future, what do we do? Are we doomed?
No. We do what we've always done. We watch and observe, and we address concrete, real-world problems as they happen. For instance, where life and death are involved, we should hold technology creators to a higher standard. If you are making a self-driving car, which can kill or injure people when things go wrong, you should be held to a higher standard.
The only way to properly regulate is to watch as the complex evolution of society and technology plays out over time. You see it develop for good and bad, because it is always both, and you try to reduce the ills and incentivize the good. That is society and law in a nutshell: punish the bad, incentivize the good.
As a community, we feel frontier model companies should be publicly transparent about safety measures and realistic safety efforts like red teaming, which is designed to address problems with models now. But we do not agree that model makers should, under penalty of perjury, sign a statement saying their model can never be used for anything bad. This is asking for an impossibility.
It's asking people to perfectly predict the future and certify that their model can never be used for any ill purpose. Nobody can do this, and it is likely already unworkable under existing contract law. You cannot ask someone to sign a document that says they have to flap their arms real fast and fly. If you do ask someone to sign it, they can easily argue in court that it is "impossible to comply," and that provision is now unenforceable.
As the California legal team at Stimmel Law writes on their blog:
"A party can invoke impossibility and argue that it did not perform its contractual obligations because it was impossible for it to do so."
It is absolutely impossible for a model creator to certify that there is no possible way anything can go wrong with a model in the future. As we already noted, nobody can predict the future perfectly. Nobody is an all-knowing oracle, including me and anyone else who tries to predict the future. It is impossible to testify to future events that haven't happened yet.
If the law does pass, every model manufacturer will likely immediately take the state to court on these grounds and win.
Again, this is designed to look like an AI "safety" bill, but its real goal is to make sure that no advanced AI ever develops at all, and that if it does, a few self-appointed overseers from a fringe movement can kill it.
If your goal is to ensure AI doesn't develop in California, America, or other democratic countries, then you're on the right track, because the sponsors of the bill wrote it to do just that.
But if that's not your goal, then I urge you to go back to the drawing board as quickly as possible.
If nobody can predict the future perfectly, what can we predict with confidence?
If this bill passes, the next technology revolution will be outside of California.
If that's what you want, Governor and Assembly members, you're making great strides.
If not, there's still time to change course.