SB-1047: The Battle For The Future Of AI
Why this matters
This episode strengthens first-principles understanding of alignment risk and the strategic conditions that shape safe outcomes.
Summary
This conversation examines core safety through SB-1047: The Battle for the Future of AI, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
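As a rough illustration of that tinting rule, here is a minimal Python sketch. The function name `bar_tint`, the endpoint scores, and the RGB values are assumptions for illustration; the page does not publish its actual palette or scale.

```python
def bar_tint(score, lo=-30, hi=7):
    """Map a slice score onto an amber -> cyan -> white strip.

    `lo`, `hi`, and the RGB endpoints are illustrative assumptions,
    not the site's published palette. The score is normalized into
    [0, 1], then interpolated amber -> cyan over the first half of
    the strip and cyan -> white over the second.
    """
    amber, cyan, white = (255, 191, 0), (0, 255, 255), (255, 255, 255)
    t = min(max((score - lo) / (hi - lo), 0.0), 1.0)  # clamp to [0, 1]
    a, b, u = (amber, cyan, t * 2) if t < 0.5 else (cyan, white, (t - 0.5) * 2)
    return tuple(round(a[i] + (b[i] - a[i]) * u) for i in range(3))
```

Under these assumptions, `bar_tint(-30)` returns pure amber, a score at the middle of the range returns cyan, and `bar_tint(7)` returns white.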
Across 27 full-transcript segments: median -6 · mean -6 · spread -30 to +7 (p10–p90: -17 to 0) · 11% risk-forward, 89% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Technical lens. Evidence mode: interview. Confidence: medium.
- Emphasizes alignment
- Emphasizes safety
- Full transcript scored in 27 sequential slices (median slice -6); see the roll-up sketch below.
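The headline stats line above is just a roll-up of those 27 per-slice scores. Here is a minimal sketch of how such a roll-up could be computed; the function name `summarize_slices` and the band cutoffs (`risk_cutoff`, `opp_cutoff`) are hypothetical, since the page does not publish its lexicon thresholds.

```python
from statistics import mean, median, quantiles

def summarize_slices(scores, risk_cutoff=-15, opp_cutoff=5):
    """Roll per-slice spectrum scores up into headline stats.

    The cutoffs are illustrative assumptions: a slice counts as
    risk-forward at or below `risk_cutoff`, opportunity-forward at
    or above `opp_cutoff`, and mixed in between.
    """
    deciles = quantiles(scores, n=10)  # nine cut points: p10 .. p90
    share = lambda pred: round(100 * sum(pred(s) for s in scores) / len(scores))
    return {
        "median": median(scores),
        "mean": round(mean(scores), 1),
        "spread": (min(scores), max(scores)),
        "p10_p90": (deciles[0], deciles[-1]),
        "risk_forward_pct": share(lambda s: s <= risk_cutoff),
        "mixed_pct": share(lambda s: risk_cutoff < s < opp_cutoff),
        "opportunity_forward_pct": share(lambda s: s >= opp_cutoff),
    }
```

Fed the episode's 27 slice scores, a function like this would reproduce the shape of the line above: median, mean, spread, p10–p90, and the three band shares.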
Editor note
A high-leverage addition to the AI Safety Map that clarifies one important safety bottleneck.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video JQ8zhrsLxhI · stored Apr 2, 2026 · 770 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/sb-1047-the-battle-for-the-future-of-ai.json when you have a listen-based summary.
Without our knowledge or consent, a handful of AI companies are quietly shaping our future. We've seen urgent warnings from the SEC chair about AI's potential to trigger financial meltdowns, autonomously writing code and taking independent actions at scale. Generals have said, "If you attack our critical infrastructure, we will have a nuclear response." This is like an act-of-war-tier type of thing, probably in the capacity of models this decade, possibly even in the next two years. The United States does not have any AI regulation on the books. When SB 1047 dropped, it set off an autoimmune response. OpenAI, the latest to oppose a new California bill that would add safety precautions to increasingly powerful models. There's one bill. It's this SB 1047. It's created its own weather system. SB 1047, which is to regulate the explosion of AI into the world, will actually make AI more dangerous, incredibly dangerous, just terrible, quite scary. And the public wants us to do something. You don't want to be heavy-handed too early, but I don't think it's right to do nothing. I was sitting having dinner with my wife one night in early February and uh picked up my phone and saw that a bill from Scott Wiener on AI policy had dropped. Scott Wiener is one of the most powerful people in the California legislature, so a bill from him means more than a lot of other bills. Various really, really smart people started talking to me about the benefits of AI for humanity, but also uh some of the risks. I started working at Encode. We were one of the three co-sponsors of SB 1047, and we're really worried about AI-assisted catastrophic risk like bioweapons or chemical weapons. I'm counsel with the Center for AI Safety. We and the Center and the sponsors, like, wanted to do an ambitious bill that really was trying to, like, tackle some of the thorniest problems. I do research on safeguards and measuring the capabilities of models, measuring how hazardous they are. The VCs were saying, like, "Yep, there'll be cyberattacks on critical infrastructure, but we would prefer to live in that world." But I think World War III would be very bad. These models are so powerful that they may pose a risk of societal-level harm. And what we're asking these companies to do before they put these things out into the world is to test and see if that's true. And if it's true, to take, you know, common-sense precautions. AI could revamp target-selection programs. We know that these risks are real. And if there are ways to put guardrails to make it harder for people to melt down the banking system or create a biological weapon, we should do that. So I introduced Senate Bill 1047. SB 1047 sets out clear standards for developers of these extremely powerful AI models that would be substantially more powerful than any system that exists today. The bill had pre-deployment safety-testing requirements that would apply to the frontier models. What SB 1047 requires large AI companies to do is to create, publish, and follow your own safety and security protocol. It says how you are going to ensure that you take reasonable care so that there aren't catastrophic harms. There was an emergency shutdown provision. This was in the event that a model was about to cause a disaster. Developers had to have the ability to shut it down. It's like: have a safety plan, disclose it publicly, have an external audit of it. This is like the minimum, this is the floor: we should at least have some transparency.
For a lot of the history of the debate, it was mostly extremely online nerds fighting with each other. The debate about SB 1047 was between libertarians who believe that the government should do almost nothing regarding almost everything, and another group of libertarians who believe the government should absolutely do far less than it's doing now in most areas, but that AI is a special exception. There's an ethic in Silicon Valley that's very, like, sovereign citizen: that, like, DC is not going to tell me what to do, Sacramento's not going to tell me what to do. I learned a lot about the politics of artificial intelligence during this process, and um it's a little bit like the Jets and the Sharks from West Side Story. You had the accelerationists and the altruists and these eternal fights, and it's actually a really important argument. You encountered a lot of people who were reacting to the bill as if they had never seen a law. They had never seen a regulation. They had never thought about what it would mean to try and design a regulation. I always knew that this bill was going to be hard. I see you brought refreshment with you. You think you're going to be here a while? Just a little coffee. All right. At the beginning of this process, it was us figuring out: is this something that other senators will understand? Will they be willing to kind of entertain these things that they've never really heard about and never really talked about? And how will they take this? And there are steps to making this happen. The bill has to go through the Senate and then the Assembly, but there are many committees between those two stages. In April, it passed both of its policy committees in the Senate. Very little opposition. SB 1047, Wiener. 11–0. That bill is out. The work became more intense within the legislature, but the outside dialogue about the bill became more and more high-profile, more broad-based, more intense. A few of the big effective accelerationist accounts started tweeting about it negatively, and then it quickly caught fire. Dude, what the hell was that bill? First off, written by this guy who's never built anything in his life, responding to all these fantastical fears of, like, runaway genetic superweapons. The right people made noise about it all at the right time. So, I finally got through. The reaction was a barrage of misinformation, misinterpretation, lying. If you go back and scroll on Twitter, you'll find us getting into some of those Twitter fights and trying to explain to people that they're wrong and, you know, that they're kind of spouting BS. There were three categories of critics. There were people who were just hard no: I'm opposed to this, we shouldn't be regulating AI, and it was so dumb, it would so obviously stifle the golden age of democratized intelligence that's accessible to everyone. Then there were people who had what I would call fake constructive feedback. So they would say things like, why 10 to the 26 FLOPs? Why not 10 to the 27? Why not 10 to the 25? Things that are equivalent to: why is the speed limit 65? Why not 68? You have to pick a number. Fundamentally they were just opposed to any type of regulation, but they couldn't bring themselves to say it. There were warnings that this would destroy the AI industry, that companies would leave California, that this would be devastating, that everything was on the line. Then there were constructive critics who came to us and said, "We have concerns about the bill. Here are some ideas."
And that was frankly a good thing, because from the very beginning, what I've wanted on this bill is for really smart people who love the bill or hate the bill to provide us with constructive feedback. Senate Bill 1047 by Senator Wiener. Senator Wiener, the floor is yours. In response to feedback from key stakeholders in the open-source and startup communities, um, I continue to make a commitment to continue soliciting and incorporating uh constructive feedback. Ayes 32, noes 1. Measure passes. And all of a sudden from there it was like an avalanche, because people are like, what, this passed? The open-source ethos ended up being applied to SB 1047 by the community. We all had the opportunity to kind of adversarially test the bill, just like we might red-team a model. I went on Twitter and offered my thoughts about the bill. In the original version of the bill, any model that met, let's say, GPT-4 level is going to be regulated no matter how much it costs to train, but the cost of compute is getting cheaper. And that means that the bill is going to be covering lots and lots of models over time. It's not going to just be the frontier. Then I made the suggestion that if they did want to just cover the biggest models, maybe they should be tying it to cost. He raised points online that were good points and that we, like, looked at and talked about with, like, the senator's legislative director, being like, it seems like he's raising a fair point here. I want to say it was only a few weeks later that some of those concerns were incorporated into a new set of amendments to the bill. SB 1047 clarifies that developers of these models that meet the bill's threshold uh would currently cost more than $100 million to train. This, like, open process has problems and challenges, and, like, there are things that you don't like about it. But I do think that it is just, like, really valuable: instead of catching things after they're in law, when it's much harder to change, you catch them in the course of that open process. To me, it's not about polling. It's not about just winning the fight. I want to get this right, and I am so deeply grateful to the folks who have really worked hard to provide that feedback, including folks in the open-source community. So, a lot of constituent AI founders in this audience. So, I'm eager to hear your questions. Hi, Scott. Jeremy Nixon, founder and CEO of Omniscience. Would you be working to ensure that SB 1047 isn't onerous regulation? My name is Jeremy Nixon. SB 1047 was actually something that really bothered me, because the American companies would no longer be allowed to compete in open source. And that actually is a big part of why open source was a central concern of the community. I think the fundamental challenge with this bill is that if you were just regulating proprietary models, then it's relatively easy. I think the really hard question is: what do you do about open-weight models? We're releasing by far the most advanced open-source model at 405 billion parameters. It's a big deal. It's going to be more affordable to run than similar closed models. And because it's open, you're going to be able to use it to fine-tune, distill, and train your own models of any size that you want.
Meta can do whatever they want to add guardrails to the Llama models and program it to, you know, refuse to tell you how to make a bomb or hack into a computer or something, but then it's pretty easy for somebody else to come along once the weights are available and fine-tune it for a much smaller amount of compute and produce a model that will do the things that it was previously trained not to do. If you're going to have a bill that prevents large models from having these capabilities, you pretty much can't have open-weight models. To the concern regarding if a model enters the open-source realm: what do you say to that potential liability? We made an amendment to the bill to make explicit what we had always intended: that once it's no longer in your possession, you are not responsible for ensuring that it can be shut down. My name is Garrison Lovely, and I was probably the only journalist covering SB 1047 full-time. Up until I started writing about it, there was a narrative that this was a bill that was just being pushed by the companies trying to do regulatory capture. SB 1047 did have an emergency shutdown provision, and this was in the event that a model was about to cause a disaster or was causing a disaster. Developers had to have the ability to shut it down. And so the emergency shutdown provision initially caused a lot of concern. The bill was amended to specifically address that concern and clarify that if you produce an open-weight model, you did not have to have the ability to shut it down. But despite that, the opponents of the bill took advantage of the um ambiguity uh in earlier versions, or just the fact that language changed, to basically make up this bill that is, you know, terrifying and so unreasonable, and then argue against that. The open-source community, the entrepreneurial community, the public sector, academia actually will have our hands tied due to the bill. So in AI policymaking we really have to be careful. This is an early technology. Fei-Fei Li, the quote-unquote godmother of AI, wrote an op-ed in Fortune arguing against SB 1047. And she made a number of arguments, but the biggest one was saying that SB 1047, through the kill switch, would effectively destroy the open-source development community. And this was just not true. It was an extremely dishonest reading of the bill. I don't know exactly where Li got her facts from, but Andreessen Horowitz, probably the largest venture capital firm in the world, were like one of the biggest opponents of the bill. They produced a 12- or 14-page letter signed by their chief legal officer, and they made a bunch of arguments that lined up quite well with the claims made by Fei-Fei Li. Fei-Fei Li has received millions of dollars for her startup from Andreessen Horowitz, which was the lead investor. The idea that they were coordinating behind the scenes to kill SB 1047 is not inconsistent with the public information that we know. My perception is that the venture capitalists thought that the Appropriations Committee was one of the last attempts to stop the bill. They timed um Fei-Fei Li's um op-ed to sort of strike around that time, but still it ultimately passed. The committee has moved bills to the Assembly floor, and this concludes our hearing. We got through that pretty easily. But then that day, Congresswoman Lofgren and other California uh representatives weighed in with a letter to the governor saying, "You should veto SB 1047." Uh, that was sort of shocking. First of all, it is super unprecedented for members of Congress to tell state legislators what to do.
I mean, this is just insane. We got that letter, and it was riddled with inaccuracies and in some cases flat-out wrong. Zoe Lofgren is one of the most influential members of Congress. She represents parts of Silicon Valley, and her daughter works on Google's legal team, which I think presents a conflict of interest, given that Google opposed SB 1047 in multiple letters. So there was a letter from Zoe Lofgren, and then Friday evening we had wrapped up for the week and we were like, the worst thing that has happened, happened. It's all good, you know, all right, we're fine. And then that night the Pelosi letter dropped. I thought her statement was unfortunate. She just said she was against it. She referred to the bill as quote-unquote ill-informed, that it would kill open source, this is regulatory capture, this would crush the AI industry. None of that was true. Pelosi is famously a trader of stocks. Her largest position by far is long Nvidia. So if she was being told by various people that this bill was bad for AI in various ways, she might want to be protective of her investment. Nancy Pelosi's opposition, I think, was the single most important moment for the bill. And in terms of what motivated her, a lot of the earliest opposition to SB 1047 came from Andreessen Horowitz. Their chief legal officer published a long essay opposing the bill, making a number of arguments and claims about the bill not based on true things in the bill. There were misrepresentations. Fei-Fei Li repeats some of these false claims in an op-ed in Fortune. And then those false claims made by Fei-Fei Li are then cited in a letter by Zoe Lofgren. Lofgren then is joined by seven other members of Congress from California citing Fei-Fei Li. And then the day after that letter, Speaker Emerita Nancy Pelosi writes her own statement against SB 1047 very prominently citing Fei-Fei Li. And then this gives credibility to these claims, which ultimately just were not true. Most politicians can be influenced by tech lobbyists. There's a question of degree, but the lowest point was when OpenAI opposed the bill, for me. They didn't need to do that. Microsoft didn't even oppose the bill. OpenAI was set up to be a sort of more safety-conscious organization. Um, but them opposing the bill, we learned a lot of information about how financially conflicted many different players are. There was a Bloomberg article, and somebody from OpenAI implied to the journalists that they would maybe leave the state. This was like one of the biggest arguments or memes about SB 1047: like, oh, this will just cause AI companies to leave California. The main reason this would not be likely to happen is that the bill would apply to any AI developer doing business in California, which is uh the fifth-largest economy in the world. Anything about, you know, oh, we're moving our headquarters out of California, that's just theater. That's just negotiating leverage. It bears no relationship to the actual content of the bill. Anthropic sent a very detailed letter with pages of ideas. We amended the bill significantly based on Anthropic's thoughtful feedback, and Anthropic then came forward and didn't formally endorse the bill but sent a letter basically endorsing the bill. One of the main things that made Anthropic potentially not come out in full support is their need to fundraise from organizations such as Amazon and others.
Amazon lobbied very hard against the bill in the uh last hour, because there were some KYC requirements. They really resisted that. Anthropic submitted their letter, and then you also saw Elon Musk supporting the bill. Elon Musk now coming out in support of a California bill to regulate the development of AI. A stark contrast to other big names, including OpenAI and Meta, both of which have come out against the bill. In his post on X, writing, "This is a tough call and will make some people upset, but all things considered, I think California should probably pass the AI safety bill." Elon has a long history of supporting regulation on AI, and he has a long history of saying that AI is an existential risk and really threatens human extinction. It's not clear whether we'd be able to recover from some of these negative outcomes. In fact, certainly you can construct scenarios where recovery of human civilization does not occur. Elon supporting him was huge, actually. It helped build the kind of coalition where Musk, who is on the right, is willing to support something that this progressive left Democrat is willing to do. And it was very useful for us to be like, look, we do have some industry buy-in. But for the Assembly vote, we were a lot more worried, I think. File item 126, SB 1047. We were worried that maybe we weren't hearing what the lobbyists were doing. Maybe we were missing something. Does it impose an unreasonable liability on AI developers? Tort law is vague. Does the bill ban open-source models because it's impossible for them to comply? The Assembly floor at the end of session is very um chaotic, and there's like a race against time to get things through. That's where, like, a lot of sort of shenanigans start to happen on bills generally. Assemblymember Mathis, you are recognized. It's time that big tech plays by some kind of a rule. Not a lot, but something. We all, like, crowded around the TV as they were voting and were, like, collectively holding our breath. And with that, the clerk will open the roll. Tally the votes. Ayes 41, noes 9. Measure passes. For a bill to pass out of both chambers and head to the governor's desk, that is like a very meaningful step forward in the conversation and a very real thing that happens. And so we felt incredibly optimistic about it when the bill reached the governor's desk. Part of the job was trying to, you know, build into the coalition all the best support we could to convince the governor. Obviously, that was a really significant task. This is a really, really big bill. We're going to need a lot of help. And so the co-sponsors are the ones who, like, kind of coordinate the behind-the-scenes work along with the senator's team, because it's just way too much for them to do by themselves. So today we have two big superfans, myself and Sunny, in conversation with none other than Joseph Gordon-Levitt. We had previously collaborated with SAG-AFTRA on their campaign to ban deepfakes. SAG-AFTRA is the actors' guild. It's one of the most powerful unions in California, and they are the union that represents actors, just a force of very powerful and connected people in Hollywood. How are other actors feeling about this? How are you feeling about this? Let's make sure that the technology that's dominating the world, and it's about to dominate even harder, is something that's benefiting people and not just benefiting the big powerful businessmen.
Joseph Gordon-Levitt posted that video publicly, and it helped a lot. AI companies should have to follow laws, cuz it's coming for everybody. Mr. Governor, please. This is me asking you: do the right thing. Sign this bill for the good of the future. And all of these coming together really convinced other networks to activate in that space and be like, wow, okay, this is something we should care about and something we should be interested in. SB 1047, which is to regulate the explosion of AI into the world. SAG-AFTRA, the actors' union, Hollywood, getting into the SB 1047 debate was a surprise to me, and it felt to me like SAG-AFTRA getting involved really cheapened things, to be honest with you. Nothing to do with protecting actors from being automated away, right? It doesn't protect you from Sora. SB 1047 is about catastrophic risk, and it's not your issue. I love this criticism. Does that mean that you have to have a technical background in anything that you have a policy opinion on? Like, are you a doctor? Like, do you have an opinion on abortion? I mean, if you're not, then you probably shouldn't, right? According to this logic, that is so antithetical to how democracy and advocacy works. We had actors who literally came to us and they were like, this is my understanding of the bill. Is this correct? They're talking about FLOPs. They're talking about auditing in 2027. Like, just, like, such nitty-gritty stuff. And they're like, "No, like, we legitimately want to get this right, and so we don't want to just go out and say something about it." Hey, Governor Newsom, it's Sean Astin. It was a pleasure to meet you in Chicago at the DNC the other day. Senate Bill 1047 is on your desk. I'm sure you know. Please, please, please sign 1047. Sean Astin, who you know played Sam in the Lord of the Rings movies. He has a masters in toy policy. They got what the bill did. They understood it. The bill was pretty simple. SB 1047. The depth of our coalition was one of the highlights of this bill. You have Hollywood actors, lab employees, Nobel Prize-winning academics. Um, so I felt like that spoke to the strength of the bill. Once we explained what the bill was going to do, people got it. This is powerful technology, and we have seen before, with other instances, that just leaving it up to the people who stand to benefit from it, that's not such a good plan. We need to be creating some appropriate guardrails, and the "we" here is our elected officials. That's how we do things in a democracy. Welcome, Governor. Thank you. Thank you. What's been the most difficult uh thing that you've had to decide on? I mean, look, I think there's one bill. It's this SB 1047. It's created its own weather system. A lot of people have feelings about it because of the sort of outsized impact that legislation could have, and the chilling effect, particularly in the open-source community, that legislation could have. The governor of California vetoing a sweeping AI safety bill. Governor Newsom in his veto message said that he agrees that the risk of AI is real and needs to be mitigated. However, he does not agree with the specifics. Obviously, the first initial reaction is like heavy, heavy disappointment. I was just angry, sad, disappointed. You do all this work, and ultimately it comes down to one person's decision, and then he gives an explanation that didn't actually make a lot of sense.
Newsom rejected the bill in its current form, saying that it applied only to the largest and the most expensive AI models, those that cost at least $100 million to train. The governor's statement that the bill did too much and that the bill didn't do enough, I thought, was a completely incoherent and absurd thing to include in a veto message. I'm just being perfectly honest. Is there anything you could have done differently there? Doing what differently? We announced it. We had a national media outlet. Like, um, change the bill enough before announcing it such that the key players would be happy about it from the get-go? Like, I don't think intrinsically they're going to be happy about it, because it's not for them. It's for the public. When you saw the decision from Gavin Newsom, you saw it on Twitter, how did you react to the veto? I immediately announced a massive veto party. Nothing could stop me. I'm on the way. Oh, it's a party. Sorry. I was literally cranking away, and then I saw the feed and I was like, "Oh, OMG." Yes. To regulate something before it's had a chance to even be understood is to just give in to fear. I'm super happy again, right? I was kind of expecting Governor Newsom to do the veto. Of course, you never really know until it actually happens, but I was pretty confident he was going to veto. I mean, I love science fiction, right? I'm a huge fan of the Matrix series, the Terminator series, but I feel like some people, they watch that and they think that's what AI is. I'm like, not really, right? I'm happy to say, "Hey, bookmark this conversation. Play it 10 years from now, 20 years from now. You'll see who was right." I mean, right now we know we're right. It's just a matter of, you know, the rest of history seeing it that way. Last night I was at this party at AGI House, like a party about the veto of the bill, and everyone was like pro open source and pro technology and against regulation, right? Mostly leaning libertarian. I find these people bewildering. I'm like, I'm sorry, did we disturb you? You're creating the most powerful technology, by your own admission the most powerful technology in the history of mankind. Uh, we're so sorry to impose a few safety standards. So sorry to have interrupted your little party here. Um, yes, you know, it's funny. Uh, the car manufacturers also, for the longest time, I believe a decade plus, were resisting safety belt regulation, an overwhelming good. Like, we've saved tens of thousands of lives thanks to this simple thing. Can we please put a safety belt on the car? That's what the car manufacturers were absolutely resisting. AI is not going away. Who decides the future of this technology? The people spoke out in support of this bill. Every single poll was like well over 75%. So, do we want to have a handful of very self-interested billionaires making those decisions? There's no political coalition in the world that could stop this train at this point, like, until there's like a Three Mile Island, and then maybe then there would be a backlash. I think you can try and prevent harms before they happen. Humanity doesn't have as much of a track record of doing that. It's usually, you know, hobbling along, and disaster happens, and you do something about it, and regulation gets written in blood. The difference between AI and atomic weapons is, like, anyone can noodle with machine learning.
There are some critics who are like, I want an AI Chernobyl to happen before we decide to regulate. And I find that an insane thing to say. I don't think regulations should be written in blood.