Shakeel Hashim on AI Journalism
Why this matters
This episode strengthens first-principles understanding of alignment risk and the strategic conditions that shape safe outcomes.
Summary
This conversation examines core safety themes through "Shakeel Hashim on AI Journalism", surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 19 full-transcript segments: median 0 · mean -5 · spread -17–0 (p10–p90 -10–0) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Technical lens. Evidence mode: interview. Confidence: medium.
- Emphasizes alignment
- Emphasizes safety
- Full transcript scored in 19 sequential slices (median slice 0).
Editor note
A high-leverage addition to the AI Safety Map that clarifies one important safety bottleneck.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video RXknLBAWOm0 · stored Apr 2, 2026 · 555 caption segments
Captions are an imperfect primary source: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/shakeel-hashim-on-ai-journalism.json when you have a listen-based summary.
Show full transcript
Daniel Filan: Hello, everyone. This is one of a series of short interviews that I've been conducting at the Bay Area Alignment Workshop, which is run by FAR AI. Links to what we're discussing are, as usual, in the description. A transcript is, as usual, available at axrp.net, and as usual, if you want to support the podcast, you can do so at patreon.com/axrpodcast. Well, let's continue to the interview. I'm now chatting with Shakeel Hashim. Hello, Shakeel.

Shakeel Hashim: Hi.

Daniel Filan: So for people who don't know who you are, can you say a little bit about what you do?

Shakeel Hashim: I work at Tarbell, which is a nonprofit that supports high-quality AI journalism. I'm a grants director there, and I'm also a journalist-in-residence, so I do my own AI journalism through Transformer, which is a weekly newsletter that summarizes AI news, and I also do my own reporting and analysis and commentary.

Daniel Filan: Before we really dig into it: we're currently at this alignment workshop being run by FAR AI. How are you finding it?

Shakeel Hashim: Super interesting. I'm mostly focused on AI policy stuff in my day-to-day work, so I spend less time on the technical side, and the thing I've found really interesting here is meeting lots of more technical researchers and getting a sense of what they're up to and what their focuses are. Super interesting.

Daniel Filan: I guess you're in a better position than most to talk about the AI media ecosystem. What's your high-level take on it?

Shakeel Hashim: Probably two things. Number one is that there aren't nearly enough resources going into AI journalism as there ought to be, given the scale of the topic and its potential impact and importance. The second is that I think there's still quite a big disconnect between what journalists think about AI and what people in the industry think about AI. Some of that is very warranted: it's the job of journalists to be skeptical. But I worry sometimes that if journalists don't engage a little bit more with
the ideas that are held by, say, AI researchers, journalism might not be able to keep up with what's happening.

Daniel Filan: I guess it's kind of a strange situation. My understanding is that in a lot of industries, there's news for that industry, right? So for instance, animal farming: my understanding is that there's a pig newsletter, and every week, if you're a farmer, you can get the newsletter about pork farming, and it'll just have a bunch of stats about pork farming, written roughly from the perspective of people who are very into the pork farming scene. I assume Bloomberg does something sort of similar with finance, or at least the Terminal does. In some sense, naively, it might seem surprising that there wouldn't be more AI news journalism. Do you have a feel for why that is?

Shakeel Hashim: There is more than I would have expected a couple of years ago. It's mostly concentrated in tech publications; that's where it lives, although lots of national news desks now have AI reporters, which is great to see. The New York Times has some people dedicated to it, as do The Washington Post and The Wall Street Journal. I think it's still not on the scale I would like it to be, and when I talk to AI reporters, the impression I get is that there's so much more they'd like to do than they can do, just because there's so much happening in AI all the time that there aren't enough people, and there isn't enough time in the day to cover everything you'd want to cover. As for the reasons for that, I think the main one is the economic state of the journalism industry. It's not a great time to be in journalism; there are all sorts of structural reasons why the industry is struggling, so there simply aren't enough resources to put into this kind of stuff. I also think that the pace of change here is
somewhat unique, and though media organizations are responding, it's hard to respond as quickly as things are changing. It takes time to build up an AI desk.

Daniel Filan: Right. As to why there aren't trade publications, though?

Shakeel Hashim: That's a good question. I mean, there are tech trade publications: The Information is one, which does a lot of AI reporting. Actually, I don't know if this is true, but I want to say there haven't been trade publications for a bunch of stuff in the tech industry. You definitely see them more around semiconductor stuff and hardware; for software, I feel like less so. I'm not entirely sure why that is. I think it's because the tech media just kind of fills that role.

Daniel Filan: What do you mean when you say "the tech media"?

Shakeel Hashim: Organizations like The Verge, TechCrunch, Ars Technica, VentureBeat, and then the tech sections at all the big outlets.

Daniel Filan: Right. And I guess I wonder: some of this might just be because if you work in tech, there's a good chance that you're also a tech consumer. I see The Verge and Ars Technica as being stuff for tech enthusiasts, people who want to talk about the newest iPhone or the newest laptop. And if that's the same people who are working in the industry, maybe it's just sort of under that umbrella. Do you think that could be it?

Shakeel Hashim: Yeah, I think there's definitely something there. I think there's also a thing where tech has become so important that it almost outgrows the need for a more specialized industry press covering it, if that makes sense. So for instance, going back to the analogy of farming: there isn't enough demand for there to be multiple full-time agriculture reporters at The New York Times. Tech is big enough and important enough that there is demand for there to be loads. I don't know how many tech reporters The New
York Times has, but it's a lot. And so it gets subsumed into the more traditional media, kind of in the way politics does.

Daniel Filan: So the second thing you said about your high-level take on the industry is that the journalism community is disconnected from the beliefs of the people working in the field, in a way that you think is detrimental. What specific disagreements are you thinking of?

Shakeel Hashim: I think the big one is the notion of transformative AI or AGI or whatever you want to call it: extremely powerful AI that can do all, or very close to all, of what a human can do. I think in the industry there's a pretty strong sense that this is possible and imminent. That's one thing I've found in conversations here: you've got people talking about the possibility of us having this by the end of next year. Not that that's likely, but that there's a non-negligible chance of it happening. In the journalism community, I think most people really don't buy that as an idea. People are very, very skeptical that this is possible at all, and certainly skeptical that it's imminent. And I think that's a very justified skepticism, because lots of technologists have made these claims over the years that their technology will change the world, and lots of the time it hasn't. So I get why you would be skeptical. But I think the difficulty arises in that if these claims are true, if the AI companies and AI researchers are right, really crazy stuff is going to start happening, and it feels to me that it would be good for journalism to engage with those possibilities a bit more: to treat them as hypotheticals, but engage with those hypotheticals. So if we do have AGI two years from now, what does that mean? What does that mean for the economy? What does that mean for politics? What does that mean for climate? What does that mean for catastrophic risks? What does that mean for
non-catastrophic risks? I think that's worth engaging with a bit more. And I think part of the disconnect is that I still see lots of journalists who think that the AI timelines discussion is just marketing hype. I think it would be good for people to realize that this is actually a much more sincere belief than that. This isn't just marketing hype: these people think they're going to do it, and I think there are lots of good reasons to believe that they will do it.

Daniel Filan: One thing I find interesting, from my own perspective on how AI is covered in the news media: you'll have outlets like The New Yorker that do these profiles. Somebody will do a profile on Katja Grace or Scott Alexander or Hima, and I think in these profiles you often see the profile being very sympathetic, at least to the sincerity and, to some degree, the basic rationality of people who think that AI is coming really soon and it's really scary. But somehow this feels disconnected from coverage. I don't read The New York Times, but I really bet that at no point in the last month has there been a big column asking whether Harris or Trump will be better for artificial general intelligence catastrophes. I wonder if you have a take on why there's that disconnect between those two different bits of it?

Shakeel Hashim: I think many journalists treat the ideas that people in the AI ecosystem have as kooky, interesting ideas. They're willing to accept that some people believe them, as you say, in the profiles that you see, but I think they treat them almost similarly to how you would treat other weird beliefs. It's like: there are these people who think this crazy thing, isn't that interesting?

Daniel Filan: I
guess you have profiles of, you know, Christian dispensationalists, but there's not a column about whether Harris or Trump will bring in the Second Coming sooner.

Shakeel Hashim: Yeah. I think to do the latter, there does need to be some internalization, and I think most journalists just don't buy it.

Daniel Filan: So maybe this gets to your work at Tarbell. Can you say a bit more about what you're trying to do?

Shakeel Hashim: We're trying to encourage and create a community of journalists who cover AI with the seriousness we think it deserves. We're doing that in a few ways. We have a fellowship program where we take early-career or aspiring journalists, we teach them about AI with the help of lots of experts, we teach them journalism skills, again with the help of lots of experts, and then we have placement programs where they go and work in a media organization and report on AI, with the hope that they build up their skills. It's great, because it means we end up with more AI journalists, and hopefully they're well informed and well equipped to do that work really well. We also have a journalist-in-residence program, where we take mid-career or experienced journalists and support them so that they can dive deep into something. We've had one person who was transitioning from being a crypto reporter to being an AI reporter, and they just spent a bunch of time building up their sources, learning about AI, and getting really deep into it to try and understand it. We've got someone else who is going to join to work on China AI reporting, because that feels like a really neglected area where there's scope for really good reporting to be done. And then we just launched a grants program where we will fund freelancers and staff journalists to pursue impactful AI reporting projects: the kind of thing that requires more time and more resources than a
journalist can typically get. In that, we're interested in funding work on AI harms, the kind of stuff that's going on inside AI companies, policy efforts that AI companies are making, how regulators are struggling to regulate AI because of budgetary concerns or other things, and also just general explainers of complicated topics that we wish more people in the world understood.

Daniel Filan: How much do you see your mission as just getting in people who are interested, versus building up skills? You know, having some journalist who's interested in AI, but there are just some facts that you might not know, and if you don't know them, you can't report as well.

Shakeel Hashim: I think it's a mix. I do think the main thing is funding, which is why most of our programs are built around that. I think there are lots of people who want to, and are capable of, doing really good work on this, but there just isn't the money to support them. I do think there's some education element to this too. We spend a lot of time in the fellowship on connecting our fellows with really great experts who they might not otherwise come across, both so that they can learn from them during the fellowship curriculum, and also so that they can have them as sources going forward: if they're writing on a topic, they know who the right people are to reach out to who have really deep knowledge on it. And there's definitely something I'm interested in exploring further: are there ways we can help bridge the gaps between what the experts think and are working on, and what journalists know about? I think there's probably scope to do a bunch there. There's been some really good work in the climate space on this, where there are a few organizations, who I think we take some inspiration from, who try to connect journalists with experts to
help journalists dive deeper into a topic than they might otherwise be able to.

Daniel Filan: What are these organizations?

Shakeel Hashim: I can't remember their names. I think the Climate Journalism Network is one, but I can't remember if they're the one I'm thinking of; they have very similar acronyms and names to keep track of, unfortunately.

Daniel Filan: Actually, speaking of names, something has been bugging me. "Tarbell" strikes me as an unusual name. I feel like in the EA artificial intelligence space, every org has roughly the same name format: you're either a Center or an Institute, it's probably the Future of something, and it's either Humanity or AI or Life or something. But Tarbell doesn't follow the same naming scheme. Do you know what the name is about?

Shakeel Hashim: Yeah. I can't take credit for it; Cillian, the executive director of the organization, came up with it. It's named after Ida Tarbell, who was one of the first investigative journalists; some people credit her with pioneering modern investigative journalism. She did a bunch of work into Standard Oil back in the late 1800s, and her work ended up resulting in the breakup of Standard Oil, breaking up the oil monopoly, and was just super important. So we're inspired by the work she did over a hundred years ago, and we'd love to see more work like that in other areas.

Daniel Filan: Gotcha. So, closing up, I'd like to talk a little bit about your Substack. Transformer, it's called. For those who haven't checked it out, what are you trying to do with it?

Shakeel Hashim: A few things. The first is that there is so much AI news every week, so much stuff happens, that it's basically impossible for anyone to keep track of it. There are lots of good AI newsletters out there, but the ones from media organizations in particular tend to focus mostly on the content that has come out
of that media organization, so they're slightly less comprehensive. Then there are some that are more focused on specific areas: some more focused on fundraising deals, some more focused on policy. But there isn't really one place where you can go to get everything. So with my weekly digest, the aim is to be as comprehensive as possible but as fast as possible, so there's only really a sentence on each thing. I elaborate a bit on the bigger stories, but for the majority it's a few words that tell you more or less what you need to know, and you can click through to learn more. With that, I'm aiming to just try and make everyone more informed. Lots of people in the AI ecosystem read it, which I'm delighted by; lots of journalists read it, which I'm delighted by; and quite a lot of policymakers read it. It's an attempt to make sure that people are keeping up with what's happening, because it's such a hard field to stay abreast of. The other thing is that with my own reporting and analysis, I want to try and draw attention to things that I think aren't getting the attention they deserve, and highlight arguments that I think are good arguments. There's often quite a lot of tacit knowledge and argument in this space. Compute thresholds are a good example, which I wrote about this week: I think lots of people have very good reasons for why they think compute thresholds are good, but lots of those reasons haven't been articulated as well as they could be, and especially not in a really short-form fashion. So, stuff like that, where I think there's a really good argument to be made but I've not seen the good version of that argument made simply. I also hope to be able to draw attention to stuff. I did quite a lot of reporting on SB 1047 and the lobbying campaigns against it. I worked with
Garrison Lovely on a piece about the very misleading claims that Andreessen Horowitz and Fei-Fei Li had made about the bill. That got quite a lot of attention, which I was excited about, because, again, it was one of those things that lots of people were talking about on Twitter, but I wanted to have a more concrete, well-reported piece explaining exactly what was going on. So the aim is to just improve people's understanding of AI, and AI policy in particular.

Daniel Filan: That kind of reminds me of something I've also found in the AI safety ecosystem: there's just a bunch of stuff that people talk about but have not necessarily written down or published anywhere. This is less true now than I think it used to be, in part because more people are just spending their free time getting into arguments in the comment section on the Alignment Forum or something, which genuinely I think is a great public service. But so often there's stuff that people will just say in conversation, and if you can record it, that's great. It doesn't surprise me that the same thing is true in AI somewhat more generally. So, thanks for chatting with me. And if people are interested in AI, or found Shakeel interesting, you should definitely check out Shakeel's newsletter, Transformer.

Shakeel Hashim: Thank you very much.

Daniel Filan: This episode was edited by Kate Brennan, and Amber Dawn Ace helped with transcription. The opening and closing themes are by Jack Garrett. Financial support for this episode was provided by the Long-Term Future Fund, along with patrons such as Alexey Malafeev. To read a transcript of the episode, or to learn how to support the podcast yourself, you can visit axrp.net. Finally, if you have any feedback about this podcast, you can email me at feedback@axrp.net.