Preparing for an AI Economy (with Daniel Susskind)
Why this matters
Safety is not only about model behavior; this episode highlights second-order effects on people, institutions, and labor markets.
Summary
This conversation examines society and jobs, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 62 full-transcript segments: median 0 · mean ≈0 · spread −10 to 19 (p10–p90: −7 to 5) · 0% risk-forward, 98% mixed, 2% opportunity-forward slices.
Mixed leaning, primarily in the Society lens. Evidence mode: interview. Confidence: high.
- Emphasizes safety
- Emphasizes labor market
- Full transcript scored in 62 sequential slices (median slice 0).
Editor note
A high-leverage addition to the AI Safety Map that clarifies one important safety bottleneck.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video 17_KoZgjm8k · stored Apr 2, 2026 · 1,634 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/preparing-for-an-ai-economy-with-daniel-susskind.json when you have a listen-based summary.
Wherever I am, whoever I'm talking to, the question I get asked the most is always the same: "What on earth should my children do?" It's far better to think of ourselves at sea on a little boat. We can pull up our sail and go faster or pull it down and go slower, but we also have a huge amount of discretion over the direction of technological progress as well. AI is not the same thing as social media. If we bundle technology into a monolithic, indivisible lump of bad stuff, parents are going to let down their kids in preparing them to use these technologies. One of the reasons I'm hopeful about the future is because of the possibilities of AI, particularly in the educational setting. I think we can, if we get it right, use it to do really extraordinary things.

Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Daniel Susskind. Daniel, welcome to the podcast.

Pleasure to be with you. Thanks for having me.

Let's start with hearing a little bit about you and your career. Could you quickly introduce yourself?

Sure. I'm Daniel Susskind. I'm an economist and a writer. My interest is really the impact of technology, and particularly AI, on work and society. I've been exploring this issue for the last 10 to 15 years or so and have written three main books on it. Back in 2015, a book called The Future of the Professions, which looked at the impact of technology and AI on white-collar workers in particular. Then in 2020, a book called A World Without Work, which looked more generally at the impact of technology on the world of work. And then last year, a book called Growth: A Reckoning, which was a broader look at the sorts of technologies we develop in society and the tension between the fact that technological progress and growth is associated with almost every measure of human flourishing and yet is also seemingly responsible for many of our greatest challenges too.
And then you have an upcoming book about the future of work for our children. Maybe say a little bit about that.

The observation I make is that this work has taken me all around the world. I've spoken to thousands of organizations, hundreds of thousands of people, and yet wherever I am, whoever I'm talking to, the question I get asked the most is always the same, which is: what on earth should my children do? And there's always a kind of frustration, because you always get asked that question and you have a couple of minutes to answer it, and I think everyone leaves that interaction feeling a bit disappointed. The person who asked the question feels the answer was a bit shallow; I leave it feeling I had so much more to say. So I wanted to write a book exploring exactly this. The new book is exactly that: what should my children do? How to flourish in the age of AI, drawing on all the thinking and conversations and experience of the last decade and a half.

That's great. And we're going to talk about that book in this conversation, but I want to start in a different place. On this podcast, I interview a lot of AI researchers, and they have a perspective on AI and its economic impact that differs from how economists tend to think about it. So how do you think economists and AI researchers disagree on the future of AI and its economic impact?

I think it's changed a lot in the last decade or so. The issue that I really cut my intellectual teeth on at the beginning was a sense that economists were systematically underestimating the capabilities of technology.
This was the observation I was making back in 2010, 2011: that a governing idea in the economic literature was that machines can perform routine tasks and activities but not non-routine ones, things that require faculties like creativity and judgment and empathy. And yet what we could see even back then was that, gradually but pretty relentlessly, more and more non-routine tasks were being taken on by the latest technologies and eventually by AI. I was interested in why it was that economists were making this systematic mistake in thinking about the capabilities of technology. And what emerged from that was a realization that economists were using quite an old-fashioned conception of how machines and technology work: a view that if you wanted to automate a task, you had to sit down with a human being, get them to articulate how they perform the task, and then write a set of instructions for a machine to follow. And that was true 40 or 50 years ago, when people in AI were working on expert systems and it was all very top-down. Back then, if a human being couldn't explain the particular rules or reasoning processes they went through, it was very hard to see how you might automate a task. But of course, what's happened in the decades since is that machines don't have to follow explicit instructions articulated from the top down by human beings. We make medical diagnoses now through AI not by trying to copy the particular rules that a doctor follows, set down for the system to follow, but by learning from the bottom up, through lots of data. And so this was the mistake that economists were making.
It was not quite appreciating how these new technologies were working and what that meant for the traditional boundaries economists had drawn between what tasks machines could and could not do. As for computer scientists (and you still see this to some extent today, but less than you did 10 years ago), the way in which many people in AI and computer science were talking about the impact of technology on work neglected a fundamental realization that economists had made: that technology can have two very different impacts on work. On the one hand, it can substitute for human workers, displacing them from particular tasks and activities. Those are the sorts of examples that capture our imagination and the headlines in the popular press: the moment a machine outperforms a doctor, or outperforms a human driver, or whatever it might be. So there's that harmful effect of technology on work, and I think that's what many technologists were focused on. But there is also a far more helpful effect, which is that technology can complement workers as well: it can increase the demand for human beings to do tasks that haven't yet been automated. The way that process works is far more subtle, and the actual impact of technology on work depends upon the battle between these two forces, a harmful substituting force and a helpful complementing force. A decade or so ago, computer scientists, AI researchers, and technologists in general were quite bad at recognizing this dual nature of the impact of technology on work.
In short, I think economists have learned a lot in the last few years by understanding far more about how these technologies work and what that means for their capabilities. Similarly, computer scientists have learned a lot about the different ways these technologies can actually affect the work people do, and as a result about the indeterminate, uncertain aggregate effect these technologies can have on work. It's not as straightforward as focusing on the cinematic substitution effects.

One complaint I hear from economists is also that computer scientists and AI researchers in general underestimate how much time and effort is involved in implementing these technologies into existing companies and workflows, and how long it takes for these technologies to diffuse through the economy. Demonstrating something in a controlled lab setting or a test environment is very different from it being implemented into the economy. Do you think that's true? Because you can also make the opposite case: something like coding, for example, is ready to be implemented basically the moment it's created.

I think what you're describing is really important, and it's important for explaining one of the big puzzles about technology and economics, which is that anecdotally we appear to be surrounded by stories, almost every day, of technologies taking on tasks we thought only human beings could do. Remarkable stories. And yet when you look at the productivity statistics, there's that famous line that economists have repeated over the decades: you see technology everywhere apart from in the productivity statistics.
I think one of the really compelling explanations for that is exactly as you say: the lag between the invention, the innovation, and then the amount of time it takes for these technologies to actually find practical use and then diffuse through the economy. You saw that in the Industrial Revolution. You have this wave of extraordinary technologies around 1780, the spinning jenny, the roller spinner, the power loom, all of these, and yet it takes many decades for the effects of those technologies to start to appear in the productivity statistics.

And is productivity the most important metric? What should we be measuring, what should we be noticing? Is it GDP, is it productivity, is it the unemployment rate? What is most important for us to watch when we want to keep track of AI?

Thinking about Britain today, sitting here, as I look around: public services backlogged and broken; average real wages that haven't really risen for 16 or 17 years, the worst run since the Napoleonic Wars; worklessness rising. There are very few problems in the British economy which would not be solved by more productivity growth. And it's true more generally that almost every measure of human flourishing is associated with a growing economy, and a growing economy requires productivity growth. So in thinking about the benefits of technological progress, productivity is really, really important.
So that's one measure we ought to be paying attention to, while also trying to understand the limits of the ways in which we measure productivity, and this productivity paradox: that we see all these technologies around us and yet they don't seem to be making a difference to the way we traditionally measure them, which is the productivity statistics. But I also think you're exactly right to point to unemployment. A running theme through my work is that we're just not taking the impact of these technologies on the work that we do seriously enough. As a result, what I write a lot about, this idea of technological unemployment, is one of the things I'm paying particularly close attention to as well. It's worth saying, though, and it's quite important, particularly in the shorter run, that it's unlikely we're going to see mass pools of unemployed people due to technological change. It's far more likely that what we'll see, and to some extent I think we already do, is technology affecting not the quantity of work but the quality of work; not the number of jobs but the nature of the jobs that are out there. In many respects, whether it's the pay of the work that's available or any of the other dimensions along which the quality of work might run, the way those are being either improved or indeed degraded by technological progress is really important.

Is there a way for us to capture the productivity without getting the unemployment? Is there a way for us to steer towards technology, especially AI, that complements labor as opposed to replacing workers?

Yeah.
This is one of the big hopes, and I think the general philosophy is right. Very often when policymakers and politicians talk about technological progress, the metaphor they have in mind is that they're like a train driver: they can push down on the throttle and speed up, getting more technological progress, or pull back on the throttle and slow down, getting less. But their direction of technological travel is fixed by the rails set down for them to trundle along. On that view, the only question that matters is whether we want more or less technological progress. I just don't think that's right. And this is a big argument of my most recent book: a far better metaphor is a nautical one. It's far better to think of ourselves at sea on a little boat. We can pull up our sail and go faster or pull it down and go slower, but we also have a huge amount of discretion over the direction of technological progress, the nature of technological progress. And I think that's true of the impact of technology on work as well. To go back to that earlier distinction: technology can have very different effects on work. It can complement, it can substitute; it can reduce the demand for the work human beings do or it can increase it. And that characteristic of technology isn't fixed. It can be shaped by the incentives we create in the economy. Just one example: it's really interesting that in every year since 1981, the effective tax rate on hiring a human worker has essentially been higher than on using a machine. In other words, at the margin, the US tax system seems to incentivize replacing a worker with a machine.
In other words, it seems to incentivize the development of technologies that substitute for workers. And you might say that's the sort of incentive you might actively intervene to change. So there are really interesting and, I think, important conversations at the moment about how we might steer technological progress, exactly as you say, away from technologies that substitute for workers and towards those that complement them.

But do you think that's ultimately possible? I'm guessing there will be strong economic incentives for developing technology that substitutes for human workers as opposed to complementing them, just because if you fully substitute a worker, you can then cut out the middleman, so to speak, and get the productivity without having to pay a human worker.

So I do think there is a lot we can do here, but I do think there are limits. Not the limits you're describing, but one of the big limits is just practical: ex ante, it's quite difficult to know the impact a technology is going to have on the labor market. If you ask a computer scientist whether a technology is going to substitute for a worker or complement them, they'll say: I don't know, I need to see it in the market, I need to see how people use it. So it's quite difficult to anticipate.

On that point, actually, what's the best research we have for trying to predict whether some technology will substitute for human workers or complement them?

I don't really think that literature exists.
And just to explain why, think about something like the automatic teller machine, the ATM. When it was released, intuitively it feels like this is going to be a disaster for people working in banks. Their job is to hand out money over a desk; the ATM automates that, so surely this means the decimation of the world of bank employees. And yet, if you look at what happened in the United States once the ATM was rolled out, the opposite happened. You had a surge in bank tellers. The reason why was, in retrospect, entirely understandable, but ex ante quite difficult to anticipate. In part, the nature of what it meant to work in finance changed: people weren't handing out cash anymore, but they were offering financial advice, or offering new types of financial products, and so on. The nature of the work changed. But the US economy grew as well, so there was new demand, new types of financial services. So the point, again, is that this helpful complementing force works in ways which are quite subtle and quite hard to anticipate in advance. That's part of it: ex ante, it's just very difficult to anticipate what impact a technology is going to have on work. There's also the added complication that ex post, the impact these technologies have on work can change as well. Think of something like a GPS system in a car. In a world with human drivers in the seat, a GPS system complements human drivers. It makes them more productive at the wheel. It allows them to navigate unfamiliar roads. It's a complement.
But in a world where we have driverless cars, these GPS systems just complement the driverless car instead, making it better. So over time, the effect the same technology has on the demand for work can change. There's also, I think, a deeper question, which is not a technical one about whether technologies are going to complement or substitute, but a moral one. The implicit assumption in redirecting technology away from technologies that substitute and towards those that complement is that there is something important about work that we want to protect. Two obvious reactions. One is that work is an important source of income. It's the main way we share out income in society; for most people, their job is their main, if not their only, source of income. It's also a way of allocating meaning and purpose. Work isn't simply a source of income; it's also a source of direction and fulfillment and structure. So you might say, because of those things, there are good reasons to want to redirect technological progress away from technologies that substitute and towards those that complement. But as you'll know, many people argue that there are other ways of sharing out income in society than through the work people do. Work is not the only way of solving the distribution problem of how you share out prosperity in society. And hold on a second: when you think about work and meaning, lots of people really don't like the work they do, and lots of people would, if they could, find meaning and purpose outside of the world of work. So actually, it's not entirely obvious that we want to be steering technology away from substituting and towards complementing. In fact, perhaps we ought to be doing the opposite.
A world with less work, where people get their income from non-work opportunities and find meaning and purpose outside the labor market, is perhaps a world we ought to welcome, and there are lots of thinkers and writers who argue exactly that. So there are both important technical and important moral reasons to wonder whether the project of redirecting technological progress is as straightforward as some of its advocates suggest today. I think there is some merit to it, but I don't think it's the panacea that many hold it out as.

Do you think these decisions we make on a societal level will play a large role, such that they show up in the economic metrics and statistics? What I'm asking here is, for example: do you think we'll simply decide that there are some jobs we don't want to automate or substitute?

There are sort of two questions here. One is that we live at a time when the leaders of all the large AI companies say they are going to build a system, within a decade, that can outperform human beings at every cognitive task they do. There are good reasons not to take those claims entirely at face value, not least the extraordinary financial incentives these companies have to talk up the capabilities of their technologies. That said, there are very few technical problems in which we have invested as much finance and as much human capital as the pursuit of AGI. Stuart Russell, the computer scientist, estimated we've invested something like 10 times what we did during the entire Manhattan Project in the pursuit of AGI. It's enormous. And so I think it's worth asking the question: what if we succeed? What if we build these systems?
What work might remain for human beings to do, even in a world where these systems and machines could do everything more productively than us, or at least every economically useful task more productively than us? I think that's an important question. That's a technical question about what work might remain, from a technical point of view, even in a world in which machines can do everything. And I think the answer to that question isn't obviously nothing. There are quite interesting reasons to think there is still work human beings will do even in a world in which machines could do everything better than us. But then there's also the question you're asking, which is less a technical point and more a moral one: might there be certain roles, certain jobs, that we want to protect from automation? It might be that these technologies could do them, but we collectively decide that they shouldn't, from a moral point of view. And I think there are some of those as well.

Yeah. Actually, on the first point, it sounds like a contradiction to say that AIs will be able to do everything but there will still be work for humans to do. What could be some reasons for that?

There are a few reasons. One is the economic reasoning that comes from the world of international trade: comparative advantage. Think about what happens when countries trade. Imagine a simplified story with just the US and Vietnam, and imagine there are just two goods produced, robots and rice. The United States in theory has the absolute advantage in both of those things: it could produce robots more productively than Vietnam, and it could also produce rice more productively than Vietnam.
But it doesn't make sense, from an efficiency point of view, for the United States to do everything. It makes sense for the countries not to follow their absolute advantages, in which the US has an absolute advantage in everything, but to follow their comparative advantage: what are they relatively better at? The US is likely to be relatively better at producing robots than rice, and Vietnam is likely to be relatively better at producing rice than robots. So the countries specialize and then trade, and that way the collective pie is greater. And the logic of comparative advantage applies equally well, it seems to me, not just when we're thinking about the US and Vietnam, but when we're thinking about people and AI. Even in a world in which AI could do everything more productively than human beings, it doesn't necessarily make sense from an efficiency point of view for AI to do everything. Human beings are a productive economic resource. They should do what's in their comparative advantage, and similarly AI should do what's in its comparative advantage. Now, it's one thing to say that there might be tasks and activities in which labor retains a comparative advantage. It's another thing to say that there's going to be enough demand for those residual activities to keep everyone in well-paid work doing only those. Those are two different observations. But at least there are economic reasons to think that even in a world in which these systems and machines can do everything, labor will still have a comparative advantage in certain things. Now, as these systems become relentlessly more capable, that comparative advantage might shrivel, and the demand for those residual activities might shrink even further.
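The comparative-advantage logic in this exchange can be checked with a few lines of arithmetic. The following sketch uses invented productivity numbers (they are not from the episode) to show the key point: even though the "US" is absolutely better at producing both goods, specializing along comparative advantage and trading yields more of both goods than each party producing both for itself.

```python
# Toy comparative-advantage model in the spirit of the robots-and-rice
# example. All productivity figures are invented for illustration.

# Output per unit of labour: country -> {good: output}
PRODUCTIVITY = {
    "US":      {"robots": 3.0, "rice": 3.0},   # absolute advantage in both goods
    "Vietnam": {"robots": 1.0, "rice": 2.0},
}
LABOUR = {"US": 100.0, "Vietnam": 100.0}       # labour endowment per country


def output(allocation):
    """Total world output given allocation[country] = share of labour on robots."""
    totals = {"robots": 0.0, "rice": 0.0}
    for country, robot_share in allocation.items():
        labour = LABOUR[country]
        prod = PRODUCTIVITY[country]
        totals["robots"] += labour * robot_share * prod["robots"]
        totals["rice"] += labour * (1.0 - robot_share) * prod["rice"]
    return totals


def opportunity_cost_of_robot(country):
    """Units of rice forgone per robot produced."""
    prod = PRODUCTIVITY[country]
    return prod["rice"] / prod["robots"]


# The US gives up 1 unit of rice per robot; Vietnam gives up 2.
# So the US holds the comparative advantage in robots, despite holding
# the absolute advantage in both goods.
assert opportunity_cost_of_robot("US") < opportunity_cost_of_robot("Vietnam")

autarky = output({"US": 0.5, "Vietnam": 0.5})       # each country split 50/50
specialised = output({"US": 0.8, "Vietnam": 0.0})   # US mostly robots, Vietnam all rice

print("autarky:    ", autarky)      # {'robots': 200.0, 'rice': 250.0}
print("specialised:", specialised)  # {'robots': 240.0, 'rice': 260.0}

# Specialising along comparative advantage enlarges the collective pie:
# more of BOTH goods, so trade can leave both parties better off.
assert specialised["robots"] > autarky["robots"]
assert specialised["rice"] > autarky["rice"]
```

The same arithmetic carries over to people and AI as Susskind describes it: replace the two countries with "humans" and "AI", and the conclusion is that an absolutely superior AI still leaves tasks where human labour is the efficient choice. His caveat also shows up in the numbers: nothing here guarantees the *demand* for those residual tasks is large.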
Again, it's not a necessity that there will be enough work, but it is a possibility. Alongside these economic thoughts around comparative advantage, there are also reasons to think that there are certain tasks and activities for which we have a taste, a preference, for how they are performed, not simply for how well they're performed.

In other words, the process, not just the outcome. Yeah. This is what you talk about in "What Will Remain for People to Do?", a recent paper. And you separate these preference motives into aesthetic, achievement, and empathy. Maybe we could talk about those in turn.

Yeah, sure. Just to frame the project: I don't think the challenge for now in the world of work, because of AI, is that there aren't enough jobs for people to do. The challenge is that there is work, and for various reasons people can't do that work. I call this frictional technological unemployment, and there are various reasons for it: people might not have the right skills to do the work that's available; they might not live in the right place where that work has been created; they might have a particular conception of themselves that's at odds with the available work, and want to stay out of work to protect that identity. Those are the sorts of challenges we face now. But I do think, in light of the technological developments taking place and in light of what those at the vanguard are telling us, we also need to ask the question: what if they succeed? What if we build AGI? What does that mean for the world of work? That's where this paper sits. It was published by Columbia a few weeks ago.
This paper is in that setting. It takes the premise of AGI as given and then asks: what will remain for human beings to do? So there are these economic reasons around comparative advantage, but there are then also these preference reasons, preference limits. These are all different cases in which we seem to value how a particular task is done, not simply how well it's done. The aesthetic limits are really interesting. When you walk into the Sistine Chapel and look at the ceiling, you think: gosh, isn't that beautiful? But you also think: isn't it remarkable that a human being did that? In other words, we value the fact that that painting was done by a human being, not simply that it's extraordinarily beautiful. And you might say that so long as we value the process through which a task is done, for these sorts of aesthetic reasons, then by definition these are the sorts of things these technologies will struggle to do, because they are not human. And there are more prosaic examples of it.
We value the fact that a suit is handcrafted by a tailor, or that a chocolate is hand-moulded by a human artisan chocolatier. Or, more prosaically, there's a great story about a Michelin-starred restaurant in the UK that served automated coffee capsules as its coffee (I don't know which machine it was) without telling the customers. When the customers found out, they were completely furious. The reason for their fury is interesting: in blind taste tests, people often struggle to tell the difference between automated coffee and coffee handcrafted by a great barista. What they were complaining about wasn't actually the taste of the coffee. It was the fact that they thought they were paying for a Michelin-star process. They wanted the artist-craftsman making their beautiful cup of coffee, and instead they were getting a machine.

Yeah. How should we think about this, then? Is this simply an instance of the labor theory of value re-emerging, where people think that the labor that went into creating something is the value of that thing? Or is there something deeper here? Is it that the labor that went into something becomes, in some sense, part of the story, part of the product, part of what you're buying?

Yeah, I don't think we need to go Marxist on it. I just think people have tastes and preferences not simply for outcomes, but also for processes. So that's the aesthetic one. There are also, I think, interesting reasons of achievement, achievement limits too. Anybody interested in AI will have followed, over the decades, the progress in chess-playing machines.
Today, the very best chess machines can beat the very best human beings at a game of chess. And yet when the very best human beings sit down to play each other at chess, there is huge demand to watch them duel. The reason is that we don't simply care about the efficiency with which the pieces are moved around on the board. We also care about who, or what, is doing the moving. We like to see human beings outperforming relative to some standard benchmark of achievement. So there are lots of areas of our lives where, for reasons of achievement, we value not simply how efficiently a task is done but also how it's done. And that is an interesting role for human beings as well. There's also, I think, a final type of preference limit: the empathetic or emotional ones.

Actually, before we get to the empathetic one, on achievement: which jobs do you imagine would exist in the future based on our preference for human achievement? Because we can't all be Magnus Carlsen, or Usain Bolt, or someone who pushes the limits of what humans can achieve. So how will this give rise to jobs?

Yeah, I think you're exactly right, and I don't want to slide from the narrow claim, that there are some tasks and activities that will remain for people to do, to the broader claim that there is going to be enough demand for those tasks and activities to provide everyone who wants it with well-paid work.
Those are two very different observations, and more generally I think it's a mistake people make when they think about the longer-term future of work: it's one thing to say this is a task that human beings will always do, and it's another to say there's going to be enough demand for that task to provide everyone with a job doing it. I think anything involving a degree of competition, of sport, of rivalrous interaction might bear these reasons out, whether it's competition on the sports field or intellectual competition, anything that's about achievement relative to some standard human benchmark. But by definition that's exclusive, because you are valuing the exceptional relative to the average, and in a world in which everyone's exceptional, no one's exceptional. So it's not something that is necessarily going to provide everyone with well-paid work. Where there's a broader possibility, I think, is around the emotional or empathetic aspect, where we have a taste for a human being for emotional reasons. Take end-of-life care: the very fact that a human being is sitting there in the last moments of your life is the thing that matters.

Yeah. That it's a human being, a fellow person, sitting with you, understanding your thoughts, feeling your feelings.

Again, you might say that if that's the thing you value, then it's difficult to see how a machine could ever do it. I do think, though, that in all these different cases there are limits to the limits. That's another thing I try to do in the paper.
Ask how robust these limits are to these systems and machines becoming gradually but relentlessly more capable. For instance, think about the aesthetic limits. It might be that these systems in the future are able to compose music that is so extraordinarily moving, or a painting that makes you burst into emotional fervor when you see it, or a piece of text that captures just so perfectly something you were thinking or feeling. It's possible that these systems could achieve aesthetic outcomes that so dwarf what human beings are capable of that our attachment to the fact that a human being was responsible for an aesthetic outcome might just seem tired and antiquated. And you don't have to be high and mighty about it. Think about a tailored suit. Yes, it's lovely to have a suit handcrafted by a human being, but if one arm is shorter than the other, it pulls in the back, it's a bit tight on the bum, and the automated suit is just extraordinarily comfy, then our attachment to the human craft might fade away. So I think there are limits to these limits as well, and one of the things I spend quite a lot of time doing is thinking about what those limits to the limits might be. But there's also a third category. It's worth saying there's the general equilibrium case: the limits due to the idea of comparative advantage, that even in a world in which machines could do everything more productively, it still might make sense to employ labor doing what's in their comparative advantage.
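The comparative-advantage point can be made with a toy calculation. A minimal sketch, with hypothetical numbers and task names that are not from the episode: even when a machine has an absolute advantage at every task, opportunity costs can still differ, leaving a task where human labor sensibly remains.

```python
# Toy comparative-advantage illustration (hypothetical numbers, not from the episode).
# The machine is faster at BOTH tasks (absolute advantage), yet the human's
# relative disadvantage is smaller in client care, so that task is the
# human's comparative advantage.

# Output per hour for each producer on each task
machine = {"legal_drafting": 10, "client_care": 4}
human = {"legal_drafting": 1, "client_care": 2}

# Opportunity cost of one unit of client care, measured in legal drafts forgone
machine_oc = machine["legal_drafting"] / machine["client_care"]  # 2.5 drafts per unit of care
human_oc = human["legal_drafting"] / human["client_care"]        # 0.5 drafts per unit of care

# The human gives up fewer drafts per unit of client care, so total output is
# higher when the human specializes in client care and the machine in drafting,
# even though the machine outproduces the human at both tasks.
assert human_oc < machine_oc
```

The design choice here is the standard Ricardian one: compare ratios (opportunity costs), not absolute productivities, to decide who does what.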
Then there are these preference limits, where people have a taste or a preference for how a particular task or activity is done, and I think there are various ways that might work. But there are also moral limits, where it's not simply that we have a taste or preference for a human being performing a particular task or activity, but we believe that a human being ought to do that task or activity from a moral point of view. That even if a system or machine could do the task or activity, it shouldn't. That there's something important about a human being being involved in the task or activity. An example here might be a judge deciding in a legal case.

Or we could imagine a parole officer, or perhaps the person responsible for making sure that military operations are in compliance with international law.

Exactly right. And I think there are lots of micro examples of tasks or activities that have this moral flavor to them, where we want to keep a human in the loop, essentially. So there are all these task-specific moral limits. There's also, though, the broader issue of AI alignment, the values that direct the AIs we're building more generally. You might think that that is an activity human beings ought to be involved in as well.
There are moves, of course, to automate aspects of alignment, to remove human beings from those sorts of judgments, and you might say that's a mistake. But again, I think there are limits even to those moral limits, and the legal one is very interesting. It might be the case that an automated judge is able to reach such a well-honed, sophisticated, well-designed piece of legal judgment that it becomes morally indefensible to use the flawed human alternative: the human judge who famously gets hungry before lunch and becomes stricter in their sentencing decisions. Or on the battlefield, given the increasingly dynamic and quick nature of conflict, where decisions need to be made so quickly all the time given the technologies being used, the idea that you might send a decision down a chain of command to a human being and then back up again, perhaps losing an edge or losing a person in the process, might come to seem morally questionable. So I think there are interesting limits even to the moral limits. And the argument I make in the paper is that what you think about whether there are limits to those moral limits depends in part on whether you have a moral theory in mind that is process-based or outcome-based.
When you think about the role of AI in the legal system and whether it is morally acceptable to use an AI, is your moral theory appealing only to outcomes? In other words, does the system do a better, more effective job of making a sentencing decision? If that's the nature of your moral theory, that it's purely based on outcomes, then clearly there are limits to those moral limits, because there might come a time when these systems are just so extraordinarily capable that the outcomes they deliver are better in some sense than the outcomes human legal reasoning is able to reach. So there are limits to that sort of moral objection. But if your moral theory appeals to the process in some way, then that might put a brake on some of those limits. If your view, in the extreme, is that it is important for a human being to be making these sentencing decisions however good an automated alternative becomes, that only a fellow human being ought to be able to make the decision whether or not to lock someone up for the rest of their life; if that's your view and it's independent of outcomes, if you're completely attached to process, well then maybe the moral limit is robust there, because no matter how good outcomes become, your attachment to the human process is going to stand in the way.
Now, for those who do hold those pure process-based moral theories about the moral limits of AI, I wonder whether, as these technologies become more and more capable and the outcomes they are able to deliver become better and better across lots of different domains, people's attachment to pure process-based moral reasoning is going to hold up.

Yeah. It does seem to me that commercial pressures, and competitive pressures between governments and between companies, will produce strong arguments for implementing these systems even in the cases where we might imagine there are moral limits. But in the process of accepting those arguments, we will disempower ourselves, right? If we take ourselves out of the loop in critical decisions, that's something that's difficult to take back, because once you have the automated judge or the automated military system, you're competitive, and what's the reason for reintroducing humans back into that process? In some sense, perhaps we need the moral arguments, given that we might be entering a world in which AIs are simply better than humans. Do you think that, in the end, it is the moral arguments that will make the biggest difference to what we end up doing?
In my view, not even in the end: I think already today those moral arguments, the moral limits to automation, are almost more important than the technical limits. There are many things we could use these technologies to do that we don't, not for technical reasons but for moral ones, or even just cultural ones. Even without appealing to some sophisticated piece of moral reasoning, we just feel socially and culturally uncomfortable using these technologies in certain settings. As economists, we spend a lot of time thinking about the technical limits of these technologies, what they can and cannot do from a technical point of view, but these moral, cultural, and social constraints on technology already bind us a great deal.

Yeah. My son is in daycare right now, and he's going to be 20 years old in 2044.

How old is he now?

He's 15 months old. And so when I think of the pace of AI progress, when I think of how much better these models have gotten over the last five years, say, it makes me wonder what will be left for him to do. What do you think the role of 20-year-old people in 2044 is going to be?

Having spent the last decade and a half or so observing, writing, and thinking about the impact of technology on work, I think one of the biggest mistakes we have made collectively is to think that we are clever enough to predict which jobs are going to have to be done, and as a result which skills and capabilities are going to be most valuable in the future. And there are so many examples of this, contemporary examples but also historical ones.
Who would have imagined, if you had gone back to the late 18th century, at the start of the Industrial Revolution, and whispered to somebody that in a few hundred years' time a National Health Service in Britain would employ more people than there were men working on farms? It just wouldn't have made sense. There barely was healthcare in the spirit we have it today, and there certainly wasn't public provision; the NHS is something like the fifth-largest employer in the world. The way in which life transformed in the centuries since, the rise of healthcare, the rise of leisure, would have been unimaginable, just incomprehensible. But you don't have to go back to the Industrial Revolution. Think of the start of the internet era. If you had whispered to somebody that in ten years' time people would be finding work as search engine optimizers, it wouldn't have meant anything. Or even in 2019, if you had said you're going to grow up and be a prompt engineer, it just wouldn't have meant anything, because the technologies that transformed our lives and the way our economies changed weren't simply hard to imagine; they were almost unimaginable. We just didn't have the concepts. And I think it's just as true today: given the pace of change you're talking about, the idea that we can predict jobs a couple of decades out just seems to me incredibly hubristic.
And so one of the running themes of my new book is just this: the immense uncertainty we face, and the challenge of finding a way to respond to that uncertainty. We are setting young people up to fail if we say these are the jobs that are going to be available for you to do, and these are the skills and capabilities you must learn in order to do them.

One suggestion that may come to mind is that we should educate our kids in a more general sense: teach them how to learn things, train their general reasoning skills, give them very general skills that can help them adapt to many different future states of the world. Do you think that's plausible, or is it a form of cope to believe we will be able to adapt?

I think the most important thing is that we teach people how to use AI effectively. I think we need to be spending something like a third of our time in school and university learning how to use AI effectively. And when I say use AI effectively, I don't simply mean how to write prompts, although I think that's important. I also think it's important that we teach people the history of these technologies and where they came from; the way we need to think about problems in order to use them effectively; the technical limits of these systems, the fact that they hallucinate, and I expect they're going to hallucinate for some time; the fact that they make mistakes, and that when they do, we're able to understand why and interrogate them; and then also the ethical and moral issues around their use.
I think there is a whole AI curriculum, a really exciting project, that we need to be writing and crafting together now, and that's one of the most important things we could do. It's not something we can just tack on to the existing curriculum. It needs to be fundamental and it needs to be substantial, hence something like a third. And that number is not just plucked out of thin air. I spent a long time teaching mathematics and economics to undergraduates at Oxford, when I was a fellow at Balliol College, and I taught economics across lots of different subjects there: to people studying Philosophy, Politics and Economics, or History and Economics, or Economics and Management. I was teaching in the college, so every year I had a group of about 20 students, a small group. And what I did in that first year was take a third of their time, when they were learning economics, just to teach them mathematical methods.
They could then take those tools, and a big chunk of their time was spent learning them, into all the different domains of economics they would be working in, whether labor economics or industrial economics or macroeconomics or microeconomics. There was a sense in which these mathematical skills were fundamental, and students needed to spend a big chunk of time right at the start learning them so they could then deploy them in all these other settings. That is exactly how I think about AI today: it's a technology we need to be asking how to use in every discipline, and that requires a big chunk of students' time dedicated to learning how to use these technologies effectively.

And that's one path, leaning more into AI. Some opposition I've heard to that is this: you're a professor yourself, and you must have encountered homework that seemed to you AI-generated, and then perhaps wondered whether your students are actually learning anything, or whether they are simply using AI to get through the homework, to write the essay, to solve the math, and handing it in without learning much. So another direction you could go in is to become almost Luddite with regard to AI and go back to pen and paper, to ensure that students are actually learning something, and then perhaps later, when they have basic skills, when they are good at reasoning, writing, speaking, and doing math, reintroduce AI and teach them how to use it.

I think what you're touching on is one of the fundamental challenges we face, and it really is one of the big problems I'm setting out to solve in the new book: what should my children do?
I think the challenge for all educators in response is not to go backwards but to go forwards: to ask, okay, how do I make what I teach deeper? How do I make it harder? How do I enable these kids to use these technologies to understand ideas, to solve problems, to make discoveries that would have been unimaginable before these technologies came around? That seems to me to be what we ought to be doing, rather than clinging to old-fashioned, traditional ways of teaching and educating. So the challenge is how we, as teachers and educators, make the substance of what we're teaching harder and more challenging. The ball is in our collective court, because it's unrealistic to think that everyone isn't going to be using these technologies in years to come. And more practically, it's also unfair: if we strip the learning environment of technology, we are teaching young people in an environment that simply does not reflect the world they're going to enter. We are setting them up to fail. I think we've got to be asking how we can make it harder, more challenging, more difficult.

Do we risk losing some of the weaker students if we do that? If you set a task that is difficult enough that the average student can only solve it using AI, is that a risk for the weaker students?

No, I don't think so. No more so than the challenge of weaker students in a world before AI. On the contrary, I think one of the promises of these technologies is that they are able to tailor what is being taught, and how it's being taught, to the particular strengths and weaknesses of different students.
One of the big challenges of the traditional educational model is that it's not particularly tailored. We know that one-to-one tuition with a human being is incredibly effective. We were lucky to offer it at Oxford in the tutorial system, and I saw how extraordinarily effective it can be. But most institutions can't afford one tutor per one, two, or three students, and the promise of these technologies is that they can replicate the kind of interaction you might have with a human tutor, but at a far lower cost. That's one of the things I find very exciting. People talk about personalized learning, and we've really barely scratched the surface. There's a lot more for us to do and explore.

Do you think there are lessons for parents here about whether to pace your child in a certain direction? For example, ten years ago it seemed like learning how to program was the perfect path forward, and now it turns out that's exactly what these systems are good at doing. Now it's unclear, because reasoning models are exceptionally good at programming, at mathematics, and so on. You spoke about the uncertainty of the future; how should parents react to that?

I think the biggest risk among parents stems from the legitimate concerns about the impacts of smartphones and social media on young people. There are real issues around social media in particular and the mental health of young people. But what I really worry about is that legitimate concerns about those technologies seep into how parents think about AI as well. AI is not the same thing as social media.
If we bundle technology into a monolithic, indivisible lump of bad stuff, parents are going to let their kids down in preparing them to use these technologies. So that is the biggest warning I have for parents at the moment: I share many of the concerns out there about smartphones and social media, but AI is a very different beast, and it's a mistake to conflate them, to conflate the concerns we have about the former with the extraordinary opportunities of the latter. That's part of the philosophy of the book.

As a final question: there are groups of people, and here I'm thinking about children, perhaps the elderly, and so on, that have more trouble than others adapting to change. Kids tend to like structure; elderly people tend to dislike change. How do you think about those groups in society when you're thinking about a more radically uncertain world, a world that's changing faster? What should we do on a societal level to help those groups thrive?

The new book is focused exactly on that first group, because I think you're exactly right. We live in an age of anxiety, where almost every day we are told stories of the existential challenges we face and reminded of our incapacity for dealing with them. There is a hope that has driven many parents on before me: that if they work hard, and if they love and look after their families, their children's future is going to be better than their past. And yet I think now, for the first time in some time, there's uncertainty about that. Many parents are not sure that if they keep their heads down, work hard, and look after their families, their children's future is going to be better than their own.
And that's quite a dangerous thing, when people lose faith in the future. That is, I think, the situation many parents find themselves in. One of the reasons I'm hopeful about the future, in contrast, is the possibilities of AI, particularly in the educational setting. If we get it right, we can use it to do really extraordinary things. I think the traditional education system is broken. Not enough people have access to a good enough education. Not enough people are adequately prepared for the world that exists beyond the artificial environment of school and university. The possibilities of these technologies, in thinking about what we teach, how we teach, and when we teach, are really extraordinary. So that's why I'm spending a lot of my time at the moment trying to gather all these thoughts together so that, exactly as you say, we can help young people really flourish in the world to come.

Great. Daniel, thanks for chatting with me. It's been a real pleasure.

Such a pleasure. Thanks so much, Gus.