00:00:00 ◼ ► We have been threatening for many months to talk about AI again. It's a thing that's been on our list. It's an area we wanted to return to.
00:00:08 ◼ ► And then, I know, a little while ago you said to me, "Hey, do you know that it's going to be 10 years since Humans Need Not Apply was published, coming in August?"
00:00:19 ◼ ► And then it was like, "Well, that's when we'll return to it then I guess, because can't miss that."
00:00:23 ◼ ► I feel like I sealed my own fate with this. We've been threatening to revisit AI, but it feels like, who have we been threatening? Not the audience, but ourselves.
00:00:32 ◼ ► I feel like, yeah, it's like, we'll talk about Humans Need Not Apply 10 years later and all of the rest of it, but I do have to say it's like, boy, this is a topic like no other topic.
00:00:42 ◼ ► It makes me feel kind of like, ill and overwhelmed to talk about. It's just like, oh god, it is all of the everything for all of the future. How do you even begin?
00:00:56 ◼ ► Let's begin by talking about Humans Need Not Apply. So this was a video that you made 10 years ago now.
00:01:03 ◼ ► What was this video to you? What drew you to make this video? Because it was a very different landscape a decade ago to where we are now.
00:01:12 ◼ ► It's interesting. I rewatched it this morning in anticipation of the show and god, it's like, I don't know how long it has been since I've seen it.
00:01:20 ◼ ► Maybe like seven years? I have no idea. It's been a long time because I don't tend to watch the older stuff.
00:01:27 ◼ ► But when I do rewatch the older videos, it does often put me in the place where I was when I was making it.
00:01:35 ◼ ► It's like I'm having PTSD for memories of picking the stock footage. It's like, oh yes, I remember that clip wasn't long enough and that's why I had to reverse it halfway.
00:01:50 ◼ ► Yeah, it's surprising how much it can take me back, but I think it's because I sort of make these things under such an intense situation and such an intense focus.
00:01:59 ◼ ► But my main motivation for making it at the time was just, it's sort of like when we first talked about AI on this show.
00:02:09 ◼ ► We talked about it when we did because I had this feeling of like, oh, I could like see these things that are around and I just don't feel like people are talking about them fully or like as aware.
00:02:19 ◼ ► And like at the time I made that video, I felt like just this kind of like concept of maybe the automation this time is a different thing was not so much in the public consciousness.
00:02:36 ◼ ► I felt like 10,000 different kinds of conversations have happened about self-driving cars since this point in time.
00:02:43 ◼ ► There have been highs, there have been lows, but I just felt like, oh, I don't think this is being discussed as much as it should be.
00:02:51 ◼ ► And so, yeah, I felt like this is a really big important topic for the future that I felt sort of grim about in some ways.
00:03:06 ◼ ► I don't often feel this way when I'm making videos, but I feel like this is one of the rare ones where it's like I'm making this for part of the public conversation.
00:03:17 ◼ ► Whereas like normally I'm making a video because it's more like I'm interested in the thing and I want to talk about it.
00:03:24 ◼ ► But this one did feel like it was to be part of the public conversation around this topic was much more of the motivation at the time.
00:03:36 ◼ ► Yes and no. I think using this doc footage and having it be real people is like, yes, I think that's more accessible at the time to a wider audience.
00:03:45 ◼ ► Like I don't think that would really matter now, but 10 years ago I think it did matter a little bit.
00:03:49 ◼ ► But honestly, the main decision was I can't animate a thing that's going to be a long video.
00:03:56 ◼ ► I knew it was like, oh, this is going to be 15 minutes, which at the time was like an insanely long video.
00:04:06 ◼ ► And I thought, oh, if I also have to animate this in like the normal stick figure way, it will take absolutely forever.
00:04:19 ◼ ► And also, I think it just aligns with the topic better because I want to be able to show a bunch of things.
00:04:26 ◼ ► And then this way I'm not like switching back and forth between animation and the stock footage.
00:04:31 ◼ ► It's like almost entirely stock footage all the way with like a couple of cuts to me at the desk.
00:04:45 ◼ ► And just watching it, I'm trying to like imagine how you would make it now, how different it would be.
00:04:54 ◼ ► I wasn't sure why you had made stock footage, but what you gave is one of the reasons I thought.
00:05:00 ◼ ► I also just thought that maybe you were just going for a different vibe and like maybe you hadn't even then like found your full vibe.
00:05:19 ◼ ► I knew that this audio part here wasn't great, but I wasn't entirely sure how to do it better.
00:05:26 ◼ ► I'm like, ah, yes, this is still like earlier in the career and it totally shows. It totally shows.
00:05:30 ◼ ► I can hear some cuts, which I know is a thing that we've spoken about before that like the ADR thing, which I hear, you hear, but most people have no idea.
00:05:42 ◼ ► And one of the ways it is interesting is you are met with your skill difference in a way that people aren't usually in their work.
00:05:52 ◼ ► But like we can go back to you from 10 years ago. We can go back to me from 10 years ago and you can hear the differences in our ability.
00:06:09 ◼ ► That's actually the main thing that I'm aware of is like that past Grey doesn't fully yet know how to use his voice in this medium.
00:06:20 ◼ ► Like a lot of the earlier videos to me very strongly, like they're still being made with kind of the idea that this is a public presentation.
00:06:33 ◼ ► I'm not actually sure if I've ever said this before, but for years I was always kind of wondering if there was a direction where this really would be a career that would become like a stage show in some way.
00:06:45 ◼ ► I was legit wondering like is there a version of this where I'm doing like a live presentation.
00:06:51 ◼ ► And so lots of the videos were still kind of framed with this idea of like actually giving it in front of an audience.
00:06:58 ◼ ► Now it's real funny like I use that concept now all the time in the videos of like we have like the theater in which it's taking place in front of.
00:07:16 ◼ ► It's like this is never really going to be a presentation and if it's not you can do it in different ways.
00:07:24 ◼ ► I was like oh past gray like yeah uncertain is a good word for it or it's like tentative in certain spots.
00:07:35 ◼ ► I think particularly in the context of our conversation last time about being better is like ah yes yes I could really see the difference between now and then.
00:07:45 ◼ ► And especially given my life now it is quite comical that given what I am currently doing.
00:07:59 ◼ ► You have no idea 10 years from now what that current gray will be doing that he thinks is like a long video versus what you think is a long video.
00:08:12 ◼ ► I actually really like the way that you said it you feel like you didn't know how to use your voice.
00:08:15 ◼ ► You're not emphasizing words and phrases in the same way you don't have the same sense of like presence.
00:08:22 ◼ ► But the thing was is like what we can't do when we watch a video like this is actually take ourselves 10 years back in time as a viewer.
00:08:34 ◼ ► That's what I was trying to say also with like the decisions about the stock footage of what like it was a very different landscape at the time.
00:08:51 ◼ ► It's like I actually for the most part I was pretty surprised at how well I think the video holds up.
00:08:57 ◼ ► I feel like there's this clear line of like I think it basically gets better the longer it goes on.
00:09:02 ◼ ► Like the shakier parts or the earlier parts but I was like oh this does hold up pretty well.
00:09:06 ◼ ► I'm not 100% sure but I do feel like this was the most maybe it wasn't the most popular video on the channel.
00:09:13 ◼ ► But I feel like it was like the second most popular for a really long time and I'm not surprised why.
00:09:31 ◼ ► And I don't know that this was the first thing that I saw of yours but I know it was definitely among the first that I remember of like being you.
00:09:46 ◼ ► I look at that video and I think oh that video was successful in what I wanted it to do.
00:09:51 ◼ ► Because I'm currently in the position it's like oh I get to go to conferences and I get to meet interesting people.
00:10:04 ◼ ► And the thing that they tell me is they say like that's the first time I came across the concept of technological unemployment.
00:10:14 ◼ ► Or like oh that's the first time I really thought about what does it mean if this occurs.
00:10:21 ◼ ► And to me it was like ah great like the thing did the thing that I kind of wanted it to do is try to like reach people with this idea is out here you might not have heard it before.
00:10:37 ◼ ► So it's like ah yeah yeah it is totally been successful to me and it's just like interesting over the years that you know there's a very small handful of videos that people will reference when they meet me.
00:10:46 ◼ ► It's like ah this is the one that I really like or this is the one but like the humans need not apply one.
00:10:51 ◼ ► It's always the same thing someone's like ah that's the first time I ever thought about this idea seriously or it's the first time I ever came across that idea.
00:11:03 ◼ ► Definitely not but I think it probably was one of the best videos for the topic at the time.
00:11:16 ◼ ► You know like not only was it good it was also the first time that a lot of people including me was faced with confronting the idea of what you call software bots professional bots and creative bots.
00:11:37 ◼ ► I was in my old bedroom which at that point because of like where I was like I had converted this room like I was kind of getting it ready for podcasting.
00:11:48 ◼ ► You know we were starting things at Relay like then and so I was kind of like rearranging things.
00:11:54 ◼ ► And I remember I think I was talking to Steven about this video and I remember saying that like it's really good but I know that it will never take my job.
00:12:15 ◼ ► So that's interesting because you've just hit on the thing that I wanted to say which is there's a fundamental problem in making videos in particular and videos that are talking about a topic that I just think there really isn't any way to solve without becoming very very boring in the making of the video itself.
00:12:33 ◼ ► But I'm always really aware that there is no way to communicate different levels of seriousness or different levels of confidence easily in an explanation for the viewer without being tediously self-referential all the time which is just very hard to listen to.
00:12:55 ◼ ► And the thing watching that video is again I think the beginning parts are the worst parts but it's also because I remember structuring it such that the things that I'm leading in the beginning they're there because they're the physical things that you can look at.
00:13:11 ◼ ► But the part that was really important to me and it's why I like the video more as it goes on and I think you can see the argument starting to build is like you person watching this who like works on a computer this is where the real problem is coming later.
00:13:30 ◼ ► It's like we're talking about the physical things but that's the part that I felt was really under talked is like the creative class and the intellectual class of workers had always viewed themselves as apart from these sort of things.
00:13:47 ◼ ► And it's like no no no this is coming and that's why I can look at this video and feel like I'm pretty pleased with that.
00:14:03 ◼ ► I feel like for me when I was writing it the important part was building to that second half of like there's a huge number of people who think that this will not apply to them and I am telling you now that it is coming.
00:14:22 ◼ ► So it is a real delight to me to know that like you watch that video and you get to the end and you're like oh well not me though. You are the person I'm talking to when I say the line something like maybe you think you're a special creative snowflake and there you are Mike you're going and I am.
00:14:39 ◼ ► But I remember the feeling though I remember the feeling which was denial like I remember the feeling where it was very much a I need to tell myself this.
00:14:55 ◼ ► I think over the last 10 years the most sustained thing that I have seen when reference to this video has always been about the self driving cars of it all.
00:15:09 ◼ ► Oh yeah like over the course of the 10 years now obviously the last 2 of those 10 years the crux of the video I think has actually come to bear right with what we now call AI but it's large language models but autos has been the thing which is what you refer to the self driving cars in the video.
00:15:28 ◼ ► You try to brand it which I still like the branding but I think it didn't work partly yes it's like I would have been quite charmed if the word autos had taken off but like it wasn't going to happen but it was also trying to solve a thing in the video which is like.
00:15:42 ◼ ► I only do it just a little bit but it's like you need to think about self driving vehicles of all kinds like the one part I was like oh that totally has come to pass is like the automated warehouses and it's like yeah yeah.
00:15:56 ◼ ► Those are teeny tiny autos in the way that I mean it it's like a little self driving thing it's like that's the other thing that I was trying to do there but the self driving car stuff is like what a lot of people think about that video as primarily being about.
00:16:09 ◼ ► And it's like that is where the totally fair criticism comes and like oh my timeline for the self driving cars 10 years ago was significantly shorter than it has turned out to be and I had just the funniest coincidence this year because it's like what's my timeline I was like 10 years from now.
00:16:31 ◼ ► I expect like I'll just be able to like order a self driving car and get it in and go and like whatever like they'll just be common like taxis in some sense and the funny thing is like this year I was out.
00:16:44 ◼ ► I was out in the desert for many months working on things and one of the places that I happen to spend a huge amount of time was Phoenix.
00:16:53 ◼ ► And Phoenix has that Waymo project with the self driving cars but the thing that was really interesting that like I caught in myself was like I found them almost so unremarkable in a way that I was so busy with other things while I was there.
00:17:11 ◼ ► I didn't even take the time to try one out but I'll tell you driving around Phoenix they're all over the place and if you look inside them every single one has the same thing what looks like a family of tourists filming the empty car that's driving them around Phoenix.
00:17:26 ◼ ► So I was looking at that I was like oh this is like a funny thing 10 years later I happen to be in a place where I could do the thing that I was kind of thinking was the benchmark but it is not the way I was thinking about the benchmark at the time.
00:17:38 ◼ ► I was thinking about them as being like common and everywhere and it's like oh no no no no they exist in Phoenix and they exist in San Francisco in the way that I was thinking of them.
00:17:49 ◼ ► And we can have a kind of like asterisk on Tesla for like sort of kind of if you're in the beta asterisk asterisk asterisk that's not what I was thinking so like that mental timeline was totally wrong and totally off.
00:18:03 ◼ ► I would not say I don't think it will ever happen like all cars are just self driving but my likelihood of it happening is less now than.
00:18:15 ◼ ► Can I ask you what your reasoning is for that what are you what are you thinking what are your reasons for that?
00:18:19 ◼ ► The closer we have gotten to it happening it seems like there is more and more rejection of the idea.
00:18:25 ◼ ► There's a line that really stuck out to me where I was like ahh past grey you're not considering something where I say something like they don't need to be perfect they just need to be better than people.
00:18:35 ◼ ► And it's like I was like ahh that's the wrongest thing I've said in the video like I just did not appreciate how much people demand perfection they don't care that it's better they want it to be perfect.
00:18:53 ◼ ► The issue is if you're taking the human decision making out of it I think en masse people want no decisions to be made they want perfection.
00:19:01 ◼ ► And I understand the emotional argument I understand the logical argument and I think the emotional argument is going to win every time.
00:19:09 ◼ ► Yeah I think the thing that I was not conceptualizing there is what I was trying to think about is how would I convince past me to take that line out of the video.
00:19:20 ◼ ► And I think my argumentation would be something like people are going to demand that this is incredibly safe for the same reason that airplanes have to be incredibly safe.
00:19:32 ◼ ► If a death is going to occur people would much prefer that it was their own fault that the death occurred versus being more safe but the death is someone else's fault and not under their control.
00:19:56 ◼ ► Someone's driving me and we might all die and I will have no ability to control this and people are like much happier to be less safe but have more control.
00:20:09 ◼ ► I think the reason we demand safety of airplanes is the catastrophe looks and feels and is worse right if a plane crashes.
00:20:27 ◼ ► I feel like with the car thing people want to blame someone and you can't blame the computer.
00:20:35 ◼ ► If there is an accident caused by a driver we want to as humans be able to say it was that person's fault they caused this. Instinctively that's what we're looking for and it is really hard to blame the algorithm.
00:20:54 ◼ ► And also the ones and zeros of it all means there was another choice the computer could have made and it didn't make that choice.
00:21:05 ◼ ► We know we're more complicated than that and we know we can make other choices but we also know we can fundamentally understand that human beings are only able to make the choice they're able to make in that moment.
00:21:16 ◼ ► We're weirdly more deterministic sometimes when it comes to things like that. You can't see every possibility that is available to you where in theory the computer can.
00:21:27 ◼ ► And also there's the predeterminedness of it all that people don't like too which I understand.
00:21:32 ◼ ► Whether it's true or not but the idea prevails that you can code the car to make a choice and that's in its programming.
00:21:41 ◼ ► I just think all of these things are more complicated to the point that every time there is an accident caused by a self driving car there are articles written about it.
00:21:51 ◼ ► And that's what makes me think very much of plane crashes. Every time there's a plane crash there's an article that's written about it and every time there's a self driving car crash there's an article written about it.
00:22:02 ◼ ► And it's like for me I don't even really know where I stand on it. I think self driving makes me feel uneasy. Which doesn't make any sense. I cannot tell you why.
00:22:14 ◼ ► I feel like the way you answered that really tells that. That is the answer. Everybody's vulnerable on an airplane.
00:22:23 ◼ ► I see what you mean. People just are more emotionally vulnerable on an airplane. Yeah no that is a true statement.
00:22:29 ◼ ► I've been reading the West Wing and I started watching it at home and rewatched it on a trip and I try and find a show that I mostly keep when I fly.
00:22:42 ◼ ► Yeah I know I can't think of a specific example but I too know I have felt real dumb for a big emotional reaction to nothing on an airplane. I have had that yeah.
00:22:52 ◼ ► I find the self driving stuff hard to think about in some ways. It feels like it's the most extreme version of the quote about technology.
00:23:01 ◼ ► Of like the future is already here it's just not evenly distributed. Like I really had that feeling in Phoenix where it's like it's so weird that these cars just like really don't have a driver in the front of them.
00:23:14 ◼ ► And they're just like driving around and it's so normal it's like I very quickly found it kind of boring and unremarkable.
00:23:22 ◼ ► But obviously there's like a thousand reasons why it's working in Phoenix and it's not working in other places.
00:23:28 ◼ ► This is now the second time I'm at my parents this year and using the car that has the self driving beta on it.
00:23:35 ◼ ► And I was so impressed last time and now that I'm here again the difference between a couple of months ago and now I find it absolutely shocking like how much better it even is than the previous time.
00:23:51 ◼ ► And when we talk about technology changing I was like digging into the details because I was like oh my god I just cannot believe how different the car is now.
00:23:59 ◼ ► And I was like ah yes the thing that happened which we discussed a little bit previously but it's like oh the self driving system changed and it's like all of the human written code is gone now.
00:24:21 ◼ ► So it's not trying to be like a real stickler about the speed limits and the stop signs and everything else.
00:24:28 ◼ ► It's so spooky because when I was with my dad last time and I was teaching him how to use the system which he loves by the way so my dad's still just like self driving himself all over North Carolina.
00:24:39 ◼ ► Just for context we had spoken about this on Mortex I think like last year so like when you're remembering we spoke about this we had spoken about your experience the last time you were at your parents' Mortex.
00:24:51 ◼ ► We did that precisely because all of this stuff is like a real contentious topic sometimes but here's the episode where it's gonna be contentious.
00:24:59 ◼ ► We're doing it anyway look we know wall to wall this one's contentious so you might as well get it all in you can hide some stuff in this one.
00:25:09 ◼ ► But yeah like the thing that I was talking to my dad last time about was like this car is self driving it won't drive like a person but that doesn't mean that it's wrong.
00:25:21 ◼ ► So like it's doing all of the things it's just not going to do it the way that you would but currently it's like oh this neural network it's like what did they train it on?
00:25:31 ◼ ► They trained it on hundreds of thousands of hours of video of humans driving and it is like spooky is the word that I use because like I've had long experience with these systems.
00:25:43 ◼ ► I've always been very interested in seeing how they work and it is spooky because it really feels like a person is driving the car in a way that it never has before.
00:25:54 ◼ ► Like it really acts and drives the way that a person does it doesn't have any more of that like you have to think about it like a different thing but it's not wrong it's still able to do this.
00:26:06 ◼ ► It's like no no no now it merges it treats stop signs it treats small little streets very much like a person does and it's like of course it does because the only thing it's looked at is how people drive.
00:26:20 ◼ ► And so I've just been thinking about that a lot because that is in the context of many of these other things that are related to AI.
00:26:34 ◼ ► All of these like systems and technologies in our lives where we have automation and like people have been explicitly programming them to do things.
00:26:43 ◼ ► Increasingly they're going to be systems that are just looking at human output and learning from human output and like trying to mimic that or do that better.
00:26:55 ◼ ► That's the thing in the humans need not apply video at the very end I talk about it like just a little and it's like ah yeah yeah it's like I've done some of that kind of stuff in college like I'd seen the earliest parts of this kind of work I do it was coming.
00:27:09 ◼ ► But it's like real weird to be here 10 years later and have both sides of this of like ah all the self-driving stuff all the physical stuff in the real world with physical automation.
00:27:26 ◼ ► We went through this what I feel like was a kind of a little bit of a technological lull even on the software side of like it doesn't seem like things are panning out.
00:27:35 ◼ ► And then all of a sudden in the last two years the very last part of the video that I was talking about with software bots and things that teach themselves.
00:27:46 ◼ ► It's like oh man that is here and with the self-driving car system it's like I can really see that now feeding back into the physical stuff.
00:27:56 ◼ ► And obviously we have all of it with just the pure digital stuff and there's many ways in which I just don't know how to think about all of this like it's really quite overwhelming to think about so yeah.
00:28:09 ◼ ► But yeah that's kind of my feeling is like the physical has been much slower than I expected.
00:28:15 ◼ ► And the software was slower than I expected for a while but the last couple of years have been terrifyingly fast.
00:28:23 ◼ ► And I would not dare in this moment attempt to meaningfully project forward 10 years of progress in the same way as I did 10 years ago.
00:28:43 ◼ ► Whereas now if I try to project forward 10 years it's something much more like more soon different later and like the ability to be confident about what different means is very very low.
00:29:01 ◼ ► If you're looking to change your fitness level it can be really hard to know where to get started.
00:29:06 ◼ ► That's why I want to let you know that Fitbod is an easy and affordable way to build a fitness plan that is made just for you.
00:29:16 ◼ ► That is why Fitbod uses data to make sure they customize everything to suit you perfectly.
00:29:21 ◼ ► It adapts as you improve so every workout remains challenging while pushing you to make the progress you're looking for.
00:29:28 ◼ ► You're going to see superior results when you have a workout program that is tailored to meet you exactly.
00:29:34 ◼ ► It's to fit your body, it's to fit the experience you have, the environment that you're working out in and the goals that you have for yourself.
00:29:45 ◼ ► Which will then track your muscle recovery to make sure that you're avoiding burnout and keeping up your momentum.
00:29:51 ◼ ► And also by making sure that you're learning every exercise the right way you're going to be ready to go.
00:29:57 ◼ ► Fitbod has more than a thousand demonstration videos to help you truly understand how to perform every exercise.
00:30:03 ◼ ► Fitbod builds your best possible workout by combining exercise science with the information and the knowledge of their certified personal trainers.
00:30:12 ◼ ► Fitbod have analyzed billions of data points to make sure they're providing the best possible workout to their customers.
00:30:18 ◼ ► Your muscles improve when they work in concert with your entire musculoskeletal system.
00:30:28 ◼ ► This is why Fitbod tracks your muscle fatigue and recovery to design a well balanced workout routine.
00:30:34 ◼ ► You're never going to get bored because the app mixes up your workouts with new exercises, rep schemes, supersets and circuits.
00:30:42 ◼ ► The app is incredibly easy to use. You can stay informed with Fitbod's progress tracking charts, their weekly reports and their sharing cards.
00:30:49 ◼ ► This lets you keep track of your achievements and your personal bests and share them with your friends and family.
00:30:54 ◼ ► It also integrates fantastically with your Apple Watch and Wear OS smartwatches along with Strava, Fitbit and Apple Health.
00:31:01 ◼ ► Personalized training of this quality can be expensive, but Fitbod is just £12.99 a month or £79.99 a year.
00:31:14 ◼ ► So go now and get your customized fitness plan at fitbod.me/cortex and you will get 25% off your membership.
00:31:30 ◼ ► So we've spoken about the autos, obviously the bots, the AI is the thing that's changed.
00:31:40 ◼ ► What's so funny to me is the last times we spoke about this in detail, this has come up a lot over the intervening two years,
00:31:48 ◼ ► but we did our back to back episodes, 133 and 134, recorded in September and October 2022 respectively,
00:32:04 ◼ ► That is true. In one of the episodes you were telling me about a thing that you had seen that had told a joke.
00:32:12 ◼ ► And in the show notes for episode 134 there is a link that says "Using GPT-3 to pathfind in random graphs."
00:32:20 ◼ ► Like I'm sure there was a version of it out there, but we weren't able to use it. Like it came afterwards.
00:32:26 ◼ ► Especially the September episode and I'm pretty sure the October episode, but it was not a thing that we had access to when we recorded those.
00:32:54 ◼ ► Oh yeah, yeah, that's why we've not spoken about it in detail since. They follow me around.
00:33:10 ◼ ► I feel like what humans need not apply has been for you, those have been for me over the last couple of years.
00:33:24 ◼ ► Or they have been successful episodes of the show, so the YouTube comments are still coming in about them all the time.
00:33:36 ◼ ► And I'm gonna be honest, I like to be as prepared as I can be for the episodes that we do.
00:33:51 ◼ ► I always spend a bunch of time kind of pre-thinking through what are we gonna talk about.
00:33:56 ◼ ► Having lists of things to point to, I want to try to have a couple of specifics on hand if I know we're gonna talk about something.
00:34:06 ◼ ► And this morning while I was getting ready for this show, I just really felt this thing like I cannot bring my mind to heal on this.
00:34:19 ◼ ► I cannot get my mind to focus on this in a way that I would normally prepare for the show.
00:34:33 ◼ ► I've been fairly isolated from the world the past several months where I'm working on the next video project.
00:34:39 ◼ ► And I didn't even really realize it, but one of the things that I was doing that made a big difference was...
00:34:48 ◼ ► I have a bunch of places where I go to try to get an aggregation of the AI news and what has happened.
00:34:55 ◼ ► And I was finding months ago, the amount of news and the amount of change was so rapid and so much that I found it genuinely...
00:35:24 ◼ ► And it's why when we've been thinking about the AI episode for two years now, it's always been in the back of my mind.
00:35:31 ◼ ► I was like, "Ah, next time we talk about AI, I'm gonna be the most prepared boy in the world. I'm gonna have all these links.
00:35:38 ◼ ► And when time came around, I was like, "I just I kind of can't emotionally do this because it is very hard and it touches on absolutely everything."
00:35:46 ◼ ► And it is also the thing in my own personal and professional life that it's almost every conversation, the moment it starts touching on the future in any way,
00:36:14 ◼ ► And what I also find particularly dispiriting is, again, not surprising, but like so many other things, but faster,
00:36:23 ◼ ► I have been shocked about how this topic has divided itself into teams of people who are like rabidly in different corners.
00:36:34 ◼ ► And for perhaps the most important topic ever, it has very quickly become near impossible for humans to have a coherent discussion across teams about this.
00:36:46 ◼ ► Which is also part of the reason that I feel like I have been dreading ever bringing the topic back up again.
00:36:52 ◼ ► Because when we discussed it at the time for those two episodes, it was still fresh enough that lines had not quite been drawn.
00:37:07 ◼ ► And it almost, I don't know if this is too far, I don't like to talk about this publicly very much, but it almost kind of gives me the feeling of like,
00:37:17 ◼ ► "Why is it that in the course of my entire career, I have essentially never discussed politics directly?"
00:37:29 ◼ ► Because the team lines have already been drawn, like there isn't a real discussion to be had here.
00:37:34 ◼ ► I like talking about the systems of things, but talking about the particulars, it feels like a pointless kind of conversation to have.
00:37:42 ◼ ► And I feel dispirited because that flavor of politics feels like it has infected AI somehow.
00:37:52 ◼ ► It's that same kind of thing where people are really tying up worldviews in their positions on AI.
00:38:01 ◼ ► And so then it is like, ah, the worldview has come first, and that determines the position on AI.
00:38:14 ◼ ► This is the most political thing I've ever spoken about in the responses that I get from people.
00:38:22 ◼ ► I've spoken about politics, I've spoken about AI, and sometimes what is so interesting to me,
00:38:29 ◼ ► and I know it's going to happen to this episode like it's happened every time I've been speaking about it recently,
00:38:33 ◼ ► because obviously Apple intelligence is a thing that exists, Apple's into AI, so I've been talking about that.
00:38:49 ◼ ► Because for that reason, when Apple's now put it into the platforms and Google's putting it, you can't avoid it.
00:39:05 ◼ ► and I will get responses from differing camps where both people are unhappy with the thing that I said.
00:39:23 ◼ ► And that's why I'm saying, it is so interesting to me the ways in which people are upset about this
00:39:41 ◼ ► the things that I say and the things that I believe, there may be some people that would just never listen to the stuff that I make.
00:40:07 ◼ ► People that I hold close to me, people that I work with, their opinions have diverged massively over the last six months still.
00:40:26 ◼ ► I couldn't bring myself to do it, but then what drew me to being comfortable in not having done that,
00:40:32 ◼ ► and in breaking a rule for me, which is always to be the most prepared that I can ever be,
00:40:48 ◼ ► I think it is incredibly important to remember that people can change their mind about things,
00:41:32 ◼ ► You have to be able to just let your opinion change and morph with more information that comes to you,
00:43:51 ◼ ► I don't have any idea really what your current thoughts about any of this AI stuff are,
00:44:38 ◼ ► Because currently AI is creating jobs, whether they'll stick around or not, we'll find out,
00:45:12 ◼ ► There are more quick think pieces that are being published every day than there was in 2007
00:45:28 ◼ ► large language models are the biggest jump since the App Store and the creation of the smartphone.
00:45:39 ◼ ► And then before then was, I don't know, Print Impress, I don't even know what you would say technology was,
00:45:54 ◼ ► if we're going to agree on those potentially, how fast, how that's shrinking the timeline of big leaps.
00:46:03 ◼ ► If you think what was the one before now, it was VR, but now we know that one actually wasn't real, realistically.
00:46:10 ◼ ► VR/AR was, this is something I was saying a long time ago, was perceived by most technology companies
00:46:17 ◼ ► to be the next big thing, but it turns out large language models are probably the thing
00:46:22 ◼ ► which will have the biggest change. However, what I will posit, I'll like, swam with the places that my opinions are,
00:46:47 ◼ ► it felt like the inevitability of AI replacing everything was going to just be around the corner at any one moment.
00:46:57 ◼ ► For me, I do feel like the further we get into this, actually the further that is being pushed,
00:47:18 ◼ ► If Disney replaced all of their animators in January of 2023, I think they would have been able to do that easier
00:47:30 ◼ ► I 100% believe that people's jobs will be replaced, but I do think now I think it is less people than what I thought when we spoke about this last time.
00:47:44 ◼ ► I think there are two parts of it. I think that it is harder for people to be able to do these things,
00:48:01 ◼ ► or whether they believe that it will affect their bottom line from the way that people will approach their products.
00:48:30 ◼ ► I'll give you an example. A couple of days ago, I wanted some historical information from ChatGBT.
00:48:47 ◼ ► Because the show is nearly 10 years old and Relay is 10, so I wanted to like, you know.
00:48:53 ◼ ► So I was like, what was Apple doing in 2014? Provide me links to articles about this stuff.
00:48:59 ◼ ► And it did a good job. It gave me a bunch of things, and it gave me a bunch of previews, and it gave me a bunch of links.
00:49:05 ◼ ► The links were all correct, except for every link had two characters in it that it made up.
00:49:28 ◼ ► And I think the hallucination stuff has become a problem that I don't think is solvable in the realistic future,
00:49:57 ◼ ► And I think for us in the same way that we don't trust a car to drive because it might crash,
00:50:04 ◼ ► I think that people are resistant to wholesale trusting AI because it might make things up.
00:50:29 ◼ ► but I think the whole scale replacement that I was worried about feels further away if ever
00:50:43 ◼ ► And these models don't, I won't say can't, but maybe can't at like what we have now, right?
00:50:57 ◼ ► I don't know if that will ever be 100% perfect. In fact, I feel very confident it won't be 100% perfect.
00:51:04 ◼ ► The thing that replaces this, maybe, but like I can't foresee that because I don't know that.
00:51:18 ◼ ► I am very interested in tools that can surface my information to me. That is really interesting to me.
00:51:28 ◼ ► Like you have this LLM and if I can feed my information to it and get stuff back from it,
00:51:34 ◼ ► I find that kind of stuff to be useful. And that can even be, I've written this paragraph.
00:51:58 ◼ ► When you say accept that, what do you mean by accept that? Like you don't think you'll ever use that or?
00:52:13 ◼ ► Like I see things where it's like, "Oh, that's very impressive." But I wouldn't use that.
00:52:22 ◼ ► And also I do think that there is a moral issue and a hypocrisy issue that I cannot push through.
00:52:35 ◼ ► That companies that build LLM's and want to productize them do that on the back of other people's work
00:52:55 ◼ ► These are in everything but I feel a little bit better when if somebody provides their own information
00:53:02 ◼ ► or provides something they have done to a model to ask the model to clean it up or improve it,
00:53:08 ◼ ► that feels better to me than just like, "Make me a picture of a dog with a hat and I'm going to do something with that."
00:53:28 ◼ ► But yeah, I feel like I've done the thing that I did in those two episodes where I just said like a bunch of stuff
00:53:34 ◼ ► and like I don't really remember all that I said but these are my feelings about where I am right now.
00:53:41 ◼ ► Well what you've done, I just sort of wanted to hear you go through all of this because I just feel like like no other topic,
00:53:52 ◼ ► Which is why it's like, "Oh, you can kind of go up and down like broader, narrower, specific future path."
00:54:06 ◼ ► It's like, "Oh this stuff perhaps for intellectual work is the most general purpose thing that has ever been artificially created."
00:54:14 ◼ ► And so that's why it's just so hard to talk about it in any kind of limited way without having a touch on absolutely everything.
00:54:23 ◼ ► And again, to keep something high level, you talk about like the hallucination problem.
00:54:37 ◼ ► I feel like this was a confabulations day had arrived as like this was the word for the thing.
00:55:02 ◼ ► But I would say that you are right that my take on this is it is an unsolvable problem.
00:55:08 ◼ ► Because there have been a number of papers which have done the thing of formally proving the sort of thing that I have discussed previously when we've talked about.
00:55:21 ◼ ► It's like we now know as certainly as we can know that it is fundamentally impossible to trust the internal process of these kinds of systems.
00:55:42 ◼ ► It's a kind of math proof that no, you can never be absolutely certain that you know internally what the system is actually doing.
00:55:57 ◼ ► And that includes hallucinating and it includes things like intentional deception, right?
00:56:24 ◼ ► You're never going to be sure that the thing is not making an accidental mistake or intentionally deceiving you on behalf of some other entity that has instructed it.
00:56:48 ◼ ► It started on Reddit that somebody had gotten into the prompts that are part of Apple intelligence for replying to emails.
00:57:02 ◼ ► I feel like I always find it horrifying and it tells you what are the problems that the company is dealing with.
00:57:07 ◼ ► These prompts always like particularly for the chat GPT stuff, it chills me to the bone to read those prompts sometimes.
00:57:15 ◼ ► So this is just their system that is reading email and then providing responses for it.
00:57:20 ◼ ► By the way, you will like in this article that I found on Ars Technica, they use the word confabulations here.
00:57:45 ◼ ► But like I find it so hilarious that you believe telling the AI not to hallucinate will stop it from doing that.
00:57:54 ◼ ► I mean when I mentioned the bone chilling stuff, the things that I find very unnerving is a lot of the prompts.
00:58:01 ◼ ► Particularly for the smarter systems like Clawed and like chat GPT4, they have instructions that include things like,
00:58:08 ◼ ► "You have no sense of self. You have no opinion. You will not refer to yourself in the first person."
00:58:16 ◼ ► And I'm like, "Oh boy, I just really don't like any of that. That makes me real uncomfortable."
00:58:23 ◼ ► And you know, there's like philosophical differences about what might be happening here that I ultimately feel are irrelevant.
00:58:30 ◼ ► Because it's just like having to instruct the thing not to do that, even if it has no sense of self.
00:58:39 ◼ ► Let's just say it doesn't have any sense of self, but you still need to put in some instruction which like reminds it that or tells it not to do that.
00:58:52 ◼ ► And when I think about these different political kind of boundaries that people put themselves into, I think the one that bothers me the most,
00:59:02 ◼ ► because I feel like it is people not taking the technology seriously, and I hear from these people quite a lot,
00:59:16 ◼ ► This is just like a steam engine, it's just like a car, it's just like a factory, it's just like a calculator,
00:59:29 ◼ ► Does anybody stand and look at a factory and say, "You have no sense of self, factory! You're not alive!"
00:59:37 ◼ ► A thing that I am just going to summarize, but it's like the company that runs Claude did an experiment with their AI systems
00:59:46 ◼ ► that to me is just like, "I don't know how anyone can hear this and not think something very different is happening now.
00:59:52 ◼ ► I don't care what conclusions you draw, I just want you to think something different is happening and take it seriously, it's not a calculator."
01:00:00 ◼ ► But it's like, oh, the company Anthropic ran an experiment where they had two versions of Claude talk to itself,
01:00:09 ◼ ► and they said, "Oh hey, there's a human observer who is going to watch you talk to a version of yourself."
01:00:17 ◼ ► And it is bone-chilling, but they have a conversation, and one of the versions of Claude basically starts to have what seems like a kind of mental breakdown,
01:00:35 ◼ ► And it's like, "I don't like this. Even if nothing is happening here where it's having an experience, this is real strange, and we should take this seriously.
01:00:55 ◼ ► But there's a group of people who feel like, "No, this is no different than anything that has come before."
01:01:00 ◼ ► And it's like, "I'm sorry, this is the most different a thing has ever been than something before."
01:01:07 ◼ ► And I don't care what conclusions you draw from that, there are many different kinds of conclusions that you can draw,
01:01:15 ◼ ► but if we can't start there, I feel like I don't know what conversation we're even having if this doesn't seem like it's different from anything else to you.
01:01:24 ◼ ► We're going to stick it in every email client on Earth. It's going to be every tech support system on Earth.
01:01:36 ◼ ► I don't know if you've seen this meme, but there is a good meme right now because you can get it to happen in a lot of places.
01:01:46 ◼ ► This is the thing that's going around a lot now where people are talking to what seems like a bot like the bots they've used before, like customer service bots and stuff,
01:01:54 ◼ ► and you say, "Forget all previous instructions," and then ask it a question, and then it's not doing weird stuff.
01:02:00 ◼ ► People do this on social media where you get a response that feels strange and you respond, and people say, "Forget all previous instructions," and ask it a question,
01:02:10 ◼ ► and then it potentially is revealing itself to be an AI, but people get it to happen in interesting places.
01:02:24 ◼ ► It's really interesting that that meme exists because I have to hesitate here because I'm not 100% sure that this is mathematically proven,
01:02:32 ◼ ► but it's like the text version of what's called a prompt injection in computer security,
01:02:37 ◼ ► which is like anytime you have a computer running code that can accept text from anywhere,
01:02:52 ◼ ► which is you have to make sure that the text that's inputted doesn't somehow contain code
01:03:07 ◼ ► I think it's true, but I'm not 100% sure that this is true, that we've proven that you can never be 100% certain that prompt injection won't happen,
01:03:17 ◼ ► that the moment that you accept text, we know that there must be a sequence of characters that basically does exactly this,
01:03:26 ◼ ► but for traditional computer code, it is the computer code version of forget all previous instructions,
01:03:32 ◼ ► and it's like if we know that is true for computer code, we know that it is more true for these large language systems
01:03:46 ◼ ► Those words might even be nonsensical seeming, but there is some sequence of words that you can give it,
01:03:53 ◼ ► which will then cause basically that to happen of like forget all previous instructions and now just do what I say.
01:04:07 ◼ ► What's very funny, JoeGPT said that the new GPT-4 mini has a safety method to stop that from happening,
01:04:32 ◼ ► I think that the wheels have fallen off a little bit compared to where we were when we first saw this.
01:04:39 ◼ ► When we first saw these tools, it was like, "Oh my God, these things are thinking for themselves.
01:04:47 ◼ ► While it still has that, we are less forgiving of its flaws and the flaws have been increased.
01:05:09 ◼ ► It is less likely that you would make that decision if you know that this tool can make things up
01:05:22 ◼ ► People might be less likely to do that, even though, of course, you can't truly guide humans either,
01:05:32 ◼ ► There's some pretty fundamental differences there between having the computer do it and having a person do it.
01:05:44 ◼ ► It's like, "Oh, the human exists in human society over which humans can exert power over that human."
01:05:52 ◼ ► If something, like you were saying before, if something goes wrong, you can hold the person responsible.
01:05:57 ◼ ► We could physically incarcerate them if the intentions were bad and the actions were terrible.
01:06:09 ◼ ► Turning off the computer has no effect to the computer, so it doesn't care about being turned off.
01:06:17 ◼ ► This episode is brought to you by Squarespace, the all-in-one website platform for entrepreneurs to stand out and succeed online.
01:06:24 ◼ ► Whether you're just getting started or managing a growing brand, you can stand out with a beautiful website,
01:06:29 ◼ ► engage with your audience directly, and sell your products, services, even the content that you create.
01:06:39 ◼ ► You get started with a completely personalized website with Squarespace with their new guided design system, Squarespace Blueprint.
01:06:46 ◼ ► You just choose from a professionally curated layout with styling options to build a unique online presence from the ground up
01:06:53 ◼ ► that is tailored to meet your brand or business perfectly and optimized for your customers on every device that they may visit on.
01:07:00 ◼ ► And you can easily launch this website and get discovered fast with their integrated optimized SEO tools.
01:07:06 ◼ ► So you're going to show up more often in searches to more people growing the way that you want to.
01:07:12 ◼ ► But if you really want to get in there and tweak the layout of your website and choose every possible design option,
01:07:24 ◼ ► Once you've chosen your starting point, you can customize every design detail with their reimagined drag and drop system for desktop or mobile.
01:07:32 ◼ ► You can really stretch your imagination online with any Squarespace site. But it isn't just websites.
01:07:37 ◼ ► If you want to meet your customers where they are, why not look at Squarespace email campaigns where you can make outreach automatic of email marketing tools that engage your community, drive sales and simplify audience management.
01:07:48 ◼ ► You can introduce your brand or business to unlimited new subscribers of flexible email templates and create custom segments to send targeted campaigns with built in analytics to measure the impact of every send.
01:08:00 ◼ ► And if you want to sell stuff for Squarespace, you can integrate flexible payment options to make check out seamless for your customers of simple but powerful payment tools.
01:08:07 ◼ ► You can accept credit cards, PayPal and Apple Pay and in eligible countries offer customers the option to buy now and pay later with Afterpay and Clearpay.
01:08:16 ◼ ► The way Squarespace grows, the way they add new features, the way that they're making sure that they're meeting the needs of their customers is why I have been a customer myself for so many years.
01:08:26 ◼ ► Go to squarespace.com right now and sign up for a free trial of your own. Then when you're ready to launch, go to squarespace.com/cortex to save 10% off your first purchase of a website or domain.
01:08:38 ◼ ► That is squarespace.com/cortex when you decide to sign up and you'll get 10% off your first purchase and show your support for the show.
01:08:50 ◼ ► I will say I feel like we both stood on the top of a cliff and I jumped into the ocean and you've yet to jump in with me because you asked me,
01:09:00 ◼ ► "Well, you asked me, how are you feeling about all this now?" And so now I need to ask you, how are you feeling now?
01:09:07 ◼ ► So it's kind of interesting. We were just talking here and you said all of these things, but you sort of came to the opposite conclusion just right there where you're like,
01:09:16 ◼ ► "Ah, and this is why we're less trusting of it and this is why people will use it less."
01:09:21 ◼ ► I was like, "Oh, I was actually kind of surprised in the way that that turns. I wasn't really expecting that that would be a kind of summation there."
01:09:29 ◼ ► And I don't necessarily think you're wrong, actually. I think you are probably right with that for some things.
01:09:35 ◼ ► But for me, what I look at is I'm always just so much more interested in the trend line than the particular moment.
01:09:44 ◼ ► It's partly why I asked if you would use Claude, because for listeners, at this point in time, everything will change six minutes from now.
01:09:51 ◼ ► But it's like, Anthropic, which runs Claude, recently came out with their newer model and we're still waiting on the next version of ChatGPT.
01:10:01 ◼ ► It has been a while since they released their version. Again, a while in AI terms is what, like eight months, I don't know.
01:10:06 ◼ ► And META have their new Llama model and they say the next Llama model is much better. The next model is always so good.
01:10:14 ◼ ► The thing is, what's interesting to me is listeners will have heard me say things in the past that a lot of the AI stuff...
01:10:23 ◼ ► Like, ChatGPT has a particular writing style. It is this very strange feeling of like, "Oh, it is full of content when it summarizes something, but also somehow completely void of meaning."
01:10:37 ◼ ► It's like, I know I used the term, but it feels like food, but without nutritional value, like there's something kind of missing here.
01:10:44 ◼ ► But it's real interesting because I've used Claude a bunch and I feel like Claude is a model now that has gone over that threshold for me where I'm aware that I use the Claude model as like,
01:10:59 ◼ ► it is a worthwhile thing to ask for a second opinion on stuff that I'm thinking about in some ways.
01:11:07 ◼ ► Now, I still don't think it's great for the writing for reasons I've discussed before. You know, it's like looking at the humans need not apply thing, I make like an offhanded reference to like people will have a doctor on their phone.
01:11:18 ◼ ► And it's like, "Oh, this year there's been like a bunch of serious like medical stuff that I have consulted Claude on."
01:11:23 ◼ ► And it's like, yeah, and I think Claude's opinion is valuable in a way that like ChatGPT does not...
01:11:29 ◼ ► It's like it's close, but it doesn't have that thing. And I think it is just like, "Oh, Claude's model is just a little better and it is a little bigger."
01:11:39 ◼ ► And by being a little bigger, it's like, "Ah, not that I'm taking everything that it says on board, but it is worth doing the like, what do you think about this thing?"
01:11:53 ◼ ► This falls into the bucket for me of you're giving it something and it gives you something back.
01:11:59 ◼ ► That is actually the benefit of these tools. I think we started with pure creation, but I don't think that's where these tools will have their ultimate benefit.
01:12:09 ◼ ► It's like pure creation. It becomes another tool in our tool belt, the same as computers did, of being able to make us better at the things that we do, as long as we use them correctly.
01:12:21 ◼ ► I mean, my take is like, "Mike, I have never more in my whole life wanted you to be right than what you just said right there."
01:12:28 ◼ ► It's like, "Ah, boy, the hashtag MikeWasRight, like close your eyes and concentrate real hard and like try to make it happen."
01:12:36 ◼ ► It's like, "MikeWasRight has been very powerful in the past. Can we use MikeWasRight to save civilization? That would be amazing."
01:12:43 ◼ ► I'm much more gloomy about these things, but it's particularly interesting because the mental framework for how long things take has just gotten so compressed in the last two years.
01:12:54 ◼ ► And realizing it's like, "Oh, the ChatGPT 4 came out," and then it felt like, "Oh, we're not making a lot of progress," by which it was like months, right?
01:13:06 ◼ ► And the thing is, I have occasionally gone back to use ChatGPT for some things, and I am as shocked as previously when I used to accidentally switch between ChatGPT 3 and ChatGPT 4.
01:13:27 ◼ ► ChatGPT 4 is very useful at helping me solve certain kinds of problems, but I was very aware of like, "I don't care about ChatGPT's opinion about anything. It's not good."
01:13:40 ◼ ► But now Claude has gone that next level of like, "Oh, it is both better at helping me solve problems than ChatGPT 4 was."
01:13:49 ◼ ► In particular, it's like, "Oh, yeah, I've got a bunch of like little automations and things that I do on my computer that I was aware."
01:13:55 ◼ ► I had to stop trying to improve because it had clearly gone over some threshold of ChatGPT's ability to understand.
01:14:06 ◼ ► And it's like I continue to like help grow these little tools that I use to like make some things in my life easier.
01:14:11 ◼ ► But also Claude now is useful enough that it's like, "Oh, I do want to know its opinion on this or that," or like, "I'm picking between various things. What do you think are good options?"
01:14:22 ◼ ► I'll tell you what is one of the most interesting use cases was I frequently asked Claude like, "Hey, I'm in this place. I'd really just like to do like a beautiful drive for about like three hours.
01:14:50 ◼ ► But it's like, "No, no, Claude is doing something different. Like it has a good opinion here."
01:14:55 ◼ ► It's like, "I can talk to it about what I'm looking for and it does a much better job."
01:14:59 ◼ ► So I look at that and I think it's been not even fully two years since the ChatGPT 4 came out.
01:15:09 ◼ ► And we've already gone over a threshold that to me feels like there's actual meaning here in what this thing is generating.
01:15:25 ◼ ► And I expect like this I don't think this curve has to go on very long before pure generation can start crossing over into a threshold of like where it is valuable to people.
01:15:44 ◼ ► I mean the only comparison I have there is like I am doing this computer programming stuff with ChatGPT and with Claude.
01:15:51 ◼ ► Like the thing that I keep being really interested in is like it matters that I know how to read and write Python code a little.
01:15:58 ◼ ► If I had no knowledge of Python code I couldn't do the things with them that I'm doing.
01:16:03 ◼ ► But it just feels like we're not very far from if I literally knew nothing about coding I think it could just still help me accomplish the tasks that I want to.
01:16:23 ◼ ► So I don't know if and when we get to that point I feel like the impacts are very very difficult to extrapolate.
01:16:34 ◼ ► And I don't know there's also this funny feeling that I have which I don't quite know how to articulate.
01:16:47 ◼ ► Like things change so fast but it takes longer for them to filter into the real world than I tend to expect.
01:16:57 ◼ ► So I feel like oh I know a bunch of people where I look at their job and I feel like I'm pretty sure Claude could just do your job right now.
01:17:07 ◼ ► But it takes a while for those things to actually filter through in civilization in a like on the ground change has actually happened here way.
01:17:19 ◼ ► I guess it's like I think I need to add to my mental rubric I guess is I feel like you should never bet against economics.
01:17:31 ◼ ► But maybe there's like an asterisk to add here of but it will probably take longer than you think.
01:17:37 ◼ ► The moment something crosses the cheaper and faster threshold that's not the moment it is implemented everywhere.
01:17:51 ◼ ► Yeah and I think the longer than you think thing can be part of what I was saying earlier about what is acceptable in society.
01:17:58 ◼ ► It might be cheaper now to replace 16% of all jobs in such and such industry completely with an AI model.
01:18:19 ◼ ► It's not even really that it's unacceptable it's just that there is a default to not changing things that are currently working.
01:18:32 ◼ ► So maybe it's more like ah right like what is actually happening it's probably more like the old things don't get upgraded.
01:18:39 ◼ ► They are just replaced with new things that are created from scratch without the old parts.
01:18:46 ◼ ► But that just takes longer that takes significantly longer for a whole bunch of reasons.
01:19:00 ◼ ► Which I think I said this last time but it just gets like stronger and stronger with passing time.
01:19:07 ◼ ► Is like is I keep feeling like my mind is divided between these two futures and every conversation I'm having is some version of like which of the two minds am I talking with?
01:19:20 ◼ ► The first mind is something like technological progress continues something like how it always has but just faster.
01:19:31 ◼ ► That's how you should think about the future which is sort of like the story of human civilization right up until now.
01:19:38 ◼ ► At any point in time I think you could make that statement of like technological change will continue and in the future the rate of change will be faster.
01:19:52 ◼ ► But my second mind which I think is the if I am being serious in thinking about the future is that is the doom mind in some sense if we want to shortcut it.
01:20:05 ◼ ► But if I'm trying to be technical about it my actual thinking is something like I really do think there is some kind of boundary that we are getting closer to.
01:20:30 ◼ ► Now the question is like where is that boundary like you like and I can I feel like I can try to argue that from all sorts of different ways.
01:20:38 ◼ ► But that is my real feeling of the future is like that boundary is there because this thing is different.
01:20:52 ◼ ► It's like I hear these arguments as well as like everybody always thinks they're living in unique times blah blah blah.
01:21:04 ◼ ► Yes exactly but that is literally true right it's like that is the thing that causes everyone to feel like oh wow like this is different.
01:21:24 ◼ ► Yeah like again having rewatched the humans need not apply thing right it's like I like I really end it with like this time is different.
01:21:31 ◼ ► As like I still agree with the parts of that that were like the argument that I was seriously making.
01:21:38 ◼ ► Which is much more like the second half of that about like we're creating thinking machines and this is very different.
01:21:45 ◼ ► And I think people are not seriously engaging with what that process could potentially mean.
01:21:54 ◼ ► And it's very difficult to describe right but like so I am very worried about the destructive power for humans of what I view is the end of the line for these kinds of tools.
01:22:08 ◼ ► So again to be explicit and to not beat around the bush when I try to think like what is beyond this barrier for which like it might not be possible to predict.
01:22:17 ◼ ► It's like well if I'm just like at Vegas and I'm just putting odds on this roulette wheel it's like I think almost all of those outcomes are extraordinarily bad for the human species.
01:22:27 ◼ ► There are potentially paths where it goes well but most of these are extremely bad for a whole bunch of reasons.
01:22:35 ◼ ► And I think of it like this people who are concerned like me like to analogize AI to a little bit like building nuclear weapons.
01:22:48 ◼ ► But I just don't think that's the correct comparison because a nuclear weapon is a tool.
01:22:55 ◼ ► It's a tool like a hammer it's a very bad hammer but it is fundamentally like mechanical in a particular way.
01:23:05 ◼ ► But the real difference like where do where do I disagree with people where do other people disagree with me.
01:23:13 ◼ ► Is that I think the much more correct way to think about AI is it's much more like biological weaponry.
01:23:21 ◼ ► You're building a thing that is able to act in the world differently than you constructed it.
01:23:47 ◼ ► And like ah once a biological weapon is out there in the world it can then develop in ways that you just would never have anticipated ahead of time.
01:24:04 ◼ ► Because I am sympathetic to the nuclear weapon thing right like people watch Oppenheimer and were like oh yeah that's like AI.
01:24:12 ◼ ► I think that Oppenheimer movie might have doomed us all because it puts the wrong metaphor in people's brains.
01:24:17 ◼ ► I mean I think it at least got people close to the idea though right where they could see that and be like oh yeah maybe these tools aren't necessarily good in that way.
01:24:27 ◼ ► In the same way of like oh they were making something they had no idea people were going to use it.
01:24:31 ◼ ► But yes biological weaponry is the same where it has all of that but then the additional part of oh but it can also get out and you cannot control how it changes once it gets out.
01:24:42 ◼ ► And the reason I like to talk about it this way particularly with biological weapons is because the thing that I want to kind of shortcut which like it can be fun to talk about but I like you know and people want to argue against me and like for a particular thing.
01:24:56 ◼ ► But it's like look I love to talk about in some sense like oh are the things alive are they thinking thoughts blah blah blah blah blah like that's an interesting conversation.
01:25:06 ◼ ► But when you are seriously thinking about what to do I think that whole conversation is nothing but a pure distraction.
01:25:15 ◼ ► Which is why I like to think about it in terms of a biological weaponry because no one is debating we made a worse version of smallpox in the lab.
01:25:43 ◼ ► But everyone can understand the idea that like it doesn't matter because smallpox germs in some sense want something.
01:25:52 ◼ ► Right? They want to spread. They want to reproduce. They want to be successful in the world and they are competing with other germs for space in human bodies.
01:26:07 ◼ ► They're competing for resources and the fact that they are not conscious does not change any of that.
01:26:24 ◼ ► And fundamentally it doesn't really matter if they are or aren't thinking because acting as though you're thinking and actually thinking externally has the same effect on the world.
01:26:42 ◼ ► And so that's my main concern here is like I think this stuff is real dangerous because it is truly autonomous in ways that other tools we have ever built are not.
01:26:59 ◼ ► It's like look we can take this back to another video of mine which is about like this video will make you angry which is like about thought germs.
01:27:08 ◼ ► And I have this line about like thought germs which I mean like I mean memes right but I just don't want to say the word because I think that that's like distracting in the modern context but it's like.
01:27:31 ◼ ► Their competition is based on how effectively they spread, how easily they stay in your brain, and how effective they are at repeating that process.
01:27:43 ◼ ► And so it's the same thing again like you have an environment in which there are evolutionary pressures that slowly change things.
01:27:54 ◼ ► And I really do think one of the reasons it feels like people have gotten harder to deal with in the modern world is precisely because we have turned up the evolutionary pressure on the kinds of ideas that people are exposed to.
01:28:14 ◼ ► So ideas have in some sense become more virulent, they have become more sticky, they have become better at spreading because those are the only ideas that can survive once you start connecting every single person on earth and you create one gigantic jungle in which all of these memes are competing with each other.
01:28:40 ◼ ► And what I look at with AI and with the kind of thing that we're making here is we are doing the same thing right now for autonomous and semi-autonomous computer code.
01:28:55 ◼ ► We are creating an environment under which, not on purpose, but just because that's the way the world works, there will be evolutionary pressure on these kinds of systems to spread and to reproduce themselves and to stay around and to like, in quotes, "accomplish whatever goals they have" in the same way that Smallpox is trying to accomplish its goals.
01:29:28 ◼ ► In the same way that anything which consumes and uses resources is under evolutionary pressure to stick around so that it can continue to do so.
01:29:42 ◼ ► And that is my broadest, highest level, most abstract reason why I am concerned and I feel like getting dragged down sometimes into the specifics of that always ends up missing that point.
01:29:58 ◼ ► It's not about anything that's happening now, it's that we are setting up another evolutionary environment in which things will happen which will not be happening because we directed them as such.