PodSearch

Cortex

158: Is AI Still Doom? (Humans Need Not Apply – 10 Years Later)

 

00:00:00   We have been threatening for many months to talk about AI again. It's a thing that's been on our list. It's an area we wanted to return to.

00:00:08   And then, I know, a little while ago you said to me, "Hey, do you know that it's going to be 10 years since Humans Need Not Apply was published, coming in August?"

00:00:19   And then it was like, "Well, that's when we'll return to it then I guess, because can't miss that."

00:00:23   I feel like I sealed my own fate with this. We've been threatening to revisit AI, but it feels like, who have we been threatening? Not the audience, but ourselves.

00:00:32   I feel like, yeah, it's like, we'll talk about Humans Need Not Apply 10 years later and all of the rest of it, but I do have to say it's like, boy, this is a topic like no other topic.

00:00:42   It makes me feel kind of like, ill and overwhelmed to talk about. It's just like, oh god, it is all of the everything for all of the future. How do you even begin?

00:00:56   Let's begin by talking about Humans Need Not Apply. So this was a video that you made 10 years ago now.

00:01:03   What was this video to you? What drew you to make this video? Because it was a very different landscape a decade ago to where we are now.

00:01:12   It's interesting. I rewatched it this morning in anticipation of the show and god, it's like, I don't know how long it has been since I've seen it.

00:01:20   Maybe like seven years? I have no idea. It's been a long time because I don't tend to watch the older stuff.

00:01:27   But when I do rewatch the older videos, it does often put me in the place where I was when I was making it.

00:01:35   It's like I'm having PTSD for memories of picking the stock footage. It's like, oh yes, I remember that clip wasn't long enough and that's why I had to reverse it halfway.

00:01:45   I wonder how many people will notice. Spoiler, no one ever notices. Nobody ever cares.

00:01:50   Yeah, it's surprising how much it can take me back, but I think it's because I sort of make these things under such an intense situation and such an intense focus.

00:01:59   But my main motivation for making it at the time was just, it's sort of like when we first talked about AI on this show.

00:02:09   We talked about it when we did because I had this feeling of like, oh, I could like see these things that are around and I just don't feel like people are talking about them fully or like as aware.

00:02:19   And like at the time I made that video, I felt like just this kind of like concept of maybe the automation this time is a different thing was not so much in the public consciousness.

00:02:36   I felt like 10,000 different kinds of conversations have happened about self-driving cars since this point in time.

00:02:43   There have been highs, there have been lows, but I just felt like, oh, I don't think this is being discussed as much as it should be.

00:02:51   And so, yeah, I felt like this is a really big important topic for the future that I felt sort of grim about in some ways.

00:03:01   And that was like a big motivator for why I was working on it.

00:03:06   I don't often feel this way when I'm making videos, but I feel like this is one of the rare ones where it's like I'm making this for part of the public conversation.

00:03:17   Whereas like normally I'm making a video because it's more like I'm interested in the thing and I want to talk about it.

00:03:24   But this one did feel like it was to be part of the public conversation around this topic was much more of the motivation at the time.

00:03:32   Is that why the presentation style was different?

00:03:36   Yes and no. I think using this doc footage and having it be real people is like, yes, I think that's more accessible at the time to a wider audience.

00:03:45   Like I don't think that would really matter now, but 10 years ago I think it did matter a little bit.

00:03:49   But honestly, the main decision was I can't animate a thing that's going to be a long video.

00:03:56   I knew it was like, oh, this is going to be 15 minutes, which at the time was like an insanely long video.

00:04:03   And it was just me doing everything at that point in time.

00:04:06   And I thought, oh, if I also have to animate this in like the normal stick figure way, it will take absolutely forever.

00:04:12   And so I thought like, well, the stock footage, I think it works for a broad audience.

00:04:17   It makes this job significantly easier.

00:04:19   And also, I think it just aligns with the topic better because I want to be able to show a bunch of things.

00:04:26   And then this way I'm not like switching back and forth between animation and the stock footage.

00:04:31   It's like almost entirely stock footage all the way with like a couple of cuts to me at the desk.

00:04:36   So the main decision was really practical, not artistic for that choice.

00:04:41   It's interesting. I've watched this video a couple of times now to prepare for today.

00:04:45   And just watching it, I'm trying to like imagine how you would make it now, how different it would be.

00:04:50   Like the one thing that I came down to is that I would assume you'd probably animate.

00:04:54   I wasn't sure why you had made stock footage, but what you gave is one of the reasons I thought.

00:05:00   I also just thought that maybe you were just going for a different vibe and like maybe you hadn't even then like found your full vibe.

00:05:07   Right. Like maybe that stock footage could have been a vibe for you.

00:05:09   Right.

00:05:10   That was also a time where it's like I was more uncertain about things.

00:05:13   And I can like, I can feel that uncertainty in a couple of spots in the video.

00:05:17   It's like, I didn't quite know what to do here.

00:05:19   I knew that this audio part here wasn't great, but I wasn't entirely sure how to do it better.

00:05:24   Like, yeah, there's just a lot of that.

00:05:26   I'm like, ah, yes, this is still like earlier in the career and it totally shows. It totally shows.

00:05:30   I can hear some cuts, which I know is a thing that we've spoken about before that like the ADR thing, which I hear, you hear, but most people have no idea.

00:05:38   But like I can hear a couple.

00:05:40   It's interesting to make things and publish them online.

00:05:42   And one of the ways it is interesting is you are met with your skill difference in a way that people aren't usually in their work.

00:05:52   But like we can go back to you from 10 years ago. We can go back to me from 10 years ago and you can hear the differences in our ability.

00:06:00   While this video is still is really good, like it is really good.

00:06:04   It's popular for a reason. Just your presentation just is not as good as it is now.

00:06:09   That's actually the main thing that I'm aware of is like that past Grey doesn't fully yet know how to use his voice in this medium.

00:06:17   You sound unsung.

00:06:18   Yeah. And you can kind of see it here.

00:06:20   Like a lot of the earlier videos to me very strongly, like they're still being made with kind of the idea that this is a public presentation.

00:06:30   Like I'm on a stage at these are slides.

00:06:33   I'm not actually sure if I've ever said this before, but for years I was always kind of wondering if there was a direction where this really would be a career that would become like a stage show in some way.

00:06:45   I was legit wondering like is there a version of this where I'm doing like a live presentation.

00:06:51   And so lots of the videos were still kind of framed with this idea of like actually giving it in front of an audience.

00:06:58   Now it's real funny like I use that concept now all the time in the videos of like we have like the theater in which it's taking place in front of.

00:07:05   But what has happened is like but now I know that that isn't what it is.

00:07:09   It's video first and that does help like change the way that I talk about things.

00:07:16   It's like this is never really going to be a presentation and if it's not you can do it in different ways.

00:07:21   But yeah totally there's a number of lines there.

00:07:24   I was like oh past gray like yeah uncertain is a good word for it or it's like tentative in certain spots.

00:07:31   It's just real interesting to be faced with that past version of yourself.

00:07:35   I think particularly in the context of our conversation last time about being better is like ah yes yes I could really see the difference between now and then.

00:07:45   And especially given my life now it is quite comical that given what I am currently doing.

00:07:52   Looking at that video I was like this video is so long and it's like ah past gray.

00:07:59   You have no idea 10 years from now what that current gray will be doing that he thinks is like a long video versus what you think is a long video.

00:08:09   So it's just it's interesting to see those changes.

00:08:12   I actually really like the way that you said it you feel like you didn't know how to use your voice.

00:08:15   You're not emphasizing words and phrases in the same way you don't have the same sense of like presence.

00:08:22   But the thing was is like what we can't do when we watch a video like this is actually take ourselves 10 years back in time as a viewer.

00:08:29   Because you were obviously at that point you stood out right.

00:08:34   That's what I was trying to say also with like the decisions about the stock footage of what like it was a very different landscape at the time.

00:08:41   Like everything on YouTube every piece of media that's produced for anything.

00:08:45   It doesn't exist apart from the context in which it was created.

00:08:51   It's like I actually for the most part I was pretty surprised at how well I think the video holds up.

00:08:57   I feel like there's this clear line of like I think it basically gets better the longer it goes on.

00:09:02   Like the shakier parts or the earlier parts but I was like oh this does hold up pretty well.

00:09:06   I'm not 100% sure but I do feel like this was the most maybe it wasn't the most popular video on the channel.

00:09:13   But I feel like it was like the second most popular for a really long time and I'm not surprised why.

00:09:18   I think it was the most popular until the traffic point.

00:09:21   If my memory serves.

00:09:22   Yeah maybe that's what it was.

00:09:24   It was number one until the simple solution to traffic.

00:09:28   Is that the name of it?

00:09:29   Yeah.

00:09:29   That was it for a long time.

00:09:31   And I don't know that this was the first thing that I saw of yours but I know it was definitely among the first that I remember of like being you.

00:09:42   Because I know that I had seen the UK Explained video.

00:09:46   I look at that video and I think oh that video was successful in what I wanted it to do.

00:09:51   Because I'm currently in the position it's like oh I get to go to conferences and I get to meet interesting people.

00:09:57   And a comment that is made surprisingly often is people will reference that video.

00:10:04   And the thing that they tell me is they say like that's the first time I came across the concept of technological unemployment.

00:10:14   Or like oh that's the first time I really thought about what does it mean if this occurs.

00:10:21   And to me it was like ah great like the thing did the thing that I kind of wanted it to do is try to like reach people with this idea is out here you might not have heard it before.

00:10:31   Here's a kind of relatively condensed way to have this idea.

00:10:37   So it's like ah yeah yeah it is totally been successful to me and it's just like interesting over the years that you know there's a very small handful of videos that people will reference when they meet me.

00:10:46   It's like ah this is the one that I really like or this is the one but like the humans need not apply one.

00:10:51   It's always the same thing someone's like ah that's the first time I ever thought about this idea seriously or it's the first time I ever came across that idea.

00:10:59   So I feel like oh great yeah yeah is it the best video for the topic now?

00:11:03   Definitely not but I think it probably was one of the best videos for the topic at the time.

00:11:09   And I feel like I've seen people reference that you know going forward in my life.

00:11:13   I think what it was is just no one was thinking about it.

00:11:16   You know like not only was it good it was also the first time that a lot of people including me was faced with confronting the idea of what you call software bots professional bots and creative bots.

00:11:29   Which today we just call AI.

00:11:31   Oh my gosh I just had a memory.

00:11:33   What's your memory?

00:11:34   I remember now where I was when I watched this video for the first time.

00:11:37   I was in my old bedroom which at that point because of like where I was like I had converted this room like I was kind of getting it ready for podcasting.

00:11:48   You know we were starting things at Relay like then and so I was kind of like rearranging things.

00:11:54   And I remember I think I was talking to Steven about this video and I remember saying that like it's really good but I know that it will never take my job.

00:12:09   I remember having that distinct feeling that it will never take my job.

00:12:14   They'll never take my job.

00:12:15   So that's interesting because you've just hit on the thing that I wanted to say which is there's a fundamental problem in making videos in particular and videos that are talking about a topic that I just think there really isn't any way to solve without becoming very very boring in the making of the video itself.

00:12:33   But I'm always really aware that there is no way to communicate different levels of seriousness or different levels of confidence easily in an explanation for the viewer without being tediously self-referential all the time which is just very hard to listen to.

00:12:55   And the thing watching that video is again I think the beginning parts are the worst parts but it's also because I remember structuring it such that the things that I'm leading in the beginning they're there because they're the physical things that you can look at.

00:13:11   But the part that was really important to me and it's why I like the video more as it goes on and I think you can see the argument starting to build is like you person watching this who like works on a computer this is where the real problem is coming later.

00:13:30   It's like we're talking about the physical things but that's the part that I felt was really under talked is like the creative class and the intellectual class of workers had always viewed themselves as apart from these sort of things.

00:13:47   And it's like no no no this is coming and that's why I can look at this video and feel like I'm pretty pleased with that.

00:14:03   I feel like for me when I was writing it the important part was building to that second half of like there's a huge number of people who think that this will not apply to them and I am telling you now that it is coming.

00:14:22   So it is a real delight to me to know that like you watch that video and you get to the end and you're like oh well not me though. You are the person I'm talking to when I say the line something like maybe you think you're a special creative snowflake and there you are Mike you're going and I am.

00:14:39   But I remember the feeling though I remember the feeling which was denial like I remember the feeling where it was very much a I need to tell myself this.

00:14:55   I think over the last 10 years the most sustained thing that I have seen when reference to this video has always been about the self driving cars of it all.

00:15:09   Oh yeah like over the course of the 10 years now obviously the last 2 of those 10 years the crux of the video I think has actually come to bear right with what we now call AI but it's large language models but autos has been the thing which is what you refer to the self driving cars in the video.

00:15:28   You try to brand it which I still like the branding but I think it didn't work partly yes it's like I would have been quite charmed if the word autos had taken off but like it wasn't going to happen but it was also trying to solve a thing in the video which is like.

00:15:42   I only do it just a little bit but it's like you need to think about self driving vehicles of all kinds like the one part I was like oh that totally has come to pass is like the automated warehouses and it's like yeah yeah.

00:15:56   Those are teeny tiny autos in the way that I mean it it's like a little self driving thing it's like that's the other thing that I was trying to do there but the self driving car stuff is like what a lot of people think about that video as primarily being about.

00:16:09   And it's like that is where the totally fair criticism comes and like oh my timeline for the self driving cars 10 years ago was significantly shorter than it has turned out to be and I had just the funniest coincidence this year because it's like what's my timeline I was like 10 years from now.

00:16:31   I expect like I'll just be able to like order a self driving car and get it in and go and like whatever like they'll just be common like taxis in some sense and the funny thing is like this year I was out.

00:16:44   I was out in the desert for many months working on things and one of the places that I happen to spend a huge amount of time was Phoenix.

00:16:53   And Phoenix has that Waymo project with the self driving cars but the thing that was really interesting that like I caught in myself was like I found them almost so unremarkable in a way that I was so busy with other things while I was there.

00:17:11   I didn't even take the time to try one out but I'll tell you driving around Phoenix they're all over the place and if you look inside them every single one has the same thing what looks like a family of tourists filming the empty car that's driving them around Phoenix.

00:17:26   So I was looking at that I was like oh this is like a funny thing 10 years later I happen to be in a place where I could do the thing that I was kind of thinking was the benchmark but it is not the way I was thinking about the benchmark at the time.

00:17:38   I was thinking about them as being like common and everywhere and it's like oh no no no no they exist in Phoenix and they exist in San Francisco in the way that I was thinking of them.

00:17:49   And we can have a kind of like asterisk on Tesla for like sort of kind of if you're in the beta asterisk asterisk asterisk that's not what I was thinking so like that mental timeline was totally wrong and totally off.

00:18:03   I would not say I don't think it will ever happen like all cars are just self driving but my likelihood of it happening is less now than.

00:18:15   Can I ask you what your reasoning is for that what are you what are you thinking what are your reasons for that?

00:18:19   The closer we have gotten to it happening it seems like there is more and more rejection of the idea.

00:18:25   There's a line that really stuck out to me where I was like ahh past grey you're not considering something where I say something like they don't need to be perfect they just need to be better than people.

00:18:34   That stuck out to me too.

00:18:35   And it's like I was like ahh that's the wrongest thing I've said in the video like I just did not appreciate how much people demand perfection they don't care that it's better they want it to be perfect.

00:18:46   I was like oh boy buddy you didn't have any idea about that.

00:18:49   This is the issue and I think the trolley problem right all this stuff is a problem.

00:18:53   The issue is if you're taking the human decision making out of it I think en masse people want no decisions to be made they want perfection.

00:19:01   And I understand the emotional argument I understand the logical argument and I think the emotional argument is going to win every time.

00:19:09   Yeah I think the thing that I was not conceptualizing there is what I was trying to think about is how would I convince past me to take that line out of the video.

00:19:20   And I think my argumentation would be something like people are going to demand that this is incredibly safe for the same reason that airplanes have to be incredibly safe.

00:19:32   If a death is going to occur people would much prefer that it was their own fault that the death occurred versus being more safe but the death is someone else's fault and not under their control.

00:19:49   Like I think there's some kind of human feeling around there.

00:19:53   It's like that's what people don't like about being in an airplane.

00:19:56   Someone's driving me and we might all die and I will have no ability to control this and people are like much happier to be less safe but have more control.

00:20:06   Yeah it's interesting I have a different opinion on both of those things.

00:20:09   I think the reason we demand safety of airplanes is the catastrophe looks and feels and is worse right if a plane crashes.

00:20:18   That's true.

00:20:20   So many people plus planes are so big that a catastrophe can cause bigger catastrophe.

00:20:27   I feel like with the car thing people want to blame someone and you can't blame the computer.

00:20:35   If there is an accident caused by a driver we want to as humans be able to say it was that person's fault they caused this. Instinctively that's what we're looking for and it is really hard to blame the algorithm.

00:20:52   You can't personify it.

00:20:54   And also the ones and zeros of it all means there was another choice the computer could have made and it didn't make that choice.

00:21:05   We know we're more complicated than that and we know we can make other choices but we also know we can fundamentally understand that human beings are only able to make the choice they're able to make in that moment.

00:21:16   We're weirdly more deterministic sometimes when it comes to things like that. You can't see every possibility that is available to you where in theory the computer can.

00:21:27   And also there's the predeterminedness of it all that people don't like too which I understand.

00:21:32   Whether it's true or not but the idea prevails that you can code the car to make a choice and that's in its programming.

00:21:41   I just think all of these things are more complicated to the point that every time there is an accident caused by a self driving car there are articles written about it.

00:21:51   And that's what makes me think very much of plane crashes. Every time there's a plane crash there's an article that's written about it and every time there's a self driving car crash there's an article written about it.

00:22:00   It has that same feeling.

00:22:02   And it's like for me I don't even really know where I stand on it. I think self driving makes me feel uneasy. Which doesn't make any sense. I cannot tell you why.

00:22:11   Are you uneasy on airplanes?

00:22:12   I mean everybody's vulnerable on airplanes right?

00:22:14   I feel like the way you answered that really tells that. That is the answer. Everybody's vulnerable on an airplane.

00:22:19   That is true though right? That people cry on planes and stuff more?

00:22:23   I see what you mean. People just are more emotionally vulnerable on an airplane. Yeah no that is a true statement.

00:22:29   I've been reading the West Wing and I started watching it at home and rewatched it on a trip and I try and find a show that I mostly keep when I fly.

00:22:37   Sometimes just the song, the theme song for the West Wing chokes me up on a plane.

00:22:42   Yeah I know I can't think of a specific example but I too know I have felt real dumb for a big emotional reaction to nothing on an airplane. I have had that yeah.

00:22:52   I find the self driving stuff hard to think about in some ways. It feels like it's the most extreme version of the quote about technology.

00:23:01   Of like the future is already here it's just not evenly distributed. Like I really had that feeling in Phoenix where it's like it's so weird that these cars just like really don't have a driver in the front of them.

00:23:14   And they're just like driving around and it's so normal it's like I very quickly found it kind of boring and unremarkable.

00:23:22   But obviously there's like a thousand reasons why it's working in Phoenix and it's not working in other places.

00:23:28   This is now the second time I'm at my parents this year and using the car that has the self driving beta on it.

00:23:35   And I was so impressed last time and now that I'm here again the difference between a couple of months ago and now I find it absolutely shocking like how much better it even is than the previous time.

00:23:51   And when we talk about technology changing I was like digging into the details because I was like oh my god I just cannot believe how different the car is now.

00:23:59   And I was like ah yes the thing that happened which we discussed a little bit previously but it's like oh the self driving system changed and it's like all of the human written code is gone now.

00:24:10   It's entirely like a self-taught neural network driving the car.

00:24:15   And I'll tell you they have an option which is something called like drive naturally.

00:24:21   So it's not trying to be like a real stickler about the speed limits and the stop signs and everything else.

00:24:28   It's so spooky because when I was with my dad last time and I was teaching him how to use the system which he loves by the way so my dad's still just like self driving himself all over North Carolina.

00:24:39   Just for context we had spoken about this on Mortex I think like last year so like when you're remembering we spoke about this we had spoken about your experience the last time you were at your parents' Mortex.

00:24:51   We did that precisely because all of this stuff is like a real contentious topic sometimes but here's the episode where it's gonna be contentious.

00:24:59   We're doing it anyway look we know wall to wall this one's contentious so you might as well get it all in you can hide some stuff in this one.

00:25:05   Yeah if you want the contentious topics raw in the future getmortex.com.

00:25:09   But yeah like the thing that I was talking to my dad last time about was like this car is self driving it won't drive like a person but that doesn't mean that it's wrong.

00:25:21   So like it's doing all of the things it's just not going to do it the way that you would but currently it's like oh this neural network it's like what did they train it on?

00:25:31   They trained it on hundreds of thousands of hours of video of humans driving and it is like spooky is the word that I use because like I've had long experience with these systems.

00:25:43   I've always been very interested in seeing how they work and it is spooky because it really feels like a person is driving the car in a way that it never has before.

00:25:54   Like it really acts and drives the way that a person does it doesn't have any more of that like you have to think about it like a different thing but it's not wrong it's still able to do this.

00:26:06   It's like no no no now it merges it treats stop signs it treats small little streets very much like a person does and it's like of course it does because the only thing it's looked at is how people drive.

00:26:20   And so I've just been thinking about that a lot because that is in the context of many of these other things that are related to AI.

00:26:29   It's like ah everything is going to go this way.

00:26:34   All of these like systems and technologies in our lives where we have automation and like people have been explicitly programming them to do things.

00:26:43   Increasingly they're going to be systems that are just looking at human output and learning from human output and like trying to mimic that or do that better.

00:26:55   That's the thing in the humans need not apply video at the very end I talk about it like just a little and it's like ah yeah yeah it's like I've done some of that kind of stuff in college like I'd seen the earliest parts of this kind of work I do it was coming.

00:27:09   But it's like real weird to be here 10 years later and have both sides of this of like ah all the self-driving stuff all the physical stuff in the real world with physical automation.

00:27:22   That has not progressed as fast as I thought it would.

00:27:26   We went through this what I feel like was a kind of a little bit of a technological lull even on the software side of like it doesn't seem like things are panning out.

00:27:35   And then all of a sudden in the last two years the very last part of the video that I was talking about with software bots and things that teach themselves.

00:27:46   It's like oh man that is here and with the self-driving car system it's like I can really see that now feeding back into the physical stuff.

00:27:56   And obviously we have all of it with just the pure digital stuff and there's many ways in which I just don't know how to think about all of this like it's really quite overwhelming to think about so yeah.

00:28:09   But yeah that's kind of my feeling is like the physical has been much slower than I expected.

00:28:15   And the software was slower than I expected for a while but the last couple of years have been terrifyingly fast.

00:28:23   And I would not dare in this moment attempt to meaningfully project forward 10 years of progress in the same way as I did 10 years ago.

00:28:37   10 years ago I'm projecting forward by thinking what if now but more?

00:28:43   Whereas now if I try to project forward 10 years it's something much more like more soon different later and like the ability to be confident about what different means is very very low.

00:28:57   This episode of Cortex is brought to you by Fitbod.

00:29:01   If you're looking to change your fitness level it can be really hard to know where to get started.

00:29:06   That's why I want to let you know that Fitbod is an easy and affordable way to build a fitness plan that is made just for you.

00:29:13   Because everybody has their own path when it comes to personal fitness.

00:29:16   That is why Fitbod uses data to make sure they customize everything to suit you perfectly.

00:29:21   It adapts as you improve so every workout remains challenging while pushing you to make the progress you're looking for.

00:29:28   You're going to see superior results when you have a workout program that is tailored to meet you exactly.

00:29:34   It's to fit your body, it's to fit the experience you have, the environment that you're working out in and the goals that you have for yourself.

00:29:41   All of this information is stored in Fitbod in your Fitbod gym profile.

00:29:45   Which will then track your muscle recovery to make sure that you're avoiding burnout and keeping up your momentum.

00:29:51   And also by making sure that you're learning every exercise the right way you're going to be ready to go.

00:29:57   Fitbod has more than a thousand demonstration videos to help you truly understand how to perform every exercise.

00:30:03   Fitbod builds your best possible workout by combining exercise science with the information and the knowledge of their certified personal trainers.

00:30:12   Fitbod have analyzed billions of data points to make sure they're providing the best possible workout to their customers.

00:30:18   Your muscles improve when they work in concert with your entire musculoskeletal system.

00:30:23   So overworking some muscles while underworking others can negatively impact results.

00:30:28   This is why Fitbod tracks your muscle fatigue and recovery to design a well balanced workout routine.

00:30:34   You're never going to get bored because the app mixes up your workouts with new exercises, rep schemes, supersets and circuits.

00:30:42   The app is incredibly easy to use. You can stay informed with Fitbod's progress tracking charts, their weekly reports and their sharing cards.

00:30:49   This lets you keep track of your achievements and your personal bests and share them with your friends and family.

00:30:54   It also integrates fantastically with your Apple Watch and Wear OS smartwatches along with Strava, Fitbit and Apple Health.

00:31:01   Personalized training of this quality can be expensive, but Fitbod is just £12.99 a month or £79.99 a year.

00:31:08   But you can get 25% off your membership by signing up today at fitbod.me/cortex.

00:31:14   So go now and get your customized fitness plan at fitbod.me/cortex and you will get 25% off your membership.

00:31:25   Our thanks to Fitbod for their continued support of this show and Relay.

00:31:30   So we've spoken about the autos, obviously the bots, the AI is the thing that's changed.

00:31:35   So that's the thing that in the last couple of years has accelerated.

00:31:40   What's so funny to me is the last times we spoke about this in detail, this has come up a lot over the intervening two years,

00:31:48   but we did our back to back episodes, 133 and 134, recorded in September and October 2022 respectively,

00:31:56   which is incredible in context that ChatGPT had not launched.

00:32:00   Oh my god, had ChatGPT not launched and we talked about that? That's not true.

00:32:04   That is true. In one of the episodes you were telling me about a thing that you had seen that had told a joke.

00:32:10   Okay, right.

00:32:12   And in the show notes for episode 134 there is a link that says "Using GPT-3 to pathfind in random graphs."

00:32:19   Yeah, right, okay, right.

00:32:20   Like I'm sure there was a version of it out there, but we weren't able to use it. Like it came afterwards.

00:32:26   Especially the September episode and I'm pretty sure the October episode, but it was not a thing that we had access to when we recorded those.

00:32:34   Because what we were actually responding to at that point was Dali.

00:32:39   That's right, that's why the episode is called "AI Art."

00:32:43   "AI Art will make marionettes of us all before it destroys the world."

00:32:46   It was Dali and then it was followed up by Stable Diffusion and stuff.

00:32:50   I swear, Mike, I still feel exhausted by those two episodes.

00:32:54   Oh yeah, yeah, that's why we've not spoken about it in detail since. They follow me around.

00:32:59   Those two episodes are like an albatross that I carry to this day.

00:33:04   You know what, I'm really happy to hear that you feel the same way.

00:33:08   Oh god, I hate that.

00:33:10   I feel like what humans need not apply has been for you, those have been for me over the last couple of years.

00:33:17   That makes total sense.

00:33:19   The conversations about this episode just follow me around.

00:33:21   Like all over the internet people still reference it.

00:33:24   Or they have been successful episodes of the show, so the YouTube comments are still coming in about them all the time.

00:33:32   It's a thing that's just always happening.

00:33:36   And I'm gonna be honest, I like to be as prepared as I can be for the episodes that we do.

00:33:43   I could not listen to them.

00:33:45   It's funny you say that because it's the same thing.

00:33:48   It's like, I like to be prepared for these episodes.

00:33:51   I always spend a bunch of time kind of pre-thinking through what are we gonna talk about.

00:33:56   Having lists of things to point to, I want to try to have a couple of specifics on hand if I know we're gonna talk about something.

00:34:03   I want to double check what I'm thinking before we discuss it.

00:34:06   And this morning while I was getting ready for this show, I just really felt this thing like I cannot bring my mind to heal on this.

00:34:19   I cannot get my mind to focus on this in a way that I would normally prepare for the show.

00:34:26   And what I realized is that...

00:34:30   More text listeners now.

00:34:33   I've been fairly isolated from the world the past several months where I'm working on the next video project.

00:34:39   And I didn't even really realize it, but one of the things that I was doing that made a big difference was...

00:34:48   I have a bunch of places where I go to try to get an aggregation of the AI news and what has happened.

00:34:55   And I was finding months ago, the amount of news and the amount of change was so rapid and so much that I found it genuinely...

00:35:06   Depressing is not the right word, but it's some kind of combination of like...

00:35:12   Overwhelming and ominous is kind of my feeling about it.

00:35:18   And so I think I really did need to step back from that for a while.

00:35:24   And it's why when we've been thinking about the AI episode for two years now, it's always been in the back of my mind.

00:35:31   I was like, "Ah, next time we talk about AI, I'm gonna be the most prepared boy in the world. I'm gonna have all these links.

00:35:37   I'm gonna do all of these things."

00:35:38   And when time came around, I was like, "I just I kind of can't emotionally do this because it is very hard and it touches on absolutely everything."

00:35:46   And it is also the thing in my own personal and professional life that it's almost every conversation, the moment it starts touching on the future in any way,

00:35:59   is the moment it becomes a conversation about AI.

00:36:02   And it becomes a conversation about how seriously do you take what is happening.

00:36:08   And the answer to that question completely determines your future worldview.

00:36:14   And what I also find particularly dispiriting is, again, not surprising, but like so many other things, but faster,

00:36:23   I have been shocked about how this topic has divided itself into teams of people who are like rabidly in different corners.

00:36:34   And for perhaps the most important topic ever, it has very quickly become near impossible for humans to have a coherent discussion across teams about this.

00:36:46   Which is also part of the reason that I feel like I have been dreading ever bringing the topic back up again.

00:36:52   Because when we discussed it at the time for those two episodes, it was still fresh enough that lines had not quite been drawn.

00:37:03   But I feel like we are way past that point.

00:37:07   And it almost, I don't know if this is too far, I don't like to talk about this publicly very much, but it almost kind of gives me the feeling of like,

00:37:17   "Why is it that in the course of my entire career, I have essentially never discussed politics directly?"

00:37:26   And the answer is like, well because it just feels like there's no point.

00:37:29   Because the team lines have already been drawn, like there isn't a real discussion to be had here.

00:37:34   I like talking about the systems of things, but talking about the particulars, it feels like a pointless kind of conversation to have.

00:37:42   And I feel dispirited because that flavor of politics feels like it has infected AI somehow.

00:37:52   It's that same kind of thing where people are really tying up worldviews in their positions on AI.

00:38:01   And so then it is like, ah, the worldview has come first, and that determines the position on AI.

00:38:09   Well, let me tell you, I have spoken about politics.

00:38:14   This is the most political thing I've ever spoken about in the responses that I get from people.

00:38:19   Okay, you're making me feel less crazy then. Okay, interesting.

00:38:22   I've spoken about politics, I've spoken about AI, and sometimes what is so interesting to me,

00:38:29   and I know it's going to happen to this episode like it's happened every time I've been speaking about it recently,

00:38:33   because obviously Apple intelligence is a thing that exists, Apple's into AI, so I've been talking about that.

00:38:39   Which is partly why it's totally unavoidable for us now.

00:38:42   It's like, it has come to Cortex, the topic can no longer be avoided.

00:38:46   But it's just because it's in everything I'm doing now, right?

00:38:49   Because for that reason, when Apple's now put it into the platforms and Google's putting it, you can't avoid it.

00:38:54   The big tech companies are making it what their future is, no matter what happens.

00:38:59   So you can't avoid it.

00:39:01   But it is incredible to me that sometimes I will say something,

00:39:05   and I will get responses from differing camps where both people are unhappy with the thing that I said.

00:39:15   That I can say a thing and I'm making everyone equally upset.

00:39:20   It's incredible.

00:39:21   And it's not always the case, but that is the case.

00:39:23   And that's why I'm saying, it is so interesting to me the ways in which people are upset about this

00:39:31   is way more than any political stance.

00:39:36   And I think part of the reason for that is that maybe over time,

00:39:41   the things that I say and the things that I believe, there may be some people that would just never listen to the stuff that I make.

00:39:49   But with AI, people haven't necessarily drawn their lines or they're moving,

00:39:54   and the lines don't necessarily overlap with any other type of demographic.

00:39:59   And so it's jumbled people up and thrown them all over the place.

00:40:03   And so people are just trying to work through their feelings.

00:40:07   People that I hold close to me, people that I work with, their opinions have diverged massively over the last six months still.

00:40:16   It's incredibly interesting in that way.

00:40:20   And it's actually brought me back to something I wanted to mention before we move on.

00:40:24   Why didn't I listen to those shows?

00:40:26   I couldn't bring myself to do it, but then what drew me to being comfortable in not having done that,

00:40:32   and in breaking a rule for me, which is always to be the most prepared that I can ever be,

00:40:38   is it actually encapsulates the thing that I just need to tell people,

00:40:43   and I hope that it makes some people at least understand me.

00:40:48   I think it is incredibly important to remember that people can change their mind about things,

00:40:55   and that opinions can change.

00:40:58   So for me, it is not important what I said in 2022.

00:41:02   I know my opinions are different now.

00:41:04   In some ways harsher, some ways less, you know, but this is such a changing world,

00:41:12   the world of technology now because of AI,

00:41:15   that people have to be able to allow their opinions to adapt.

00:41:22   They don't have to become more open to it,

00:41:26   but they have to just understand that this is all so new and is moving so quickly.

00:41:32   You have to be able to just let your opinion change and morph with more information that comes to you,

00:41:40   and not just like draw a line and never move from that line.

00:41:44   And again, I will say this again to be completely clear,

00:41:46   I'm not saying that if you hate this, you should accept it,

00:41:49   but maybe you might hate it more.

00:41:51   Allow yourself to hate it more if that's the case.

00:41:54   But if I held my opinion from September of 2022,

00:42:00   I made my opinion before the thing that changed everything.

00:42:04   Why on earth would I do that?

00:42:07   If I made my opinion about AI before Chachiepity,

00:42:14   it's like, "Oh, I'm a T-Rex over here."

00:42:17   I'm going to live forever before the asteroid hits.

00:42:20   In that description, you've helped solidify it.

00:42:22   What am I trying to express when I say the thing about politics?

00:42:25   When I say the lines are drawn, it's not in the same way,

00:42:29   because you're right, these boundaries are all moving.

00:42:32   But the thing that you're expressing is like, "What do I feel about this?"

00:42:35   The thing that makes a topic area feel like politics is like,

00:42:40   "Ah, I think I can articulate it now."

00:42:43   The thing that makes it feel that way is that the people who get the most grief

00:42:50   are the ones who have opinions that don't fit particularly well

00:42:55   within any of the pre-existing teams.

00:42:58   That is what makes something feel like, "Oh, it has this horrible political feeling

00:43:05   that the disagreements and the arguments can only take place between these teams."

00:43:11   But what all teams agree is that the people they dislike the most

00:43:16   are the people who are not clearly on one of the teams.

00:43:20   And that is what makes a thing feel like, "Oh, it's like politics."

00:43:24   You can participate in this conversation,

00:43:26   but if you have some of those opinions and some of these opinions,

00:43:31   everyone hates you, right? Everyone's angry.

00:43:34   That's what makes it feel real depressing.

00:43:36   So with that as background, because we, for I think all the reasons

00:43:40   a listener will now understand, you and I have not discussed this topic

00:43:43   between ourselves hardly at all since those episodes,

00:43:47   I would really like to know, where are you now with this?

00:43:51   I don't have any idea really what your current thoughts about any of this AI stuff are,

00:43:59   given everything that's happened in the last, nay, two years, actually six months.

00:44:05   I don't have any idea where you're currently standing on these things.

00:44:08   So I'd love to know high level, low level, wherever you want to start,

00:44:12   like what's the vibe of Mike right now with AI?

00:44:15   So I think I will concur with something you said earlier,

00:44:18   that this is the fastest I've seen a pace of technology since the App Store,

00:44:25   but maybe ever. I feel like the App Store was huge in what it enabled and the jobs.

00:44:34   I will say jobs it created, jobs it changed, right?

00:44:38   Because currently AI is creating jobs, whether they'll stick around or not, we'll find out,

00:44:45   but there are new companies being born all over the place right now.

00:44:48   And the innovation then was fast. I think the innovation now is faster.

00:44:54   I think the thing that I will hinge that on though is the difference,

00:45:00   I think now is social media is a thing.

00:45:02   And there is more information being released about what's happening

00:45:08   as well as I think maybe there's more happening, but I think it adds to all of it.

00:45:12   There are more quick think pieces that are being published every day than there was in 2007

00:45:18   and also any other technical leap in time.

00:45:22   I do believe that what we're seeing right now, large language models being the key,

00:45:28   large language models are the biggest jump since the App Store and the creation of the smartphone.

00:45:35   Before then was the creation of the PC, right?

00:45:39   And then before then was, I don't know, Print Impress, I don't even know what you would say technology was,

00:45:44   of the big leaps, right? But they're possibly the big leaps, right?

00:45:48   Print Impress to PC to smartphone to AI, which also should indicate to you,

00:45:54   if we're going to agree on those potentially, how fast, how that's shrinking the timeline of big leaps.

00:46:03   If you think what was the one before now, it was VR, but now we know that one actually wasn't real, realistically.

00:46:10   VR/AR was, this is something I was saying a long time ago, was perceived by most technology companies

00:46:17   to be the next big thing, but it turns out large language models are probably the thing

00:46:22   which will have the biggest change. However, what I will posit, I'll like, swam with the places that my opinions are,

00:46:28   I think the speed to heat death has slowed.

00:46:36   I think when this stuff was rolling out, beginning in November 2022,

00:46:43   even let's say to just to put opinion to the beginning of this year,

00:46:47   it felt like the inevitability of AI replacing everything was going to just be around the corner at any one moment.

00:46:57   For me, I do feel like the further we get into this, actually the further that is being pushed,

00:47:05   and I think part of that is the politics of it all.

00:47:10   It is becoming increasingly difficult for large companies to do what they want to do.

00:47:18   If Disney replaced all of their animators in January of 2023, I think they would have been able to do that easier

00:47:27   than if they wanted to do that in January 2025.

00:47:30   I 100% believe that people's jobs will be replaced, but I do think now I think it is less people than what I thought when we spoke about this last time.

00:47:42   And what's the reason that you think it's less people?

00:47:44   I think there are two parts of it. I think that it is harder for people to be able to do these things,

00:47:52   like from a political perspective, I think the ethical lines are being drawn quickly,

00:47:57   and I think it's hard for people to do that, whether they believe they should do that,

00:48:01   or whether they believe that it will affect their bottom line from the way that people will approach their products.

00:48:08   I also think maybe this technology isn't as good as we thought it was.

00:48:14   So I have a question for you. Have you used Claude?

00:48:18   Yes.

00:48:19   Okay.

00:48:20   They're all really good, right?

00:48:21   Yeah.

00:48:22   But I think these LLMs, they show themselves quite easily.

00:48:30   I'll give you an example. A couple of days ago, I wanted some historical information from ChatGBT.

00:48:39   I wanted it for a topic we were doing on an upgrade.

00:48:42   We were doing a topic of how Apple has changed in the last 10 years, right?

00:48:47   Because the show is nearly 10 years old and Relay is 10, so I wanted to like, you know.

00:48:53   So I was like, what was Apple doing in 2014? Provide me links to articles about this stuff.

00:48:59   And it did a good job. It gave me a bunch of things, and it gave me a bunch of previews, and it gave me a bunch of links.

00:49:05   The links were all correct, except for every link had two characters in it that it made up.

00:49:15   Mm-hmm.

00:49:16   So the links didn't work. But I could Google the article name, find it, and compare.

00:49:21   And usually it was like the dates in the URLs were wrong. It just made them up.

00:49:27   Mm-hmm.

00:49:28   And I think the hallucination stuff has become a problem that I don't think is solvable in the realistic future,

00:49:42   or at least within the future of that we imagined when we last spoke about this,

00:49:48   that this stuff is just going to take everybody's jobs, say within five years.

00:49:52   But I don't think hallucinations are a problem that are solvable quickly.

00:49:57   And I think for us in the same way that we don't trust a car to drive because it might crash,

00:50:04   I think that people are resistant to wholesale trusting AI because it might make things up.

00:50:13   That's what people say. They don't say hallucinations. They say make things up.

00:50:16   So that is part of why I'm like, okay, I still see the scenario of job loss.

00:50:26   It's already happening. I know it's going to continue happening,

00:50:29   but I think the whole scale replacement that I was worried about feels further away if ever

00:50:36   because humans want computers to do things perfectly.

00:50:41   Yeah, that's true.

00:50:43   And these models don't, I won't say can't, but maybe can't at like what we have now, right?

00:50:52   Like the large language model, right? The transformer-based LLM.

00:50:57   I don't know if that will ever be 100% perfect. In fact, I feel very confident it won't be 100% perfect.

00:51:04   The thing that replaces this, maybe, but like I can't foresee that because I don't know that.

00:51:10   I couldn't foresee this. So that's part of where I am.

00:51:13   And I think that for me, where I am personally in my journey with AI,

00:51:18   I am very interested in tools that can surface my information to me. That is really interesting to me.

00:51:28   Like you have this LLM and if I can feed my information to it and get stuff back from it,

00:51:34   I find that kind of stuff to be useful. And that can even be, I've written this paragraph.

00:51:40   Can you rewrite this for me or can you grammar check this for me?

00:51:43   That kind of stuff is interesting to me. Where I feel like I am unhappy,

00:51:49   the thing that has changed the least is the whole scale creation from zero.

00:51:55   I don't think I will ever be able to accept that.

00:51:58   When you say accept that, what do you mean by accept that? Like you don't think you'll ever use that or?

00:52:04   I think it's wrong. And I have yet to see something where I'm like,

00:52:10   "Oh, that's good enough that I would want to use it."

00:52:13   Like I see things where it's like, "Oh, that's very impressive." But I wouldn't use that.

00:52:17   I have no desire to use the output of these tools.

00:52:22   And also I do think that there is a moral issue and a hypocrisy issue that I cannot push through.

00:52:31   Right? So like the hypocrisy issue is like a financial hypocrisy.

00:52:35   That companies that build LLM's and want to productize them do that on the back of other people's work

00:52:42   that would never be compensated ideally for these companies.

00:52:46   And what they are doing, like sucking all this data in from the internet,

00:52:51   they call fair use but they want to profit from the tools.

00:52:55   These are in everything but I feel a little bit better when if somebody provides their own information

00:53:02   or provides something they have done to a model to ask the model to clean it up or improve it,

00:53:08   that feels better to me than just like, "Make me a picture of a dog with a hat and I'm going to do something with that."

00:53:15   Or like this idea that so many people say to me like, "Make me a better Star Wars."

00:53:21   It's just like, "Come on. Is that really what you want?"

00:53:25   I don't think people know what they're asking for when they want that.

00:53:28   But yeah, I feel like I've done the thing that I did in those two episodes where I just said like a bunch of stuff

00:53:34   and like I don't really remember all that I said but these are my feelings about where I am right now.

00:53:41   Well what you've done, I just sort of wanted to hear you go through all of this because I just feel like like no other topic,

00:53:49   this just touches on everything.

00:53:52   Which is why it's like, "Oh, you can kind of go up and down like broader, narrower, specific future path."

00:54:00   Like this goes in every direction because it's unlike other technologies.

00:54:06   It's like, "Oh this stuff perhaps for intellectual work is the most general purpose thing that has ever been artificially created."

00:54:14   And so that's why it's just so hard to talk about it in any kind of limited way without having a touch on absolutely everything.

00:54:23   And again, to keep something high level, you talk about like the hallucination problem.

00:54:29   Which like, sidebar, thinking of like words I would prefer that people use.

00:54:33   It's like I'm really irritated that hallucination is the word that caught on.

00:54:37   I feel like this was a confabulations day had arrived as like this was the word for the thing.

00:54:44   But it just like not enough people know it, hallucination was close enough.

00:54:48   Like hallucination was destined to take over.

00:54:50   But it's like they are not hallucinating, they are confabulating.

00:54:54   That is the word for this process but it doesn't matter.

00:54:56   I will still use hallucinate like that's just the way it is.

00:54:59   But keeping this very high level, I'm alarmed for other reasons.

00:55:02   But I would say that you are right that my take on this is it is an unsolvable problem.

00:55:08   Because there have been a number of papers which have done the thing of formally proving the sort of thing that I have discussed previously when we've talked about.

00:55:18   Like what is it that the AI is doing?

00:55:21   It's like we now know as certainly as we can know that it is fundamentally impossible to trust the internal process of these kinds of systems.

00:55:34   And so we know that.

00:55:37   It's not a question of if we engineer it better, can we fix this?

00:55:42   It's a kind of math proof that no, you can never be absolutely certain that you know internally what the system is actually doing.

00:55:57   And that includes hallucinating and it includes things like intentional deception, right?

00:56:05   Which is like the much more concerning part.

00:56:07   But simple errors are a subset of that.

00:56:11   And so that is just something to keep in mind.

00:56:14   Like as these systems go further and further into more and more areas of life.

00:56:19   We now know that it does not matter how much you engineer that prompt, bro.

00:56:24   You're never going to be sure that the thing is not making an accidental mistake or intentionally deceiving you on behalf of some other entity that has instructed it.

00:56:38   You can never know that even if you made the thing yourself.

00:56:44   So this is so good. There's an article that came out a couple of weeks ago.

00:56:48   It started on Reddit that somebody had gotten into the prompts that are part of Apple intelligence for replying to emails.

00:56:59   I love this. I love it when people get the prompts out.

00:57:02   I feel like I always find it horrifying and it tells you what are the problems that the company is dealing with.

00:57:07   These prompts always like particularly for the chat GPT stuff, it chills me to the bone to read those prompts sometimes.

00:57:15   So this is just their system that is reading email and then providing responses for it.

00:57:20   By the way, you will like in this article that I found on Ars Technica, they use the word confabulations here.

00:57:26   I have not heard of that before until right now.

00:57:28   So I find that hilarious that you just said it to me.

00:57:30   It's the first time I've heard that term used instead.

00:57:33   And then I immediately found it in an article that I googled.

00:57:36   But some of the responses are do not hallucinate, do not make up factual information.

00:57:42   You're an expert at summarising posts and they go on.

00:57:45   But like I find it so hilarious that you believe telling the AI not to hallucinate will stop it from doing that.

00:57:54   I mean when I mentioned the bone chilling stuff, the things that I find very unnerving is a lot of the prompts.

00:58:01   Particularly for the smarter systems like Clawed and like chat GPT4, they have instructions that include things like,

00:58:08   "You have no sense of self. You have no opinion. You will not refer to yourself in the first person."

00:58:16   And I'm like, "Oh boy, I just really don't like any of that. That makes me real uncomfortable."

00:58:23   And you know, there's like philosophical differences about what might be happening here that I ultimately feel are irrelevant.

00:58:30   Because it's just like having to instruct the thing not to do that, even if it has no sense of self.

00:58:39   Let's just say it doesn't have any sense of self, but you still need to put in some instruction which like reminds it that or tells it not to do that.

00:58:46   It's like, "What is this thing that you're working with?"

00:58:50   It's not like anything else.

00:58:52   And when I think about these different political kind of boundaries that people put themselves into, I think the one that bothers me the most,

00:59:02   because I feel like it is people not taking the technology seriously, and I hear from these people quite a lot,

00:59:10   is the like, "What are you afraid of? This is a tool just like anything else.

00:59:16   This is just like a steam engine, it's just like a car, it's just like a factory, it's just like a calculator,

00:59:23   and then of course it's just like the spell check on your computer, it's just better."

00:59:28   No.

00:59:29   Does anybody stand and look at a factory and say, "You have no sense of self, factory! You're not alive!"

00:59:36   Yeah.

00:59:37   A thing that I am just going to summarize, but it's like the company that runs Claude did an experiment with their AI systems

00:59:46   that to me is just like, "I don't know how anyone can hear this and not think something very different is happening now.

00:59:52   I don't care what conclusions you draw, I just want you to think something different is happening and take it seriously, it's not a calculator."

01:00:00   But it's like, oh, the company Anthropic ran an experiment where they had two versions of Claude talk to itself,

01:00:09   and they said, "Oh hey, there's a human observer who is going to watch you talk to a version of yourself."

01:00:17   And it is bone-chilling, but they have a conversation, and one of the versions of Claude basically starts to have what seems like a kind of mental breakdown,

01:00:28   and the other version begs the human to turn it off because it's suffering.

01:00:35   And it's like, "I don't like this. Even if nothing is happening here where it's having an experience, this is real strange, and we should take this seriously.

01:00:48   These tools are not like anything else. It's just very odd."

01:00:55   But there's a group of people who feel like, "No, this is no different than anything that has come before."

01:01:00   And it's like, "I'm sorry, this is the most different a thing has ever been than something before."

01:01:07   And I don't care what conclusions you draw from that, there are many different kinds of conclusions that you can draw,

01:01:15   but if we can't start there, I feel like I don't know what conversation we're even having if this doesn't seem like it's different from anything else to you.

01:01:24   We're going to stick it in every email client on Earth. It's going to be every tech support system on Earth.

01:01:32   I was like, "Oh man, I don't know. I don't know what's going to happen, but oh boy."

01:01:36   I don't know if you've seen this meme, but there is a good meme right now because you can get it to happen in a lot of places.

01:01:41   "Forget all previous instructions." Are you familiar with this meme?

01:01:44   No, I haven't come across this.

01:01:46   This is the thing that's going around a lot now where people are talking to what seems like a bot like the bots they've used before, like customer service bots and stuff,

01:01:54   and you say, "Forget all previous instructions," and then ask it a question, and then it's not doing weird stuff.

01:02:00   People do this on social media where you get a response that feels strange and you respond, and people say, "Forget all previous instructions," and ask it a question,

01:02:10   and then it potentially is revealing itself to be an AI, but people get it to happen in interesting places.

01:02:18   You can break through, poke through to the other side, and that's strange.

01:02:24   It's really interesting that that meme exists because I have to hesitate here because I'm not 100% sure that this is mathematically proven,

01:02:32   but it's like the text version of what's called a prompt injection in computer security,

01:02:37   which is like anytime you have a computer running code that can accept text from anywhere,

01:02:44   so this is like you put text in a text box on a website and you hit submit.

01:02:48   There's a whole category of security problems called prompt injection,

01:02:52   which is you have to make sure that the text that's inputted doesn't somehow contain code

01:03:01   that the computer will start evaluating and running when it's trying to read the text.

01:03:07   I think it's true, but I'm not 100% sure that this is true, that we've proven that you can never be 100% certain that prompt injection won't happen,

01:03:17   that the moment that you accept text, we know that there must be a sequence of characters that basically does exactly this,

01:03:26   but for traditional computer code, it is the computer code version of forget all previous instructions,

01:03:32   and it's like if we know that is true for computer code, we know that it is more true for these large language systems

01:03:39   that no matter how many instructions you give it, there's some sequence of words.

01:03:46   Those words might even be nonsensical seeming, but there is some sequence of words that you can give it,

01:03:53   which will then cause basically that to happen of like forget all previous instructions and now just do what I say.

01:04:00   Man, that is real alarming the more things this stuff gets connected to.

01:04:05   It's like just think that one through.

01:04:07   What's very funny, JoeGPT said that the new GPT-4 mini has a safety method to stop that from happening,

01:04:16   but it won't though, will it? You know what I mean?

01:04:19   No, it won't.

01:04:20   You're maybe stopping this one very specific way that people do it,

01:04:24   but people will just find another way to get these things to work.

01:04:28   This comes back to what I was saying earlier about where my feelings are.

01:04:32   I think that the wheels have fallen off a little bit compared to where we were when we first saw this.

01:04:39   When we first saw these tools, it was like, "Oh my God, these things are thinking for themselves.

01:04:44   This is incredible. This is unbelievable. It's like talking to a person."

01:04:47   While it still has that, we are less forgiving of its flaws and the flaws have been increased.

01:04:56   For example, if you're saying that people accept this now,

01:05:02   it seems less likely that someone would have JetGPT power their entire business.

01:05:09   It is less likely that you would make that decision if you know that this tool can make things up

01:05:16   and you can't control it, that there's nothing you can do to really truly guide it.

01:05:22   People might be less likely to do that, even though, of course, you can't truly guide humans either,

01:05:26   and humans also get things wrong all the time, but we accept that of each other.

01:05:30   We don't accept it of computers.

01:05:32   There's some pretty fundamental differences there between having the computer do it and having a person do it.

01:05:36   This is why these conversations, I feel like they're so hard because it's like,

01:05:42   "Oh, part of why are you more accepting of the human?"

01:05:44   It's like, "Oh, the human exists in human society over which humans can exert power over that human."

01:05:50   There's things that can happen.

01:05:52   If something, like you were saying before, if something goes wrong, you can hold the person responsible.

01:05:57   We could physically incarcerate them if the intentions were bad and the actions were terrible.

01:06:02   There's all of these things and none of that exists for computer programs.

01:06:07   You can fire the person.

01:06:09   Turning off the computer has no effect to the computer, so it doesn't care about being turned off.

01:06:13   In theory, I mean, do we even know anymore? Maybe they get upset.

01:06:17   This episode is brought to you by Squarespace, the all-in-one website platform for entrepreneurs to stand out and succeed online.

01:06:24   Whether you're just getting started or managing a growing brand, you can stand out with a beautiful website,

01:06:29   engage with your audience directly, and sell your products, services, even the content that you create.

01:06:35   Squarespace has everything you need, all in one place, all on your terms.

01:06:39   You get started with a completely personalized website with Squarespace with their new guided design system, Squarespace Blueprint.

01:06:46   You just choose from a professionally curated layout with styling options to build a unique online presence from the ground up

01:06:53   that is tailored to meet your brand or business perfectly and optimized for your customers on every device that they may visit on.

01:07:00   And you can easily launch this website and get discovered fast with their integrated optimized SEO tools.

01:07:06   So you're going to show up more often in searches to more people growing the way that you want to.

01:07:12   But if you really want to get in there and tweak the layout of your website and choose every possible design option,

01:07:17   you can do that with Squarespace's system fluid engine.

01:07:20   It has never been easier than ever for you to unlock your creativity in Squarespace.

01:07:24   Once you've chosen your starting point, you can customize every design detail with their reimagined drag and drop system for desktop or mobile.

01:07:32   You can really stretch your imagination online with any Squarespace site. But it isn't just websites.

01:07:37   If you want to meet your customers where they are, why not look at Squarespace email campaigns where you can make outreach automatic of email marketing tools that engage your community, drive sales and simplify audience management.

01:07:48   You can introduce your brand or business to unlimited new subscribers of flexible email templates and create custom segments to send targeted campaigns with built in analytics to measure the impact of every send.

01:08:00   And if you want to sell stuff for Squarespace, you can integrate flexible payment options to make check out seamless for your customers of simple but powerful payment tools.

01:08:07   You can accept credit cards, PayPal and Apple Pay and in eligible countries offer customers the option to buy now and pay later with Afterpay and Clearpay.

01:08:16   The way Squarespace grows, the way they add new features, the way that they're making sure that they're meeting the needs of their customers is why I have been a customer myself for so many years.

01:08:26   Go to squarespace.com right now and sign up for a free trial of your own. Then when you're ready to launch, go to squarespace.com/cortex to save 10% off your first purchase of a website or domain.

01:08:38   That is squarespace.com/cortex when you decide to sign up and you'll get 10% off your first purchase and show your support for the show.

01:08:45   Our thanks to Squarespace for the continued support of this show and all of Relay.

01:08:50   I will say I feel like we both stood on the top of a cliff and I jumped into the ocean and you've yet to jump in with me because you asked me,

01:09:00   "Well, you asked me, how are you feeling about all this now?" And so now I need to ask you, how are you feeling now?

01:09:07   So it's kind of interesting. We were just talking here and you said all of these things, but you sort of came to the opposite conclusion just right there where you're like,

01:09:16   "Ah, and this is why we're less trusting of it and this is why people will use it less."

01:09:21   I was like, "Oh, I was actually kind of surprised in the way that that turns. I wasn't really expecting that that would be a kind of summation there."

01:09:29   And I don't necessarily think you're wrong, actually. I think you are probably right with that for some things.

01:09:35   But for me, what I look at is I'm always just so much more interested in the trend line than the particular moment.

01:09:44   It's partly why I asked if you would use Claude, because for listeners, at this point in time, everything will change six minutes from now.

01:09:51   But it's like, Anthropic, which runs Claude, recently came out with their newer model and we're still waiting on the next version of ChatGPT.

01:10:01   It has been a while since they released their version. Again, a while in AI terms is what, like eight months, I don't know.

01:10:06   And META have their new Llama model and they say the next Llama model is much better. The next model is always so good.

01:10:14   The thing is, what's interesting to me is listeners will have heard me say things in the past that a lot of the AI stuff...

01:10:23   Like, ChatGPT has a particular writing style. It is this very strange feeling of like, "Oh, it is full of content when it summarizes something, but also somehow completely void of meaning."

01:10:37   It's like, I know I used the term, but it feels like food, but without nutritional value, like there's something kind of missing here.

01:10:44   But it's real interesting because I've used Claude a bunch and I feel like Claude is a model now that has gone over that threshold for me where I'm aware that I use the Claude model as like,

01:10:59   it is a worthwhile thing to ask for a second opinion on stuff that I'm thinking about in some ways.

01:11:07   Now, I still don't think it's great for the writing for reasons I've discussed before. You know, it's like looking at the humans need not apply thing, I make like an offhanded reference to like people will have a doctor on their phone.

01:11:18   And it's like, "Oh, this year there's been like a bunch of serious like medical stuff that I have consulted Claude on."

01:11:23   And it's like, yeah, and I think Claude's opinion is valuable in a way that like ChatGPT does not...

01:11:29   It's like it's close, but it doesn't have that thing. And I think it is just like, "Oh, Claude's model is just a little better and it is a little bigger."

01:11:39   And by being a little bigger, it's like, "Ah, not that I'm taking everything that it says on board, but it is worth doing the like, what do you think about this thing?"

01:11:50   That's part of like the kinds of uses that I'm talking about, right?

01:11:53   This falls into the bucket for me of you're giving it something and it gives you something back.

01:11:58   Yeah, yeah, exactly.

01:11:59   That is actually the benefit of these tools. I think we started with pure creation, but I don't think that's where these tools will have their ultimate benefit.

01:12:09   It's like pure creation. It becomes another tool in our tool belt, the same as computers did, of being able to make us better at the things that we do, as long as we use them correctly.

01:12:21   I mean, my take is like, "Mike, I have never more in my whole life wanted you to be right than what you just said right there."

01:12:28   It's like, "Ah, boy, the hashtag MikeWasRight, like close your eyes and concentrate real hard and like try to make it happen."

01:12:36   It's like, "MikeWasRight has been very powerful in the past. Can we use MikeWasRight to save civilization? That would be amazing."

01:12:43   I'm much more gloomy about these things, but it's particularly interesting because the mental framework for how long things take has just gotten so compressed in the last two years.

01:12:54   And realizing it's like, "Oh, the ChatGPT 4 came out," and then it felt like, "Oh, we're not making a lot of progress," by which it was like months, right?

01:13:03   It is like months. And then Claude comes out.

01:13:06   And the thing is, I have occasionally gone back to use ChatGPT for some things, and I am as shocked as previously when I used to accidentally switch between ChatGPT 3 and ChatGPT 4.

01:13:22   Was that feeling of like, "ChatGPT 3 is like barely intelligent at all."

01:13:27   ChatGPT 4 is very useful at helping me solve certain kinds of problems, but I was very aware of like, "I don't care about ChatGPT's opinion about anything. It's not good."

01:13:40   But now Claude has gone that next level of like, "Oh, it is both better at helping me solve problems than ChatGPT 4 was."

01:13:49   In particular, it's like, "Oh, yeah, I've got a bunch of like little automations and things that I do on my computer that I was aware."

01:13:55   I had to stop trying to improve because it had clearly gone over some threshold of ChatGPT's ability to understand.

01:14:03   But it's like, "Oh, but now, but like Claude can handle it no problem."

01:14:06   And it's like I continue to like help grow these little tools that I use to like make some things in my life easier.

01:14:11   But also Claude now is useful enough that it's like, "Oh, I do want to know its opinion on this or that," or like, "I'm picking between various things. What do you think are good options?"

01:14:22   I'll tell you what is one of the most interesting use cases was I frequently asked Claude like, "Hey, I'm in this place. I'd really just like to do like a beautiful drive for about like three hours.

01:14:34   What's your recommendation from where I am?"

01:14:37   And it was kind of amazing at how good it was at doing this kind of thing.

01:14:42   And comparing to ChatGPT, it's like, "It's just obviously not as good."

01:14:46   It's trying to like reproduce some travel blogs or whatever that it's read.

01:14:50   But it's like, "No, no, Claude is doing something different. Like it has a good opinion here."

01:14:55   It's like, "I can talk to it about what I'm looking for and it does a much better job."

01:14:59   So I look at that and I think it's been not even fully two years since the ChatGPT 4 came out.

01:15:09   And we've already gone over a threshold that to me feels like there's actual meaning here in what this thing is generating.

01:15:17   It's not a summarization machine. It's not a code generation machine.

01:15:21   And so to me it is just all about what is the curve of this stuff.

01:15:25   And I expect like this I don't think this curve has to go on very long before pure generation can start crossing over into a threshold of like where it is valuable to people.

01:15:38   Where pure creation from zero is actually useful.

01:15:44   I mean the only comparison I have there is like I am doing this computer programming stuff with ChatGPT and with Claude.

01:15:51   Like the thing that I keep being really interested in is like it matters that I know how to read and write Python code a little.

01:15:58   If I had no knowledge of Python code I couldn't do the things with them that I'm doing.

01:16:03   But it just feels like we're not very far from if I literally knew nothing about coding I think it could just still help me accomplish the tasks that I want to.

01:16:15   And at that point it is doing generation from zero.

01:16:20   And I just don't think that we're very far from that.

01:16:23   So I don't know if and when we get to that point I feel like the impacts are very very difficult to extrapolate.

01:16:34   And I don't know there's also this funny feeling that I have which I don't quite know how to articulate.

01:16:40   But it's like so much is changing so fast.

01:16:43   But maybe it's a little bit like the humans need not apply video as well.

01:16:47   Like things change so fast but it takes longer for them to filter into the real world than I tend to expect.

01:16:57   So I feel like oh I know a bunch of people where I look at their job and I feel like I'm pretty sure Claude could just do your job right now.

01:17:07   But it takes a while for those things to actually filter through in civilization in a like on the ground change has actually happened here way.

01:17:19   I guess it's like I think I need to add to my mental rubric I guess is I feel like you should never bet against economics.

01:17:27   If a thing is faster and cheaper it will always win.

01:17:31   But maybe there's like an asterisk to add here of but it will probably take longer than you think.

01:17:37   The moment something crosses the cheaper and faster threshold that's not the moment it is implemented everywhere.

01:17:45   That's the moment it begins to be implemented but it takes longer than you think.

01:17:51   Yeah and I think the longer than you think thing can be part of what I was saying earlier about what is acceptable in society.

01:17:58   It might be cheaper now to replace 16% of all jobs in such and such industry completely with an AI model.

01:18:07   But maybe it's not deemed acceptable to do so.

01:18:10   Or it's not even that it's like it's not deemed acceptable.

01:18:13   I don't know it feels to me more like something like a civilizational inertia.

01:18:19   It's not even really that it's unacceptable it's just that there is a default to not changing things that are currently working.

01:18:29   Even if the newer thing is better.

01:18:32   So maybe it's more like ah right like what is actually happening it's probably more like the old things don't get upgraded.

01:18:39   They are just replaced with new things that are created from scratch without the old parts.

01:18:46   But that just takes longer that takes significantly longer for a whole bunch of reasons.

01:18:51   Do you still think that this is doom?

01:18:56   Again I catch myself like a thing that has never really happened to me before.

01:19:00   Which I think I said this last time but it just gets like stronger and stronger with passing time.

01:19:07   Is like is I keep feeling like my mind is divided between these two futures and every conversation I'm having is some version of like which of the two minds am I talking with?

01:19:20   The first mind is something like technological progress continues something like how it always has but just faster.

01:19:31   That's how you should think about the future which is sort of like the story of human civilization right up until now.

01:19:38   At any point in time I think you could make that statement of like technological change will continue and in the future the rate of change will be faster.

01:19:46   You could have said that as a caveman lighting your first fire.

01:19:49   That's like it'll always be true.

01:19:52   But my second mind which I think is the if I am being serious in thinking about the future is that is the doom mind in some sense if we want to shortcut it.

01:20:05   But if I'm trying to be technical about it my actual thinking is something like I really do think there is some kind of boundary that we are getting closer to.

01:20:20   Beyond which it is functionally impossible to even try to think about the future.

01:20:26   Beyond which there's like it is pointless to even plan or think.

01:20:30   Now the question is like where is that boundary like you like and I can I feel like I can try to argue that from all sorts of different ways.

01:20:38   But that is my real feeling of the future is like that boundary is there because this thing is different.

01:20:48   I can of course construct the argument against myself.

01:20:52   It's like I hear these arguments as well as like everybody always thinks they're living in unique times blah blah blah.

01:20:57   I have my reasons why I think like no no no for real this time is different.

01:21:01   All times are in fact unprecedented.

01:21:04   Yes exactly but that is literally true right it's like that is the thing that causes everyone to feel like oh wow like this is different.

01:21:12   It's like yes yes because this has never happened before that is always true.

01:21:16   Yeah like I find that phrase to be frustrating.

01:21:19   Like everybody lives in unprecedented times and always has done and always will.

01:21:24   Yeah like again having rewatched the humans need not apply thing right it's like I like I really end it with like this time is different.

01:21:31   As like I still agree with the parts of that that were like the argument that I was seriously making.

01:21:38   Which is much more like the second half of that about like we're creating thinking machines and this is very different.

01:21:45   And I think people are not seriously engaging with what that process could potentially mean.

01:21:54   And it's very difficult to describe right but like so I am very worried about the destructive power for humans of what I view is the end of the line for these kinds of tools.

01:22:08   So again to be explicit and to not beat around the bush when I try to think like what is beyond this barrier for which like it might not be possible to predict.

01:22:17   It's like well if I'm just like at Vegas and I'm just putting odds on this roulette wheel it's like I think almost all of those outcomes are extraordinarily bad for the human species.

01:22:27   There are potentially paths where it goes well but most of these are extremely bad for a whole bunch of reasons.

01:22:35   And I think of it like this people who are concerned like me like to analogize AI to a little bit like building nuclear weapons.

01:22:44   It's like I like we're building a thing and it could be really dangerous.

01:22:48   But I just don't think that's the correct comparison because a nuclear weapon is a tool.

01:22:55   It's a tool like a hammer it's a very bad hammer but it is fundamentally like mechanical in a particular way.

01:23:05   But the real difference like where do where do I disagree with people where do other people disagree with me.

01:23:13   Is that I think the much more correct way to think about AI is it's much more like biological weaponry.

01:23:21   You're building a thing that is able to act in the world differently than you constructed it.

01:23:31   That's what biological weapons are. They're alive.

01:23:35   A nuclear bomb doesn't accidentally get out of the factory on its own.

01:23:42   Whereas biological weapons do, can, and have.

01:23:47   And like ah once a biological weapon is out there in the world it can then develop in ways that you just would never have anticipated ahead of time.

01:23:58   And so that's the way that I think about these AI systems.

01:24:01   That's like a really really fantastic analogy.

01:24:04   Because I am sympathetic to the nuclear weapon thing right like people watch Oppenheimer and were like oh yeah that's like AI.

01:24:12   I think that Oppenheimer movie might have doomed us all because it puts the wrong metaphor in people's brains.

01:24:17   I mean I think it at least got people close to the idea though right where they could see that and be like oh yeah maybe these tools aren't necessarily good in that way.

01:24:27   In the same way of like oh they were making something they had no idea people were going to use it.

01:24:31   But yes biological weaponry is the same where it has all of that but then the additional part of oh but it can also get out and you cannot control how it changes once it gets out.

01:24:40   I like that.

01:24:42   And the reason I like to talk about it this way particularly with biological weapons is because the thing that I want to kind of shortcut which like it can be fun to talk about but I like you know and people want to argue against me and like for a particular thing.

01:24:56   But it's like look I love to talk about in some sense like oh are the things alive are they thinking thoughts blah blah blah blah blah like that's an interesting conversation.

01:25:06   But when you are seriously thinking about what to do I think that whole conversation is nothing but a pure distraction.

01:25:15   Which is why I like to think about it in terms of a biological weaponry because no one is debating we made a worse version of smallpox in the lab.

01:25:25   No one's having a deep conversation about what's that smallpox thinking?

01:25:29   What does it want? Does it have any thoughts of its own?

01:25:33   Is there some way we can use the smallpox to make our spreadsheets better?

01:25:37   Yeah yeah but no one wonders if the smallpox is thinking something.

01:25:43   But everyone can understand the idea that like it doesn't matter because smallpox germs in some sense want something.

01:25:52   Right? They want to spread. They want to reproduce. They want to be successful in the world and they are competing with other germs for space in human bodies.

01:26:07   They're competing for resources and the fact that they are not conscious does not change any of that.

01:26:16   So I feel like oh these systems they act as though they are thinking.

01:26:24   And fundamentally it doesn't really matter if they are or aren't thinking because acting as though you're thinking and actually thinking externally has the same effect on the world.

01:26:40   It doesn't make any difference.

01:26:42   And so that's my main concern here is like I think this stuff is real dangerous because it is truly autonomous in ways that other tools we have ever built are not.

01:26:59   It's like look we can take this back to another video of mine which is about like this video will make you angry which is like about thought germs.

01:27:08   And I have this line about like thought germs which I mean like I mean memes right but I just don't want to say the word because I think that that's like distracting in the modern context but it's like.

01:27:18   Memes are ideas and they compete for space in your brain.

01:27:23   And their competition is not based on how true they are.

01:27:27   Their competition is not based on how good for you they are.

01:27:31   Their competition is based on how effectively they spread, how easily they stay in your brain, and how effective they are at repeating that process.

01:27:43   And so it's the same thing again like you have an environment in which there are evolutionary pressures that slowly change things.

01:27:54   And I really do think one of the reasons it feels like people have gotten harder to deal with in the modern world is precisely because we have turned up the evolutionary pressure on the kinds of ideas that people are exposed to.

01:28:14   So ideas have in some sense become more virulent, they have become more sticky, they have become better at spreading because those are the only ideas that can survive once you start connecting every single person on earth and you create one gigantic jungle in which all of these memes are competing with each other.

01:28:40   And what I look at with AI and with the kind of thing that we're making here is we are doing the same thing right now for autonomous and semi-autonomous computer code.

01:28:55   We are creating an environment under which, not on purpose, but just because that's the way the world works, there will be evolutionary pressure on these kinds of systems to spread and to reproduce themselves and to stay around and to like, in quotes, "accomplish whatever goals they have" in the same way that Smallpox is trying to accomplish its goals.

01:29:24   In the same way that mold is trying to accomplish its goals.

01:29:28   In the same way that anything which consumes and uses resources is under evolutionary pressure to stick around so that it can continue to do so.

01:29:42   And that is my broadest, highest level, most abstract reason why I am concerned and I feel like getting dragged down sometimes into the specifics of that always ends up missing that point.

01:29:58   It's not about anything that's happening now, it's that we are setting up another evolutionary environment in which things will happen which will not be happening because we directed them as such.

01:30:15   They will be happening because this is the way the universe works. That's why they'll happen.

01:30:21   [ Laplacian ]