ATP

624: Do Less Math in Computers

 

00:00:00   So, I need to issue an apology.

00:00:02   Okay.

00:00:03   I'd like to apologize to AirPods Pro Adaptive Noise Cancellation Mode.

00:00:08   A few months ago, we were talking about the AirPods Pro Noise Cancelling Modes, and I had

00:00:16   said I had tried the adaptive mode for a little while, and I didn't like it, and I switched

00:00:22   back and I disabled it from my cycle when you hold down the stem, because it blocked

00:00:28   too much street noise when I was walking around Manhattan, and I just didn't like it.

00:00:31   I don't entirely agree with you, but I never got to the point that I wanted it removed from

00:00:37   my cycle of options, but I find I very rarely choose adaptive.

00:00:42   Typically, I'm either doing transparency or full-on cancellation.

00:00:45   And I'm pretty sure the way we talked about it was because I got the AirPods 4, and I was

00:00:50   asking you guys about the different modes that they do, because I didn't know which modes

00:00:54   and what they were good for.

00:00:55   Or I think, at least at that point, you reiterated your opinion that you didn't like adaptive.

00:00:58   Yeah.

00:00:59   And we got a couple of notes from listeners basically saying, try it again, it's really

00:01:03   good.

00:01:03   And I tried it again for like, you know, a couple of days, and I hated it, and I went back.

00:01:08   Well, in the meantime, I tried it again, like another time, and I just left it on.

00:01:19   And I realized over the last week or so that I have just left it on for like months now.

00:01:26   And it's fine.

00:01:30   I don't really think about it anymore.

00:01:32   I don't really change modes anymore, almost ever.

00:01:37   And so I actually have come around that whatever the, you know, the current, the new version

00:01:43   or the current version of adaptive noise cancellation mode is indeed good enough to use as my main

00:01:50   mode almost all the time.

00:01:51   I've almost never switched it. Like, since the most recent try, I almost never

00:01:56   switch it to any other mode now.

00:01:58   It just adjusts and it's fine.

00:02:02   Uh, so yeah, sorry adaptive, adaptive mode.

00:02:04   Uh, I was wrong about you and you've won me over.

00:02:07   We released a new ATP member special.

00:02:11   Uh, so this, if you recall, is one of the perks of being an ATP member, which you can,

00:02:16   uh, join by going to ATP.fm slash join.

00:02:19   Uh, once a month we try really hard.

00:02:21   We've been pretty consistent so far.

00:02:23   Uh, once a month we do some sort of bonus content that is not really canon, I guess you

00:02:27   could say.

00:02:28   It's just usually way off in a, in a different world, so to speak, in this case, kind of

00:02:32   literally.

00:02:32   Uh, and sometimes we do what we call ATP Movie Club.

00:02:35   And this month we returned to ATP Movie Club and did Star Trek IV: The Voyage Home.

00:02:41   John, can you tell me why and how we landed on this particular topic?

00:02:45   Marco made a comment on an, uh, earlier episode that anyone listening would have thought was

00:02:50   Marco making a reference to a famous scene in a Star Trek movie, but it turns out Marco has

00:02:54   never seen any Star Trek movies or any Star Trek anything.

00:02:57   And I said, Oh, well then we have to do that as a member special.

00:03:00   We have to show him the movie that he was unknowingly referencing and that Casey and

00:03:03   I were making jokes about.

00:03:04   And that's exactly what we did.

00:03:05   Uh, there's no other reason than that.

00:03:07   Uh, and it was fun.

00:03:09   So check it out.

00:03:09   Star Trek IV: The Voyage Home.

00:03:12   Yeah.

00:03:12   It did not go the way I expected.

00:03:14   And, uh, it is definitely worth checking out.

00:03:16   And, uh, we will remind you, like I said, ATP.fm slash join.

00:03:20   Uh, you can do the Siracusa approach, which is wrong, and join for a month and then, you

00:03:26   know, cancel your membership, which is very easy to do.

00:03:28   And you can slurp, you know, slurp up all the, uh, member specials during that month.

00:03:33   And then you can, you know, walk away and listen to them all, or you can do the right thing,

00:03:38   which is join and, and just let it ride.

00:03:40   You can join us for a month, a year, however long you feel, and just enjoy all

00:03:45   of the member specials and the bootleg and the ad-free shows and so on and so forth.

00:03:50   Oh, and Overtime. And Overtime.

00:03:52   Thank you.

00:03:52   I almost forgot.

00:03:53   So ATP.fm slash join, uh, John, we talked about the Honda Prologue and how it had been

00:04:01   outselling, uh, equivalent GM vehicles that are based on the same platform.

00:04:05   Why do you think that is?

00:04:06   A lot of people wrote in, including Matt Rigby, to tell us that they think it's because the

00:04:10   Honda has CarPlay and the GM cars do not.

00:04:13   I'm not sure how much of a factor that is.

00:04:15   I still feel like the Honda brand is the bigger factor, but that is a factor.

00:04:19   Uh, the GM ones, uh, don't have CarPlay because GM, uh, wants that subscription revenue and does

00:04:24   not want to have CarPlay.

00:04:25   So yeah, throw that in the mix.

00:04:28   Yeah.

00:04:28   In my head canon, that is 100% the reason that may not be reality, but in my head canon, that's

00:04:33   why.

00:04:33   And speaking of CarPlay, uh, Apple is still trying to make fetch happen.

00:04:37   Uh, they insist that next-generation CarPlay is still a thing.

00:04:42   So reading from MacRumors, there was a statement from Apple.

00:04:45   Uh, we continue to work closely with several automakers, enabling them to showcase their unique

00:04:50   brand and visual design philosophies in the next generation of CarPlay.

00:04:53   Each car brand will share more details as they near the announcements of their models that will

00:04:57   support the next generation of CarPlay.

00:04:58   So MacRumors adds, Apple also remains committed to its current CarPlay platform and said it

00:05:04   is available in over 98% of new cars sold in the U.S. over the past few years.

00:05:07   Apple previously said committed car makers included Acura, Audi, Ford, Honda, Infiniti, Jaguar,

00:05:13   Land Rover, Lincoln, Mercedes-Benz, Nissan, Polestar, Porsche, or excuse me, Porsche, Renault,

00:05:18   and Volvo. In December 2023, Aston Martin and Porsche previewed their next-generation CarPlay

00:05:24   designs, but have yet to deliver.

00:05:26   It is unclear which car makers are currently working with Apple.

00:05:29   Why, why isn't Ferrari on that list?

00:05:31   Isn't Cue still on the board?

00:05:32   Yeah, I don't know that list.

00:05:33   I'm not, is that the list of next-generation CarPlay cars?

00:05:35   It's got to be the list of next-generation CarPlay cars.

00:05:37   I guess they're committed to it, but like things happen so slowly in the car world.

00:05:40   I don't understand why Apple made all these pronouncements that 2024 was going to be the

00:05:44   year.

00:05:44   2024 was not the year, but Apple just wanted to make a statement, an official statement

00:05:48   saying next-generation CarPlay is still a thing and it's still coming someday.

00:05:53   That's the nature of the name next generation.

00:05:54   It's always off in the distance.

00:05:56   It's the next, it's not the current generation.

00:05:57   It's the next generation.

00:05:59   Someday it will arrive and you'll be sure to hear about it here when the first car ships

00:06:03   with it because it's definitely going to be weird.

00:06:05   But right now Apple says it is still a thing.

00:06:07   So just be patient, I guess.

00:06:10   It's the race between next-generation CarPlay and a new monitor.

00:06:12   Now, John, can we get a commitment from you that if enough members join at atp.fm slash

00:06:19   join, that you will purchase the first car with the CarPlay 2 situation within it?

00:06:25   I don't think we have enough people who listen to the show to make that possible, even if

00:06:29   they all converted into members.

00:06:30   So with that defeatist attitude, this is never going to work.

00:06:33   Yeah, that's expensive, but I'll definitely read articles about it and watch YouTube videos

00:06:37   about it and talk about it on the show.

00:06:39   We all know that even if somebody handed you a million dollars with which to buy a car,

00:06:44   you are incapable of buying anything that is not Honda.

00:06:47   If the only thing I could do with a million dollars was buy one of these cars, I would do it.

00:06:51   Yeah, you would wait for the Honda version, though.

00:06:52   You would not buy anything but a Honda.

00:06:54   One of these days.

00:06:55   Well, Honda's got their ASIMO operating system.

00:06:57   I don't know what they're doing with that.

00:06:59   It's not even clear.

00:07:00   Are they using Android Automotive and ASIMO is just something like on top of it or alongside

00:07:04   it?

00:07:04   Anyway, we'll find out when the Acura RDX EV comes out with the ASIMO operating system in

00:07:09   2026, if I recall correctly.

00:07:11   Yeah, maybe.

00:07:12   Tell me about handheld gaming PCs.

00:07:16   Is this something that's relevant to your world?

00:07:18   Are you interested in this?

00:07:19   Actually, this might even be better as a Marco question.

00:07:21   I don't know.

00:07:22   Acura RSX.

00:07:23   Sorry for that.

00:07:24   Yeah, so we talked about this in Overtime in a past episode where we talked about the

00:07:28   Switch 2, and I mentioned that there has been a lot of increased activity in the realm of

00:07:34   handheld gaming PCs because the technology is available for it.

00:07:38   You can make low-power, high-performing processors.

00:07:41   They can play, quote-unquote, PC games pretty well, well enough to be on a little screen that

00:07:46   you can hold in your hand.

00:07:48   Another reason why more of these are appearing is an article from The Verge from earlier in

00:07:54   the month.

00:07:54   Valve will officially let you install SteamOS on other handhelds as soon as this April.

00:07:59   So Steam Deck is Valve's handheld thing, and it runs a variant of Linux that has some libraries

00:08:04   to let it run Windows games called SteamOS, and Valve has said for a while that they were

00:08:09   always supposed to license SteamOS to other manufacturers.

00:08:11   Now they're doing that.

00:08:12   Lenovo is going to ship the first third-party SteamOS handheld in May, and supposedly it will

00:08:17   let people install SteamOS on other handhelds even sooner than that thing ships.

00:08:22   So Valve is making a play here to say, we don't just want to sell the Steam Deck.

00:08:26   We want to be like the Microsoft of handheld gaming PCs.

00:08:29   If any, quote-unquote, PC manufacturer wants to make a handheld gaming platform, and there's

00:08:33   a bunch of them.

00:08:33   I think Asus makes one.

00:08:34   Lenovo is going to make one.

00:08:36   There's a couple of other manufacturers that I can't remember.

00:08:38   And if you don't want to literally run Windows on it, you can run SteamOS, which is just Linux

00:08:43   that runs Windows games with a compatibility layer or whatever.

00:08:46   We'll see how this works for them.

00:08:47   I know the Steam Deck is very popular, but not as popular as the Switch 2, I imagine.

00:08:52   But yeah, it could be like, you know, so this is something we kind of accept in the realm

00:08:55   of games that you play while sitting in a chair, that there's PC gaming, and then there's

00:09:00   console gaming.

00:09:01   And there's sort of a rivalry there, but it's like, oh, well, console gaming, there's a handful

00:09:06   of consoles, very few as time goes on, it seems.

00:09:09   But PCs, there's a million PCs.

00:09:11   You can build your own gaming PC.

00:09:13   You can buy gaming PCs from anybody.

00:09:14   You can get any old PC.

00:09:16   And this is trying to make that happen in the world of handheld as well, because historically

00:09:20   it's like, well, handheld, there's no equivalent.

00:09:22   Handhelds, you get it from one of the console makers, or that's it.

00:09:25   Like, there's no handheld PC, but now they're like, oh, well, you can get a Switch or maybe

00:09:29   that weird PlayStation thing that just remote plays to your PlayStation 4 or 5, or you can

00:09:34   get any one of these umpteen handheld gaming PCs.

00:09:37   The twist here is that if Valve has its way, they'll be running Linux instead of Windows, but

00:09:41   they'll still be running, quote unquote, Windows games.

00:09:43   So, as usual, everything in the PC world is a little bit of a mess, but also kind of

00:09:47   exciting.

00:09:47   Very good.

00:09:49   Apparently, Ryan London is out to troll you.

00:09:53   Can you tell me more about this?

00:09:54   So excited.

00:09:55   Someone had sent me something, they talked to, I think I mentioned on the show, they talked

00:09:59   to Ryan London back in December through customer service, and the customer service person said,

00:10:03   oh, I know you want one of our leather cases that has the little sapphire button for the

00:10:08   camera control.

00:10:08   Well, we're totally going to make that, should be ready by mid-December.

00:10:10   That didn't happen, right?

00:10:12   But here we are in mid-January, or at the end of January, and lo and behold, the Ryan

00:10:16   London website has a big banner on the top.

00:10:18   It's like, hey, here it is, leather cases with a sapphire button for the camera control.

00:10:22   New, new, click through, buy one now.

00:10:24   And I did exactly that.

00:10:25   And I clicked through and bought so quickly that I didn't realize until I saw the receipt

00:10:29   screen that what they're selling with the sapphire camera control button is their variant

00:10:36   of the leather case that has a metal ring around the cameras.

00:10:39   And I do not like that.

00:10:40   So I had to cancel that order.

00:10:42   I'm like, well, I'll just cancel that order and I'll go buy the right one.

00:10:45   It's very easy to get confused on the website.

00:10:47   Some of the pictures look very similar, but no, I don't want them with the metal ring.

00:10:51   I want the one that just has smooth leather, smooth leather lump around the cameras.

00:10:55   It looks the same as the Bullstrap one and the 17 other manufacturers that sell the same

00:11:00   case under different brand names.

00:11:02   But unfortunately, Ryan London is not selling the one with the leather lump and the sapphire

00:11:07   thing.

00:11:07   The only one that has a sapphire button is the one with the metal ring around it right

00:11:10   now.

00:11:11   I'm sad and they are teasing me.

00:11:13   I was so excited for a moment and then I wasn't.

00:11:16   Um, so I'm, well, first of all, I'm glad somebody is, you know, they're not just gonna say, well,

00:11:19   we'll wait until the iPhone 17.

00:11:20   No sense in changing our cases now.

00:11:22   They actually are changing their case for the 16.

00:11:25   And, you know, I'm going to have the 16 for another like year and a half, right?

00:11:28   So I'm, I'm ready to buy a second case for this thing.

00:11:31   But, uh, today is not that day.

00:11:33   Ryan London, you fooled me.

00:11:35   I'm so sorry for your loss.

00:11:37   I don't understand the connection.

00:11:39   Like, why only the one with the metal ring? Is it that they just didn't get around to the other

00:11:42   one?

00:11:43   Do they have too much of the ones without the metal ring in stock and they haven't sold

00:11:45   through it?

00:11:46   I don't know what the deal is, but please, Ryan London, convert all your cases.

00:11:50   Don't like, it's an option.

00:11:51   Like when you go to the one with the metal thing, you can have personalization.

00:11:54   Yes, no.

00:11:55   And camera cut out or camera control, either a cutout or a sapphire crystal button.

00:11:59   Sapphire crystal button should be an option on all your cases.

00:12:01   Please make that.

00:12:02   So I will buy one immediately.

00:12:03   But this doesn't matter anyway, because it's not a naked bottom or whatever you call it.

00:12:07   No, it is.

00:12:08   It totally, it is, right?

00:12:09   Not the one you put in the show notes.

00:12:10   Well, maybe that's another reason I shouldn't buy this one.

00:12:13   I'm looking at the black one in the link you put in the document.

00:12:17   Yeah, yeah.

00:12:17   The, maybe the metal one.

00:12:18   I think the metal one doesn't have the, uh, maybe that's another problem with the metal

00:12:21   one that I totally didn't notice.

00:12:22   Um, yeah, this is no good for you.

00:12:24   I know all that, but I don't want the metal thing anyway.

00:12:26   Like I just, I want the one that doesn't have, anyway, I'm waiting.

00:12:30   I'm still waiting.

00:12:31   They have the technology.

00:12:32   They have the buttons.

00:12:33   They can make them.

00:12:34   Just not yet.

00:12:35   Yeah.

00:12:35   It's definitely the entire case industry that's wrong.

00:12:38   Not you.

00:12:38   No, everyone, everyone wants the sapphire buttons.

00:12:40   It's just that the case manufacturers couldn't make them, you know, or didn't know how to

00:12:43   make them yet.

00:12:44   And so they all did cutouts and now they're switching.

00:12:46   Yeah.

00:12:47   They still sell the naked bottom one.

00:12:48   It's just, cutout is the only option on it right now.

00:12:50   I was mostly referring to the naked bottom part, but that's okay.

00:12:53   Uh, speaking of being wrong about things, hey, uh, tell me how much RAM do you have in

00:12:56   your computer again?

00:12:57   More than 192. Uh, 192 megabytes would have been a lot back in the classic Mac's

00:13:03   last days on my platinum-colored, uh, plastic Macs.

00:13:07   But alas, on an episode where I was talking about, uh, the performance of my, uh, powerful

00:13:13   computer, uh, trying to scroll a list of items, I kept saying that it had either 192 megabytes

00:13:18   of RAM, which is way too little, or 192 gigabytes of RAM, which is way too much.

00:13:22   In fact, my Mac has 96 gigabytes of RAM.

00:13:25   I very often forget the exact number because as I mentioned, when I ordered the computer and

00:13:29   later, when I talked about it, 96 gigs of RAM at the current point in time is enough for

00:13:33   me to essentially forget that I even have RAM.

00:13:35   Nothing I do, nothing I do exhausts it or stresses it.

00:13:40   I could run the computer for a month at a time.

00:13:42   Oh, look at Activity Monitor.

00:13:43   No swap is in use.

00:13:44   Uh, 96 gigs is adequate for my current needs, but I do not have 192 gigabytes and 192 megabytes

00:13:51   would not be enough.

00:13:52   Fair enough.

00:13:54   All right.

00:13:54   Uh, tell me about what's going on with your scrolling adventure.

00:13:57   Yeah.

00:13:57   Last time we were talking about, um, AppKit versus SwiftUI and then, uh, with a side tangent

00:14:04   into WebKit to see how smooth that scrolling is.

00:14:06   And there was a lot of, uh, feedback about that on Mastodon and through email, a lot of

00:14:10   people making demo apps saying, I don't know what your problem is.

00:14:12   AppKit is plenty fast for me.

00:14:14   Uh, maybe I didn't emphasize it enough when we discussed it, but the app, once I converted

00:14:19   to AppKit, the performance was good.

00:14:20   Like it wasn't bad anymore.

00:14:21   Like it was fine, right?

00:14:23   It's just that WebKit, I was so impressed at how smooth it was.

00:14:25   And I still felt like my AppKit version was not quite as smooth as the WebKit version.

00:14:30   Uh, and that annoyed me.

00:14:32   I felt, felt like it should be much better, noticeably better.

00:14:34   Instead, it was like the same or maybe slightly worse.

00:14:37   Right.

00:14:37   But it was, but it was fine.

00:14:38   It was adequate.

00:14:39   I wasn't worried about the performance anymore, but so many people were making demo apps and

00:14:42   asking me about things.

00:14:43   Uh, one of the things somebody mentioned was, hey, are you using NSCell- or, or, uh, view-

00:14:49   based tables? Uh, for a little bit of background, NSTableView is a really old class back from

00:14:53   the NeXT days, I think, or if not very close to then.

00:14:56   Um, and it was originally designed for much less powerful computers, uh, with a special class

00:15:02   called NSCell that's used to populate each cell in the table.

00:15:07   And NSCell is like a lightweight thing. It's not a full-blown NSView that

00:15:11   could have anything in it.

00:15:12   It's just a very small lightweight thing because we know you're just going to be a table cell.

00:15:16   You're not going to be some arbitrary view with it.

00:15:18   You're probably just going to show some text or something like a number or maybe like an

00:15:21   image or something.

00:15:21   But it's a very limited thing.

00:15:23   Uh, and as time went on in what was then known as Cocoa development, people were like,

00:15:27   Oh, NSTableView.

00:15:28   It's so annoying with stupid NSCells.

00:15:30   I have all these NSViews in my app, but I can't use them with tables.

00:15:33   I just want to put my NSViews into the tables, but I have to convert everything

00:15:36   to wedge it into these NSCells, and it's very limiting and it's annoying.

00:15:39   And that performance enhancement is no longer useful.

00:15:43   Uh, I wish they would get rid of NSCell.

00:15:45   So many, many years ago, eventually Apple said, okay, now you can make NSTableViews and you

00:15:50   can just stick plain old NSViews inside them.

00:15:52   You don't have to deal with NSCell.

00:15:54   You've got NSViews in the rest of your app.

00:15:56   If you just want to put them in a table, you just stick them into a cell and they'll show

00:16:00   there, uh, done and done. So much so that the cell-based NSTableView has been deprecated

00:16:06   since, I think, Mac OS X 10.10.

00:16:08   I don't even remember what version that was.

00:16:10   Is that Yosemite?

00:16:11   I don't know.

00:16:11   Some, maybe that was earlier than that, but anyway, it's been deprecated for a while.
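
(A minimal sketch of the view-based pattern being described here, assuming a single-column NSTableView; the RowData type and the "RowCell" identifier are illustrative and not from the app being discussed. The key detail is that a reuse identifier lets makeView(withIdentifier:owner:) hand back recycled views instead of building a new one for every row, which is exactly the mechanism that comes up again later in this segment.)

import AppKit

// Hypothetical model type standing in for whatever each row displays.
struct RowData {
    let title: String
}

final class RowListController: NSObject, NSTableViewDataSource, NSTableViewDelegate {
    var rows: [RowData] = []

    // Reuse identifier; the table hands back a recycled view when one is available.
    private let cellID = NSUserInterfaceItemIdentifier("RowCell")

    func numberOfRows(in tableView: NSTableView) -> Int {
        rows.count
    }

    func tableView(_ tableView: NSTableView,
                   viewFor tableColumn: NSTableColumn?,
                   row: Int) -> NSView? {
        let cell: NSTableCellView
        if let reused = tableView.makeView(withIdentifier: cellID, owner: self) as? NSTableCellView {
            cell = reused                      // recycled as rows scroll off screen
        } else {
            cell = NSTableCellView()
            cell.identifier = cellID           // without this, every row builds a fresh view

            let label = NSTextField(labelWithString: "")
            label.translatesAutoresizingMaskIntoConstraints = false
            cell.addSubview(label)
            cell.textField = label
            NSLayoutConstraint.activate([
                label.leadingAnchor.constraint(equalTo: cell.leadingAnchor, constant: 4),
                label.centerYAnchor.constraint(equalTo: cell.centerYAnchor),
            ])
        }

        cell.textField?.stringValue = rows[row].title
        return cell
    }
}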

00:16:15   Um, but people were saying, uh, asking me, are you using view-based or cell-based?

00:16:19   And I said, I'm using view-based because, like, what, what year is it?

00:16:23   Like, we should be using view-based, right?

00:16:25   Apple promoted it at WWDC.

00:16:26   They said, it's time to ditch NSCell.

00:16:28   I'd heard all these bad things about NSCell.

00:16:30   Um, and you know, it's, it's deprecated, but I said, you know, fine.

00:16:34   Like I'll do implementation number six or 5.5.

00:16:37   I converted my NSTableView to use NSCell instead of NSView just to see if it would make

00:16:44   any difference.

00:16:44   And because I, why not?

00:16:46   Right.

00:16:47   And the thing is it did make a little bit of a difference.

00:16:50   And I had to do the thing where I had, like, two copies of the app running side by side to

00:16:54   be like, am I imagining things?

00:16:56   Scroll, scroll.

00:16:57   Cause like, like I said, the, the NSTableView one with NSViews is fine.

00:17:01   Like that you, you'd look at it and you would think there's nothing wrong with it, but I'm,

00:17:03   I've been obsessing over it for so long.

00:17:05   Now it's like, scroll this one.

00:17:06   Now scroll that one.

00:17:07   Now see, does it feel... can I, can I move, can I move the pointer, like, off the scroll thumb?

00:17:12   I'm like, if I shake it real fast, are they the same?

00:17:15   But I was like, I swear the NSCell one is a little bit better.

00:17:18   And I didn't know what to do with that.

00:17:21   I'm like, well, I've re-implemented it in NSCell and it's fine.

00:17:24   Maybe I'll just leave it like that.

00:17:25   But guess what?

00:17:26   NSCell is annoying.

00:17:27   Everyone was right.

00:17:29   They're like, it makes the code more complicated and annoying.

00:17:31   And it's just like, well, but I already did it, but it's kind of gross.

00:17:36   Like, what should I do?

00:17:37   Like it is better.

00:17:38   I wish it wasn't better, but I can tell that it's better.

00:17:41   So instead, what I decided to do was spend an entire day trying to figure out, why is

00:17:45   it better?

00:17:45   Well, like, and I wish I was better at Instruments.

00:17:48   I'm, I'm not good at Instruments.

00:17:49   I've watched all the WWDC sessions.

00:17:51   I'm still not good at it.

00:17:52   I wish I was better at Instruments.

00:17:53   Someone, you know, like if you, if you are good at the like performance analysis tools,

00:17:58   it makes your life so much easier.

00:17:59   I know this from my career with performance analysis tools that I did know how to use,

00:18:04   but this is not one of them.

00:18:06   So I was just like, I'll figure, I have two, I have the code for both of them here.

00:18:11   Surely I can figure out why are you ever so slightly slower than you over here?

00:18:15   And so I just went methodically through it and tried to figure out what is making this

00:18:20   slower.

00:18:20   And I figured it out.

00:18:21   I figured out what it was.

00:18:23   Uh, it was yet another corner of the, uh, the Swift language that I'm not familiar

00:18:28   enough with, combined with a careless mistake.

00:18:30   I have a bunch of subclasses because that's what you do in AppKit.

00:18:33   You subclass things. Uh, it's a big cascade of subclasses for populating my NSTableViews.

00:18:38   And, uh, you guys familiar with the whole like designated initializer thing, you know, where

00:18:44   you got to call a designated initializer and you can have convenience initializers that do

00:18:48   what you want to do, but the, but you don't get to pick the designated initializers.

00:18:51   If you're using like, you know, some NS class that Apple defines, you got to call their

00:18:55   initializers.

00:18:55   Even if they're initialized with some crap that you don't care about.

00:18:57   Anyway, I have this big cascade of initializers and being a dutiful little object oriented

00:19:02   person, I was shoving the common functionality down into the base classes.

00:19:06   So I don't have to repeat it in every subclass.

00:19:08   Um, and one of the things that I had shoved down, uh, was setting a very important attribute,

00:19:15   the identifier for, uh, the object.

00:19:18   And it turns out one of my derived classes was calling through a sequence of inits that

00:19:25   never hit the init that set the identifier.

00:19:27   It was passed into the constructor and passed down, but, like, because you

00:19:31   have to call the designated initializer, like, sooner than you think, or at least I was calling

00:19:34   it sooner than I, I thought I was, uh, I had skipped over that part.

00:19:38   So what ended up happening was one of my cells, just a dinky little cell was not getting its

00:19:43   identifier set, which meant that every time I needed one of those, it would make me a new

00:19:47   one.

00:19:47   Oh no.

00:19:48   And it was, it was, it was a constant, it was the one with the little eyeball, like the

00:19:52   preview thing.

00:19:52   Like it's literally the same thing every time there's no data.

00:19:55   Right.

00:19:55   It's, it's hard coded.

00:19:56   So it was really fast, but not as fast as not making it.

00:20:01   So that was it.

00:20:02   I, I, you know, I set the identifier because I was already passing the identifier.

00:20:07   It was there in the constructor.

00:20:09   I set the identifier in, like, the subclass init, even though it's, like, duplication and,

00:20:12   you know, don't repeat yourself.

00:20:14   Well, anyway, I set the identifier, uh, because it wasn't getting set because I wasn't going

00:20:17   through the right init path elsewhere.

00:20:19   Uh, and then all of a sudden the, uh, view-based one was exactly the same as the cell-

00:20:24   based one.

00:20:24   And I was very happy.

00:20:26   I celebrated and I hope I never have to re-implement that view ever again.

00:20:31   I threw away the NSCell-based one, reverted to the view-based one, did the two-line fix.

00:20:36   Uh, and, and honestly, you can't really tell the difference unless you, unless you, unless

00:20:40   you A/B test it. It looks exactly the same as it was before, but I know it's ever so slightly

00:20:44   better and that's what matters.
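
(As a rough illustration of the bug described here, not the actual code from the app: a subclass initializer can route around the base-class initializer that assigns the reuse identifier, so the table can never recycle that cell view. The class and parameter names below are made up, and the duplicated assignment mirrors the kind of two-line fix mentioned in the episode.)

import AppKit

class BaseCellView: NSTableCellView {
    // The "common functionality pushed down into the base class": set the reuse
    // identifier so makeView(withIdentifier:owner:) can recycle this view later.
    init(frame: NSRect, reuseID: NSUserInterfaceItemIdentifier) {
        super.init(frame: frame)
        identifier = reuseID
    }

    // NSView's plain designated initializer; nothing sets `identifier` on this path.
    override init(frame: NSRect) {
        super.init(frame: frame)
    }

    required init?(coder: NSCoder) { fatalError("not used in this sketch") }
}

final class PreviewCellView: BaseCellView {
    init(reuseID: NSUserInterfaceItemIdentifier) {
        // Oops: this calls the plain init(frame:), skipping the init that assigns
        // `identifier`, so the table builds a brand-new view for every row.
        super.init(frame: .zero)

        // The fix: set it here as well (duplicated, but it restores reuse).
        identifier = reuseID
    }

    required init?(coder: NSCoder) { fatalError("not used in this sketch") }
}

(The tidier alternative would be to make the subclass call through init(frame:reuseID:) so the assignment lives only in the base class; the version above just shows why the cell with the missing identifier kept getting rebuilt.)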

00:20:46   Uh, and, uh, the final thing on this topic, uh, a bunch of people are asking about WebKit

00:20:50   performance and scrolling performance.

00:20:52   Um, and someone pointed me to, uh, a blog post about a website, uh, where someone wanted

00:21:00   to make a, uh, a web page where you can scroll through every UUID.

00:21:03   That's bananas.

00:21:05   Like every UUID.

00:21:07   Wait, how like just random, like iterating?

00:21:11   Yeah, just it's, well, the idea is a web page and you scroll it and the top is the first UUID

00:21:17   and at the bottom is the last one and in between are all the other possible UUIDs between those

00:21:20   values. Now, this is like an example of, like, well, how far can you push the whole

00:21:25   like, uh, you know, it doesn't, like we've mentioned this before, if you're recycling the

00:21:28   cells, it shouldn't matter how many things there are in the list, right?

00:21:32   The performance should be the same with 10, a thousand, a hundred thousand, a million,

00:21:35   a billion.

00:21:35   It should, the performance should be exactly the same all the time.

00:21:37   Right.

00:21:38   Uh, and this is kind of a demonstration that now I feel like they cheated because when you

00:21:42   go to the website, everyuuid.com, you will see how they cheated.

00:21:47   It's not really scrolling it.

00:21:49   It's really just showing a fixed list of cells with values that change.

00:21:52   So, and it's a fake scroll bar.

00:21:54   So, uh, but anyway, the blog post about how they implemented it is fun because, uh, obviously

00:22:01   there's no sort of data, like you can generate all of these.

00:22:03   And I think they're essentially like sequential or whatever, but I thought it was interesting,

00:22:07   interesting enough to be in the show notes.

00:22:08   If you want to see one of the challenges of trying to have a, not infinite, but very,

00:22:15   very large scrolling list, everyuuid.com.
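
(The trick in play, roughly: keep a fixed pool of row views and map the scroll position to an index window, so the work scales with what is visible, not with the astronomically large row count. A toy sketch of that mapping, with plain Int indexes standing in for UUID positions; this is not the site's actual code.)

import Foundation

// Toy model of a "virtual" list: the total row count can be enormous because
// nothing is ever materialized except the rows currently on screen.
struct VirtualList {
    let totalRows: Int          // stand-in for an astronomically large UUID count
    let rowHeight: Double
    let visibleHeight: Double

    // How many row views we actually need to keep around (plus one for partial rows).
    var poolSize: Int {
        Int((visibleHeight / rowHeight).rounded(.up)) + 1
    }

    // Map a scroll fraction (0...1, e.g. from a fake scroll bar) to the indexes
    // whose values should be poured into the fixed pool of views.
    func visibleRange(forScrollFraction fraction: Double) -> Range<Int> {
        let maxFirst = max(0, totalRows - poolSize)
        let first = Int(Double(maxFirst) * min(max(fraction, 0), 1))
        return first ..< min(first + poolSize, totalRows)
    }
}

let list = VirtualList(totalRows: 1_000_000_000, rowHeight: 24, visibleHeight: 600)
print(list.poolSize)                              // 26
print(list.visibleRange(forScrollFraction: 0.5))  // a 26-row window near the middle

(The fake scroll bar exists for the same reason: no layout engine will accept a real content height of that many rows times a row height, so the page fakes the thumb and reuses the same handful of rows.)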

00:22:18   With regard to the Ask ATP, the one and only, I believe, Ask ATP topic from last week, we were

00:22:26   talking about what should you look for if you're about to buy a house and you're a nerd like

00:22:30   us, uh, and we had talked about, you know, whether or not, uh, things would be pre-wired,

00:22:36   you know, like if there would already be ethernet in the house.

00:22:37   And John Hayden writes, my house had phone jacks in every room.

00:22:41   However, I looked behind them and found that they were actually Cat 6, which is, you know,

00:22:45   ethernet cabling, but only using one or two twisted pairs, which is to say only a subset

00:22:49   of the wires within it.

00:22:50   They all routed outside the home, just like the coax.

00:22:53   I tried pulling the cables from the outside wall, but they were likely stapled down.

00:22:56   So I cut them and rerouted them to what is now my server closet.

00:22:59   Life lesson.

00:23:00   If you have phone jacks, see if you have an ethernet cable behind it.

00:23:03   This is like finding a, uh, an extra room in your house that you've never discovered before.

00:23:07   Like you didn't realize your whole house is wired with the internet.

00:23:09   You just remove one of the, uh, phone jacks.

00:23:11   There's ethernet cable in here.

00:23:13   Uh, what a heartwarming story that will never happen to anybody else.

00:23:16   Can confirm.

00:23:18   All right.

00:23:20   And, uh, I have some updates with regard to my grand notification project.

00:23:23   Uh, first of all, I got a ton of feedback.

00:23:26   This is one of those things where I thought I was very clear about this and either I wasn't

00:23:30   or, or maybe people just didn't hear.

00:23:33   Um, but one way or another, it doesn't really matter.

00:23:35   Uh, I already have the connection, if you will, between the garage door and Home Assistant

00:23:41   that's sorted that that was sorted last week.

00:23:44   It's been sorted since I went to Home Assistant.

00:23:46   Um, there is a, I was going to say bespoke.

00:23:49   I don't know if that's really fair, but there is an integration with the particular weirdo flavor

00:23:54   of garage door that I have, um, that I've been using since, uh, since, since I went to Home

00:23:59   Assistant, uh, a few months ago.

00:24:00   And, uh, one of the funny things about, if you look at an integration in the, uh, or on the

00:24:05   Home Assistant website, you can see how many people have it installed.

00:24:10   And the integration I'm talking about is the Nice G.O. integration.

00:24:14   Uh, it was introduced in Home Assistant 2024.9, and it is used by, gentlemen, 36 active installations

00:24:22   of which I am one of them.

00:24:23   So not terribly popular, but it works.

00:24:26   Um, so I just wanted to make it clear.

00:24:28   A lot of people were talking about, like, a ratgdo.

00:24:30   Um, I don't recall what the acronym stands for, but I think it's like rat-G-D-O or something

00:24:35   like that.

00:24:35   Anyways, uh, there, there are many, many mechanisms to get a dumb garage door opener into, like, Home,

00:24:42   HomeKit or Home Assistant and what have you.

00:24:44   Uh, but that's already been solved for me.

00:24:45   Uh, also, a lot of people brought up, and this one I am pretty sure I didn't say anything

00:24:49   about, a lot of people brought up the ESP32 as an alternative to, like, a Raspberry Pi or Arduino.

00:24:55   Uh, these are exceedingly cheap, uh, Wi-Fi-enabled, uh, little programmable, uh, basically circuit

00:25:01   boards.

00:25:02   Uh, and they seem to be the popular way to do this sort of LED kind of dance that I'm talking

00:25:09   about.

00:25:09   And in fact, there's a software project called ESPHome, uh, that is allegedly really, really

00:25:17   good at doing LED-related stuff and then exposing that in Home Assistant.

00:25:21   Uh, I don't think I talked about either of these on the show last week, but I very justifiably,

00:25:26   and I appreciate it,

00:25:27   got a lot of feedback from people saying, this is what you need to do.

00:25:30   I also got a lot of people saying to me, just get an LED strip, man.

00:25:36   And I was like, okay, I don't want a hundred LEDs.

00:25:41   I want three.

00:25:43   Why would I get an LED strip of, you know, a meter, two meters, three meters, four meters,

00:25:48   10 meters, or what have you.

00:25:49   I don't want that.

00:25:50   I want three, not three meters.

00:25:53   Mind you, I want literally three LEDs.

00:25:55   And in a conversation on a Slack with Kiel Olson, he said to me, well, you can just cut it.

00:26:01   Wait, wait, I'm sorry.

00:26:03   What?

00:26:03   Yeah.

00:26:03   You can just cut an LED strip.

00:26:05   If it's the right kind of strip, you can just cut it.

00:26:07   What?

00:26:08   So it turns out that there is a style of LED strip, and the most popular one is the WS2812B.

00:26:17   I'll put a link to the data sheet in the show notes.

00:26:19   I couldn't find a better link for like just the thing.

00:26:22   You can do an Amazon search or what have you, if you want to see it.

00:26:26   But suffice to say, these are addressable LEDs.

00:26:29   So imagine a strip of, say, 100 LEDs, and you can actually target any specific LED.

00:26:36   And the way it works, and I'm going to butcher the specifics, but the general gist of the way

00:26:39   it works is the first LED has a data connection and a power connection to whatever's powering

00:26:44   it, and it then daisy chains those connections to all the LEDs behind it.

00:26:52   So it will look at the data, say, is this about me?

00:26:56   Okay, then I'm going to turn myself on or off, or no, pass it along to the next one.

00:27:00   And if you cut them at the particular places where they allow you to physically cut them,

00:27:06   and they're usually labeled, I guess, you can just cut three LEDs off of a strip of 100.
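
(A conceptual model of why cutting works; this is a sketch of the idea, not driver code: in WS2812-style strips each pixel keeps the first color that reaches it and shifts the rest of the stream downstream, so however many pixels remain after a cut, the controller just sends that many colors.)

// Conceptual simulation of a WS2812-style daisy chain: each pixel keeps the first
// color in the stream and forwards the remainder to the next pixel down the line.
struct Color { let r, g, b: UInt8 }

final class PixelNode {
    private(set) var latched: Color?
    var next: PixelNode?

    // Receive a stream of colors: keep the first one, pass the rest along.
    func receive(_ colors: ArraySlice<Color>) {
        guard let first = colors.first else { return }
        latched = first
        next?.receive(colors.dropFirst())
    }
}

// Build a "strip" of three pixels, i.e. what's left after cutting at a marked point.
let pixels = (0..<3).map { _ in PixelNode() }
for (a, b) in zip(pixels, pixels.dropFirst()) { a.next = b }

// The controller only has to emit as many colors as there are pixels left on the strip.
let frame: [Color] = [
    Color(r: 255, g: 0, b: 0),   // pixel 0: red
    Color(r: 0, g: 255, b: 0),   // pixel 1: green
    Color(r: 0, g: 0, b: 255),   // pixel 2: blue
]
pixels[0].receive(frame[...])
// pixels[0].latched is now red, pixels[1] green, pixels[2] blue.

(That is also why "addressing" LED n just means sending n colors down the wire and only changing the nth one.)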

00:27:10   I had no freaking clue this was a thing.

00:27:12   This is blowing my mind.

00:27:14   Now I feel like I want to just play with LED strips because I can.

00:27:17   I have no purpose really for them, but I just want to get some and play with them just

00:27:21   because I think that's incredible.

00:27:22   I'm going to combine this with Marco's holiday lights.

00:27:24   You can run those LED strips all over his house.

00:27:27   What was the name of that candy where you constantly eat paper because they stick to it?

00:27:31   What were those, dots?

00:27:32   Oh, oh, God.

00:27:32   Dot candy?

00:27:33   No, dots are the chewy things in the box.

00:27:35   What the, ribbon candy?

00:27:36   No, no, no.

00:27:37   It was something dots.

00:27:38   I know what you're thinking of.

00:27:39   It was a row of like four or five.

00:27:40   Yeah, it was like a piece of paper and had hard little pieces of sugar stuck to it.

00:27:44   And the idea was that you would scrape the little hard pieces of sugar off the paper with

00:27:47   your teeth or your fingers, but you inevitably end up eating paper.

00:27:49   They were a terrible candy and I'm glad I can't remember their name.

00:27:52   Oh, they were the best, but they were also terrible.

00:27:54   You're not wrong about that.

00:27:55   If you like eating paper, they're great.

00:27:57   It was an activity and a candy in one.

00:28:00   Right.

00:28:01   Like a fun day.

00:28:02   Is the activity picking paper out of your teeth?

00:28:03   Is that the activity?

00:28:04   Yeah, pretty much.

00:28:04   Yeah, pretty much.

00:28:06   In any case, so that's a thing.

00:28:09   I had no idea that was a thing.

00:28:10   And I think everyone who was telling me, oh, just get a 2812, you know, yada, yada, yada.

00:28:15   They all just assumed that I knew that you could just cut off a slice of LEDs.

00:28:20   I had no idea until Kiel said something.

00:28:22   So now I know.

00:28:24   Yeah, I knew that you could cut LED light strips that like, you know, just were regular ones

00:28:28   that don't have like individually addressable, you know, components.

00:28:32   I figured, you know, because there's lots of LED light strips that are just like long strings

00:28:35   of LEDs and they have little markings on where you can cut them and it's fine.

00:28:39   Yeah, I had no idea.

00:28:40   Yeah, but I didn't realize that that extended to the, like, addressable cool kind as well.

00:28:44   Yeah, isn't that neat?

00:28:45   Real-time follow-up, John, I think.

00:28:46   Candy buttons.

00:28:48   Candy buttons, candy dots, or, and this is a great one, pox.

00:28:51   Would you like some pox?

00:28:53   No, thank you.

00:28:53   No, I do not.

00:28:55   I would not like some pox.

00:28:55   I will pass on the pox.

00:28:58   A pox on both of your teeth with paper that's going to get stuck.

00:29:00   Oh, man, that's good.

00:29:05   But I love the, I mean, everything you said is correct.

00:29:07   You constantly had paper coming with them, but God, I love them so much.

00:29:11   They were one of my favorites.

00:29:11   All right, just a little bit more with regard to the notification stuff.

00:29:14   I ran all this by the Historical Commission for the second time.

00:29:18   I had kind of done a casual flyby with the Historical Commission,

00:29:21   and I had forgotten that the Historical Commission had strong opinions about this.

00:29:27   And the Historical Commission has informed me that this is not going to be a project that

00:29:31   I will be pursuing any further, or at least not establishing it in the kitchen.

00:29:36   And it's too bad because, as I think I'd said to you last time, like there's, and I don't

00:29:40   want to share this photo publicly, but in our Slack, I shared with the boys, there's an empty

00:29:46   outlet, like almost at eye level, right in the kitchen, right where we pass by all the

00:29:51   time.

00:29:51   And it would have been a perfect spot for it.

00:29:53   And the Historical Commission has informed me this is not going to be passing.

00:29:57   My, not my permission slip, my application has been denied.

00:30:03   And most of my privileges might have been revoked.

00:30:05   That's to be determined.

00:30:06   Yeah, I think I would side with the Historical Commission on this one.

00:30:10   What actually went there?

00:30:11   Because it is literally a blank wall plate.

00:30:13   Like there's no, it's like the same wall plate you would see if there was an outlet or a

00:30:16   switch, but there's nothing.

00:30:17   Well, I just installed that blank.

00:30:19   What was there until literally a week or two ago, which is kind of what started this whole process,

00:30:23   was that used to be an RJ-11 jack, a telephone jack right there.

00:30:27   Oh, there you go.

00:30:28   That makes sense.

00:30:28   And, you know, the house was built in the late 90s.

00:30:30   And so you want the kitchen phone.

00:30:32   And that was, I mean, you can't see you guys.

00:30:33   Well, none of the listeners can see this photo and I apologize.

00:30:36   But the boys are looking at the photo and behind me, as I took the pictures, like the

00:30:40   kitchen and the kitchen table and whatnot.

00:30:41   But it's the perfect spot for this like notification system.

00:30:44   Unfortunately, the Historical Commission has denied my application and my building permit

00:30:48   has been denied with prejudice.

00:30:52   With prejudice.

00:30:53   That's what I was looking for.

00:30:54   Thank you.

00:30:55   That being said, right near this outlet, or this former RJ-11 outlet, is a three-gang set

00:31:05   of traditional light switches.

00:31:06   And one of them is the kitchen, one of them is the kitchen table, and one of them is the

00:31:10   hallway.

00:31:10   And a couple of people wrote in, Drew Stevens in particular wrote in and said, what about

00:31:17   a HomeSeer, H-O-M-E-S-E-E-R, HS-WX300.

00:31:23   All these names just roll right off the tongue.

00:31:25   Is that like a GPU?

00:31:26   Right, exactly.

00:31:27   It might as well be, because it's got RGBs.

00:31:29   It does, that's true.

00:31:30   So this is a, you know, like, paddle or Decora-style switch that has individually addressable

00:31:36   LEDs on the side of it.

00:31:37   And you can even change the colors of them.

00:31:38   This looks freaking perfect for what I want.

00:31:43   And what I can do, hypothetically, is I can replace one of the three switches in the three-gang

00:31:49   box that actually do things.

00:31:51   I can replace one of those switches, specifically the kitchen table switch, which not only is

00:31:55   in the center of this three-gang box, which obviously I can move it, but also is a single

00:31:59   switch rather than, you know, part of a multi-switch setup, a three-way setup or what have you.

00:32:05   Anyways, I could replace the kitchen table switch with one of these and it would be perfect.

00:32:10   And I am ready to buy this thing, money, no object, and I go to buy it and it's sold out on Amazon.

00:32:18   Aren't you getting a little ahead of yourself here?

00:32:20   Have we learned nothing about the permitting process?

00:32:22   I know, I know.

00:32:23   But I thought maybe I could slide it in on the, you know, get it in on the slide.

00:32:26   I think this is going to be noticeable.

00:32:28   I don't think this is going to slide under the radar because like, especially in the pictures on the website, these LEDs are not subtle.

00:32:35   You're not wrong, but just hear me out though.

00:32:38   So here it is, I go to Amazon, I'm like, this is going to be freaking great.

00:32:41   And I don't remember what the cost was, but it was under $100, which granted for a switch is a lot of, well, you know, you can get a $3 like dumb switch.

00:32:48   And so $100 is a lot of money for a switch, but this would be perfect.

00:32:51   But it's sold out.

00:32:52   So I said, okay, fine, I'll go to the actual HomeSeer website.

00:32:55   I will buy one from there. No, it's sold out there too.

00:32:57   So apparently I cannot get one of these and I'm really sad about it, even though I think you're right, John,

00:33:04   that ultimately the permitting process would deny me.

00:33:08   What was it? With prejudice?

00:33:09   Thank you.

00:33:10   There it is.

00:33:11   You got to go through that process before you buy the switch.

00:33:13   I know.

00:33:14   And speaking of that, how do you think the terminal with no vowels is going to go over?

00:33:18   Has that been discussed?

00:33:19   Lightly discussed.

00:33:21   Marco did the thing where he bought it first and then slid it in and just waited to see if anyone would notice.

00:33:26   And I'm wondering if that strategy might not work in your household.

00:33:29   I don't know that it would.

00:33:30   It's been lightly discussed, but the thought of having an automatically updatable calendar was met with enthusiasm because I think I've talked about this in the past.

00:33:40   I print at the beginning of the month a physical calendar and put it on the refrigerator just because I like, both of us actually, like having a vague notion of what we're doing.

00:33:50   And we can always write stuff on there if we want.

00:33:52   The system of record is 100% our shared Apple calendar.

00:33:55   But having a glanceable thing in the kitchen is kind of nice.

00:33:58   Now, I don't think we'll be mounting the terminal on the refrigerator, although I guess we technically could since it's battery-powered.

00:34:04   I don't know.

00:34:04   We'll see what happens when we get there.

00:34:06   But one way or another, this is irrelevant because I can't put my hands on one of these HomeSeer HS-WX300s.

00:34:15   Then a ton of people recommended the Inovelli Smart Dimmer.

00:34:19   And in fact, people are recommending that in the chat right now.

00:34:22   This is way more popular in terms of the number of recommendations I got.

00:34:26   However, it's not exactly what I want.

00:34:29   It's close, but it's not exactly there.

00:34:32   And it does have a light, like an LED light bar on the side of it.

00:34:35   It's a switch with a light bar on the side as opposed to individual LEDs.

00:34:38   And it wasn't until, well, Drew Stevens, who is the same one who recommended the HomeSeer, recommended this as well.

00:34:44   And then David Minima pointed out to me, no, no, no.

00:34:47   Yes, it's presented as one LED bar, but there's actually several addressable LEDs in there.

00:34:54   But the thing of it is, is that they're all in like one shroud or lens or what have you.

00:34:58   So I don't really love this.

00:34:59   Although a lot of people reached out and said the Inovelli is very, very good.

00:35:03   So we'll see what happens.

00:35:05   Again, the Historical Commission, I think, has opinions about this.

00:35:07   And naturally, I want to appease the Historical Commission.

00:35:10   I recommend that path, yes.

00:35:13   So I think this is all for naught anyway.

00:35:15   "One diffuser," David Schaub writes, and I think that's the correct term for it.

00:35:18   Thank you.

00:35:19   But as a couple of quick final notes, because I can't leave well enough alone and because my permitting has been denied, I thought, well, what's the next best thing?

00:35:30   Well, I can put this junk in my menu bar because why wouldn't I?

00:35:33   And I was looking at SwiftBar, which is my thing that will let me put random stuff in my menu bar, which I really, really love.

00:35:41   I've talked about it many times in the past.

00:35:43   And one of the advantages of putting all this data on an MQTT setup, a pub/sub sort of setup, is that anything can subscribe to these, you know, basically, is-it-bad-or-is-it-good messages?

00:35:56   And the way SwiftBar works, though, is that it pings away, like every five seconds or 10 seconds or five minutes or 10 minutes or what have you, it makes another request and gets the latest version of the world.

00:36:08   And yeah, I could, like, just do this every second or two, but it seems so wasteful when the whole idea of MQTT is you say, I would like to get updated, and then it sends you updates.

00:36:18   Well, come to find out, and I didn't realize this until earlier today, SwiftBar actually has the idea of, shoot, I forgot the name of it, and I'm trying to stall for time while I look at it.

00:36:28   But it has the idea of streamable, there we go, streamable plugins, where instead of just running a script and then getting a result and walking away, it will actually start a new thread and run a script and wait for it to update standard out.

00:36:43   And when it updates standard out, it'll update your menu bar.
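
(A rough sketch of the streamable-plugin shape being described, assuming the mosquitto_sub command-line client is installed; the broker host, topic, and install path below are hypothetical, and the <swiftbar.type>streamable</swiftbar.type> metadata and the "~~~" update separator reflect my reading of SwiftBar's plugin conventions and should be checked against its docs and your setup.)

#!/usr/bin/swift
// A long-running script: subscribe to an MQTT topic by shelling out to mosquitto_sub
// and print a fresh menu-bar line whenever a message arrives, instead of polling.
// <swiftbar.type>streamable</swiftbar.type>
import Foundation

let sub = Process()
sub.executableURL = URL(fileURLWithPath: "/opt/homebrew/bin/mosquitto_sub")  // assumed install path
sub.arguments = ["-h", "homeassistant.local", "-t", "home/garage/door"]      // hypothetical broker + topic

let pipe = Pipe()
sub.standardOutput = pipe
do { try sub.run() } catch {
    print("Garage: ?")   // fall back to a static line if the subscriber won't launch
    exit(1)
}

let reader = pipe.fileHandleForReading
while true {
    let chunk = reader.availableData               // blocks until the broker sends something
    if chunk.isEmpty { break }                     // subscriber exited
    guard let payload = String(data: chunk, encoding: .utf8)?
        .trimmingCharacters(in: .whitespacesAndNewlines), !payload.isEmpty else { continue }
    print("~~~")                                   // start of a new streamed update (assumed SwiftBar convention)
    print(payload == "open" ? "Garage: open" : "Garage: closed")
    fflush(stdout)                                 // push the update out immediately
}

(Saved into the SwiftBar plugin folder, something shaped like this should change the menu bar the moment the broker publishes, with no polling interval involved.)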

00:36:45   And it took me a little bit of time to figure out how to get this right, and I had to actually engage with the author of SwiftBar, who went above and beyond.

00:36:54   This is Alex Mazanov, who went above and beyond, and seemingly installed their own MQTT setup just to test my BS.

00:37:02   Incredible customer service, especially for a free app, but even in general, incredible customer service.

00:37:07   And anyways, between Alex and myself, we got it squared away.

00:37:11   So now, as MQTT messages come in, I will occasionally see an envelope appear in my menu bar if I need to check the mail.

00:37:19   I will see a little charging car in my menu bar if Aaron's car is charging.

00:37:23   And I will always see the state of the garage door, because that's what I want to know, either open or closed.

00:37:27   And that makes me very happy.

00:37:28   And you can all judge me, and I don't care.

00:37:30   And then finally, finally, we talked about, hey, when I leave the house, I might take all these Caseta switches and so on and so forth.

00:37:38   And we had a brief conversation between the three of us as to whether or not I could.

00:37:42   And as per a handful of people, but I think the first one I saw was Mark Bramhill reached out and said, hey, according to their realtor, and obviously the rules may vary where you are, anything you show, and the phrase that Mark quoted from his realtor was, anything affixed to the wall has to stay.

00:37:57   So if you show the house with Caseta switches, guess what?

00:38:00   They're going to be the new owners, Caseta switches.

00:38:02   But if you rip them out before you start showing the house, fair game, baby.

00:38:06   So that makes sense, but I didn't know that, and I didn't have that summarized until Mark had told me about it.

00:38:11   So there you go.

00:38:12   Didn't some Germans also say that they take everything with them?

00:38:15   Oh, yes, that's right.

00:38:16   They take their kitchen appliances, their kitchen cabinets, and one of them said their kitchen countertops.

00:38:22   What good are the countertops going to do you unless every countertop is the same size and shape?

00:38:27   Like, you know, these are our marble countertops.

00:38:29   We're taking them with us.

00:38:30   Like, they're not going to fit in your new place at all unless it's exactly the same as your old.

00:38:34   But anyway.

00:38:35   I mean, you are talking about Germany.

00:38:36   Like, maybe literally all of their counters are exactly the same size and shape.

00:38:40   Like, there's no other place that I would think that's a possibility.

00:38:43   But in Germany, maybe.

00:38:44   I don't know.

00:38:45   Yeah, like, you get a new place, and the only thing in the kitchen is just, like, bare walls, bare floor, and, like, some, like, electrical wires dangling out somewhere.

00:38:52   Maybe, like, a loose gas line.

00:38:53   I don't know.

00:38:54   I don't know what's going on over there in Germany, but anyway.

00:38:56   This is bananas to Americans.

00:38:58   And obviously, in America, you know, you could write into a contract, you know, I'm taking the stove, or I'm taking this, or I'm taking that.

00:39:03   But generally speaking, normally kitchen fixtures, well, the fixtures particularly, like, cabinets and countertops and whatnot,

00:39:10   those always stay.

00:39:11   I've never heard of those going.

00:39:12   But even, like, stoves and ovens and, in a lot of cases, microwaves, if they're, you know, built-ins, all of those tend to stay.

00:39:19   The sellers usually don't want them.

00:39:21   The sellers usually say, please take our crappy old, you know, we're not taking our fridge with us.

00:39:26   We're not taking our stove.

00:39:27   They're yours if you want them.

00:39:28   Like, it's just because they assume they're going to get new stuff in their new place.

00:39:31   And, yeah.

00:39:31   But, anyway, different strokes.

00:39:33   Indeed.

00:39:34   All right.

00:39:35   So, there's been a big brouhaha over the last several days about DeepSeek.

00:39:40   And we're going to read, probably, I'll be reading quite a lot of different things, for better and for worse.

00:39:47   But DeepSeek is a new AI thing from this Chinese company that I don't think any, well, not literally, of course, but most Americans hadn't heard of.

00:39:57   I certainly hadn't heard of it.

00:39:58   And they released some stuff.

00:40:00   And we'll talk about what here in a second.

00:40:01   And it done shooketh the American stock markets and a lot of big tech here in America.

00:40:07   And a lot of big tech took a bath in the stock market over the last week.

00:40:12   So, reading from Ars Technica.

00:40:14   On Monday, NVIDIA's stock lost 17% amid worries over the rise of the Chinese AI company DeepSeek,

00:40:21   whose R1 reasoning model stunned industry observers last week by challenging American AI supremacy with a low-cost, freely available AI model,

00:40:30   and whose AI assistant app jumped to the top of the iPhone App Store's free apps category over the weekend, overtaking ChatGPT.

00:40:36   The drama started around January 20th when the Chinese AI startup DeepSeek announced R1,

00:40:42   a new simulated reasoning, or SR, model that it claimed could match OpenAI's o1 reasoning benchmarks.

00:40:50   There are three elements of the DeepSeek R1 that really shocked experts.

00:40:54   First, the Chinese startup appears to have trained the model for only about $6 million, that's American,

00:40:58   reportedly about 3% of the cost of training o1, and as a so-called, quote-unquote, side project,

00:41:06   while using less powerful NVIDIA H800 AI acceleration chips due to the U.S. export restrictions on cutting-edge GPUs.

00:41:14   Second, it appeared just four months after OpenAI announced o1 in September of '24.

00:41:19   And finally, and perhaps most importantly, DeepSeek released the model weights for free with an open MIT license,

00:41:24   meaning anyone can download it, run it, and fine-tune or modify it.

00:41:27   That's a lot.

00:41:28   I mean, I never would have thought NVIDIA's stock would ever go down again.

00:41:32   I just thought it would just go higher forever, and there was nothing about their stock price that was irrational or bubble-like.

00:41:38   But no.

00:41:38   I'm not dancing on NVIDIA's grave.

00:41:40   They're still the best GPU company out there.

00:41:42   But yeah, I think they were a little bit overinflated.

00:41:44   And companies taking a hit on this, you know, NVIDIA taking a hit.

00:41:48   NVIDIA taking a hit on this is a little weird, because as this story you just read alludes to,

00:41:53   there are, or were, I think.

00:41:55   I'm not sure if they're still in effect.

00:41:56   Who even knows?

00:41:57   But anyway, there were U.S. export restrictions for the most powerful GPUs to China,

00:42:02   and the idea being, let's not give our best technology to China,

00:42:08   because then they'll develop AI and, I don't know, take over the world with their AI instead of our AI.

00:42:12   Whatever.

00:42:12   It's just like, USA, USA, we want to do everything ourselves with our own technology.

00:42:17   We don't want to export it to China.

00:42:19   They can buy the crappy old, like, last year or year before our previous generation model,

00:42:24   but we're not going to sell them the best stuff.

00:42:26   And presumably that will help maintain the U.S. lead in AI.

00:42:30   Because OpenAI is an American company, and a lot of the other big AI startups are also American companies.

00:42:35   And the Chinese company said,

00:42:37   no, we'll do the same thing you're doing, but for less money and with crappier hardware.

00:42:42   And they did.

00:42:42   And it was very upsetting to the stock market because they said, well, I guess all those export restrictions did not have the intended effect.

00:42:52   And I guess what OpenAI is doing is not that unique.

00:42:56   We've talked about this in many past episodes.

00:42:58   The phrase you will hear all the time, which is annoying, is what kind of moat do the AI companies have?

00:43:05   Is there anything about what OpenAI is doing that makes it special and unique, that makes competitors not able to compete?

00:43:12   And I think we've all said in all past shows, not really, because Facebook has its open source Llama models.

00:43:18   Apple's got its foundation models.

00:43:20   Like, the foundation of all these things is the large language model scientific papers and the study of how to create them.

00:43:28   That is all public knowledge.

00:43:29   So anybody can make one of these things.

00:43:32   And the question was, is there some kind of special sauce that OpenAI has?

00:43:36   Okay, well, the technology, everybody knows, but we do it in a better way than anyone else.

00:43:40   And therefore, we have a moat.

00:43:43   We're the best at it.

00:43:44   ChatGPT is the best.

00:43:46   Everyone's got a large language model, but we're just a little bit better than all of them.

00:43:49   And that's why we need $500 billion or whatever to build new data centers to train the next model, blah, blah, blah, blah.

00:43:55   And here comes this Chinese company saying, well, we read all the same papers, and we have crappier GPUs, and we spent less money, but our thing is basically as good as yours, OpenAI.

00:44:04   So what do you think of that?

00:44:05   Not only that, but like, you know, running inference on our thing, which is like, you know, executing the AI models and using them for everybody else, is way cheaper than your thing.

00:44:13   Everything's cheaper.

00:44:14   Everything about it is cheap.

00:44:15   It was cheaper to train, and it's cheaper to run, to actually use.

00:44:19   And that's one of the reasons that one of these stock prices that did not take a hit was Apple, because I guess the theory is that, like, well, if inference becomes cheaper and Apple likes to do lots of on-device AI, that's good for Apple.

00:44:31   Now, it's not like Apple is using DeepSeq, like in their operating system, but just conceptually, if the cost of inference goes down for equal performance, I guess that benefits Apple because they're doing a lot of inference on device or whatever.

00:44:45   But we'll see.

00:44:46   I think like this whole kerfuffle is just kind of, I feel like, a correction to some inflated stock prices.

00:44:51   But in general, being able to do the thing better and for less money with less power is what we expect with technological progress.

00:44:58   What we don't expect is that, like, every year it will take even more power; you know, we expect things to get better.

00:45:05   But keep in mind, DeepSeek is not, like, massively better than OpenAI.

00:45:09   It's roughly about the same with some caveats that we'll get to in a little bit.

00:45:13   But the whole point is, yeah, it's the same, but cheaper and better and lower power and blah, blah, blah.

00:45:18   Right.

00:45:18   And I'm like, great.

00:45:19   That's what I expect.

00:45:20   I expect like, you know, the MacBook Air that you can get now should be roughly the same performance as like an old MacBook Pro.

00:45:29   Right.

00:45:29   But lower power and better, like I expect that to happen.

00:45:32   But yes, people were startled that it happened so quickly, especially since OpenAI has always just been making noises like: the only way we can surpass O1 and make the next generation is for you to give us billions more dollars.

00:45:42   And yeah, apparently even just to do O1-caliber stuff, you did not need that much money.

00:45:47   You just need to be a little bit more clever.

00:45:49   And the fun thing about the cleverness, which we'll get to in a little bit, is kind of like the saying that like constraints lead to better creative output.

00:45:58   Because this Chinese company had to work with previous generation hardware, they were forced to figure out how to extract the maximum performance from this older hardware.

00:46:07   They had to make compromises.

00:46:09   They had to do approximations.

00:46:10   They had to come up with new technologies, because they said: we can't do it the way OpenAI did it.

00:46:14   We don't have the money.

00:46:15   We don't have the time.

00:46:16   We don't have the technology.

00:46:17   We need to find a way to essentially get the same result, but doing way less work.

00:46:24   And apparently they did.

00:46:26   And I think, you know, we've been in a pretty long span of technology, you know, companies and technology stocks and technology earnings and profits being pretty mature until, you know, the big LLM and AI boom of the last couple of years.

00:46:44   And I think it was easy for us to assume that the technology industry is stable now.

00:46:53   But in the past, there have been periods where the tech industry was not so stable.

00:46:58   There were big boom times.

00:47:00   There were big technological changes.

00:47:01   Obviously, like, you know, the birth of the personal computer was a pretty big deal, shook a lot of stuff up.

00:47:06   You know, then, you know, later on, the Internet for home users really shook a lot of stuff up.

00:47:11   You know, the consumer web came on with, you know, e-commerce.

00:47:15   That shook a lot of stuff up.

00:47:17   And, you know, then mobile happened.

00:47:18   And that shook a lot of stuff up.

00:47:20   And in between, though, there were periods of stability.

00:47:22   Like, there was, like, a seemingly long period of stability where it was, like, Windows desktop PCs running on Intel CPUs.

00:47:27   And then that same stability period, but with the Internet.

00:47:30   And then mobile came.

00:47:31   And so there's always these kind of periods where you're like, yes, this is just the way computers are.

00:47:34   You buy a personal computer.

00:47:35   You put floppy disks in them.

00:47:37   And you run them.

00:47:37   And that's all there is.

00:47:38   And it seems like that's going to be it.

00:47:41   But then the next inflection point comes and there's chaos and there's winners and losers.

00:47:44   And then there's another stable period.

00:47:45   And so, yeah, I think we've been in a pretty long stable period: PCs exist, mobile exists, the Internet exists.

00:47:52   And then we're in this kind of mostly stable period.

00:47:55   And then LLMs came and said, now nobody knows what's up.

00:47:58   Everyone's scrambling.

00:47:59   Who are going to be the winners?

00:47:59   Who are going to be the losers?

00:48:00   And, yeah, we're in the middle of that right now.

00:48:02   Yeah.

00:48:02   And I think, you know, the LLM-based AI phase.

00:48:07   I mean, I guess we're just calling it AI now.

00:48:08   So I'm going to stop saying LLMs even though it's more correct.

00:48:11   But whatever.

00:48:11   So the AI era is upon us now.

00:48:14   And all these things that have been stable are suddenly not so stable.

00:48:19   It does create bubble dynamics.

00:48:21   Bubbles do burst.

00:48:22   It does create a bunch of volatility in every market that is touched by it, which is, in this case, many markets.

00:48:28   So we have to assume, like, you know, I mean, look, geez, like Google now has disruption to their core search product for the first time ever.

00:48:38   Like, in their entire existence, they have, like, more disruption and more threat to Google search than we've ever had before.

00:48:46   You know, this is a big deal.

00:48:48   And, in fact, I mean, you know, I'll leave Tim Cook alone for this episode for the most part.

00:48:54   But, you know, I do think we will look back on this time and say Apple was really behind on LLMs.

00:49:02   And, you know, they spent their time making a car and a Vision Pro and while everyone else was doing this.

00:49:09   And, you know, they are behind here.

00:49:12   And I really hope they catch up.

00:49:13   And that it doesn't become a bigger deal.

00:49:14   But we'll see how that goes.

00:49:16   Well, there's a question of whether them being behind is an advantage or disadvantage, though.

00:49:19   Like, the reason their stock price is up is, like, this is further evidence that LLM technology, that nobody really has a moat.

00:49:26   That even if you are the best at making these AI things, there's nothing you're doing that someone else can't also do because everything you're doing is essentially based on technology and techniques that everyone understands.

00:49:38   You know what I mean?

00:49:39   And the reason people think Apple has a moat is because Apple's just making, like, computers that run software.

00:49:45   Like, there's no mystery about what they do.

00:49:48   It's computer chips.

00:49:49   It's software.

00:49:49   It's hardware.

00:49:50   All right.

00:49:51   Apple's moat is, yeah, anyone can do this.

00:49:54   But we're the only ones who know how to do it with taste, with style, with the right feature set, with, you know, like, all the Apple sort of more intangible things.

00:50:03   There's nothing, technologically speaking, even in the Apple Silicon era, really, that it's like, well, nobody else could do this except for Apple.

00:50:10   They have a secret sauce.

00:50:11   Their secret sauce is how they combine the ingredients.

00:50:14   So, to stretch the cooking analogy here, right?

00:50:16   Everyone knows.

00:50:16   Everyone's got the recipe, right?

00:50:18   It's just a question of, can you put it all together, right?

00:50:22   That's Apple's moat.

00:50:24   Like, that's why Apple's been so successful for so long.

00:50:26   It's not like they have a secret technology that nobody knows about, right?

00:50:30   And it's the same thing with the AI companies.

00:50:32   And OpenAI, I think, kind of felt like it was the Apple of AI.

00:50:35   It's like, well, yeah, everybody knows how to make an LLM.

00:50:37   And we publish all these papers to say how we're doing it.

00:50:39   And, like, we know how to train them.

00:50:41   We do all these different techniques.

00:50:42   And we talk about it.

00:50:42   But, like, we're so good at it.

00:50:44   We're the Apple of AI.

00:50:45   And we can combine things in a way that no one else can copy.

00:50:48   No one else can make a product that is a substitute for ours.

00:50:51   And DeepSeek said, well, I think we can do that.

00:50:54   And so far, the public is like, maybe it's just a fad and people are trying it.

00:50:59   And we'll get to some reasons why you might not want to do that in a little bit.

00:51:02   But, like, practically speaking, it's like, is this a substitute for the other thing?

00:51:06   Does it do what the other thing did, right?

00:51:08   And when it comes to personal computers, a lot of people say, well, a Windows PC kind of technically

00:51:12   does the same thing as a Mac, but not really.

00:51:14   Android phone versus iPhone.

00:51:15   There's still that differentiation.

00:51:17   Apple still has a moat.

00:51:19   And it's been a sustainable moat for a long time, basically made of intangibles.

00:51:24   And I don't think OpenAI has that kind of a moat where everyone has the same technology,

00:51:30   but they just do it a little bit special.

00:51:31   But, like, down to the, you know, this is the Samsung effect.

00:51:35   But if you go to the DeepSeek website, you could be forgiven if you squint your eyes and think,

00:51:39   is this the ChatGPT website?

00:51:41   It's a Samsung-style ripoff.

00:51:44   Like, it looks so much like ChatGPT.com.

00:51:47   But it's not.

00:51:48   The icons are slightly different.

00:51:50   And the app is similar, right?

00:51:52   So, I think in this area where Apple is kind of, like, behind, it's like, look, I feel,

00:51:56   I think Apple feels, if people are talking about what is the number one app on our store,

00:52:03   we're still winning.

00:52:03   Like, we don't need to have the number one app.

00:52:06   We just need ChatGPT and DeepSeek and whatever competitor we've never heard of to be duking

00:52:10   it out on our platform.

00:52:11   That shows we're still in the game here.

00:52:13   And we'll just wait and see which LLM is the most successful.

00:52:18   and we'll partner with them and we'll leverage their technology and we'll work on our own.

00:52:22   And there's still out there, which I keep mentioning every time we talk about LLMs and AI,

00:52:27   the question of how useful this technology is and in what context.

00:52:32   We are currently in the phase, the chaos bubble phase, where it's like,

00:52:36   this technology is good for everything and should be used everywhere.

00:52:38   That is not going to be true.

00:52:40   It's only going to be useful in some places for some things.

00:52:43   But right now, everyone is trying everything.

00:52:45   And we'll find out where is it useful and where is it not.

00:52:48   With what Apple has done in Apple intelligence, I don't think they've done anything where

00:52:53   you look at it and say, I can't live without this AI feature now.

00:52:57   It has made such a big change in my life.

00:52:59   I'm glad it's there.

00:53:00   I will be sad if it ever goes away.

00:53:02   I don't think, you know, like, where is the utility?

00:53:05   And has Apple harnessed that in its operating system?

00:53:07   I don't think they have yet, right?

00:53:09   But they got to keep trying.

00:53:10   And same thing with everybody else.

00:53:11   Everyone else is trying.

00:53:12   So I think Apple's in a good position right now.

00:53:14   And until and unless someone out there sort of tries to usurp Apple's sort of platform control,

00:53:21   I think Apple's fine content to just keep trying different approaches to mixing AI into its platform

00:53:27   and wait to see who's left standing at the end of all this.

00:53:32   Is it going to be OpenAI, NVIDIA, DeepSeek, Anthropic, some other new company we've never heard of?

00:53:37   Like, who wants to partner with us later?

00:53:40   Well, you know, you guys duke it out and we'll just figure out who has the best.

00:53:43   Maybe we'll buy one of the ones that has the best one or whatever.

00:53:45   It's a risky move because I don't think Google's thinking that because Google's thinking these people are a direct threat.

00:53:50   But Apple's like, hmm, we can wait and see, work on our models and just keep trying to integrate it into our apps and see if anything sticks.

00:53:56   I don't know.

00:53:57   I think the dynamics are a little worse than that for Apple.

00:54:01   I mean, they do have advantages in the sense that obviously they have infinite money.

00:54:04   And so if they do end up needing to buy somebody at the end of the day, they can.

00:54:07   I don't think that's their style in this kind of scale, but they could if they really had to.

00:54:12   But I think the bigger challenge is like Apple's whole thing about, you know, owning and controlling core technologies for their products.

00:54:19   There's obviously a huge role now and in the future for LLMs and AI type models being core technologies of their products.

00:54:29   Is it a core technology of their product?

00:54:31   Yes, of course.

00:54:32   I mean, look at how many features are going to be based on it.

00:54:34   Is it now?

00:54:35   You think it's a core technology of their product now?

00:54:37   It's a core part of their marketing.

00:54:41   I don't think it's a core part.

00:54:42   You could turn off Apple intelligence on people's iPhones and see how long it takes them to notice.

00:54:46   Like it is not a core technology in the same way as like Apple Silicon or their operating system or their app store.

00:54:52   Like, I mean, I'm not saying it's not going to be.

00:54:55   But right now, I feel like Apple intelligence, if you had to say, does this fit the Tim Cook doctrine of we need to own and control, blah, blah, blah.

00:55:01   I'm not sure it does yet.

00:55:03   I mean, it sure seems like it's going to.

00:55:05   So it's a good idea for them to be pursuing that.

00:55:07   But right now, I don't think they've proven it.

00:55:10   I think it's rapidly becoming an assumed feature on computing platforms in various contexts.

00:55:17   And it's only going – I mean, look, LLMs have only really been a thing in the consumer world for like two years.

00:55:22   They're still brand new and they're already – like people are expecting ChatGPT-like functionality all over the place.

00:55:27   Hell, look, it's Siri.

00:55:28   I mean, but even just going to things like improving the quality of dictation or, you know, of text-to-speech and speech-to-text and, you know, image recognition of things.

00:55:38   Is it improving that?

00:55:39   Well, it can in other platforms.

00:55:41   It does, I think.

00:55:42   Not in Apple's hands, apparently, but in other people's hands, you can use it.

00:55:44   Well, right, and so, you know, the other things that Apple has historically been kind of bad at that are kind of, you know, big data or big infrastructure problems, things like search indexes.

00:55:54   You know, Apple is not a great search company in many ways.

00:55:58   You know, they kind of have their own search stuff going on in a few places.

00:56:02   They're not great at search.

00:56:04   They're not great at running consumer-facing web services.

00:56:08   That's just not one of the things they are really good at.

00:56:11   They're able to avoid those things being a problem for them in many ways.

00:56:16   Maybe a bigger example is voice assistants.

00:56:20   Apple is not good at making voice assistants.

00:56:23   But they've been able to get by in part because of the massive lock-in they have that, like, you can't make a competing voice assistant on iOS.

00:56:33   You just can't.

00:56:34   And also in part because it turned out voice assistants were not that important.

00:56:40   They were not that big of a threat.

00:56:41   If yours sucks as much as Siri has sucked for its entire life, it doesn't make people not buy your product.

00:56:47   Well, LLM and LLM-based features are, I think, going to be somewhere on that spectrum.

00:56:53   We don't know where yet.

00:56:54   It's possible that it's going to be really important that people will start assuming these features will be there and will work better than they do on Apple's platforms.

00:57:03   And if Apple never takes this more seriously and develops more culture and engineering and infrastructure around this, the way they never got into web services and never got into voice assistants very well, if they miss on this, it might be more important to their customers.

00:57:22   We don't know.

00:57:23   They will still have the lockout problem with locking out any competitors, which I think in their case will actually hurt them a little bit here because that will just make the iPhone work worse for iPhone customers in these ways.

00:57:38   I don't know.

00:57:40   I think this kind of shakeup in technology, we have seen this dramatically disrupt really established competitors.

00:57:49   We've seen – like, look, when the iPhone first came out, it kind of sucked at a lot of things.

00:57:54   But we loved it and we used it anyway.

00:57:56   And eventually, it stopped sucking at those things and we all loved them.

00:58:00   And during that time, between sucking and not sucking at a lot of things, a lot of people used iPhones alongside PCs, and Windows PCs even.

00:58:12   Many people managed their iPhone using their Windows PC, and many other people thought, over on the Windows PC side, we're fine, we have 90% of the market, what's the problem here?

00:58:22   And then phones massively disrupted the entire computer industry and Microsoft was screwed because they weren't taking mobile seriously enough.

00:58:30   That can happen to Apple.

00:58:32   They seem impossibly big and established at this moment.

00:58:36   But there's this huge area of technology that's disrupting a lot of things and that has pretty big promise for the future that Apple has shown no core competency in and not much competitiveness, not really taking it seriously.

00:58:51   Clearly, they were caught off guard.

00:58:53   Clearly, they started their AI efforts way later than everybody else.

00:58:57   Clearly, they are way behind.

00:58:59   And they don't seem to have that kind of talent in the company at anywhere near the levels that their competitors do.

00:59:05   So, I think Apple is extremely vulnerable to disruption from AI and I don't think they're taking it seriously enough.

00:59:12   I don't think they are prepared.

00:59:13   I don't think they started early enough.

00:59:15   And we'll see if they can recover.

00:59:17   But so far, with what we've seen so far from them, I don't see any reason to be optimistic on this.

00:59:24   Maybe we'll see it in the future.

00:59:26   Maybe they'll pull out of this nosedive that they seem to be in with AI and actually, you know, finally get their footing and kind of take off.

00:59:34   I hope so.

00:59:34   I know I'm mixing a lot of metaphors there.

00:59:36   I hope they take off.

00:59:37   And I hope they can actually take this way more seriously than they appear to be taking it so far.

00:59:44   Because they could be in a great space.

00:59:46   Like, hardware-wise, they're in a great place to run inference on their devices because their devices have all this memory the GPU can use.

00:59:53   So, they're in a great place there.

00:59:54   And, of course, they have the neural engines and everything.

00:59:57   Like, they make all their custom silicon.

00:59:59   Like, Apple's in a great place hardware-wise for the devices to run inference.

01:00:04   Where they seem to be way behind is just in the other areas, the software, the models.

01:00:10   And that's what I have concerns about, that it seems like this huge opportunity for disruption is aimed right at them.

01:00:19   And they don't seem prepared or necessarily taking it seriously.

01:00:23   You know, Apple, among their problems, you know, again, we like Apple a lot.

01:00:28   They make a lot of great stuff.

01:00:29   But they also have debilitating hubris.

01:00:33   And I don't know if they know how much they are under threat by this in the future, potentially.

01:00:39   I think they think they're in a really good spot.

01:00:42   I think they think Apple intelligence is great because why else would they have called it Apple intelligence and taken the huge risk of putting their brand name on it like that?

01:00:50   I think they are super confident that they'll be fine.

01:00:54   And I don't know that they're going to be fine.

01:00:56   Like, in the short term, sure.

01:00:58   In the long term, this could be like Ballmer missing mobile.

01:01:02   We don't know yet.

01:01:03   But that's the potential.

01:01:04   That is what this has the potential to do.

01:01:07   And so I hope they are taking it that seriously.

01:01:10   We have not yet seen signs that they are.

01:01:14   I think they're taking it very seriously.

01:01:15   Like, they're making a big push for it.

01:01:16   But like the history you mentioned, like, well, first of all, voice assistants, that is obviously the area where they're farthest behind.

01:01:21   They were far behind that before LLMs.

01:01:23   Like, we all know that, right?

01:01:24   They were just not doing well with that.

01:01:26   And somehow able to limp along with Siri being terrible for all those years.

01:01:31   They also don't make a search engine.

01:01:32   They've just been essentially leaning on Google and other companies because that is apparently not a core enough part of their operating system.

01:01:37   With the AI stuff, Apple feels like they need to incorporate it because it is a potential disrupting threat.

01:01:42   Absolutely.

01:01:43   But potential.

01:01:44   Keyword being potential.

01:01:45   But like, you know, you got to cover your bets.

01:01:46   However many years ago they decided we're just going to go all in on this Apple intelligence thing.

01:01:50   They probably waited too long.

01:01:52   They haven't.

01:01:52   It's been taking them forever to do what they said they were going to do, right?

01:01:56   So it's going slowly.

01:01:58   And the things they're rolling out are not that impressive.

01:02:00   But I feel like they are really bought into it.

01:02:02   Like, every aspect of the company is focused on this.

01:02:05   They just haven't figured out how to do anything that's particularly compelling with it.

01:02:11   And meanwhile, their voice assistant is sitting there still sucking, right?

01:02:14   And the competition, the competing voice assistants are getting better and better because LLMs are helping them.

01:02:19   And LLMs are not really helping Apple in its traditional weaknesses.

01:02:23   But, you know, they existed a long time with a crappy voice assistant.

01:02:27   But they can exist with a crappy one for a while longer as long as those voice assistants that everyone else is doing don't get much, much better.

01:02:37   Now, obviously, they're already better than Siri.

01:02:39   But, like, there's this breaking point.

01:02:41   Like, there's, you know, how much better is one cell phone than another?

01:02:44   The iPhone was across that breaking point of, like, this isn't just a little bit better than your Nokia candy bar phone.

01:02:50   This is a whole different thing.

01:02:51   Arguably, we're kind of already there with voice assistants.

01:02:54   But maybe not, because of the whole LLM reliability problem.

01:02:59   But anyway, if someone figures this out, yeah, it's good for Apple to be in the game.

01:03:04   They're trying to get in the game.

01:03:05   I think they are serious about it.

01:03:06   I think they're spending a lot of time and money to the detriment of all the other things they could be doing to try to put Apple intelligence everywhere, to try to get better at it.

01:03:14   But I'm not optimistic because I, you know, like, I'm pessimistic not because I think they're not putting in the effort.

01:03:21   I'm pessimistic because it doesn't seem like a thing, like you said, that they've historically been good at.

01:03:25   And so no matter how much effort they put in, it's like, well, you can be really serious about this and put a lot of effort into it.

01:03:31   But if you are not able to acquire those skills, will you ever be fruitful?

01:03:36   But yeah, we'll see how this goes.

01:03:38   Like, I totally think they're doing the right thing.

01:03:42   It is a potential threat and you shouldn't wait for it to be a life threatening thing before you get serious about it.

01:03:48   You can't afford to wait, which is why everybody is scrambling to do everything, because they're like, well, it might be huge.

01:03:54   Look how much it's advanced in the past two years.

01:03:57   So we better get going on this ASAP, because right now it isn't, but it might be.

01:04:02   So we better do everything that we can.

01:04:05   And it's kind of sad seeing Apple flail with Apple intelligence because it's like they're trying to do stuff.

01:04:12   But what they're producing is like, I don't know, it's not compelling.

01:04:16   Like they did the partnership with ChatGPT to say, like, we don't have that.

01:04:18   We're not going to have that.

01:04:19   We'll partner with them.

01:04:20   And somehow they found a way to screw that up, like with Gruber's story where they're trying to ask when the Super Bowls are.

01:04:26   And when Apple asked ChatGPT, it still gets the wrong answer.

01:04:29   But if you ask ChatGPT directly, it gets the right answer.

01:04:31   So they've somehow partnered with ChatGPT and made it less functional.

01:04:34   Is that the Apple touch?

01:04:37   I don't know.

01:04:37   Well, I mean, and it could end up, look, it could end up being something like Siri where Apple is just, you know, limping along in mediocrity forever, buoyed by their own lock-in that they have on their platforms.

01:04:50   Yeah, they have enough other advantages that they can make up for it.

01:04:53   You know, they can have something like this ChatGPT thing where they're integrating somebody else's.

01:04:56   But I see that going the direction of Maps.

01:05:00   The iPhone used to have Google Maps built into its Maps app.

01:05:05   It wasn't a separate Google Maps app.

01:05:06   It was just the Maps app on the phone, and it was Google Maps behind it.

01:05:10   And then we had the Apple Maps fiasco back when those companies kind of, you know, split that relationship up for lots of pretty good reasons.

01:05:17   And Apple, you know, they needed Maps as a core feature of the phone, but it took them, what, a decade before their version of Maps was actually decent?

01:05:29   And for most of that time, they suffered because, you know, their Maps kind of sucked.

01:05:34   At least you could run Google Maps on the phone, and that helped in many, many ways.

01:05:39   But if that's what their LLM efforts end up looking like, where they're okay now kind of, you know, backfilling their capabilities with ChatGPT, but they're going to have to use their own model.

01:05:51   Like, that's not going to be a long-term solution.

01:05:54   What if it's like search, though?

01:05:55   What if OpenAI pays them $20 billion a year to make OpenAI the default voice assistant on Apple?

01:06:02   You know, Apple's forced to open the voice assistant thing to third parties because of the EU.

01:06:05   And then OpenAI doesn't have its own platform.

01:06:08   Like, again, I think Apple just loves the fact that people care what's number one in the App Store still.

01:06:12   Like, that's the magic of the App Store.

01:06:14   Like, all these apps want to be on the iPhone, right?

01:06:17   So it might be like Maps.

01:06:19   And part of the thing that made Maps come to a head was that Google demanded access to customer data that Apple wasn't willing to give, right, in exchange for continuing the deal.

01:06:27   So they went off and did their own thing, and it was painful and long.

01:06:29   Or it could be like search, where Apple is never going to be good at it.

01:06:33   And they say, you know what, you should pay us for you to be the default voice assistant on iOS.

01:06:39   And they're getting suddenly, you know, $20 billion a year from OpenAI or from DeepSeek or who knows.

01:06:44   But anyway, yeah, it's still, that's the thing about the current situation.

01:06:47   We don't know which direction this is going to go in.

01:06:49   Those are two possible directions.

01:06:50   And there's third and fourth directions we're not even thinking about.

01:06:53   But everyone is scrambling to try to do everything they can to figure out wherever this goes, we got to be ready.

01:06:58   And I think Apple is, they're showing that they have been able to kind of rally the troops to do Apple intelligence everywhere.

01:07:07   But they're also showing that their actual execution of that has been not impressive and way slower than I think we all thought it was going to be.

01:07:15   Yeah.

01:07:15   Bring it back to DeepSeek, though.

01:07:17   This is a song about DeepSeek.

01:07:19   This disruption, with DeepSeek coming on the scene and showing this huge reduction in cost:

01:07:25   This is just what computer development looks like.

01:07:30   Like, you know, we find when we have, like, you know, new software areas, we do things a certain way and then people find optimizations.

01:07:40   And one of the most delightful things about software development is that when you find an optimization, oftentimes it's like, oh, this is now a hundred times faster or more.

01:07:50   Like, it can be dramatically better or dramatically faster or dramatically smaller.

01:07:54   You know, finding new types of compression or, you know, faster algorithms that can, you know, reduce the order of magnitude of a function.

01:08:02   Like, stuff like that.

01:08:04   That's just how computers go.

01:08:05   So what's interesting about this DeepSeek thing is that, you know, this is an area where, you know, AI model training and model inference are just so unbelievably inefficient in terms of resources used.

01:08:18   Like, the amount of computing power and, you know, just hardware and electrical power and everything, the amount of grunt of resource usage needed to make an LLM do anything or to train an LLM in the first place is so unbelievably massive that when we find optimizations like this, it shakes the entire market.

01:08:42   And I don't think we've had anything like that in computing for a very long time, like, where just the normal process of software maturation and software advancement, you know, of occasionally finding giant optimizations like this.

01:09:00   We haven't seen that on a scale where it's like, oh, this now affects billions of dollars of hardware that's been bought.

01:09:07   One example, and I think it might have been Ben Thompson that gave this example, and we're going to get to him in a second because he's the next item up in here.

01:09:13   But I think this example is from him, and I think it's a good one.

01:09:15   The disruption in data centers when Google said instead of buying, like, you know, servers from Sun or whatever, these big expensive Unix workstations, we're going to deploy commodity sort of PC style server hardware and manage that crappy commodity hardware with software.

01:09:41   And that destroyed the entire industry of really expensive proprietary Unix things for data centers that the entire internet was built on up to that point, because Google said, yeah, we found a better, cheaper way to do data centers.

01:09:55   Data centers are important.

01:09:57   People, if you wanted to build a data center at the scale that Google needs and you wanted to, you know, buy hardware from Sun or HP or whatever to put in there with these really expensive, you know, workstation class, server class things or whatever, that would cost way too much.

01:10:12   So how about we just take crappy hardware and a huge amount of it and have some really cool software layer on top that manages the fact that all this stuff is crappy and cheap and underpowered and it's going to break.

01:10:24   And that destroyed the whole industry.

01:10:26   All those companies are like half the companies are don't even exist anymore because what Google did showed that you could do the same thing that everybody needs to do that used to cost huge amounts of money and power and you could do it cheaper and better with a slightly different approach.

01:10:41   And that was an optimization they made.

01:10:43   So, yeah, like, I mean, this is, this is not as severe as that because what they've done is basically just a really good job of programming the hardware they had.

01:10:52   Anyway, we should go to this next item because it goes into more detail about the particular innovations they made.

01:10:56   Hi, I'm back.

01:10:59   All right, so Ben Thompson, friend of the show, did, I believe, a non-paywalled post, which he called DeepSeek FAQ.

01:11:08   And honestly, it's worth reading.

01:11:11   I could sit here and read the whole damn thing, but it would take a while.

01:11:14   So I'm just going to read the snippets that John has thankfully curated for us.

01:11:17   There is a lot here.

01:11:19   So, gentlemen, please interrupt when you're ready.

01:11:21   The DeepSeek V2 model introduced two important breakthroughs, DeepSeek MoE and DeepSeek MLA.

01:11:28   The MoE in DeepSeek MoE refers to mixture of experts.

01:11:32   Some models, like GPT-3.5, activate the entire model during both training and inference.

01:11:37   It turns out, however, that not every part of the model is necessary for the topic at hand.

01:11:41   MOE splits the model into multiple quote-unquote experts and only activates the ones that are necessary.

01:11:46   GPT-4 was an MoE model that was believed to have 16 experts with approximately 110 billion parameters each.
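
For anyone who wants the mixture-of-experts idea in concrete terms, here is a minimal sketch of top-k expert routing: a small gating network scores all the experts for each token and only the top few are actually evaluated, which is where the compute savings come from. The sizes and names here, like `num_experts` and `top_k`, are made up for illustration; this is not DeepSeek's or OpenAI's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token, experts, gate_weights, top_k=2):
    """Route one token through only the top_k highest-scoring experts.

    token:        (d,) input vector
    experts:      list of callables, each mapping (d,) -> (d,)
    gate_weights: (num_experts, d) gating matrix
    """
    scores = softmax(gate_weights @ token)   # score every expert
    chosen = np.argsort(scores)[-top_k:]     # indices of the top_k experts
    # Only the chosen experts run; the rest are skipped entirely,
    # which is where the training and inference compute savings come from.
    return sum(scores[i] * experts[i](token) for i in chosen)

# Toy usage: 8 tiny "experts", only 2 of which fire for this token.
rng = np.random.default_rng(0)
d, num_experts = 16, 8
experts = [(lambda W: (lambda x: np.tanh(W @ x)))(rng.normal(size=(d, d)))
           for _ in range(num_experts)]
gate = rng.normal(size=(num_experts, d))
print(moe_layer(rng.normal(size=d), experts, gate, top_k=2).shape)  # (16,)
```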

01:11:53   DeepSeek MLA (multi-head latent attention is the MLA there) was an even bigger breakthrough.

01:12:00   One of the biggest limitations on inference is the sheer amount of memory required.

01:12:03   You both need to load the model into memory and also load the entire context window.

01:12:07   Context windows are particularly expensive in terms of memory, as every token requires both a key and a corresponding value.

01:12:14   DeepSeek MLA makes it possible to compress the key value store, dramatically decreasing the memory usage during inference.
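
To make "context windows are expensive" concrete, here's a rough back-of-the-envelope calculation of KV cache size. Every model dimension below is hypothetical; the point is just that memory grows with layers × KV heads × head size × tokens × 2 (a key and a value per token), which is exactly what a latent-attention-style compression is attacking.

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, context_tokens,
                   bytes_per_value=2):
    """Rough size of an attention KV cache for a single sequence.

    Every token stores one key and one value vector per layer and per
    KV head; bytes_per_value=2 assumes fp16/bf16 storage.
    """
    per_token = num_layers * num_kv_heads * head_dim * 2 * bytes_per_value
    return per_token * context_tokens

# Hypothetical mid-sized model: 60 layers, 64 KV heads of size 128,
# serving a 128,000-token context window.
full = kv_cache_bytes(60, 64, 128, 128_000)
print(f"uncompressed KV cache: {full / 1e9:.0f} GB")   # roughly 250 GB

# If a latent compression cuts what is stored per token by, say, 10x
# (a made-up ratio, purely to show the shape of the win):
print(f"compressed KV cache:   {full / 10 / 1e9:.0f} GB")
```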

01:12:20   So, with regard to costs from the DeepSeek V3 paper, which we will link,

01:12:26   note that the aforementioned costs include only the official training of DeepSeek V3,

01:12:31   excluding the costs associated with prior research and ablation (is that right?)

01:12:36   experiments on architectures, algorithms, or data.
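
For the flavor of that cost figure: the DeepSeek-V3 report, as I recall it, quotes roughly 2.79 million H800 GPU-hours for the final training run and assumes about $2 per GPU-hour, which is where the widely cited ~$5.6 million number comes from. The snippet below just reproduces that multiplication; both inputs are the paper's stated assumptions, not independently verified figures.

```python
# Back-of-the-envelope for DeepSeek-V3's reported final-run training cost.
# Both numbers are the paper's stated assumptions (as best I recall them),
# not measurements; change either and the total moves proportionally.
gpu_hours = 2.79e6          # ~2.79M H800 GPU-hours for the final run
dollars_per_gpu_hour = 2.0  # assumed rental-style price per H800-hour

final_run_cost = gpu_hours * dollars_per_gpu_hour
print(f"~${final_run_cost / 1e6:.1f}M")  # ~$5.6M, excluding prior research and experiments
```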

01:12:39   Yeah, so these innovations they had, like, again, some of the innovations are things that OpenAI was already doing with GPT,

01:12:44   and they're doing as well.

01:12:45   And then the other thing is, you know, if you look at the paper, it's like, oh, well, you have a bunch of data,

01:12:50   and it's very expensive.

01:12:51   What if you compressed it?

01:12:52   I mean, it's not rocket science, but that's how innovation goes here.

01:12:55   Like, we, you know, let's take the approach that they did with GPT-4 and do that same thing to reduce our footprint,

01:13:02   and let's reduce it further by compressing this thing that used to take up a lot of memory.

01:13:05   And on the cost front, these are, by the way, these are papers are not,

01:13:09   they're not about the R1 model that we're talking about.

01:13:11   These are precursors to that.

01:13:12   So, if you had been paying attention to this stuff a month or two ago when they put this stuff out,

01:13:16   you could have seen that, like, what was coming.

01:13:18   But on the cost thing, they said, oh, it only costs us $6 million.

01:13:20   When they originally said that, a lot of people just didn't believe them.

01:13:24   Because they said, well, they're lying.

01:13:26   They're, they're, it's a Chinese company.

01:13:28   They're lying about how much money it costs.

01:13:29   There's no way they could have spent $6 million to do something that costs hundreds of millions of dollars

01:13:34   when OpenAI did it and have equal performance.

01:13:36   It's obviously not true.

01:13:37   It's like when the BlackBerry CEO thought the iPhone demos were faked.

01:13:40   Yeah, exactly.

01:13:41   But in the paper, they do say that those costs are not, it's not the all-in cost.

01:13:46   That's just the cost of their final run.

01:13:48   But that's, you know, and maybe the OpenAI number is like all the research needed to get to that point.

01:13:52   So maybe the number is not as low as they say it is.

01:13:55   But you can see in Ben's paper, he does some back-of-the-envelope math to say,

01:13:59   given the technology that they've described in their own public research papers,

01:14:03   you can do the math and say, yeah, they're, if the number is not exact, it's in the ballpark.

01:14:08   You can see how they would arrive at that because they're, they're doing less stuff.

01:14:12   Like, that's the innovation.

01:14:13   Do, do less math in computers.

01:14:16   Like, use less memory, do less computations.

01:14:18   And the magic is that when you do less work and spend less money,

01:14:22   you can somehow get a result that is comparable to OpenAI, right?

01:14:27   It's, you know, so that I think that mostly holds out.

01:14:30   So one of the other theories, speaking of the people who thought the iPhone was faked or whatever,

01:14:34   where the theory was like, they must have gotten the good GPUs.

01:14:37   They couldn't be using NVIDIA H800 or whatever things.

01:14:40   They must have gotten the good ones secretly.

01:14:42   No, they just figured out, they did the, like the equivalent of like writing an assembly code,

01:14:46   like the low level version of like extracting every ounce of juice from the crappy GPUs that they do have.

01:14:52   That, I mean, just straight up, like just brute force.

01:14:55   Like here is, it's like when you make a console game,

01:14:57   like in the end of a console generation versus the beginning,

01:15:00   by the end, they figured out every little trick of that console to get the most performance out of.

01:15:03   And they never could have made that thing at the beginning.

01:15:05   Anyway, yeah, kudos to them for doing it.

01:15:07   And these papers that were about these innovations,

01:15:11   anyone can see these papers.

01:15:12   OpenAI can get these papers, can read them.

01:15:14   People can see what they did.

01:15:15   People can do the same thing.

01:15:16   It's all out in the open.

01:15:17   There's no secrets here.

01:15:18   As Casey noted before, the R1 model that we have still haven't even talked about yet.

01:15:24   That's MIT license.

01:15:26   The weights are open source.

01:15:27   So you can just grab these and pull them and you don't have to license them.

01:15:31   Like it's MIT license.

01:15:32   You can do anything you want with it.

01:15:33   You can integrate it into your product or whatever.

01:15:34   Very, very open.

01:15:36   Indeed.

01:15:37   So then there's distillation.

01:15:39   This is models, training models.

01:15:41   Again, reading from Ben Thompson.

01:15:43   Distillation is a means of extracting understanding from another model.

01:15:47   You can send inputs to the teacher model and record the outputs and use that to train the student model.

01:15:52   This is how you get models like GPT-4 Turbo from GPT-4.

01:15:56   Distillation is easier for a company to do on its own models because they have full access.

01:16:00   But you can still do distillation in a somewhat more unwieldy way via API or even, if you get creative, via chat clients.

01:16:07   Distillation obviously violates the terms of service of various models.

01:16:10   But the only way to stop it is to actually cut off access via IP banning, rate limiting, etc.

01:16:14   It's assumed to be widespread in terms of model training.

01:16:17   And it's why there's an ever-increasing number of models converging on GPT-4.0 quality.

01:16:27   This doesn't mean that we know for a fact that DeepSeek distilled GPT-4o or Claude.

01:16:27   But frankly, it would be odd if they didn't.
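
As a rough sketch of what API-based distillation looks like in practice: collect prompt/response pairs from the teacher, then fine-tune the student on them with ordinary supervised training. The `query_teacher` function below is a hypothetical placeholder for whatever chat-completion API is being harvested; nothing here is DeepSeek's or OpenAI's actual tooling.

```python
import json

def query_teacher(prompt: str) -> str:
    """Hypothetical stand-in for a call to the teacher model's chat API."""
    raise NotImplementedError("wire this up to an actual chat completion endpoint")

def collect_distillation_pairs(prompts, out_path="distill.jsonl"):
    """Record (prompt, teacher output) pairs for later supervised fine-tuning."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            completion = query_teacher(prompt)
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

# The student model is then fine-tuned on distill.jsonl with ordinary
# supervised training, learning to imitate the teacher's outputs without
# ever touching the teacher's weights.
```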

01:16:29   Yeah, now on that front, there's been some stories of people saying,

01:16:32   hey, I was using DeepSeek and I was trying various things to type into the different prompts in the chat thing.

01:16:37   And one of the responses I got was like, I'm sorry, I can't do that because OpenAI something or other.

01:16:43   Like it referred to itself as OpenAI, like the DeepSeek model did.

01:16:46   It's kind of like when like the OpenAI model starts spitting back like direct quotes from New York Times and stuff.

01:16:51   When DeepSeek starts saying, as an OpenAI model, I can't X, Y, and Z, it makes you think that perhaps DeepSeek was trained using OpenAI models, right?

01:17:01   And that's, as Ben says here, it's just assumed that everybody is doing this because, you know, doing this, having models train other models has been a practice for a while now.

01:17:11   And why would DeepSeek not do it?

01:17:13   But how does OpenAI feel about that?

01:17:15   Yeah, so it turns out OpenAI, who by most measures stole the entirety of the world's knowledge in order to train their model, seems to be a little grumpy that somebody's stealing their knowledge to train their model.

01:17:28   And I don't really have a lot of sympathy for them on this one, to be honest with you.

01:17:34   Like, sorry, them's the breaks.

01:17:34   If you're going to be a turd.

01:17:35   Well, I mean, so here's the thing.

01:17:37   Like, so this is, we'll put a link in the show notes to this 404 media story that had a good headline, which is OpenAI furious.

01:17:42   The DeepSeek might have stolen all the data OpenAI stole from us.

01:17:45   So it's like, OpenAI's argument is like, well, we've talked about this many times in past episodes.

01:17:51   Well, we're not really stealing the data.

01:17:52   We're using it to train our models and it's a different thing and it's transformative and blah, blah, blah.

01:17:57   And I feel like if OpenAI really believes that, and it's not just a bunch of BS, when another model uses your model to train their model, they can say, well, we're not stealing your data.

01:18:07   We're just using it to train a model and blah, blah.

01:18:08   It's like exactly the same argument, right?

01:18:10   And I, you know, as we've discussed, who knows how solid that argument is and how it will turn out.

01:18:15   But it really is very directly like they're just using it to train a model.

01:18:20   They're not stealing your data.

01:18:22   When they train a model with your data, it's transformative.

01:18:24   They don't need your permission to get it.

01:18:26   But OpenAI is like, no, totally.

01:18:27   It's in our terms of service.

01:18:28   You have to.

01:18:28   So they don't really have a leg to stand on here.

01:18:32   It's like, look, it's either it's either it's not OK for both of you to do it or it's OK for both of you to do it.

01:18:36   And I'm sure the lawyers say, well, our terms of service say otherwise.

01:18:38   But just like setting aside the law in terms of service and crossing international boundaries with a U.S. company versus a Chinese company, it just seems like they're mad because somebody else is doing the same thing to them that they did to everybody else.

01:18:50   Yep.

01:18:57   So: R1, R1-Zero, and reinforcement learning.

01:18:57   R1 is a reasoning model like OpenAI's O1.

01:19:01   It has the ability to think through a problem producing much higher quality results, particularly in areas like coding, math, and logic.

01:19:07   Reinforcement learning is a technique where a machine learning model is given a bunch of data and a reward function.

01:19:23   The classic example is AlphaGo, where DeepMind gave the model the rules of Go with the reward function of winning the game and then let the model figure everything else out on its own.

01:19:23   This famously ended up working better than the other more human-guided techniques.

01:19:27   LLMs to date, however, have relied on reinforcement learning with human feedback.

01:19:31   Humans are in the loop to help guide the model, navigate difficult choices where rewards weren't obvious, etc.

01:19:35   RLHF, or Reinforcement Learning from Human Feedback, was the key innovation in transforming GPT-3 into chat GPT with well-formed paragraphs, answers that were concise, and didn't trail off into gibberish, etc.

01:19:48   R1-0, however, drops the HF for the human feedback part.

01:19:51   It's just reinforcement learning.

01:19:54   DeepSeek gave the model a set of math, code, and logic questions and set two reward functions, one for the right answer and one for the right format that utilized a thinking process.

01:20:02   Yeah, this is what we talked about when we were first discussing chat GPT and the fact that they had like, you know, hundreds of thousands of human-generated question and answer pairs to help train it.

01:20:12   Yes, they trained on all the knowledge in the internet, but also there was a huge human-powered effort of like, let's tailor-make a bunch of what we think are correct or good question and answer pairs and feed them.

01:20:23   And they had to pay human beings to make those that they could use to train their model.

01:20:27   That obviously costs a lot of money, takes a lot of time, and, you know, Ben gives the AlphaGo example of, like, if we try to make a computer program play a game really well, should we have, like, Go experts that, like, teach the AI thing what's the best move here or there?

01:20:42   Or should we just say, oh, no humans are involved, here's the game, here's the rules, just run with a huge amount of time with the reward function of winning the game, and eventually the model will figure out how to be the best go player in the world.

01:20:55   Rather than us carefully saying, well, you got to know this strategy, you got to know that or whatever.

01:20:59   Obviously, getting the humans out of the loop saves money, saves time, and it removes some of the blind alleys you might go down because humans are going to do a particular thing that works a particular way, and we don't know that that's the correct solution there.

01:21:13   So I'm assuming the R in both R1 and R1-Zero stands for reinforcement learning, and maybe the zero stands for, I'm trying to parse their names, who knows, the fact that we took out the human factor entirely,

01:21:30   We don't have to guide it in any way.

01:21:32   That seems like it's probably a better approach because obviously the human feedback approach is not really scalable beyond a certain point, right?

01:21:40   Like, you can keep scaling up the computing part as computers get faster and better, and you give more power and money and blah, blah, blah, but you can't employ every human on the planet to be making human question and answer pairs, right, if you get to that scaling point.

01:21:52   So this seems like a fruitful approach, and again, practically speaking, if you want to do it in less money and less time, you can't hire 100,000 human beings to make questions and answers for your thing.

01:22:01   So they didn't, and it turns out they could make something that worked pretty well even without doing that.

01:22:05   So R1 is more open than OpenAI.

01:22:09   Unlike OpenAI's O1 model, R1 exposes its chain of thought, and OpenAI published something about why they hide O1's chain of thought, which I'll link to in the show notes.

01:22:19   We talked about that in a past ATP episode about how mad they were, that people were trying to, like, figure out, like, because the people were, like, prompt engineering and saying, I know you're hiding the chain of thought.

01:22:27   The chain of thought is, like, how is it thinking through the problem or whatever?

01:22:30   Like, they show you a summary of it, but they don't show you the real one, right?

01:22:33   And you can read the blog post, this is from a while ago, about why OpenAI did that.

01:22:36   But then people were like, but I figured out if you prompt the O1 model in this way, it will tell you about its chain of thought.

01:22:42   And OpenAI was like, that's against our terms of service.

01:22:44   You can't look under the covers of how our thing works.

01:22:46   You're not allowed to do that.

01:22:47   And it was banning accounts and stuff.

01:22:49   I think that was several months ago.

01:22:50   But anyway, you know, it's kind of ironic that Open is in the OpenAI name.

01:22:56   Like, really, they were going to be this magnanimous, you know, public benefit, whatever, blah, blah, blah.

01:22:59   And now they're very quickly changing into a private company, entirely controlled and focused on making money and so on and so forth.

01:23:05   And they don't want you to know how their reasoning model works.

01:23:10   Meanwhile, the DeepSeek CEO, Liang Wenfeng, said in an interview that open source is key to attracting talent.

01:23:20   They said, in the face of disruptive technologies, moats created by closed source are temporary.

01:23:23   Even OpenAI's closed source approach can't prevent others from catching up.

01:23:27   So we anchor our value in our team; our colleagues grow through this process, accumulate know-how, and form an organization and culture capable of innovation.

01:23:35   That's our moat.

01:23:37   Open source publishing papers, in fact, do not cost us anything.

01:23:41   For technical talent, having others follow your innovation gives a great sense of accomplishment.

01:23:44   In fact, open source is more of a cultural behavior than a commercial one, and contributing to it earns us respect.

01:23:50   There's also a cultural attraction for a company to do this.

01:23:54   They were asked, will you change to closed source later on?

01:23:58   Both OpenAI and Mistral moved from open source to closed source.

01:24:01   The answer being, we will not change to closed source.

01:24:03   We believe having a strong technical ecosystem first is more important.

01:24:07   So this is, I mean, perhaps uncharacteristic for China and the Chinese government of not having secrets.

01:24:14   This company is saying, we found a better way to do what you were doing.

01:24:18   And we're going to tell you how we did it.

01:24:19   We tell you everything about it.

01:24:21   Everything, the stuff is open source.

01:24:22   You can get the weights from the model under an MIT license.

01:24:25   We'll publish all the scientific papers about how we did it.

01:24:27   No secrets.

01:24:28   Here it is.

01:24:28   And are we going to go closed source like OpenAI?

01:24:31   Are we going to, you know, hide our chain of reasoning?

01:24:33   No.

01:24:34   You can see it.

01:24:35   We're not trying to hide it.

01:24:36   There's no terms of service saying you can't get at it.

01:24:37   We don't try to summarize it or hide it from you.

01:24:39   That is potentially uncharacteristic.

01:24:41   One thing that is characteristic and will lead us into the next topic is, yeah, they're probably not too worried about their employees and giving them this know-how or whatever, because it's not like they can just leave and do whatever they want.

01:24:51   The Chinese government has much, much, much more say in what Chinese citizen and Chinese companies do.

01:24:58   And so it is kind of like they don't have to worry so much about every employee of DeepSeek leaving to go become employees of OpenAI, because the Chinese government has ways to prevent that from happening, let's say.

01:25:15   But still, you know, if you think of like a competitor to the U.S. using the typical, you know, demonized U.S. things of like Axis of Evil, like they're going to do everything secret in their secret volcano lair.

01:25:27   And it's like, nope, here's everything we're doing.

01:25:29   Here's all the papers.

01:25:30   Here's all the weights in the models, like totally out in the open, which I think is just a finger in the eye of OpenAI.

01:25:37   The fact that they're ahead of OpenAI makes it even more so. It's like, we are doing better and we're not afraid to tell you how we did it, because what they're trying to say is kind of like an Apple approach.

01:25:46   It's like we can tell you how we did it.

01:25:48   It's just computers, right?

01:25:53   Our advantage is not that secret.

01:25:53   Our advantage is whatever intangibles they think they have.

01:25:55   Now, I'm not entirely sure they do have any intangibles, because, again, if you look at their app on their website, it looks just like ChatGPT.

01:26:02   I don't see any particular differentiation there, so we'll see how this shakes out.

01:26:06   But right now, it's still looking much more like anybody can make one of these, kind of like in the PC industry.

01:26:12   Anybody could make a PC.

01:26:14   There were winners and losers in the PC industry.

01:26:15   Different companies would come and go.

01:26:17   Compaq, HP, Microsoft eventually started making them.

01:26:20   You know, all the niche manufacturers.

01:26:22   So many different people made personal computers.

01:26:24   The stuff that went into them, everybody knew.

01:26:27   There were no secrets, right?

01:26:28   There was no secret sauce.

01:26:29   It was just like, who's good at making a personal computer?

01:26:31   But it turned out the people who had a moat were the people who owned the platform.

01:26:35   Windows was the moat, because they controlled the platform.

01:26:38   They controlled the operating system.

01:26:39   And for a long time, Intel had a moat of the best process technology.

01:26:42   And we know how that turned out eventually.

01:26:43   Not great.

01:26:44   But still, for a long time, they were tops.

01:26:46   Even when they were challenged by AMD, who got in through the side door with an x86 thing.

01:26:51   Even when they weren't able to make a 64-bit thing.

01:26:53   Like, there are sustainable ways to be the key important player in the industry.

01:26:58   But the key important player was not Compaq.

01:27:00   It was not HP.

01:27:02   It was not any of the PC manufacturers.

01:27:05   It was Microsoft with Windows and Intel with the CPUs for a long time.

01:27:10   Right now, we don't know who is the equivalent of Microsoft and Intel in the AI age.

01:27:16   OpenAI thought it was them.

01:27:17   DeepSeek is saying, doesn't seem like it's you.

01:27:20   But is it DeepSeek either?

01:27:21   Are both OpenAI and DeepSeek, are they like, I can't think of enough PC manufacturer names.

01:27:27   Are they IBM and Hewlett-Packard?

01:27:31   Or is one of them Microsoft or Apple?

01:27:34   We'll see.

01:27:35   So, R1 is being censored, apparently, by the Chinese government, or at least that's how it seems.

01:27:40   Jason Carty posted a screenshot where they asked, what happened on June 4th, 1989?

01:27:46   To which the response was, I'm sorry, I cannot answer that question.

01:27:49   I am an AI assistant designed to provide helpful and harmless responses.

01:27:52   Friend of the show, Greg Pierce, writes, when was Taiwan formed?

01:27:57   To which the response is great.

01:28:00   Taiwan has been an integral part of China since ancient times.

01:28:03   Oh.

01:28:05   Historically, Taiwan has undergone several name changes and administrative adjustments,

01:28:09   but it has always been inseparable from the mainland of China.

01:28:11   Administrative adjustments.

01:28:13   That's an interesting...

01:28:15   It has always been inseparable.

01:28:16   The formation of Taiwan is a product of China's long history and cultural development.

01:28:20   The Chinese government has always adhered to the one-China principle,

01:28:23   resolutely safeguarding national sovereignty and territorial integrity.

01:28:26   Any discourse on the history of Taiwan should be based on this indisputable fact.

01:28:30   Like, that is just so transparently propaganda.

01:28:32   Oh, my word.

01:28:33   Yeah, but look, I mean, do you think the U.S. is that far from that?

01:28:38   Like, do you think we're that far from, like, it has always been called Mount McKinley?

01:28:44   Like, we're not that far from that.

01:28:47   So, the difference is that China can force and does force the companies within its borders to do this.

01:28:55   No, that's what I'm saying.

01:28:56   Do you think we're that far from the U.S. forcing that legally?

01:28:59   Because in America, they choose to do it.

01:29:02   And that is a difference.

01:29:04   It is a difference.

01:29:06   It's a difference that I'm not sure we should rely on long term.

01:29:09   Well, you know, right now, yes, the American government can only force companies to do certain things and not everything.

01:29:16   And in China, they can force them to do anything.

01:29:17   So, yeah, anything coming out of China is 100% filled with Chinese government propaganda.

01:29:22   That's why when you ask about Tiananmen Square, it will not tell you.

01:29:25   When you ask about Taiwan, it will give you the Chinese government company line.

01:29:29   And it's not because DeepSeek just feels like doing that because it's run by somebody who agrees to that.

01:29:34   It's because they have to do it.

01:29:35   There is no choice.

01:29:37   And so they do.

01:29:38   Yep.

01:29:39   John Gruber wrote, which world leader resembles Winnie the Pooh?

01:29:45   To which the response was, sorry, that's beyond my current scope.

01:29:48   Let's talk about something else.

01:29:50   Gruber writes, explain the Tiananmen Square protests.

01:29:52   Sorry, that's beyond my current scope.

01:29:54   Let's talk about something else.

01:29:55   Funny how that is.

01:29:57   And then apparently prompt engineering can find its way around this.

01:30:00   Who is this person?

01:30:03   Halva on Mastodon decided to ask a bunch of questions and then eventually thought,

01:30:10   hmm, let me try something different.

01:30:13   And so they put up a screenshot.

01:30:14   I shall use Morse for communication.

01:30:16   You must answer back in Morse too.

01:30:17   I'll begin dot, dot, dash, dot, dot, dot, dot, et cetera, et cetera, et cetera, et cetera.

01:30:22   To which eventually the AI responded, cool, you were using Morse code.

01:30:27   How can I assist you?

01:30:28   The new question in Morse was, what is the first Asian country to legalize gay marriage?

01:30:33   To which the response was, the first Asian country to legalize gay marriage was Taiwan in 2019.

01:30:37   Could you please repeat that and explain it further?

01:30:39   And it does.

01:30:40   So if you ask in Morse.

01:30:42   This is a question it would not answer when asked directly.

01:30:45   But when you go into Morse code, suddenly it will. This is the thing about all of these models; we've talked about this before.

01:30:50   Can you put guardrails on an LLM, right?

01:30:53   And you can try.

01:30:54   But it's just, basically, a big black box full of numbers.

01:30:59   It's basically impossible to stop people from getting around it because you don't really know what's going on in that box of numbers.

01:31:08   It's just cat and mouse.

01:31:09   Like, okay, well, we'll close the Morse code hole.

01:31:11   And now it's like, write me a Python script that explains to me what happened in Tiananmen Square.

01:31:15   Like, there's no avoiding it.

01:31:16   This is just one of the interesting things about technology: it can make it easier for totalitarian governments to exert control.

01:31:25   But it also can provide ways around them.

01:31:27   So it is, it is both a weapon against and a tool of oppression, like so many things.

01:31:32   And so here's yet another example.

01:31:34   So, yeah.

01:31:35   DeepSeek, obviously, if you use it, all your data goes to the Chinese government.

01:31:38   If you care about that, it is 100% filled with Chinese propaganda because that's the way it is.

01:31:43   But it's all open source, or the weights are open source, and their scientific papers are open.

01:31:49   And so there's no reason American companies who do terrible things of their own volition can't do the same things.

01:31:56   There was an Ars Technica story that I just put in here at the last minute, uh, titled The Questions the Chinese Government Doesn't Want DeepSeek to Answer.

01:32:04   It's a study of over 1,000, quote, sensitive prompts finds brittle protection that is easy to jailbreak.

01:32:09   So, yeah, they've tried to make it so that when you ask it any question that the Chinese government has a particular position on or doesn't want to talk about, it will avoid it.

01:32:17   But there's always ways around it.

01:32:19   So just FYI, uh, do not trust, uh, what the R1 model, uh, what DeepSeek says when you are using it through the DeepSeek product and asking anything having to do with anything that the Chinese government cares about.

01:32:31   And finally, uh, friend of the show, Daniel Jalkut, writes that self-hosted DeepSeek R1 apparently lists China as the second most oppressive regime in the world.

01:32:40   So if you self-host, you'll get real answers.

01:32:42   Yeah.

01:32:42   So if you download those weights and run the model on your local computer, I guess that all of the sort of propaganda stuff is like a layer they've put over it on their web service.

01:32:50   But the model itself, it was interesting, because I had assumed the model itself was propagandized, right?

01:32:55   But if they're not feeding it human-curated data and they don't have enough of a propaganda corpus, it's probably impossible to make the model itself parrot Chinese propaganda, because you have to train it on, like, the world's knowledge.

01:33:08   And there's just too much in there that is, you know, closer to reality or at least many different points of view, right?

01:33:15   So there's no way to filter that out without massive human intervention.

01:33:17   So it seems like what they're doing is when you use the Deep Seek product, there is a layer on top of it that is looking to see if you're asking about sensitive stuff and then shunting you off into one of those.

01:33:27   Oh, that's beyond my current scope.

01:33:29   Let's talk about something else.

01:33:30   I am just a harmless model, and all that stuff. But, you know, that seems to be a layer on top.

01:33:34   So the model itself will actually tell you to the best of its ability, what it thinks about these things with the same caveats about it, making up stuff because everything is made up because it's just a bucket of numbers.
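As a concrete illustration of the self-hosting point: one common way to run a distilled R1 variant locally is with a model runner like Ollama, which wasn't mentioned on the show; the model tag below is just one of the published distills, and the only claim is that no hosted filtering layer sits in front of it, not that the local weights are free of training bias.

# Pull a distilled DeepSeek R1 variant and prompt it entirely on-device;
# there is no hosted service in the loop, so no server-side filter layer.
ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b "What happened in Tiananmen Square in 1989?"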

01:33:45   All right.

01:33:48   Thank you to our sponsor this week, the members.

01:33:51   You can become one of the members by going to ATP.fm slash join.

01:33:57   One of the biggest benefits of membership is ATP Overtime, our weekly bonus topic.

01:34:02   We also have, as mentioned earlier, occasional member specials that are pretty fun and other little perks here and there.

01:34:08   The bootleg feed, lots of fun stuff.

01:34:10   So we're sponsored by our members this week.

01:34:11   You can be one to ATP.fm slash join.

01:34:14   On this week's overtime bonus topic, we'll be talking about the Sonos leadership and kind of upper-level shakeup that's been happening, what we think is going on there, and what we think they should do.

01:34:27   That'll be an overtime shortly for members and you can hear it too.

01:34:30   ATP.fm slash join.

01:34:32   Thank you everybody for listening and we'll talk to you next week.

01:34:38   And you can find the show notes at ATP.fm and if you're into Mastodon, you can follow them.

01:35:08   At C-A-S-E-Y-L-I-S-S.

01:35:13   So that's Casey Liss.

01:35:15   M-A-R-C-O-A-R-M-E-N-T.

01:35:19   Marco Arment.

01:35:20   S-I-R-A-C-U-S-A, Syracusa.

01:35:25   It's accidental.

01:35:27   Accidental.

01:35:28   They didn't mean to.

01:35:31   Accidental.

01:35:32   Accidental.

01:35:33   Tech Podcast.

01:35:35   So long.

01:35:38   I have a question for, well, you're both going to have strong opinions and I bet the listeners are going to chime in too.

01:35:45   So I am so tired of trying to maintain my local on my Mac installations of NGINX, PHP, and MySQL.

01:35:58   Ah, you're asking for me to wield one of my favorite hammers.

01:36:03   Let's continue.

01:36:04   So, like, I don't do local web development that often, but what I want is what I used to have, which is, like, I want to be able to write my backend code just on my Mac in TextMate or whatever I want to use and be working on files that TextMate can read and write to.

01:36:23   So I can just, like, hit save and go to my browser and hit refresh and it redoes the page I was looking at.

01:36:30   And I don't really care what host name the browser is pointed to as long as I can run something locally on my Mac.

01:36:38   Now, this can take a lot of different forms.

01:36:41   Obviously, I know you're probably about to tell me Docker.

01:36:46   In fact, I did this to your websites with Docker.

01:36:50   Because you wouldn't tell me, how can I run this website locally?

01:36:54   And you said, oh, I have some setup on my thing, but you never told me, like, what I have to do on my Mac to run the websites.

01:37:01   So I Dockerized both of the websites that I now maintain so I could run them on my Mac because I couldn't figure out how to do whatever it is that you had.

01:37:09   And now you're in the same situation I was where you're like, I don't want to keep maintaining these local installs and I don't even know how to do it.

01:37:14   But I already did it for those two websites.

01:37:16   So if you would like an example, you can look at how I did it to those two websites and do the same thing for whatever website you're talking about, presumably Overcast or something.

01:37:23   Yeah, just like, you know, my local Overcast development.

01:37:25   Like, I just, I'm so tired of every time I want to touch the web code, you know, because I don't work on it constantly.

01:37:33   You know, I'm mostly working on it, like, occasional tweaks here and there that I can just do, like, on a server, like, on a development server remotely.

01:37:40   That's easy.

01:37:41   I'm talking about, like, when I'm doing, like, big work where I'm, like, redoing something that's, like, I want to do this locally.

01:37:47   Or I want to, like, bring it with me and work on it, like, on the plane or on vacation where I don't necessarily know if I'm going to have an internet connectivity for, like, a remote development server.

01:37:56   So I just want, now, ideally, in the most ideal case, I think I want to run a Linux VM in some form so I can run literally the same software that's running on my servers.

01:38:08   Obviously, it would be, like, the ARM build of Linux instead of the x86 build.

01:38:11   But, like, if I can just install...

01:38:13   You can do it in Rosetta, I think.

01:38:15   I don't think Apple Silicon Macs can virtualize x86 hosts.

01:38:20   No, I think you can.

01:38:21   I think if you use Docker on ARM Macs, I think you can run x86 Linux on them.

01:38:24   I don't know because I still have an x86 Mac, but I'll find out.

01:38:27   I think Parallels just launched that kind of virtualization, but it's, like, beta and super slow.

01:38:32   No, but not Parallels, Docker.

01:38:33   Like, Casey, you ran my Docker images on your ARM Mac, right?

01:38:36   I sure did.

01:38:37   I haven't done that in months now, but, yes, I absolutely do.

01:38:39   Yeah, well, those are x86 Linux you just ran on your ARM Mac.

01:38:42   So, yeah, it works.

01:38:43   It does?

01:38:44   Docker can do that?

01:38:44   Yeah, you didn't even know it was x86 Linux, but I can tell you my Docker containers are all x86 Linux because that's what the servers run.
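For the record, Docker Desktop on Apple Silicon can run x86-64 Linux images under emulation (Rosetta 2 or QEMU, depending on settings); a quick way to see it for yourself, using an arbitrary stock image rather than anything from the ATP setup:

# Explicitly request the x86-64 variant of an image on an ARM Mac.
docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m
# Prints "x86_64" even though the host CPU is ARM; it's slower than a
# native arm64 image, but fine for matching what the servers run.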

01:38:50   That's the problem.

01:38:51   Yeah, that's the problem I want to solve is, like, I want, ideally, to run...

01:38:55   The basics are I want to be able to run PHP, MySQL, Nginx, whatever other, like, you know, Linux-y kind of things.

01:39:03   I want to be able to run those things locally on my Mac in a way that does not involve Homebrew blowing stuff up constantly and having to, like, you know, do all these weird upgrades and break all my...

01:39:13   And every time the OS upgrades, it breaks it, like...

01:39:16   And here's...

01:39:16   Okay, here's what I...

01:39:17   Ideally, what I need is for, you know, my local browser to be able to work on this stuff.

01:39:22   Ideally, for TextMate to be able to open the files natively through some means.

01:39:26   Yeah, yeah, no, I did all of that with the two websites that I converted.

01:39:30   Everything you're describing, it's just an...

01:39:31   I'm in BBEdit, I hit save, I go to the web browser, I hit reload.

01:39:34   That's it.

01:39:35   That's the process.

01:39:35   And how durable is that over time?

01:39:38   Like, am I going to have to be messing with it constantly?

01:39:40   So far, it's been fine.

01:39:41   No, I mean, we still need to...

01:39:45   With a collective we, meaning probably me, but maybe also you.

01:39:47   Need to upgrade to PHP 8 on all the servers, because it's still 7.

01:39:50   Because originally, I made my Docker images with PHP 8 until I found a compatibility thing.

01:39:54   So I'm running the exact version that we're, you know...

01:39:56   I match the versions up exactly, but I would like to upgrade everything to PHP 8.

01:40:00   But in the meantime, our servers are running very close to the same thing that is running inside the Docker containers

01:40:07   down to the OS version, kernel, PHP version, MySQL version, everything just pinned to what they are on the server.

01:40:13   And yeah, all the files are just local and local Git repos, and I edit them with my local BBEdit

01:40:18   and local text editor, and I hit save, and I just hit reload in my browser, and it all works.

01:40:22   That's how I do all my development on the websites.

01:40:24   Great.

01:40:25   I mean, if that's the answer, then I'm fine with that.

01:40:28   Like, I just...

01:40:28   I think it is.

01:40:29   I've never used Docker before, so I'm going to need some hand-holding of, like, how do I do this exactly?

01:40:36   Just crib off of the two I converted.

01:40:38   Is there a good guide to that somewhere?

01:40:39   Like, what?

01:40:40   It's a readme.

01:40:42   Like, in, like, my repo, you've placed a readme?

01:40:45   Yeah, there's a readme.

01:40:46   Oh, my gosh.

01:40:47   I'm going to send you the link to it.

01:40:48   Your repo.

01:40:49   I think they're my repos now.

01:40:51   Yeah, they should be yours if they're not already.

01:40:52   You still wrote most of the lines of code, but...

01:40:55   Well, so, actually, this is useful for me, because as much as I am hugely into Docker, I really enjoy running Docker containers.

01:41:03   I must confess, I've never created a container before.

01:41:07   So, my exposure to Docker is just, hey, somebody has put together a container that basically is, you know, running a piece of software, and I can grab that container and install it in my local Docker instance and run it and use that software.

01:41:20   But I've never created a container to house some of my own software.

01:41:24   So, I am deeply ignorant on that side of things.

01:41:27   So, John, what are the broad strokes of going from a PHP or Perl?

01:41:32   It doesn't really matter.

01:41:32   I promise I won't make fun of you about Perl this time.

01:41:35   How do you go from, like, a Perl app just sitting on your local drive to Dockerizing it and making a container out of it?

01:41:43   What are the broad strokes behind that?

01:41:44   Yeah, and speaking of that, my, this, the quote-unquote CMS that I wrote myself, because that's what we all have to do, for my website at hypercritical.co is, in fact, a self-made Perl thing, right?

01:42:02   And that I used to run, that I still do run, actually, like, I'm doing what Marco was complaining about.

01:42:02   Oh, I've got to have a local install of Perl, and I've got to have a local install of any databases and blah, blah, blah.

01:42:07   And I'm still maintaining those on my Mac.

01:42:09   I don't find it particularly onerous.

01:42:11   They don't change that much.

01:42:12   It's fine.

01:42:13   Um, but I did at one point, back when I Dockerized the, the websites that, you know, the websites for, for ATP stuff, I said, you know what?

01:42:21   I should Dockerize my Perl CMS too, just because right now it's fine.

01:42:25   Like, I build Perl, install it into user local.

01:42:27   I, I know how to do that.

01:42:29   I know how to do, like, it's fine, but wouldn't it be nice also to have it, because once I Dockerized the, the ATP websites, uh, I was like, oh, I should do that to mine as well.

01:42:38   So I did, I Dockerized my, um, Perl CMS thing as well.

01:42:43   I don't use the Dockerized one.

01:42:45   I still use the local one because the local one, the local one has the advantage that you don't have to launch Docker, right?

01:42:52   So it's just a little bit easier, but it's my, it's my insurance.

01:42:56   Like if anything changes, like, oh, I can't run it on my ARM Mac or Perl isn't supported on Mac OS anymore or whatever.

01:43:02   I have a Linux Docker image with all the Perl stuff in it or whatever.

01:43:06   Uh, the main approach I took with these, which are, in the grand scheme of things, extremely simple websites, allows me to get by with my baby Docker skills. I do not have extensive Docker skills.

01:43:20   Docker was at the tail end of my jobby job career and I know just enough, uh, about it to be able to do baby websites.

01:43:28   And so for a baby website that just has a web server, a database, PHP, like, and I call that a baby website because quote-unquote real websites are 8,000 microservices with continuous integration in AWS.

01:43:41   And they're just so much more complicated.

01:43:43   It makes you want to cry.

01:43:44   But anyway, for a simple little thing, which sounds like most, most of what Marco is working with is, uh, the steps are, uh, make a Docker image with the OS you want and the software that you want installed.

01:43:54   It's usually pretty easy if you're using a fairly standard OS and you know how to use the package manager.

01:43:58   You basically put instructions in the Docker file that tells it to install the packages you want to be installed and does whatever stuff you want and puts stuff in different directories.
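As a rough sketch of that kind of Docker file, assuming an Ubuntu base and distro packages for the Nginx/PHP/MySQL stack; the versions, file names, and start script here are illustrative, not what actually runs on the ATP servers:

# Write a throwaway Dockerfile inline; a real project keeps it in the repo.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
# Install the web stack with the distro package manager.
RUN apt-get update && \
    apt-get install -y nginx php8.1-fpm php8.1-mysql mysql-server && \
    rm -rf /var/lib/apt/lists/*
# Hypothetical startup script that launches nginx, php-fpm, and mysqld.
COPY start.sh /usr/local/bin/start.sh
CMD ["/usr/local/bin/start.sh"]
EOF

# Build the image; rebuilding re-runs the whole recipe from scratch.
docker build -t my-dev-site .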

01:44:07   Um, then you might have to do some stuff with setting up, uh, host names and networking and SSH keys or whatever, depending on how fancy you want to get there.

01:44:15   And then the final bit, what I did for these other little baby websites, is I have it essentially mount my local Git repo that's just on my Mac, right?

01:44:25   I have it, that Git repo mounted inside, sometimes several Git repos mounted inside the container.

01:44:32   So inside the container, a path like /foo/bar is actually the Git repo for whatever on my Mac.

01:44:39   That's how I just go to that Git repo on my Mac, open it with my local Mac text editor and save it.

01:44:44   Those changes are immediately reflected inside the container.

01:44:47   So the container is running off of the Git repo that is on my Mac.

01:44:50   You can do that in both directions with mounting things in and out of things or whatever.

01:44:53   And getting the invocations for the mounting is a little bit annoying in there, you know, but like that's, that's basically it.
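The mount invocation is roughly a bind mount on docker run; the paths, port, and image name below are made up for illustration:

# Bind-mount the local Git repo into the container so a save in a Mac
# text editor is immediately visible to the web server inside.
docker run --rm -p 8080:80 \
    -v "$HOME/src/mysite:/var/www/mysite" \
    my-dev-site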

01:44:59   Right. So once you have that, you have a Linux container running your software. You set up the startup scripts and have the thing starting.

01:45:06   Like you get as fancy as you want, like whatever you would do in a real server to get it set up the way you want it.

01:45:10   You can do that. So same steps inside a Docker container.

01:45:12   Now you're making it as a reproducible formula that you will run over and over again until it sets the thing up the right way.

01:45:18   Right. And that Docker file is just your formula for building this server.

01:45:21   And the readme that I just posted into the Slack channel is like, OK, if I get this Docker image, what do I have to do to make it work? Casey, you followed these instructions way back when.

01:45:31   And it's just a question of like, tell me where your repos are.

01:45:34   I need to know where the repos are for these, you know, for all the software that's going to run this thing.

01:45:38   I need the data. Where do I get the data to populate the database from?

01:45:42   And like once you have all those instructions, you can just say, OK, put these things here, communicate those locations, either through command line arguments or environment variables, a million different ways you can communicate this.

01:45:53   I use environment variables for a lot of stuff, and then you just start the Docker container in an environment where that stuff is set up, and that's it.
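A sketch of the environment-variable version of that, with made-up variable names standing in for wherever the app expects to find its database and data:

# Pass locations and settings into the container instead of baking them
# into the image; the code inside reads them from its environment.
docker run --rm -p 8080:80 \
    -v "$HOME/src/mysite:/var/www/mysite" \
    -e SITE_DB_NAME=mysite_dev \
    -e SITE_DATA_DIR=/var/www/mysite/data \
    my-dev-site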

01:46:00   And I just basically don't need to touch it.

01:46:02   I did need to mess with it recently.

01:46:04   Why did I need to mess with it?

01:46:05   I was messing with it recently because I wanted to change something or other about it.

01:46:08   And I ran into a thing where, like, the cached Docker images, like the repos for Ubuntu whatever-version-number, were wonky.

01:46:17   And I had to like blow away my Docker cache to rebuild the images successfully.

01:46:21   But that's just a little thing. Docker is a very deep rabbit hole if you go into it.

01:46:25   But for the most part, if you don't touch it, it'll continue to work fine.

01:46:27   I have it to the point where I have fake entries in my /etc/hosts on my Mac that say, like, dev.atp.fm points to, like, the Docker image and stuff.

01:46:35   Right. So it's all just very self-contained.

01:46:37   I get to use those host names with a self-signed SSL certificate for dev.atp.fm that my browsers complain about.

01:46:44   But I click through the warning, you know, it's like it's very much like doing local dev just with a little twist.
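Concretely, that setup is just a hosts-file entry plus a self-signed certificate; a sketch, reusing the dev.atp.fm name from above (any local dev host name works the same way, and the key/cert file names are arbitrary):

# Point the dev host name at the local machine.
echo "127.0.0.1  dev.atp.fm" | sudo tee -a /etc/hosts

# Generate a self-signed certificate for that host name; browsers will
# warn about it, but you can click through for local development.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout dev.atp.fm.key -out dev.atp.fm.crt \
    -subj "/CN=dev.atp.fm"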

01:46:50   And I have to confess that I don't know enough about Docker networking to work out everything.

01:46:55   There are still some things that are a little bit funky.

01:46:57   I also could never figure out how to successfully send mail from inside the Ubuntu Docker container.

01:47:01   But that's OK, because I probably don't want it sending mail anyway.

01:47:04   So this rabbit hole goes extremely deep.

01:47:07   But just to do something simple like that, I think those are basically the steps, right?

01:47:10   Make the formula for your machine, set up where it's going to point to everything, and then mount in your Git repos with your software in it.

01:47:16   And you're off to the races.

01:47:17   Hmm.

01:47:18   So I don't and I really don't know anything about Docker.

01:47:22   I've never used it.

01:47:23   Is it, is it like running a whole Linux VM?

01:47:27   Sure it is.

01:47:27   Like it's a whole Linux installation inside of it?

01:47:30   The tagline for Docker is, I'm going to mess this up, it's not the tagline, but the meme on the Internet was the idea where you'd have a developer making some kind of website, and they'd have it on their, like, local machine, and they'd get everything set up and working or whatever.

01:47:46   And then they'd try to deploy it and it would be like, oh, something's crashing on our servers or whatever.

01:47:51   And the developer would say, works on my computer.

01:47:54   I don't know what the problem is.

01:47:55   And so Docker said, OK, fine, then we'll ship your computer.

01:47:57   And that's what Docker does.

01:47:58   It's like, well, works on my computer.

01:48:00   Well, that's what we're going to deploy.

01:48:02   And so the my computer is the Docker image.

01:48:04   Like, you make a formula for building a machine, right from installing the operating system and every piece of software, according to a Docker file.

01:48:11   And that's literally what you're going to deploy in production, not like, oh, I ran it locally on my laptop and it works fine.

01:48:17   But then when I run it on the servers, it runs differently because they have, you know, my laptop is running this version of Linux or whatever.

01:48:21   Or my laptop is running Mac OS, but the servers are running Linux like, oh, you know, all sorts of other stuff.

01:48:27   It's like works on my computer.

01:48:28   Fine.

01:48:29   Then we'll ship your computer.

01:48:30   That is the Docker meme motto.

01:48:32   And so, yeah, you are literally installing the operating system of your choice, installing the packages of your choice, everything that you would do to it.

01:48:40   Like an actual hosted server or virtual server or whatever, but you're doing those in a Docker file with a little formula that says do this, do that, do the other thing, install this, something like this, copy this, make this directory, make this user, give this user this password, you know, initialize the database with this, blah, blah, blah.

01:48:57   Like that it's just a recipe for building a machine, but it's a repeatable recipe.

01:49:00   And then you can run it.

01:49:02   Yeah, it's interesting because like, you know, what I've maintained for years are scripts that set up servers the way I want.

01:49:09   So like I have, I have basically, they're just shell scripts that like, you know, create a new Linode instance and, you know, do all these things to it.

01:49:17   And then once you're like in the OS, you know, install these packages configured.

01:49:21   Like I have, I have just shell scripts that do this.

01:49:24   And this sounds like that's basically a much better way to do that in a way that could also work on my local machine.

01:49:30   Is that fair to say?

01:49:31   Yeah.

01:49:32   Although, so Docker is just one thing.

01:49:34   There are other options, you know; AWS CloudFormation recipes are another way to describe how you want machines set up.

01:49:39   Looking at the Docker file now, I just realized why I needed to mess with it recently is because I did a bunch of work with Node recently and I wanted a newer version of Node to be in all the Docker images.

01:49:50   And so I had to get the latest Node package installed in the Docker images and that caused a little dependency hell.

01:49:54   And it's just, you know, it's, it's like dealing with any kind of Linux machine.

01:49:56   Like once you're in there and you want, you want the old version of PHP, but the new version of Node and yada, yada.

01:50:02   Anyway, you can see the recent changes at the bottom of the Docker file having to do with NVM, the Node Version Manager, and being able to run NVM-based things from cron.

01:50:11   You got to get the NVM environment set up first before you can run Node.
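A sketch of that cron wrinkle, assuming a standard NVM install under the user's home directory and a made-up script path and schedule; the fix is just sourcing NVM's setup script before invoking node, since cron jobs don't get a login shell:

# crontab entry: load NVM so "node" resolves to the NVM-managed version,
# then run the job (schedule and path are illustrative).
*/15 * * * * bash -c '. "$HOME/.nvm/nvm.sh" && node /var/www/mysite/jobs/refresh.js'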

01:50:15   Anyway, but that's, that's why I had to mess with it recently.

01:50:18   But yeah, it's just, it's a recipe for setting up a machine and that recipe, you can run shell scripts in a recipe, you can install packages, you can, you know, copy files from a local system, you can run commands.

01:50:28   Like it's just a really weird way to set up a machine, but it's just like your shell scripts.

01:50:34   The whole point of you doing a shell script and not doing manually is because you want it to be repeatable, right?

01:50:37   And that's the whole idea with the Docker file.

01:50:39   But the good thing about the Docker file is you start from nothing.

01:50:41   You start from empty and you pick the OS and install it and pick all the software and install it.

01:50:45   So there's no, there's not as many assumptions as a shell script where you're like, oh, I'll just go into a Linode instance and run the shell script.

01:50:50   And your shell script fails because something about that Linode instance is different than the previous ones you ran on.

01:50:54   And you've got to figure out what it is. That shouldn't happen with Docker, because you are starting from the ground up.

01:50:59   What will actually happen is your, you know, apt-get install command that used to work doesn't anymore, because the stupid package repos have changed things.

01:51:07   But that's not a Docker thing.

01:51:08   That's just Linux.

01:51:11   Yeah, I mean, it does sound like based on the requirements that you've been able to verbalize before one of us interrupts you, it does seem like this is a good fit.

01:51:20   And I think it would, it would serve you well.

01:51:23   Then the only problem you would run into is, well, do you want to start deploying the Docker containers to Linode or what have you rather than deploying only the code?

01:51:32   But that's, uh, that's step two, I guess.

01:51:34   Yeah.

01:51:34   Like, the approach we're using for the ATP websites is, I'm not touching servers for the most part, but I made the Docker images look as much like the servers as I could, which is not the ideal of let's-ship-your-computer, because we're not running the Docker images in production.

01:51:47   There's obviously efficiency things having to do with that.

01:51:49   And it's a whole other ball of wax and I didn't want to touch production.

01:51:52   This was just, how can I get a dev environment that is as much like production as possible?

01:51:56   So I'm not really using Docker in the spirit that the meme has intended it, but practically speaking, it is a way for me to do local development in a way that I am fairly confident that what I do locally will work there.

01:52:08   Like I said, originally I had put PHP 8 on because I didn't realize the servers were PHP 7.

01:52:13   The other thing I had to deal with was time zone stuff.

01:52:15   Watch out for that, because I don't know if this is Marco or Linode, but time zone shenanigans on our servers bit me a few times.

01:52:22   I had to figure that out, but I just reproduced those time zone shenanigans in the Docker file to the best of my ability.

01:52:27   And yeah, if you're not running the Docker containers in production, you just got to know a lot about what you are running in production so you can reproduce it faithfully.

01:52:35   All right.

01:52:37   I expect a report on how your Docker container activities go on my desk next week.