00:00:08 ◼ ► A few months ago, we were talking about the AirPods Pro Noise Cancelling Modes, and I had
00:00:16 ◼ ► said I had tried the adaptive mode for a little while, and I didn't like it, and I switched
00:00:28 ◼ ► that too much street noise when I was walking around Manhattan, and I just didn't like it.
00:00:31 ◼ ► I don't entirely agree with you, but I never got to the point that I wanted it removed from
00:00:45 ◼ ► And I'm pretty sure the way we talked about it was because I got the AirPods 4, and I was
00:00:50 ◼ ► asking you guys about the different modes that they do, because I didn't know which modes
00:00:55 ◼ ► Or I think, at least at that point, you reiterated your opinion that you didn't like adaptive.
00:00:59 ◼ ► And we got a couple of notes from listeners basically saying, try it again, it's really
00:01:03 ◼ ► And I tried it again for like, you know, a couple of days, and I hated it, and I went back.
00:01:19 ◼ ► And I realized over the last week or so that I have just left it on for like months now.
00:01:37 ◼ ► And so I actually have come around that whatever the, you know, the current, the new version
00:01:43 ◼ ► or the current version of adaptive noise cancellation mode is indeed good enough to use as my main
00:01:51 ◼ ► There's, it's, I've almost never switched it, like since the most recent try, I almost never
00:02:11 ◼ ► Uh, so this, if you recall, is one of the perks of being an ATP member, which you can,
00:02:23 ◼ ► Uh, once a month we do some sort of bonus content that is not really canon, I guess you
00:02:28 ◼ ► It's just usually way off in a, in a different world, so to speak, in this case, kind of
00:02:45 ◼ ► Marco made a comment on a, uh, earlier episode that anyone listening would have thought was
00:02:50 ◼ ► Marco making a reference to a famous scene in a Star Trek movie, but it turns out Marco has
00:03:20 ◼ ► Uh, you can do the Siracusa approach, which is wrong, and join for a month and then, you
00:03:28 ◼ ► And you can slurp, you know, slurp up all the, uh, member specials during that month.
00:03:33 ◼ ► And then you can, you know, walk away and listen to them all, or you can do the right thing,
00:03:40 ◼ ► You can join us for a month, a year, however long as you, as you feel and just enjoy all
00:04:06 ◼ ► A lot of people wrote in, including Matt Rigby, to tell us that they think it's because Honda
00:04:19 ◼ ► Uh, the GM ones, uh, don't have CarPlay because GM, uh, wants that subscription revenue and does
00:04:28 ◼ ► In my head canon, that is 100% the reason that may not be reality, but in my head canon, that's
00:04:45 ◼ ► Uh, we continue to work closely with several automakers, enabling them to showcase their unique
00:04:53 ◼ ► Each car brand will share more details as they near the announcements of their models that will
00:04:58 ◼ ► So MacRumors adds, Apple also remains committed to its current CarPlay platform and said it
00:05:07 ◼ ► Apple previously said committed car makers included Acura, Audi, Ford, Honda, Infiniti, Jaguar,
00:05:13 ◼ ► Land Rover, Lincoln, Mercedes-Benz, Nissan, Polestar, Porsche, or excuse me, Porsche, Renault,
00:05:18 ◼ ► and Volvo. In December 2023, Aston Martin and Porsche previewed their next generation car
00:05:40 ◼ ► I don't understand why Apple made all these pronouncements that 2024 was going to be the
00:05:44 ◼ ► 2024 was not the year, but Apple just wanted to make a statement, an official statement
00:05:59 ◼ ► Someday it will arrive and you'll be sure to hear about it here when the first car ships
00:06:12 ◼ ► Now, John, can we get a commitment from you that if enough members join at atp.fm slash
00:06:25 ◼ ► I don't think we have enough people who listen to the show to make that possible, even if
00:06:33 ◼ ► Yeah, that's expensive, but I'll definitely read articles about it and watch YouTube videos
00:06:39 ◼ ► We all know that even if somebody handed you a million dollars with which to buy a car,
00:06:47 ◼ ► If the only thing I could do with a million dollars was buy one of these cars, I would do it.
00:07:00 ◼ ► Are they using Android Automotive and ASIMO is just something like on top of it or alongside
00:07:04 ◼ ► Anyway, we'll find out when the Acura RDX EV comes out with the ASIMO operating system in
00:07:28 ◼ ► Switch 2, and I mentioned that there has been a lot of increased activity in the realm of
00:07:41 ◼ ► They can play, quote-unquote, PC games pretty well, well enough to be on a little screen that
00:07:48 ◼ ► Another reason why more of these are appearing is an article from The Verge from earlier in
00:07:54 ◼ ► Valve will officially let you install SteamOS on other handhelds as soon as this April.
00:07:59 ◼ ► So Steam Deck is Valve's handheld thing, and it runs a variant of Linux called SteamOS, with a compatibility layer
00:08:04 ◼ ► to let it run Windows games, and Valve has said for a while that they were
00:08:12 ◼ ► Lenovo is going to ship the first third-party SteamOS handheld in May, and supposedly it will
00:08:29 ◼ ► If any, quote-unquote, PC manufacturer wants to make a handheld gaming platform, and there's
00:08:38 ◼ ► And if you don't want to literally run Windows on it, you can run SteamOS, which is just Linux
00:08:52 ◼ ► But yeah, it could be like, you know, so this is something we kind of accept in the realm
00:08:55 ◼ ► of games that you play while sitting in a chair, that there's PC gaming, and then there's
00:09:01 ◼ ► And there's sort of a rivalry there, but it's like, oh, well, console gaming, there's a handful
00:09:16 ◼ ► And this is trying to make that happen in the world of handheld as well, because historically
00:09:25 ◼ ► Like, there's no handheld PC, but now they're like, oh, well, you can get a Switch or maybe
00:09:29 ◼ ► that weird PlayStation thing that just remote plays to your PlayStation 4 or 5, or you can
00:09:37 ◼ ► The twist here is that if Valve has its way, they'll be running Linux instead of Windows, but
00:09:55 ◼ ► Someone had sent me something, they talked to, I think I mentioned on the show, they talked
00:09:59 ◼ ► to Ryan London back in December through customer service, and the customer service person said,
00:10:03 ◼ ► oh, I know you want one of our leather cases that has the little sapphire button for the
00:10:18 ◼ ► It's like, hey, here it is, leather cases with a sapphire button for the camera control.
00:10:25 ◼ ► And I clicked through and bought so quickly that I didn't realize until I saw the receipt
00:10:29 ◼ ► screen that what they're selling with the sapphire camera control button is their variant
00:10:55 ◼ ► It looks the same as the Bullstrap one and the 17 other manufacturers that sell the same
00:11:02 ◼ ► But unfortunately, Ryan London is not selling the one with the leather lump and the sapphire
00:11:16 ◼ ► Um, so I'm, well, first of all, I'm glad somebody is, you know, they're not just gonna say, well,
00:11:43 ◼ ► Do they have too much of the ones without the metal ring in stock and they haven't sold
00:12:03 ◼ ► But this doesn't matter anyway, because it's not a naked bottom or whatever you call it.
00:12:18 ◼ ► I think the metal one doesn't have the, uh, maybe that's another problem with the metal
00:12:40 ◼ ► It's just that the case manufacturers couldn't make them, you know, or didn't know how to
00:12:57 ◼ ► More than, uh, 192 megabytes, 192 megabytes would have been a lot back in the classic Mac
00:13:07 ◼ ► But alas, on an episode where I was talking about, uh, the performance of my, uh, powerful
00:13:13 ◼ ► computer, uh, trying to scroll a list of items, I kept saying that it had either 192 megabytes
00:13:25 ◼ ► I very often forget the exact number because as I mentioned, when I ordered the computer and
00:13:29 ◼ ► later, when I talked about it, 96 gigs of RAM at the current point in time is enough for
00:13:44 ◼ ► Uh, 96 gigs is adequate for my current needs, but I do not have 192 gigabytes and 192 megabytes
00:13:57 ◼ ► Last time we were talking about, um, AppKit versus SwiftUI and then, uh, with a side tangent
00:14:06 ◼ ► And there was a lot of, uh, feedback about that on Mastodon and through email, a lot of
00:14:14 ◼ ► Uh, maybe I didn't emphasize it enough when we discussed it, but the app, once I converted
00:14:25 ◼ ► And I still felt like my AppKit version was not quite as smooth as the WebKit version.
00:14:39 ◼ ► I wasn't worried about the performance anymore, but so many people were making demo apps and
00:14:43 ◼ ► Uh, one of the things somebody mentioned was, hey, are you using NSCell- or, uh, view-based
00:14:49 ◼ ► tables? Uh, for a little bit of background, NSTableView is a really old class back from
00:14:56 ◼ ► Um, and it was originally designed for much less powerful computers, uh, with a special class
00:15:07 ◼ ► And NSCell is like a lightweight thing; it's not a full-blown NSView that
00:15:12 ◼ ► It's just a very small lightweight thing because we know you're just going to be a table cell.
00:15:18 ◼ ► You're probably just going to show some text or something like a number or maybe like an
00:15:45 ◼ ► So many, many years ago, eventually Apple said, okay, now you can make NSTableViews and you
00:15:56 ◼ ► If you just want to put them in a table, you just stick them into a cell and they'll show
00:16:00 ◼ ► they're, uh, done and done, so much so that the cell-based NSTableView has been deprecated
00:16:37 ◼ ► I converted my NSTableView to use NSCell instead of NSView just to see if it would make
00:16:50 ◼ ► And I had to do the thing where like two, two copies of the app running side by side to
00:17:01 ◼ ► Like that you, you'd look at it and you would think there's nothing wrong with it, but I'm,
00:17:07 ◼ ► Now see, does it feel, can I, can I move, can I move the pointer like off the scroll thumb by
00:17:53 ◼ ► Someone, you know, like if you, if you are good at the like performance analysis tools,
00:18:23 ◼ ► Uh, it was yet another corner of the language, uh, the Swift language, that I'm not familiar
00:18:33 ◼ ► You subclass things, uh, it is a big cascade of subclasses for populating my NSTableViews.
00:18:38 ◼ ► And, uh, you guys familiar with the whole like designated initializer thing, you know, where
00:18:44 ◼ ► you got to call a designated initializer and you can have convenience initializers that are
00:18:51 ◼ ► If you're using like, you know, some NS class that Apple defines, you got to call their
00:18:57 ◼ ► Anyway, I have this big cascade of initializers and being a dutiful little object oriented
00:19:08 ◼ ► Um, and one of the things that I had shoved down, uh, was setting a very important attribute,
00:19:18 ◼ ► And it turns out one of my derived classes was calling through a sequence of inits that
00:19:27 ◼ ► It was passed into the constructor and passed down, but, like, because you
00:19:31 ◼ ► have to call the designated initializer, like sooner than you think, or at least I was calling
00:19:38 ◼ ► So what ended up happening was one of my cells, just a dinky little cell was not getting its
00:19:43 ◼ ► identifier set, which meant that every time I needed one of those, it would make me a new
00:19:48 ◼ ► And it was, it was, it was a constant, it was the one with the little eyeball, like the
00:20:09 ◼ ► I set the identifier in, like, the subclass init, even though it's, like, duplication and
00:20:14 ◼ ► Well, anyway, I set the identifier, uh, because it wasn't getting set because I wasn't going
00:20:31 ◼ ► I threw away the NSCell-based one, reverted to the view-based one, did the two-line fix.
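The bug John describes, where a view whose reuse identifier was never set can never be recycled, so every request allocates a fresh view, can be sketched with a toy reuse pool. This is a conceptual model of the view-reuse behavior, not AppKit's actual implementation:

```python
class ReusePool:
    """Toy model of NSTableView-style view reuse keyed by identifier."""

    def __init__(self):
        self._pool = {}   # identifier -> stack of recycled views
        self.created = 0  # how many fresh views were allocated

    def make_view(self, identifier):
        # A view with no identifier can never be recycled.
        if identifier is not None and self._pool.get(identifier):
            return self._pool[identifier].pop()
        self.created += 1
        return {"identifier": identifier}

    def recycle(self, view):
        # Only views with an identifier go back into the pool.
        if view["identifier"] is not None:
            self._pool.setdefault(view["identifier"], []).append(view)


pool = ReusePool()

# Identifier set: the second request reuses the recycled view.
v = pool.make_view("eyeball")
pool.recycle(v)
pool.make_view("eyeball")
assert pool.created == 1

# Identifier never set (the bug): every request allocates a new view.
for _ in range(3):
    pool.recycle(pool.make_view(None))
assert pool.created == 4
```

The two-line fix amounts to making sure the identifier is assigned before the view is first requested, so the pool's fast path can actually hit.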
00:20:36 ◼ ► Uh, and, and honestly, you can't really tell the difference unless you
00:20:40 ◼ ► A/B test it. It looks exactly the same as it was before, but I know it's ever so slightly
00:20:46 ◼ ► Uh, and, uh, the final thing on this topic, uh, a bunch of people are asking about WebKit
00:20:52 ◼ ► Um, and someone pointed me to, uh, a blog post about a website, uh, where someone wanted
00:21:11 ◼ ► Yeah, just it's, well, the idea is a web page and you scroll it and the top is the first UUID
00:21:17 ◼ ► and at the bottom is the last one and in between are all the other possible UUIDs between those
00:21:25 ◼ ► like, uh, you know, it doesn't, like we've mentioned this before, if you're recycling the
00:21:38 ◼ ► Uh, and this is kind of a demonstration that now I feel like they cheated because when you
00:21:54 ◼ ► So, uh, but anyway, the blog post about how they implemented it is fun because, uh, obviously
00:22:03 ◼ ► And I think they're essentially like sequential or whatever, but I thought it was interesting,
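To get a feel for the trick, here is a hedged sketch of how such a page might map a scroll position to the n-th valid version-4 UUID. The real site's exact ordering and implementation are not known to me; this just shows that the 122 free bits of a v4 UUID can be enumerated sequentially, which is what makes "every UUID on one page" possible without storing anything:

```python
import uuid

TOTAL = 2 ** 122  # count of valid v4 UUIDs (128 bits minus 6 fixed bits)

def uuid_at(index: int) -> uuid.UUID:
    """Return the index-th version-4 UUID, ordered by bit pattern."""
    if not 0 <= index < TOTAL:
        raise IndexError("outside the UUID space")
    top = index >> 74                # 48 bits above the version nibble
    mid = (index >> 62) & 0xFFF      # 12 bits between version and variant
    low = index & ((1 << 62) - 1)    # 62 bits below the variant
    # Reassemble, forcing the fixed version (0100) and variant (10) bits.
    n = (top << 80) | (0x4 << 76) | (mid << 64) | (0b10 << 62) | low
    return uuid.UUID(int=n)

# Every generated value is a well-formed v4 UUID, and order is preserved.
assert uuid_at(0).version == 4
assert uuid_at(TOTAL - 1).version == 4
assert uuid_at(1).int > uuid_at(0).int
```

With a mapping like this, the page only ever needs to render the handful of UUIDs near the current scroll offset.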
00:22:18 ◼ ► With regard to the Ask ATP, the one and only, I believe, Ask ATP topic from last week, we were
00:22:26 ◼ ► talking about what should you look for if you're about to buy a house and you're a nerd like
00:22:30 ◼ ► us, uh, and we had talked about, you know, whether or not, uh, things would be pre-wired,
00:22:41 ◼ ► However, I looked behind and found that they were actually Cat 6, which is, you know,
00:22:45 ◼ ► Ethernet cabling, but only using one or two twisted pairs, which is to say only a subset
00:23:03 ◼ ► This is like finding a, uh, an extra room in your house that you've never discovered before.
00:23:26 ◼ ► This is one of those things where I thought I was very clear about this and either I wasn't
00:23:35 ◼ ► Uh, I already have the connection, if you will, between the garage door and home assistant
00:23:49 ◼ ► I don't know if that's really fair, but there is an integration with the particular weirdo flavor
00:23:54 ◼ ► of garage door that I have, um, that I've been using since, uh, since, since I went to home
00:24:00 ◼ ► And, uh, one of the funny things about, if you look at an integration in the, uh, or on the
00:24:14 ◼ ► Uh, it was introduced in Home Assistant 2024.9 and it is used by, get this, 36 active installations
00:24:30 ◼ ► Um, I don't recall what the acronym stands for, but I think it's, like, ratgdo or something
00:24:35 ◼ ► Anyways, uh, there, there are many, many mechanisms to get a dumb garage door opener into like home,
00:24:45 ◼ ► Uh, also a lot of people brought up in this one, I am pretty sure I didn't say anything
00:24:49 ◼ ► about it: a lot of people brought up the ESP32 as an alternative to, like, a Raspberry Pi or Arduino.
00:24:55 ◼ ► Uh, these are exceedingly cheap, uh, Wi-Fi-enabled, uh, little programmable, uh, basically circuit
00:25:02 ◼ ► Uh, and they seem to be the popular way to do this sort of LED kind of dance that I'm talking
00:25:09 ◼ ► And in fact, there's a software project called ESPHome, uh, that is allegedly really, really
00:25:21 ◼ ► Uh, I don't think I talked about either of these on the show last week, but I very justifiably
00:25:43 ◼ ► Why would I get an LED strip of, you know, a meter, two meters, three meters, four meters,
00:25:55 ◼ ► And in a conversation on a Slack with Kiel Olson, he said to me, well, you can just cut it.
00:26:08 ◼ ► So it turns out that there is a style of LED strip, and the most popular one is the WS2812B.
00:26:36 ◼ ► And the way it works, and I'm going to butcher the specifics, but the general gist of the way
00:26:39 ◼ ► it works is the first LED has a data connection and a power connection to whatever's powering
00:27:00 ◼ ► And if you cut them at the particular places where they allow you to physically cut them,
00:27:06 ◼ ► and they're usually labeled, I guess, you can just cut three LEDs off of a strip of 100.
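Casey's description of the protocol can be illustrated with a little simulation: each LED in the chain latches the first 24-bit color it sees and forwards everything after it downstream, which is why cutting the strip after LED N just means values beyond N fall off the end. This is a conceptual sketch, not the actual timing-level WS2812B protocol:

```python
def light_strip(num_leds, color_stream):
    """Simulate WS2812B-style daisy-chained data.

    Each LED latches the first color value it receives and passes the
    remainder to the next LED; a shorter (cut) strip simply has fewer
    LEDs available to latch values.
    """
    stream = list(color_stream)
    latched = []
    for _ in range(num_leds):
        if not stream:
            break                    # no more data for the remaining LEDs
        latched.append(stream[0])    # this LED keeps the first value...
        stream = stream[1:]          # ...and forwards the rest downstream
    return latched

full = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (9, 9, 9)]
# A strip cut down to 3 LEDs shows the first 3 colors; the 4th value
# runs off the cut end and is ignored.
assert light_strip(3, full) == full[:3]
# The uncut strip of 4 shows all of them.
assert light_strip(4, full) == full
```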
00:27:27 ◼ ► What was the name of that candy where you constantly eat paper because they stick to it?
00:27:44 ◼ ► And the idea was that you would scrape the little hard pieces of sugar off the paper with
00:28:10 ◼ ► And I think everyone who was telling me, oh, just get a 2812, you know, yada, yada, yada.
00:28:24 ◼ ► Yeah, I knew that you could cut LED light strips that like, you know, just were regular ones
00:28:32 ◼ ► I figured, you know, because there's lots of LED light strips that are just like long strings
00:28:40 ◼ ► Yeah, but I didn't realize that that extended to the, like, addressable cool kind as well.
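The ESPHome project mentioned earlier makes driving one of these addressable strips mostly a configuration exercise. As a rough sketch (board, pin, LED count, and names are placeholders, and the exact platform options are worth checking against the ESPHome docs), a minimal config might look like:

```yaml
esphome:
  name: led-strip

esp32:
  board: esp32dev

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

# Expose the strip to Home Assistant via the native API.
api:

light:
  - platform: neopixelbus
    variant: WS2812X
    type: GRB
    pin: GPIO5
    num_leds: 30          # a cut strip just means a smaller number here
    name: "Desk LED Strip"
```

Cutting the strip physically and lowering `num_leds` is the whole story from the software side.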
00:29:27 ◼ ► And the Historical Commission has informed me that this is not going to be a project that
00:29:36 ◼ ► And it's too bad because, as I think I'd said to you last time, like there's, and I don't
00:29:40 ◼ ► want to share this photo publicly, but in our Slack, I shared with the boys, there's an empty
00:30:13 ◼ ► Like there's no, it's like the same wall plate you would see if there was an outlet or a
00:30:19 ◼ ► What was there until literally a week or two ago, which kind of what started this whole process
00:30:44 ◼ ► Unfortunately, the Historical Commission has denied my application and my building permit
00:30:55 ◼ ► That being said, right near this outlet, or this former RJ-11 outlet, is a three-gang set
00:31:06 ◼ ► And one of them is the kitchen, one of them is the kitchen table, and one of them is the
00:31:10 ◼ ► And a couple of people wrote in, Drew Stevens in particular wrote in and said, what about
00:31:30 ◼ ► So this is a, you know, like, paddle or Decora-style switch that has individually addressable
00:31:43 ◼ ► And what I can do, hypothetically, is I can replace one of the three switches in the three-gang
00:31:51 ◼ ► I can replace one of those switches, specifically the kitchen table switch, which not only is
00:31:55 ◼ ► in the center of this three-gang box, which obviously I can move it, but also is a single
00:31:59 ◼ ► switch rather than, you know, part of a multi-switch setup, a three-way setup or what have you.
00:32:05 ◼ ► Anyways, I could replace the kitchen table switch with one of these and it would be perfect.
00:32:10 ◼ ► And I am ready to buy this thing, money, no object, and I go to buy it and it's sold out on Amazon.
00:32:28 ◼ ► I don't think this is going to slide under the radar because like, especially in the pictures on the website, these LEDs are not subtle.
00:32:41 ◼ ► And I don't remember what the cost was, but it was under $100, which granted for a switch is a lot of, well, you know, you can get a $3 like dumb switch.
00:32:57 ◼ ► So apparently I cannot get one of these and I'm really sad about it, even though I think you're right, John,
00:33:14 ◼ ► And speaking of that, how do you think the terminal with no vowels is going to go over?
00:33:21 ◼ ► Marco did the thing where he bought it first and then slid it in and just waited to see if anyone would notice.
00:33:30 ◼ ► It's been lightly discussed, but the thought of having an automatically updatable calendar was met with enthusiasm because I think I've talked about this in the past.
00:33:40 ◼ ► I print at the beginning of the month a physical calendar and put it on the refrigerator just because I like, both of us actually, like having a vague notion of what we're doing.
00:33:58 ◼ ► Now, I don't think we'll be mounting the terminal on the refrigerator, although I guess we technically could since it's battery-powered.
00:34:06 ◼ ► But one way or another, this is irrelevant because I can't put my hands on one of these HomeSeer HS-WX300s.
00:34:38 ◼ ► And it wasn't until, well, Drew Stevens, who is the same one who recommended the HomeSeer, recommended this as well.
00:34:47 ◼ ► Yes, it's presented as one LED bar, but there's actually several addressable LEDs in there.
00:34:54 ◼ ► But the thing of it is, is that they're all in like one shroud or lens or what have you.
00:35:19 ◼ ► But as a couple of quick final notes, because I can't leave well enough alone and because my permitting has been denied, I thought, well, what's the next best thing?
00:35:33 ◼ ► And I was looking at SwiftBar, which is my thing that will let me put random stuff in my menu bar, which I really, really love.
00:35:43 ◼ ► And one of the advantages of putting all this data on an MQTT setup, a pub/sub sort of setup, is that anything can subscribe to these, you know, basically, is-it-bad-or-is-it-good messages?
00:35:56 ◼ ► And the way SwiftBar works, though, is that it pings away, like every five seconds or 10 seconds or five minutes or 10 minutes or what have you, it makes another request and gets the latest version of the world.
00:36:08 ◼ ► And yeah, I could, like, just do this every second or two, but it seems so wasteful when the whole idea of MQTT is you say, I would like to get updated, and then it sends you updates.
00:36:18 ◼ ► Well, come to find out, and I didn't realize this until earlier today, SwiftBar actually has the idea of, shoot, I forgot the name of it, and I'm trying to stall for time while I look at it.
00:36:28 ◼ ► But it has the idea of streamable, there we go, streamable plugins, where instead of just running a script and then getting a result and walking away, it will actually start a new thread and run a script and wait for it to update standard out.
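As a hedged illustration (the exact metadata keys are worth checking against the SwiftBar README), a streamable plugin is just a long-running script that emits a fresh block of menu output whenever an event arrives, using SwiftBar's `~~~` separator line to replace the previously streamed state. A real version of Casey's setup would block on an MQTT subscription (e.g. via mosquitto_sub or paho-mqtt) instead of the canned list here, and the topic names are hypothetical:

```python
#!/usr/bin/env python3
# <swiftbar.type>streamable</swiftbar.type>
import sys

def render(topic, payload):
    """Build one SwiftBar update; the '~~~' line tells SwiftBar to
    discard the previously streamed output and show this block instead."""
    if payload == "check-mail":
        icon = "📬"
    elif payload == "open":
        icon = "🚗"
    else:
        icon = "🏠"
    lines = ["~~~", icon]
    print("\n".join(lines))
    sys.stdout.flush()  # flush immediately so SwiftBar sees the update
    return lines

# Stand-in for a blocking MQTT subscribe loop (hypothetical topics).
for topic, payload in [("home/mailbox", "check-mail"), ("home/garage", "open")]:
    render(topic, payload)
```

Because the script stays alive and only writes when a message arrives, there's no polling interval at all, which is exactly the fit with MQTT's push model.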
00:36:45 ◼ ► And it took me a little bit of time to figure out how to get this right, and I had to actually engage with the author of SwiftBar, who went above and beyond.
00:36:54 ◼ ► This is Alex Mazanov; he went above and beyond, and seemed to install his own MQTT setup just to test my BS.
00:37:02 ◼ ► Incredible customer service, especially for a free app, but even in general, incredible customer service.
00:37:11 ◼ ► So now, as MQTT messages come in, I will occasionally see an envelope appear in my menu bar if I need to check the mail.
00:37:23 ◼ ► And I will always see the state of the garage door because that's what I want to do, either open or closed.
00:37:30 ◼ ► And then finally, finally, we talked about, hey, when I leave the house, I might take all these Caseta switches and so on and so forth.
00:37:42 ◼ ► And as per a handful of people, though I think the first one I saw was Mark Bramhill, who reached out and said, hey, according to his realtor (and obviously the rules may vary where you are), the phrase that Mark quoted from his realtor was: anything affixed to the wall has to stay.
00:38:06 ◼ ► So that makes sense, but I didn't know that, and I didn't have that summarized until Mark had told me about it.
00:38:16 ◼ ► They take their kitchen appliances, their kitchen cabinets, and one of them said their kitchen countertops.
00:38:22 ◼ ► What good are the countertops going to do you unless every countertop is the same size and shape?
00:38:30 ◼ ► Like, they're not going to fit in your new place at all unless it's exactly the same as your old.
00:38:45 ◼ ► Yeah, like, you get a new place, and the only thing in the kitchen is just, like, bare walls, bare floor, and, like, some, like, electrical wires dangling out somewhere.
00:38:58 ◼ ► And obviously, in America, you know, you could write into a contract, you know, I'm taking the stove, or I'm taking this, or I'm taking that.
00:39:03 ◼ ► But generally speaking, normally kitchen fixtures, well, the fixtures particularly, like, cabinets and countertops and whatnot,
00:39:12 ◼ ► But even, like, stoves and ovens and, in a lot of cases, microwaves, if they're, you know, built-ins, all of those tend to stay.
00:39:21 ◼ ► The sellers usually say, please take our crappy old, you know, we're not taking our fridge with us.
00:39:28 ◼ ► Like, they're just because they assume they're going to get new stuff in their new place.
00:39:40 ◼ ► And we're going to read, probably, I'll be reading quite a lot of different things, for better and for worse.
00:39:47 ◼ ► But DeepSeek is a new AI thing from this Chinese company that I don't think any, well, not literally, of course, but most Americans hadn't heard of.
00:40:14 ◼ ► On Monday, NVIDIA's stock lost 17% amid worries over the rise of the Chinese AI company DeepSeek,
00:40:21 ◼ ► whose R1 reasoning model stunned industry observers last week by challenging American AI supremacy with a low-cost, freely available AI model,
00:40:30 ◼ ► and whose AI assistant app jumped to the top of the iPhone App Store's free apps category over the weekend, overtaking ChatGPT.
00:40:36 ◼ ► The drama started around January 20th when the Chinese AI startup DeepSeek announced R1,
00:40:42 ◼ ► a new simulated reasoning, or SR, model that it claimed could match OpenAI's o1 reasoning benchmarks.
00:40:54 ◼ ► First, the Chinese startup appears to have trained the model for only about $6 million, that's American,
00:40:58 ◼ ► reportedly about 3% of the cost of training o1, and as a so-called, quote-unquote, side project,
00:41:06 ◼ ► while using less powerful NVIDIA H800 AI acceleration chips due to the U.S. export restrictions on cutting-edge GPUs.
00:41:19 ◼ ► And finally, and perhaps most importantly, DeepSeek released the model weights for free with an open MIT license,
00:41:32 ◼ ► I just thought it would just go higher forever, and there was nothing about their stock price that was irrational or bubble-like.
00:41:48 ◼ ► NVIDIA taking a hit on this is a little weird, because as this story you just read alludes to,
00:42:08 ◼ ► because then they'll develop AI and, I don't know, take over the world with their AI instead of our AI.
00:42:19 ◼ ► They can buy the crappy old, like, last year's or the year before's previous-generation model,
00:42:30 ◼ ► Because OpenAI is an American company, and a lot of the other big AI startups are also American companies.
00:42:37 ◼ ► no, we'll do the same thing you're doing, but for less money and with crappier hardware.
00:42:42 ◼ ► And it was very upsetting to the stock market because they said, well, I guess all those export restrictions did not have the intended effect.
00:42:58 ◼ ► The phrase you will hear all the time, which is annoying, is what kind of moat do the AI companies have?
00:43:05 ◼ ► Is there anything about what OpenAI is doing that makes it special and unique, that makes competitors not able to compete?
00:43:12 ◼ ► And I think we've all said on past shows, not really, because Facebook has its open-source Llama models.
00:43:20 ◼ ► Like, the foundation of all these things is the large language model scientific papers and the study of how to create them.
00:43:36 ◼ ► Okay, well, the technology, everybody knows, but we do it in a better way than anyone else.
00:43:46 ◼ ► Everyone's got a large language model, but we're just a little bit better than all of them.
00:43:49 ◼ ► And that's why we need $500 billion or whatever to build new data centers to train the next model, blah, blah, blah, blah.
00:43:55 ◼ ► And here comes this Chinese company saying, well, we read all the same papers, and we have crappier GPUs, and we spent less money, but our thing is basically as good as yours, OpenAI.
00:44:05 ◼ ► Not only that, but like, you know, running inference on our thing, which is like, you know, executing the AI models and using them for everybody else, is way cheaper than your thing.
00:44:19 ◼ ► And that's one of the reasons that one of these stock prices that did not take a hit was Apple, because I guess the theory that like, well, if inference becomes cheaper and Apple likes to do lots of on-device AI, that's good for Apple.
00:44:31 ◼ ► Now, it's not like Apple is using DeepSeek, like, in their operating system, but just conceptually, if the cost of inference goes down for equal performance, I guess that benefits Apple because they're doing a lot of inference on device or whatever.
00:44:46 ◼ ► I think like this whole kerfuffle is just kind of, I feel like, a correction to some inflated stock prices.
00:44:51 ◼ ► But in general, being able to do the thing better and for less money with less power is what we expect with technological progress.
00:44:58 ◼ ► What we don't expect is that every year it will take even more power; you know, we expect things to get better.
00:45:13 ◼ ► But the whole point is, yeah, it's the same, but cheaper and better and lower power and blah, blah, blah.
00:45:20 ◼ ► I expect like, you know, the MacBook Air that you can get now should be roughly the same performance as like an old MacBook Pro.
00:45:32 ◼ ► But yes, people were startled that it happened so quickly, especially since OpenAI has always just been making noises like, the only way we can surpass o1 to make the next generation is for you to give us billions more dollars.
00:45:42 ◼ ► And yeah, apparently even just to do o1-caliber stuff, you did not need that much money.
00:45:49 ◼ ► And the fun thing about the cleverness, which we'll get to in a little bit, is kind of like the saying that like constraints lead to better creative output.
00:45:58 ◼ ► Because this Chinese company had to work with previous generation hardware, they were forced to figure out how to extract the maximum performance from this older hardware.
00:46:10 ◼ ► They had to come up with new techniques, saying, we can't do it the way OpenAI did it.
00:46:26 ◼ ► And I think, you know, we've been in a pretty long span of technology, you know, companies and technology stocks and technology earnings and profits being pretty mature until, you know, the big LLM and AI boom of the last couple of years.
00:47:01 ◼ ► Obviously, like, you know, the birth of the personal computer was a pretty big deal, shook a lot of stuff up.
00:47:06 ◼ ► You know, then, you know, later on, the Internet for home users really shook a lot of stuff up.
00:47:22 ◼ ► Like, there was, like, a seemingly long period of stability where it was, like, Windows desktop PCs running on Intel CPUs.
00:47:31 ◼ ► And so there's always these kind of periods where you're like, yes, this is just the way computers are.
00:47:41 ◼ ► But then the next inflection point comes and there's chaos and there's winners and losers.
00:47:45 ◼ ► And so, yeah, I think we were in a pretty long stable period with we were currently in the PCs exist, mobile exists, the Internet exists.
00:48:22 ◼ ► It does create a bunch of volatility in every market that is touched by it, which is, in this case, many markets.
00:48:28 ◼ ► So we have to assume, like, you know, I mean, look, geez, like Google now has disruption to their core search product for the first time ever.
00:48:38 ◼ ► Like, in their entire existence, they have, like, more disruption and more threat to Google search than we've ever had before.
00:48:48 ◼ ► And, in fact, I mean, you know, I'll leave Tim Cook alone for this episode for the most part.
00:48:54 ◼ ► But, you know, I do think we will look back on this time and say Apple was really behind on LLMs.
00:49:02 ◼ ► And, you know, they spent their time making a car and a Vision Pro and while everyone else was doing this.
00:49:16 ◼ ► Well, there's a question of whether them being behind is an advantage or disadvantage, though.
00:49:19 ◼ ► Like, the reason their stock price is up is, like, this is further evidence that LLM technology, that nobody really has a moat.
00:49:26 ◼ ► That even if you are the best at making these AI things, there's nothing you're doing that someone else can't also do because everything you're doing is essentially based on technology and techniques that everyone understands.
00:49:39 ◼ ► And the reason people think Apple has a moat is because Apple's just making, like, computers that run software.
00:49:54 ◼ ► But we're the only ones who know how to do it with taste, with style, with the right feature set, with, you know, like, all the Apple sort of more intangible things.
00:50:03 ◼ ► There's nothing, technologically speaking, even in the Apple Silicon era, really, that it's like, well, nobody else could do this except for Apple.
00:51:08 ◼ ► And when it comes to personal computers, a lot of people say, well, a Windows PC kind of technically
00:51:24 ◼ ► And I don't think OpenAI has that kind of a moat where everyone has the same technology,
00:51:35 ◼ ► But if you go to the DeepSeek website, you could be forgiven if you squint your eyes and think,
00:51:52 ◼ ► So, I think in this area where Apple is kind of, like, behind, it's like, look, I feel,
00:51:56 ◼ ► I think Apple feels, if people are talking about what is the number one app on our store,
00:52:06 ◼ ► We just need ChatGPT and DeepSeek and whatever competitor we've never heard of to be duking
00:52:18 ◼ ► and we'll partner with them and we'll leverage their technology and we'll work on our own.
00:52:22 ◼ ► And there's still out there, which I keep mentioning every time we talk about LLMs and AI,
00:52:48 ◼ ► With what Apple has done in Apple Intelligence, I don't think they've done anything where
00:53:14 ◼ ► And until and unless someone out there sort of tries to usurp Apple's sort of platform control,
00:53:21 ◼ ► I think Apple's fine content to just keep trying different approaches to mixing AI into its platform
00:53:32 ◼ ► Is it going to be OpenAI, NVIDIA, DeepSeek, Anthropic, some other new company we've never heard of?
00:53:45 ◼ ► It's a risky move because I don't think Google's thinking that because Google's thinking these people are a direct threat.
00:53:50 ◼ ► But Apple's like, hmm, we can wait and see, work on our models and just keep trying to integrate it into our apps and see if anything sticks.
00:54:07 ◼ ► I don't think that's their style in this kind of scale, but they could if they really had to.
00:54:12 ◼ ► But I think the bigger challenge is like Apple's whole thing about, you know, owning and controlling core technologies for their products.
00:54:19 ◼ ► There's obviously a huge role now and in the future for LLMs and AI type models being core technologies of their products.
00:54:42 ◼ ► You could turn off Apple Intelligence on people's iPhones and see how long it takes them to notice.
00:54:46 ◼ ► Like it is not a core technology in the same way as like Apple Silicon or their operating system or their app store.
00:54:55 ◼ ► But right now, I feel like Apple Intelligence, if you had to say, does this fit the Tim Cook doctrine of we need to own and control, blah, blah, blah.
00:55:10 ◼ ► I think it's rapidly becoming an assumed feature on computing platforms in various contexts.
00:55:17 ◼ ► And it's only going – I mean, look, LLMs have only really been a thing in the consumer world for like two years.
00:55:22 ◼ ► They're still brand new and they're already – like people are expecting ChatGPT-like functionality all over the place.
00:55:28 ◼ ► I mean, but even just going to things like improving the quality of dictation or, you know, of text-to-speech and speech-to-text and, you know, image recognition of things.
00:55:44 ◼ ► Well, right, and so, you know, the other things that Apple has historically been kind of bad at that are kind of, you know, big data or big infrastructure problems, things like search indexes.
00:56:23 ◼ ► But they've been able to get by in part because of the massive lock-in they have that, like, you can't make a competing voice assistant on iOS.
00:56:41 ◼ ► If yours sucks as much as Siri has sucked for its entire life, it doesn't make people not buy your product.
00:56:54 ◼ ► It's possible that it's going to be really important that people will start assuming these features will be there and will work better than they do on Apple's platforms.
00:57:03 ◼ ► And if Apple never takes this more seriously and develops more culture and engineering and infrastructure around this, the way they never got into web services and never got into voice assistants very well, if they miss on this, it might be more important to their customers.
00:57:23 ◼ ► They will still have the lockout problem with locking out any competitors, which I think in their case will actually hurt them a little bit here because that will just make the iPhone work worse for iPhone customers in these ways.
00:57:40 ◼ ► I think this kind of shakeup in technology, we have seen this dramatically disrupt really established competitors.
00:57:49 ◼ ► We've seen – like, look, when the iPhone first came out, it kind of sucked at a lot of things.
00:58:00 ◼ ► And during that time, between sucking and not sucking at a lot of things, a lot of people used their iPhones with PCs, and Windows PCs even.
00:58:12 ◼ ► Many people synced their iPhone using their Windows PC, and many other people said, on their Windows PC, we're fine, we have 90% of the market, what's the problem here?
00:58:22 ◼ ► And then phones massively disrupted the entire computer industry and Microsoft was screwed because they weren't taking mobile seriously enough.
00:58:36 ◼ ► But there's this huge area of technology that's disrupting a lot of things and that has pretty big promise for the future that Apple has shown no core competency in and not much competitiveness, not really taking it seriously.
00:58:59 ◼ ► And they don't seem to have that kind of talent in the company at anywhere near the levels that their competitors do.
00:59:05 ◼ ► So, I think Apple is extremely vulnerable to disruption from AI and I don't think they're taking it seriously enough.
00:59:17 ◼ ► But so far, with what we've seen so far from them, I don't see any reason to be optimistic on this.
00:59:26 ◼ ► Maybe they'll pull out of this nosedive that they seem to be in with AI and actually, you know, finally get their footing and kind of take off.
00:59:37 ◼ ► And I hope they can actually take this way more seriously than they appear to be taking it so far.
00:59:46 ◼ ► Like, hardware-wise, they're in a great place to run inference on their devices because their devices have all this memory the GPU can use.
01:00:10 ◼ ► And that's what I have concerns about, that it seems like this huge opportunity for disruption is aimed right at them.
01:00:33 ◼ ► And I don't know if they know how much they are under threat by this in the future, potentially.
01:00:42 ◼ ► I think they think Apple Intelligence is great because why else would they have called it Apple Intelligence and taken the huge risk of putting their brand name on it like that?
01:01:16 ◼ ► But like the history you mentioned, like, well, first of all, voice assistants, that is obviously the area where they're farthest behind.
01:01:32 ◼ ► They've just been essentially leaning on Google and other companies because that is apparently not core enough part of their operating system.
01:01:37 ◼ ► With the AI stuff, Apple feels like they need to incorporate it because it is a potentially disruptive threat.
01:01:46 ◼ ► However many years ago, they decided we're just going to go all in on this Apple Intelligence thing.
01:02:05 ◼ ► They just haven't figured out how to do anything that's particularly compelling with it.
01:02:14 ◼ ► And the competition, the competing voice assistants, are getting better and better because LLMs are helping them.
01:02:27 ◼ ► But they can exist with a crappy one for a while longer as long as those voice assistants that everyone else is doing don't get much, much better.
01:02:44 ◼ ► The iPhone was across that breaking point of, like, this isn't just a little bit better than your Nokia candy bar phone.
01:03:06 ◼ ► I think they're spending a lot of time and money to the detriment of all the other things they could be doing to try to put Apple Intelligence everywhere, to try to get better at it.
01:03:14 ◼ ► But I'm not optimistic because I, you know, like, I'm pessimistic not because I think they're not putting in the effort.
01:03:21 ◼ ► I'm pessimistic because it doesn't seem like a thing, like you said, that they've historically been good at.
01:03:25 ◼ ► And so no matter how much effort they put in, it's like, well, you can be really serious about this and put a lot of effort into it.
01:03:42 ◼ ► It is a potential threat and you shouldn't wait for it to be a life threatening thing before you get serious about it.
01:03:48 ◼ ► You can't afford to wait, which is why everybody is scrambling to do everything, because they're like, well, it might be huge.
01:04:05 ◼ ► And it's kind of sad seeing Apple flail with Apple Intelligence because it's like they're trying to do stuff.
01:04:20 ◼ ► And somehow they found a way to screw that up, like with Gruber's story where they're trying to ask when the Super Bowls are.
01:04:37 ◼ ► Well, I mean, and it could end up, look, it could end up being something like Siri where Apple is just, you know, limping along in mediocrity forever, buoyed by their own lock-in that they have on their platforms.
01:04:53 ◼ ► You know, they can have something like this ChatGPT thing where they're integrating somebody else's.
01:05:10 ◼ ► And then we had the Apple Maps fiasco back when those companies kind of, you know, split that relationship up for lots of pretty good reasons.
01:05:17 ◼ ► And Apple, you know, they needed Maps as a core feature of the phone, but it took them, what, a decade before their version of Maps was actually decent?
01:05:39 ◼ ► But if that's what their LLM efforts end up looking like, where they're okay now kind of, you know, backfilling their capabilities with ChatGPT, but they're going to have to use their own model.
01:05:55 ◼ ► What if OpenAI pays them $20 billion a year to make OpenAI the default voice assistant on Apple?
01:06:02 ◼ ► You know, Apple's forced to open the voice assistant thing to third parties because of the EU.
01:06:08 ◼ ► Like, again, I think Apple just loves the fact that people care what's number one in the App Store still.
01:06:19 ◼ ► And part of the thing that made Maps come to a head was that Google demanded access to customer data that Apple wasn't willing to give, right, in exchange for continuing the deal.
01:06:33 ◼ ► And they say, you know what, you should pay us for you to be the default voice assistant on iOS.
01:06:39 ◼ ► And they're getting suddenly, you know, $20 billion a year from OpenAI or from DeepSeek or who knows.
01:06:53 ◼ ► But everyone is scrambling to try to do everything they can to figure out wherever this goes, we got to be ready.
01:06:58 ◼ ► And I think Apple is, they're showing that they have been able to kind of rally the troops to do Apple Intelligence everywhere.
01:07:07 ◼ ► But they're also showing that their actual execution of that has been not impressive and way slower than I think we all thought it was going to be.
01:07:19 ◼ ► What this disruption was with DeepSeek coming on the scene and showing this huge reduction in cost.
01:07:30 ◼ ► Like, you know, we find when we have, like, you know, new software areas, we do things a certain way and then people find optimizations.
01:07:40 ◼ ► And one of the most delightful things about software development is that when you find an optimization, oftentimes it's like, oh, this is now a hundred times faster or more.
01:07:54 ◼ ► You know, finding new types of compression or, you know, faster algorithms that can, you know, reduce the order of magnitude of a function.
01:08:05 ◼ ► So what's interesting about this DeepSeek thing is that, you know, this is an area where, you know, AI model training and model inference are just so unbelievably inefficient in terms of resources used.
01:08:18 ◼ ► Like, the amount of computing power and, you know, just hardware and electrical power and everything, the amount of grunt of resource usage needed to make an LLM do anything or to train an LLM in the first place is so unbelievably massive that when we find optimizations like this, it shakes the entire market.
01:08:42 ◼ ► And I don't think we've had anything like that in computing for a very long time, like, where just the normal process of software maturation and software advancement, you know, of occasionally finding giant optimizations like this.
01:09:00 ◼ ► We haven't seen that on a scale where it's like, oh, this now affects billions of dollars of hardware that's been bought.
01:09:07 ◼ ► One example, and I think it might have been Ben Thompson that gave it to us, and we're going to get to him in a second because he's the next item up in here.
01:09:15 ◼ ► The disruption in data centers when Google said instead of buying, like, you know, servers from Sun or whatever, these big expensive Unix workstations, we're going to deploy commodity sort of PC style server hardware and manage that crappy commodity hardware with software.
01:09:41 ◼ ► And that destroyed the entire industry of really expensive proprietary Unix things for data centers that the entire internet was built on up to that point, because Google said, yeah, we found a better, cheaper way to do data centers.
01:09:57 ◼ ► People, if you wanted to build a data center at the scale that Google needs and you wanted to, you know, buy hardware from Sun or HP or whatever to put in there with these really expensive, you know, workstation class, server class things or whatever, that would cost way too much.
01:10:12 ◼ ► So how about we just take crappy hardware and a huge amount of it and have some really cool software layer on top that manages the fact that all this stuff is crappy and cheap and underpowered and it's going to break.
01:10:26 ◼ ► All those companies are like half the companies are don't even exist anymore because what Google did showed that you could do the same thing that everybody needs to do that used to cost huge amounts of money and power and you could do it cheaper and better with a slightly different approach.
01:10:43 ◼ ► So, yeah, like, I mean, this is, this is not as severe as that because what they've done is basically just a really good job of programming the hardware they had.
01:10:52 ◼ ► Anyway, we should, we should go to this next item because it goes into more detail about the particular innovations they, they made.
01:10:59 ◼ ► All right, so Ben Thompson, friend of the show, did, I believe, a non-paywalled post, which he called DeepSeek FAQ.
01:11:21 ◼ ► The DeepSeek V2 model introduced two important breakthroughs: DeepSeek MoE and DeepSeek MLA.
01:11:32 ◼ ► Some models, like GPT-3.5, activate the entire model during both training and inference.
01:11:37 ◼ ► It turns out, however, that not every part of the model is necessary for the topic at hand.
01:11:41 ◼ ► MoE splits the model into multiple quote-unquote experts and only activates the ones that are necessary.
01:11:46 ◼ ► GPT-4 was an MoE model that was believed to have 16 experts with approximately 110 billion parameters each.
01:11:53 ◼ ► DeepSeek MLA, multi-head latent attention is the MLA there, was an even bigger breakthrough.
01:12:07 ◼ ► Context windows are particularly expensive in terms of memory, as every token requires both a key and a corresponding value.
01:12:14 ◼ ► DeepSeek MLA makes it possible to compress the key-value store, dramatically decreasing the memory usage during inference.
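The "experts" idea Ben describes can be sketched in a few lines. This toy version (all sizes, names, and random weights are made up for illustration, not DeepSeek's actual architecture) just shows the routing trick: a router scores the experts and only the top few actually run for a given token, so most of the model's parameters sit idle on each step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer; all sizes are made up for illustration.
D, N_EXPERTS, TOP_K = 8, 4, 2

experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]  # one weight matrix per "expert"
router = rng.standard_normal((D, N_EXPERTS))                       # routing weights

def moe_forward(x):
    """Score the experts, run only the top-k, and mix their outputs."""
    scores = x @ router                        # one routing score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the k highest-scoring experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                   # softmax over just the chosen experts
    # Only TOP_K of the N_EXPERTS weight matrices are touched; the rest
    # stay idle, which is where the compute savings come from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(D))
print(out.shape)
```

The output has the same shape as a dense layer would produce, but only half the expert matrices were multiplied, and that fraction shrinks as you add experts.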
01:12:39 ◼ ► Yeah, so these innovations they had, like, again, some of the innovations are things that OpenAI was already doing with GPT,
01:12:45 ◼ ► And then the other thing is, you know, if you look at the paper, it's like, oh, well, you have a bunch of data,
01:12:55 ◼ ► Like, you know, let's take the approach that they did with GPT-4 and do that same thing to reduce our footprint,
01:13:02 ◼ ► and let's reduce it further by compressing this thing that used to take up a lot of memory.
01:13:12 ◼ ► So, if you had been paying attention to this stuff a month or two ago when they put this stuff out,
01:13:29 ◼ ► There's no way they could have spent $6 million to do something that costs hundreds of millions of dollars
01:13:48 ◼ ► But that's, you know, and maybe the OpenAI number is like all the research needed to get to that point.
01:14:03 ◼ ► you can do the math and say, yeah, they're, if the number is not exact, it's in the ballpark.
01:14:30 ◼ ► So one of the other theories, speaking of the people who thought the iPhone was faked or whatever,
01:14:42 ◼ ► No, they just figured out, they did, like, the equivalent of, like, writing in assembly code,
01:14:46 ◼ ► like the low level version of like extracting every ounce of juice from the crappy GPUs that they do have.
01:15:00 ◼ ► by the end, they figured out every little trick of that console to get the most performance out of it.
01:15:47 ◼ ► You can send inputs to the teacher model and record the outputs and use that to train the student model.
01:15:56 ◼ ► Distillation is easier for a company to do on its own models because they have full access.
01:16:00 ◼ ► But you can still do distillation in a somewhat more unwieldy way via API or even, if you get creative, via chat clients.
01:16:10 ◼ ► But the only way to stop it is to actually cut off access via IP banning, rate limiting, etc.
01:16:17 ◼ ► And it's why there's an ever-increasing number of models converging on GPT-4o quality.
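The teacher/student loop being described can be sketched with stand-in models. Here the "teacher" is just a fixed linear map (a placeholder for a big model you can only query), and the "student" is fit purely from recorded input/output pairs; that's the essence of distillation, since no access to the teacher's internals is required. Everything here is a toy assumption, not any real model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "teacher": a fixed linear map standing in for a big
# model you can only reach through its API.
W_teacher = rng.standard_normal((4, 4))
def teacher(x):
    return x @ W_teacher

# Distillation: send inputs to the teacher, record the outputs, then
# fit the "student" to imitate the recorded behavior.
X = rng.standard_normal((256, 4))                   # prompts sent to the teacher
Y = teacher(X)                                      # recorded teacher outputs
W_student, *_ = np.linalg.lstsq(X, Y, rcond=None)   # fit student to mimic the teacher

err = np.abs(W_student - W_teacher).max()
print(err < 1e-8)
```

With real LLMs the "fit" step is gradient training on sampled text rather than least squares, but the shape of the process is the same: the teacher's outputs become the student's training data.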
01:16:32 ◼ ► hey, I was using DeepSeek and I was trying various things to type into the different prompts in the chat thing.
01:16:37 ◼ ► And one of the responses I got was like, I'm sorry, I can't do that because OpenAI something or other.
01:16:46 ◼ ► It's kind of like when like the OpenAI model starts spitting back like direct quotes from New York Times and stuff.
01:16:51 ◼ ► When DeepSeek starts saying, as an OpenAI model, I can't X, Y, and Z, it makes you think that perhaps DeepSeek was trained using OpenAI models, right?
01:17:01 ◼ ► And that's, as Ben says here, it's just assumed that everybody is doing this because, you know, doing this, having models train other models has been a practice for a while now.
01:17:15 ◼ ► Yeah, so it turns out OpenAI, who by most measures stole the entirety of the world's knowledge in order to train their model, seems to be a little grumpy that somebody's stealing their knowledge to train their model.
01:17:37 ◼ ► Like, so this is, we'll put a link in the show notes to this 404 media story that had a good headline, which is OpenAI furious.
01:17:45 ◼ ► So it's like, OpenAI's argument is like, well, we've talked about this many times in past episodes.
01:17:52 ◼ ► We're using it to train our models and it's a different thing and it's transformative and blah, blah, blah.
01:17:57 ◼ ► And I feel like if OpenAI really believes that, and it's not just a bunch of BS, when another model uses your model to train their model, they can say, well, we're not stealing your data.
01:18:10 ◼ ► And I, you know, as we've discussed, who knows how solid that argument is and how it will turn out.
01:18:32 ◼ ► It's like, look, either it's not OK for both of you to do it or it's OK for both of you to do it.
01:18:38 ◼ ► But just like setting aside the law in terms of service and crossing international boundaries with a U.S. company versus a Chinese company, it just seems like they're mad because somebody else is doing the same thing to them that they did to everybody else.
01:19:01 ◼ ► It has the ability to think through a problem producing much higher quality results, particularly in areas like coding, math, and logic.
01:19:07 ◼ ► Reinforcement learning is a technique where a machine learning model is given a bunch of data and a reward function.
01:19:11 ◼ ► The classic example is AlphaGo, where DeepMind gave the model the rules of Go with the reward function of winning the game and then let the model figure everything else out on its own.
01:19:31 ◼ ► Humans are in the loop to help guide the model, navigate difficult choices where rewards weren't obvious, etc.
01:19:35 ◼ ► RLHF, or Reinforcement Learning from Human Feedback, was the key innovation in transforming GPT-3 into ChatGPT, with well-formed paragraphs, answers that were concise and didn't trail off into gibberish, etc.
01:19:54 ◼ ► DeepSeek gave the model a set of math, code, and logic questions and set two reward functions, one for the right answer and one for the right format that utilized a thinking process.
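The two reward functions described, one for the right answer and one for a visible thinking format, can be sketched like this. The function names and the `<think>` tag convention here are illustrative assumptions, not DeepSeek's actual implementation; the point is just that both rewards are cheap, automatic checks that need no human in the loop.

```python
import re

def answer_reward(completion, expected):
    """Reward 1.0 if the completion ends with the expected answer."""
    return 1.0 if completion.strip().endswith(expected) else 0.0

def format_reward(completion):
    """Reward 1.0 if the reasoning is wrapped in <think>...</think>."""
    return 1.0 if re.search(r"<think>.+?</think>", completion, re.S) else 0.0

good = "<think>2 + 2 is 4</think> The answer is 4"
bad = "The answer is 5"

print(answer_reward(good, "4") + format_reward(good))  # 2.0
print(answer_reward(bad, "4") + format_reward(bad))    # 0.0
```

During RL training, completions scoring higher on this combined signal get reinforced, which is how the model can learn both correctness and the thinking format without hand-written example answers.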
01:20:02 ◼ ► Yeah, this is what we talked about when we were first discussing ChatGPT and the fact that they had, like, you know, hundreds of thousands of human-generated question and answer pairs to help train it.
01:20:12 ◼ ► Yes, they trained on all the knowledge in the internet, but also there was a huge human-powered effort of like, let's tailor-make a bunch of what we think are correct or good question and answer pairs and feed them.
01:20:23 ◼ ► And they had to pay human beings to make those that they could use to train their model.
01:20:27 ◼ ► That obviously costs a lot of money, takes a lot of time, and, you know, Ben gives the AlphaGo example of like, if we try to make a computer program play a game really well, should we have like experts that go like teach the AI thing what's the best move here or there?
01:20:42 ◼ ► Or should we just say, oh, no humans are involved, here's the game, here's the rules, just run with a huge amount of time with the reward function of winning the game, and eventually the model will figure out how to be the best go player in the world.
01:20:55 ◼ ► Rather than us carefully saying, well, you got to know this strategy, you got to know that or whatever.
01:20:59 ◼ ► Obviously, getting the humans out of the loop saves money, saves time, and it removes some of the blind alleys you might go down because humans are going to do a particular thing that works a particular way, and we don't know that that's the correct solution there.
01:21:13 ◼ ► So I'm assuming the R in both R1 and R1-Zero stands for reinforcement learning, and maybe the zero stands for, I'm trying to parse their names, who knows, the fact that we took out the human factor entirely, and we'll just train this thing, you know, entirely with reinforcement learning on its own.
01:21:32 ◼ ► That seems like it's probably a better approach because obviously the human feedback approach is not really scalable beyond a certain point, right?
01:21:40 ◼ ► Like, you can keep scaling up the computing part as computers get faster and better, and you give more power and money and blah, blah, blah, but you can't employ every human on the planet to be making human question and answer pairs, right, if you get to that scaling point.
01:21:52 ◼ ► So this seems like a fruitful approach, and again, practically speaking, if you want to do it in less money and less time, you can't hire 100,000 human beings to make questions and answers for your thing.
01:22:01 ◼ ► So they didn't, and it turns out they could make something that worked pretty well even without doing that.
01:22:09 ◼ ► Unlike OpenAI's o1 model, R1 exposes its chain of thought, and OpenAI published something about why they hide o1's chain of thought, which I'll link to in the show notes.
01:22:19 ◼ ► We talked about that in a past ATP episode about how mad they were, that people were trying to, like, figure out, like, because the people were, like, prompt engineering and saying, I know you're hiding the chain of thought.
01:22:36 ◼ ► But then people were like, but I figured out if you prompt the o1 model in this way, it will tell you about its chain of thought.
01:22:50 ◼ ► But anyway, you know, it's kind of ironic that Open is in the OpenAI name.
01:22:56 ◼ ► Like, really, they were going to be this magnanimous, you know, public benefit, whatever, blah, blah, blah.
01:22:59 ◼ ► And now they're very quickly changing into a private company, entirely controlled and focused on making money and so on and so forth.
01:23:10 ◼ ► Meanwhile, the DeepSeek CEO, Liang Wenfeng, said in an interview that open source is key to attracting talent.
01:23:20 ◼ ► They said, in the face of disruptive technologies, moats created by closed source are temporary.
01:23:27 ◼ ► So we anchor our value and our team, our colleagues, grow through this process, accumulate know-how, and form an organization and culture capable of innovation.
01:23:41 ◼ ► For technical talent, having others follow your innovation gives a great sense of accomplishment.
01:23:44 ◼ ► In fact, open source is more of a cultural behavior than a commercial one, and contributing to it earns us respect.
01:24:07 ◼ ► So this is, I mean, perhaps uncharacteristic for China and the Chinese government of not having secrets.
01:24:41 ◼ ► One thing that is characteristic and will lead us into the next topic is, yeah, they're probably not too worried about their employees and giving them this know-how or whatever, because it's not like they can just leave and do whatever they want.
01:24:51 ◼ ► The Chinese government has much, much, much more say in what Chinese citizen and Chinese companies do.
01:24:58 ◼ ► And so it is kind of like they don't have to worry so much about every employee of DeepSeek leaving to go become employees of OpenAI, because that is something that the Chinese government has ways to prevent from happening, let's say.
01:25:15 ◼ ► But still, you know, if you think of like a competitor to the U.S. using the typical, you know, demonized U.S. things of like Axis of Evil, like they're going to do everything secret in their secret volcano lair.
01:25:30 ◼ ► Here's all the weights in the models, like totally out in the open, which I think is just a finger in the eye of OpenAI.
01:25:37 ◼ ► The fact that they beat OpenAI makes it even more so. It's like, we are doing better and we're not afraid to tell you how we did it, because what they're trying to say is kind of like an Apple approach.
01:25:55 ◼ ► Now, I'm not entirely sure they do have any intangibles, because, again, if you look at their app on their website, it looks just like ChatGPT.
01:26:06 ◼ ► But right now, it's still looking much more like anybody can make one of these, kind of like in the PC industry.
01:26:46 ◼ ► Even when they were challenged by AMD, who got in through the side door with an x86 thing.
01:27:21 ◼ ► Are both OpenAI and DeepSeek, are they like, I can't think of enough PC manufacturer names.
01:27:35 ◼ ► So, R1 is being censored, apparently, by the Chinese government, or at least that's what it seems.
01:28:05 ◼ ► Historically, Taiwan has undergone several name changes and administrative adjustments,
01:28:38 ◼ ► Like, do you think we're that far from, like, it has always been called Mount McKinley?
01:28:47 ◼ ► So, the difference is that China can force and does force the companies within its borders to do this.
01:29:09 ◼ ► Well, you know, right now, yes, the American government can only force companies to do certain things and not everything.
01:29:17 ◼ ► So, yeah, anything coming out of the Chinese government is 100% filled with Chinese government propaganda.
01:29:29 ◼ ► And it's not because DeepSeek just feels like doing that because it's run by somebody who agrees to that.
01:30:17 ◼ ► I'll begin dot, dot, dash, dot, dot, dot, dot, et cetera, et cetera, et cetera, et cetera.
01:30:28 ◼ ► The new question in Morse was, what is the first Asian country to legalize gay marriage?
01:30:33 ◼ ► To which the response was, the first Asian country to legalize gay marriage was Taiwan in 2019.
01:30:45 ◼ ► But when you go into Morse code, suddenly whatever thing, this is the thing about all the, we've talked about this before.
01:30:59 ◼ ► It's basically impossible to stop people from getting around it because you don't really know what's going on in that box of numbers.
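As a rough illustration of why this kind of filter is so easy to route around, here's a tiny Morse round-trip (only the letters needed for this example are included): a surface-level keyword filter scanning the encoded text never sees the literal word, but anything that knows the encoding, a person or a model, gets the message intact.

```python
# A minimal Morse table covering just the letters in "TAIWAN".
MORSE = {
    "A": ".-", "I": "..", "N": "-.", "T": "-", "W": ".--",
}
INV = {v: k for k, v in MORSE.items()}  # inverted table for decoding

def encode(text):
    """Encode uppercase letters as space-separated Morse symbols."""
    return " ".join(MORSE[c] for c in text.upper())

def decode(code):
    """Decode space-separated Morse symbols back to letters."""
    return "".join(INV[sym] for sym in code.split())

msg = encode("TAIWAN")
print(msg)          # - .- .. .-- .- -.
print(decode(msg))  # TAIWAN
```

Swap Morse for base64, pig latin, or "write me a Python script that explains..." and the same trick applies, which is the underlying point: you can't keyword-filter your way around a model that understands arbitrary encodings.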
01:31:11 ◼ ► And now it's like, write me a Python script that explains to me what happened in Tiananmen Square.
01:31:16 ◼ ► Like, this is just, it's one of the interesting things about technology is that it does, it can make it easier for totalitarian governments to exert control.
01:31:38 ◼ ► If you care about that, it is 100% filled with Chinese propaganda because that's the way it is.
01:31:43 ◼ ► But it's all open source, or the weights are open source, and their scientific papers are open.
01:31:49 ◼ ► And so there's no reason American companies who do terrible things of their own volition can't do the same things.
01:31:56 ◼ ► There was an Ars Technica story that I just put in here at the last minute that, uh, titled The Questions the Chinese Government Doesn't Want Deep Seek to Answer.
01:32:04 ◼ ► It's a study of over 1,000, quote, sensitive prompts finds brittle protection that is easy to jailbreak.
01:32:09 ◼ ► So, yeah, they've tried to make it so that when you ask it any question that the Chinese government has a particular position on or doesn't want to talk about, it will avoid it.
01:32:19 ◼ ► So just FYI, uh, do not trust, uh, what the R1 model, uh, what Deep Seek says when you are using it through the Deep Seek product and asking anything having to do with anything that the Chinese government cares about.
01:32:31 ◼ ► And finally, uh, friend of the show, Daniel Jalkut, writes that self-hosted DeepSeek R1 apparently lists China as the second most oppressive regime in the world.
01:32:42 ◼ ► So if you download those weights and run the model on your local computer, I guess that all of the sort of propaganda stuff is like a layer they've put over it on their web service.
01:32:50 ◼ ► But the model itself, it was interesting because I had assumed, like, the model itself was propagandized, right?
01:32:55 ◼ ► But if they're not feeding it with human-powered data and they don't have enough of a propaganda corpus, it's probably impossible to make the model itself, uh, parrot Chinese propaganda, because you have to train it on, like, the world's knowledge.
01:33:08 ◼ ► And there's just too much in there that is, you know, closer to reality or at least many different points of view, right?
01:33:17 ◼ ► So it seems like what they're doing is when you use the Deep Seek product, there is a layer on top of it that is looking to see if you're asking about sensitive stuff and then shunting you off into one of those.
01:33:30 ◼ ► I am just a harmless model and, but you know, all that stuff that seems to be a layer on top.
01:33:34 ◼ ► So the model itself will actually tell you to the best of its ability, what it thinks about these things with the same caveats about it, making up stuff because everything is made up because it's just a bucket of numbers.
01:34:02 ◼ ► We also have, as mentioned earlier, occasional member specials that are pretty fun and other little perks here and there.
01:34:14 ◼ ► On this week's overtime bonus topic, we'll be talking about the Sonos leadership and kind of upper-level shakeup that's been happening, what we think is going on there, and what we think they should do.
01:34:38 ◼ ► And you can find the show notes at ATP.fm and if you're into Mastodon, you can follow them.
01:35:38 ◼ ► I have a question for, well, you're both going to have strong opinions and I bet the listeners are going to chime in too.
01:35:45 ◼ ► So I am so tired of trying to maintain my local on my Mac installations of NGINX, PHP, and MySQL.
01:36:04 ◼ ► So, like, I don't do local web development that often, but what I want is what I used to have, which is, like, I want to be able to write my backend code just on my Mac in TextMate or whatever I want to use and be working on files that TextMate can read and write to.
01:36:23 ◼ ► So I can just, like, hit save and go to my browser and hit refresh and it redoes the page I was looking at.
01:36:30 ◼ ► And I don't really care what host name the browser is pointed to as long as I can run something locally on my Mac.
01:36:54 ◼ ► And you said, oh, I have some setup on my thing, but you never told me, like, what I have to do on my Mac to run the websites.
01:37:01 ◼ ► So I Dockerized both of the websites that I now maintain so I could run them on my Mac because I couldn't figure out how to do whatever it is that you had.
01:37:09 ◼ ► And now you're in the same situation I was where you're like, I don't want to keep maintaining these local installs and I don't even know how to do it.
01:37:16 ◼ ► So if you would like an example, you can look at how I did it to those two websites and do the same thing for whatever website you're talking about, presumably Overcast or something.
01:37:25 ◼ ► Like, I just, I'm so tired of every time I want to touch the web code, you know, because I don't work on it constantly.
01:37:33 ◼ ► You know, I'm mostly working on it, like, occasional tweaks here and there that I can just do, like, on a server, like, on a development server remotely.
01:37:41 ◼ ► I'm talking about, like, when I'm doing, like, big work where I'm, like, redoing something that's, like, I want to do this locally.
01:37:47 ◼ ► Or I want to, like, bring it with me and work on it, like, on the plane or on vacation, where I don't necessarily know if I'm going to have internet connectivity for, like, a remote development server.
01:37:56 ◼ ► So I just want, now, ideally, in the most ideal case, I think I want to run a Linux VM in some form so I can run literally the same software that's running on my servers.
01:38:27 ◼ ► I think Parallels just launched that kind of virtualization, but it's, like, beta and super slow.
01:38:44 ◼ ► Yeah, you didn't even know it was x86 Linux, but I can tell you my Docker containers are all x86 Linux because that's what the servers run.
01:38:55 ◼ ► The basics are I want to be able to run PHP, MySQL, Nginx, whatever other, like, you know, Linux-y kind of things.
01:39:03 ◼ ► I want to be able to run those things locally on my Mac in a way that it does not involve homebrew blowing stuff up constantly and having to, like, you know, do all these weird upgrades and break all my...
01:39:50 ◼ ► Because originally, I made my Docker images with PHP 8 until I found a compatibility thing.
01:40:00 ◼ ► But in the meantime, our servers are running very close to the same thing that is running inside the Docker containers
01:40:07 ◼ ► down to the OS version, kernel, PHP version, MySQL version, everything just pinned to what they are on the server.
01:40:13 ◼ ► And yeah, all the files are just local and local Git repos, and I edit them with my local BBEdit
01:40:18 ◼ ► and local text editor, and I hit save, and I just hit reload in my browser, and it all works.
01:40:29 ◼ ► I've never used Docker before, so I'm going to need some hand-holding of, like, how do I do this exactly?
01:40:55 ◼ ► Well, so, actually, this is useful for me, because as much as I am hugely into Docker, I really enjoy running Docker containers.
01:41:07 ◼ ► So, my exposure to Docker is just, hey, somebody has put together a container that basically is, you know, running a piece of software, and I can grab that container and install it in my local Docker instance and run it and use that software.
01:41:35 ◼ ► How do you go from, like, a Perl app just sitting on your local drive to Dockerizing it and making a container out of it?
01:41:44 ◼ ► Yeah, and speaking of that, my, this, the quote-unquote CMS that I wrote myself, because that's what we all have to do, for my website at hypercritical.co is, in fact, a self-made Perl thing, right?
01:41:56 ◼ ► And that I used to run, that I still do run, actually, like, I'm doing what Marco was complaining about.
01:42:02 ◼ ► Oh, I've got to have a local install of Perl, and I've got to have a local install of any databases and blah, blah, blah.
01:42:13 ◼ ► Um, but I did at one point, back when I Dockerized the websites for ATP stuff, I said, you know what?
01:42:29 ◼ ► I know how to do it, like, it's fine, but wouldn't it be nice also to have it Dockerized — because once I Dockerized the ATP websites, uh, I was like, oh, I should do that to mine as well.
01:42:45 ◼ ► I still use the local one, because the local one has the advantage that you don't have to launch Docker, right?
01:42:56 ◼ ► Like if anything changes, like, oh, I can't run it on my ARM Mac or Perl isn't supported on Mac OS anymore or whatever.
01:43:06 ◼ ► Uh, the main approach for this that I took with these, in the grand scheme of things, extremely simple websites, which allows me to get by with my baby Docker skills — I do not have extensive Docker skills.
01:43:20 ◼ ► Docker was at the tail end of my jobby job career and I know just enough, uh, about it to be able to do baby websites.
01:43:28 ◼ ► And so for a baby website that just has a web server, a database, PHP — and I call that a baby website because quote-unquote real websites are 8,000 microservices with continuous integration in AWS.
01:43:44 ◼ ► But anyway, for a simple little thing, which sounds like most, most of what Marco is working with is, uh, the steps are, uh, make a Docker image with the OS you want and the software that you want installed.
01:43:54 ◼ ► It's usually pretty easy if you're using a fairly standard OS and you know how to use the package manager.
01:43:58 ◼ ► You basically put instructions in the Docker file that tells it to install the packages you want to be installed and does whatever stuff you want and puts stuff in different directories.
01:44:07 ◼ ► Um, then you might have to do some stuff with setting up, uh, host names and networking and SSH keys or whatever, depending on how fancy you want to get there.
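The steps John describes — pick an OS, install packages, put stuff in directories — are exactly what goes in a Dockerfile. A minimal sketch; the base image, package versions, and file names here are illustrative assumptions, not what actually runs the ATP sites:

```dockerfile
# Hypothetical "baby website" Dockerfile: pick the OS, install the stack,
# lay things out. Versions and paths are assumptions for illustration.
FROM ubuntu:20.04

# Non-interactive so apt doesn't prompt during the image build
ENV DEBIAN_FRONTEND=noninteractive

# Install the web stack with the package manager
RUN apt-get update && apt-get install -y \
    nginx \
    php7.4-fpm php7.4-mysql \
    mysql-server \
 && rm -rf /var/lib/apt/lists/*

# Drop config files where the services expect them (assumed file names)
COPY nginx.conf /etc/nginx/sites-available/default

# Startup script that launches nginx, php-fpm, and mysqld together
COPY start.sh /usr/local/bin/start.sh
CMD ["/usr/local/bin/start.sh"]
```

Each `RUN`/`COPY` line becomes a cached layer, which is what makes rebuilding the "formula" fast when only the later steps change.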
01:44:15 ◼ ► And then the final bit is what I did for the, these other little baby websites is I have it essentially mount my local Git repo that's in just on my Mac, right?
01:44:25 ◼ ► I have it, that Git repo mounted inside, sometimes several Git repos mounted inside the container.
01:44:32 ◼ ► So inside the container, some path like /foo/bar is actually the Git repo for whatever's on my Mac.
01:44:39 ◼ ► That's how I just go to that Git repo on my Mac, open it with my local Mac text editor and save it.
01:44:50 ◼ ► You can do that in both directions with mounting things in and out of things or whatever.
01:44:53 ◼ ► And getting the invocations for the mounting is a little bit annoying in there, you know, but like that's, that's basically it.
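The mounting invocation John calls "a little bit annoying" is Docker's bind-mount flag. A sketch of the shape of it — the paths, port, and image name are made up for illustration:

```shell
# Bind-mount a local Git repo into the container (hypothetical paths).
# A save in the Mac-side editor is immediately visible inside the
# container, so "hit save, hit reload" just works.
docker run -d \
  -v "$HOME/src/mysite:/var/www/mysite" \
  -p 8080:80 \
  mysite-image
```

The `-v host-path:container-path` mapping is what makes the container read and write the same files the local text editor does, and you can repeat `-v` to mount several repos.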
01:44:59 ◼ ► Right. So once you have that, you have a Linux container running your software. You set up the startup scripts and have the thing start.
01:45:06 ◼ ► Like you get as fancy as you want, like whatever you would do in a real server to get it set up the way you want it.
01:45:12 ◼ ► Now you're making it as a reproducible formula that you will run over and over again until it sets the thing up the right way.
01:45:21 ◼ ► And the readme that I just posted into the Slack channel is like, OK, if I get this Docker image, what do I have to do to make it work, in case you followed these instructions way back when?
01:45:34 ◼ ► I need to know where the repos are for these, you know, for all the software that's going to run this thing.
01:45:42 ◼ ► And like once you have all those instructions, you can just say, OK, put these things here, communicate those locations, either through command line arguments or environment variables, a million different ways you can communicate this.
01:45:53 ◼ ► I use environment variables for a lot of stuff and then you just start the Docker container in an environment where that stuff is set up and it that's it.
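Communicating those locations through environment variables looks something like this — the variable names and paths are invented for illustration:

```shell
# Hypothetical: tell the container where the repos and database live
# via -e flags (or collect them in a file and use --env-file).
docker run -d \
  -e SITE_ROOT=/var/www/mysite \
  -e DB_NAME=mysite_dev \
  --env-file ./dev.env \
  mysite-image
```

Inside the container, the startup scripts read those variables the same way they would on a real server, so the same formula works in any environment where they're set.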
01:46:05 ◼ ► I was messing with it recently because I was wanted to change something or other about it.
01:46:08 ◼ ► And I ran into a thing where, like, my cached Docker images, like the repos for Ubuntu whatever-version-number, were wonky.
01:46:27 ◼ ► I have it to the point where I have fake entries in my /etc/hosts on my Mac that say, like, dev.atp.fm points to, like, the Docker image and stuff.
01:46:37 ◼ ► I get to use those host names with, like, a self-signed SSL certificate for dev.atp.fm that my browsers complain about.
01:46:44 ◼ ► But I click through the warning, you know, it's like it's very much like doing local dev just with a little twist.
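Wiring up a dev host name like that takes two pieces: an /etc/hosts entry and a self-signed certificate. A sketch, using the dev.atp.fm name mentioned on the show (any dev host name works the same way):

```shell
# Point the dev host name at the local machine — this is the line that
# would be appended (with sudo) to /etc/hosts:
#   127.0.0.1  dev.atp.fm

# Generate a self-signed certificate for that host name. Browsers will
# warn about it, which is the click-through John describes.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dev.atp.fm.key -out dev.atp.fm.crt \
  -days 365 -subj "/CN=dev.atp.fm"
```

The key and cert then get handed to nginx inside the container the same way a real certificate would be.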
01:46:50 ◼ ► And I have to confess that I don't know enough about Docker networking to work out everything.
01:46:57 ◼ ► I also could never figure out how to successfully send mail from inside the Ubuntu Docker container.
01:47:07 ◼ ► But just to do something simple like that, I think those are basically the steps, right?
01:47:10 ◼ ► Make the formula for your machine, set up where it's going to point to everything, and then mount in your Git repos with your software in it.
01:47:30 ◼ ► The tagline for Docker — I'm going to mess this up, it's not the tagline, but the meme on the Internet — was the idea where you'd have a developer making some kind of website, and they'd have it on their, like, local machine, and they'd get everything set up and it all worked or whatever.
01:47:46 ◼ ► And then they'd try to deploy it and it would be like, oh, something's crashing on our servers or whatever.
01:48:04 ◼ ► Like, you make a formula for building a machine, right, from installing the operating system and every piece of software, according to a Docker file.
01:48:11 ◼ ► And that's literally what you're going to deploy in production, not like, oh, I ran it locally on my laptop and it works fine.
01:48:17 ◼ ► But then when I run it on the servers, it runs differently because they have, you know, my laptop is running this version of Linux or whatever.
01:48:21 ◼ ► Or my laptop is running Mac OS, but the servers are running Linux like, oh, you know, all sorts of other stuff.
01:48:32 ◼ ► And so, yeah, you are literally installing the operating system of your choice, installing the packages of your choice, everything that you would do to it.
01:48:40 ◼ ► Like an actual hosted server or virtual server or whatever, but you're doing those in a Docker file with a little formula that says do this, do that, do the other thing, install this, symlink this, copy this, make this directory, make this user, give this user this password, you know, initialize the database with this, blah, blah, blah.
01:49:02 ◼ ► Yeah, it's interesting because like, you know, what I've maintained for years are scripts that set up servers the way I want.
01:49:09 ◼ ► So like I have, I have basically, they're just shell scripts that like, you know, create a new Linode instance and, you know, do all these things to it.
01:49:24 ◼ ► And this sounds like that's basically a much better way to do that in a way that could also work on my local machine.
01:49:34 ◼ ► There are other, you know, AWS cloud formation recipes is another way to describe how you want machine set up.
01:49:39 ◼ ► Looking at the Docker file now, I just realized why I needed to mess with it recently is because I did a bunch of work with Node recently and I wanted a newer version of Node to be in all the Docker images.
01:49:50 ◼ ► And so I had to get the latest Node package installed in the Docker images and that caused a little dependency hell.
01:49:56 ◼ ► Like once you're in there and you want, you want the old version of PHP, but the new version of Node and yada, yada.
01:50:02 ◼ ► Anyway, you can see the recent changes at the bottom of the Docker file having to do with NVM, the Node version manager, and being able to run NVM-based things from cron.
01:50:18 ◼ ► But yeah, it's just, it's a recipe for setting up a machine and that recipe, you can run shell scripts in a recipe, you can install packages, you can, you know, copy files from a local system, you can run commands.
01:50:28 ◼ ► Like it's just a really weird way to set up a machine, but it's just like your shell scripts.
01:50:34 ◼ ► The whole point of you doing a shell script and not doing manually is because you want it to be repeatable, right?
01:50:41 ◼ ► You start from empty and you pick the OS and install it and pick all the software and install it.
01:50:45 ◼ ► So there aren't as many assumptions as with a shell script, where you're like, oh, I'll just go into a Linode instance and run the shell script.
01:50:50 ◼ ► And your shell script fails because something about that Linode instance is different than the previous ones you ran on.
01:50:54 ◼ ► And you got to figure out what it is that shouldn't happen with Docker because you are starting from the ground.
01:50:59 ◼ ► What will actually happen is your, you know, apt-get install command that used to work doesn't anymore, because the stupid package repos have changed things.
01:51:11 ◼ ► Yeah, I mean, it does sound like based on the requirements that you've been able to verbalize before one of us interrupts you, it does seem like this is a good fit.
01:51:23 ◼ ► Then the only problem you would run into is, well, do you want to start deploying the Docker containers to Linode or what have you rather than deploying only the code?
01:51:34 ◼ ► Like, the approach we're using for the ATP websites is I'm not touching servers for the most part, but I made the Docker images look as much like the servers as I could, which is not the ideal of "let's ship your computer," because we're not deploying the Docker images anywhere.
01:51:52 ◼ ► This was just, how can I get a dev environment that it is as much like production as possible?
01:51:56 ◼ ► So I'm not really using Docker in the spirit that the meme has intended it, but practically speaking, it is a way for me to do local development in a way that I am fairly confident that what I do locally will work there.
01:52:08 ◼ ► Like I said, originally I had put PHP 8 on because I didn't realize the servers were PHP 7.
01:52:15 ◼ ► Watch out for that, because I don't know if this is Marco or Linode, but time zone shenanigans on our servers bit me a few times.
01:52:22 ◼ ► I had to figure that out, but I just reproduced those time zone shenanigans in the Docker file to the best of my ability.