00:00:00 ◼ ► So there's a handful of people that I'll schedule like a monthly FaceTime call with, and most of them, you know, almost all of them, in fact all of them, are not local.
00:00:08 ◼ ► And then there's a handful of people that I try to do lunch with like once a month. And my good friend Sam, he and I had our monthly lunch today and we went to a place, but I had a problem.
00:00:20 ◼ ► During lunch, there was music outside, which was good, but the Jack Brown Burger Joint trolled me because they were playing a Phish album during the entire lunch.
00:00:36 ◼ ► And all I could do was think about how happy you would be if you were there or if you at least knew this was happening as it was happening.
00:00:45 ◼ ► In retrospect, I should have like FaceTimed you or something just to be like, "Listen to this junk." The worst part of all, the worst part of all, you could tell me what songs I heard and I would probably be like, "Sure?"
00:00:58 ◼ ► But there were a couple of songs that even I recognized as Phish songs. Like, you know, not only did it have the vibe of Phish, but I had, like, heard the songs before and recognized them.
00:01:09 ◼ ► And I forget which ones they were. The only one I know by name is "Bouncing Around the Room," and that was not it. But there was one, and I'm sure this is describing half of Phish's catalog, where it was repeating the same phrase over and over again, and it was very catchy.
00:01:26 ◼ ► I'm kind of proud of you that you recognized that it was Phish, because I'm not sure I could do that. I don't know any of their songs. Maybe I could pick it up based on vibe, but I don't think I've even heard that much. So, like, when are you listening to Phish so much that you recognize songs?
00:01:41 ◼ ► I can give you a good heuristic, John. If you hear a song that you don't recognize, you don't think you've ever heard it on the radio before, look around the room, and if the whitest guy in the room is slightly bopping his head to it...
00:02:02 ◼ ► It could be anything. I don't know. I think my chances of spontaneously recognizing Phish, like you're at a restaurant and there's music playing in the background, I think my odds are very low. I guess I'd have to look for somebody with the little red blood cell pattern on their clothing, and if they were bopping to it or something, then I could figure it out.
00:02:24 ◼ ► Now I know what that is. That's the one thing I can recognize. Marco taught me what that is and now I see it on people's license plate surrounds. I'm like, "Oh, one of them."
00:02:32 ◼ ► Anyway, the worst part, Marco, the worst part of this entire lunch and about the only bad part of this lunch, because I really do enjoy Sam so very much.
00:02:49 ◼ ► So we have a new member special. We have gone back to the well and we have done another ATP tier list. John, can you remind us all what is a tier list?
00:02:59 ◼ ► I can't remind you all, because everybody knows what a tier list is, except for old people who listen to this podcast, but then they've also heard the specials before. So it's a tier list. You rank things, you put them in tiers. Multiple things can be in a single tier. The top tier is S. Why? Nobody knows, except somebody knows, but we don't really care. The point is it's better than A. It's a tier list.
00:03:22 ◼ ► And we graded all the iPods, or at least most of them, anyhow. And so I am pretty confident that we did a pretty good job on this. There was a little bit of horse trading involved, but I'm pretty happy with where we ended up.
00:03:34 ◼ ► We made a handful of people that we know very upset. And I'm sorry that you're upset, but we're right. So if you are curious to hear this tier list or any of the others, you can go to ATP.FM/join. And if you join even for about a month, but you should do more, then you can get to all of the member specials.
00:03:53 ◼ ► We've been trying to do one a month for what, like a year or two now? I forgot exactly how long it's been, but we've racked up a fair number over the course of the last several months.
00:04:01 ◼ ► There's a handful of tier lists. We do ATP Eats, among other things. There's a lot of good stuff in there and some silly stuff.
00:04:09 ◼ ► So ATP tier list. And if you are a member and you would like to watch the tier list happen, which is not required, but is occasionally helpful, there is a super secret YouTube link in the show notes for members where you can go and watch it on YouTube as well.
00:04:24 ◼ ► Please do not share that. It's the honor system, but you can check it out there as well.
00:04:32 ◼ ► When you go to the iPod tier list member special, look in the show notes, the first link will be the YouTube video.
00:04:36 ◼ ► I like these tier lists because they always, we always seem to, I think they reveal something about the things that we are ranking.
00:04:42 ◼ ► Something that we, at least I usually didn't know going in. You think, oh, you're just going to rank them and people are going to, you know, have controversies over which is good and which is bad.
00:04:50 ◼ ► But I think in the end, when you look at the whole tier list and you kind of look at the shape of it and how it's worked out and how contentious the choices would be, you learn something about it.
00:04:58 ◼ ► Like I think our connectors tier list was like that. And I think the iPod one turned out like that too.
00:05:02 ◼ ► And the reason we made some people angry is because we know a lot of really weird tech people with very specific and often very strange opinions specifically about iPods.
00:05:13 ◼ ► Like they have their reasons. At least most of them have reasons that make some sense. I think one of the things we learned, not to spoil too much, is that a lot of people have, you know, all the things that we put in tier lists, people can have personal sentimental reasons for.
00:05:32 ◼ ► And I think iPods, more than anything we've done before, like the people who had opinions, they skewed heavily into the sentimental.
00:05:39 ◼ ► Right? It was, you know, it was like, this was my first iPod. I really love this thing. Right?
00:05:45 ◼ ► Much more so than the past tier list we've done. So I think, you know, maybe the iPod at that point was the most personal product Apple had ever made.
00:05:54 ◼ ► Yeah. I mean, honestly, like I had a lot of fun with this one because like even though I hardly ever really used iPods because by the time I could really afford decent iPods,
00:06:06 ◼ ► it was only very shortly before the iPhone really took over. So I only really had a couple of years with iPods. But those couple of years, I really liked the iPods.
00:06:14 ◼ ► And this was actually fun. So, and for, you know, coincidence sake, I happen to have bought a couple of iPod Nanos off of eBay a couple of years back just to kind of play around with.
00:06:26 ◼ ► And I took them out the other night after we recorded this episode and charged them up, well, the ones that would accept a charge at least, and got to play around with the old iPod Nano.
00:06:38 ◼ ► And I will just say, I stand by everything I said on that episode. Everything. So feel free to listen and tell us how wrong we are.
00:06:47 ◼ ► And you too, listener, can pay us $8 a month to yell at your podcast player just a little bit more. So we encourage you to do that.
00:06:57 ◼ ► And by the way, our membership episodes are DRM free. And so if you happen to use an iPod to listen to your podcasts, we are fully compatible.
00:07:08 ◼ ► So you can pay us $8 a month to listen to our member content on an iPod if you actually have one. And you can honestly buy one on eBay for only a few months' worth of membership fee because they're pretty cheap these days.
00:07:20 ◼ ► Indeed. And hey, what would you listen to on an iPod if not a podcast? Well, you could listen to music. And you could listen to music on a U2 iPod.
00:07:29 ◼ ► And so Brian Hamilton wrote in with regard to the red and black colored U2 iPod. We were wondering, I thought we were wondering on the episode, certainly there were some mumblings about it on Mastodon afterwards, you know, how did they get to red and black for the color scheme of the U2 iPod?
00:07:43 ◼ ► And Brian wrote in to remind John and us about How to Dismantle an Atomic Bomb, which was released November 22nd of 2004. And the color scheme on the cover art for that album is red and black. Where were you on that one, John? Mr. U2?
00:07:59 ◼ ► Yeah, I remember it once I was reminded of it. I mean, here's the thing, like I said on the episode, it's not as if red and black became like the iconic colors of the band. This was one album that was released, you know, obviously at the same time as the iPod as part of a promotional thing like the U2 iPod.
00:08:14 ◼ ► The first U2 iPod was released in like October, and the album came out in November. So it's a tie-in, right? And then there were future U2 iPods, and they were also red and black, but at that point U2 hadn't released a new album.
00:08:23 ◼ ► So they're all just tied to this one album, but they have released a lot of albums and there were future albums, there were past albums, and I can tell you that this one and this color scheme did not become heavily associated with the band. But that's the reason. That's why they went with red and black because of the cover of the album.
00:08:38 ◼ ► Are you saying that as an assumption? I'm genuinely asking, are you saying that as an assumption?
00:08:41 ◼ ► No, once I was reminded, I'm like, oh yeah, that's why they did it. I mean, it's not a great reason, but I'm pretty sure it's the reason.
00:08:48 ◼ ► Fair enough. Max Velasco-Nott writes in that there's also another feature, and I'm using air quotes here, on the U2 iPod. Max writes, "The U2 iPods featured signatures of the band members on the backside. I was fine with the black-red color scheme, but couldn't stand seeing Bono and company on the back whenever I turned them over."
00:09:06 ◼ ► Yeah, I'd forgotten about that as well. I mean, obviously it's a shiny back end that doesn't show up that much, but if you really just wanted a red and black iPod and didn't care about the band, the signatures on the back kind of messed it up a little.
00:09:17 ◼ ► Indeed. Nikolai Bronvold Ernst writes to us with regard to the DMA and Apple's cut. Nikolai writes, "I really enjoyed your last show, 593: Not a European Lawyer. I'm also not a European lawyer, but I am a citizen of the EU and wanted to provide a single European's point of view.
00:09:33 ◼ ► The DMA has nothing to do with Apple's cut in the App Store or how much money Apple earns from selling their hardware. It only has to do with ensuring fair competition. Citizens' rights to freely choose services they want to use, without vendor lock-ins on interoperability, portability, and your own data, which we here in the EU believe belongs to the user."
00:09:52 ◼ ► A lot of people have written in to say this, but I think people get hung up with the idea when we talk about Apple's cut and how the EU is trying to control that and they're like, "The EU is not trying to tell Apple how much money it can make. It's just trying to do this other thing."
00:10:07 ◼ ► The reason it gets mixed up and the reason people send us these emails is because what Apple did to supposedly comply with the DMA while also trying to prevent competition is an application of fees.
00:10:24 ◼ ► The EU says you have to allow for competition and Apple says, "Okay, sure, we'll allow competition, but all of our competitors have to pay us an amount that makes it so they can't compete with us."
00:10:37 ◼ ► The cut we're talking about is not Apple's cut from its own App Store, like when you sell through the App Store you pay Apple some cut.
00:10:44 ◼ ► It's the cut Apple demands from the App Stores and the people selling through App Stores that are not Apple's own App Store, that are selling through third-party App Stores. Apple is using money, using fees to make the competition less competitive.
00:11:00 ◼ ► That's what we're talking about. I know it's confusing what we're talking about. Apple collecting its money or Apple having its fees and stuff like that.
00:11:06 ◼ ► I think maybe that's the source of the confusion. The other thing is that plenty of countries, including the EU, do actually tell companies that they can't make a certain amount of money on a certain thing that they do.
00:11:18 ◼ ► Someone wrote in to give us the example of credit cards, MasterCard and Visa, the two big credit card networks.
00:11:24 ◼ ► I think in the EU the fees they charge stores to process their credit cards are essentially capped. The EU has basically said, "Visa and MasterCard own the market. You can continue to do that, but you can't charge merchants any more than zero-point-whatever percent."
00:11:43 ◼ ► The EU has not done that to Apple. They haven't said to Apple, "Hey, Apple, you can't charge more than 10% in your own App Store." They haven't said that at all. They haven't said anything about what Apple can charge in their App Store. What they just want is more competition.
00:11:55 ◼ ► Apple is saying, "Okay, there can be other App Stores, but they all have to give us an amount of money that makes it unattractive." We'll see how that flies again.
00:12:04 ◼ ► The EU has not yet ruled on the core technology fee and all the other things that they're investigating. So far they've only ruled on the steering provisions about how Apple restricts the way apps in its own App Store can link out to third-party payment methods.
00:12:20 ◼ ► But we'll see how those other decisions come out in the coming months and years. I don't know how long this is going to take, but right now it's not looking good for the core technology fee. Let's say that.
00:12:29 ◼ ► Yep. We asked, mostly tongue-in-cheek, for Brexit-style names for Apple leaving the EU. Jared Counts was the first we saw to suggest "iLeave." Frederick Biorman suggested "Axit" and provided a truly heinous but hilarious, I presume, AI-generated image for this.
00:12:51 ◼ ► My personal favorite, though, was suggested several times. The first we saw was from Oliver Thomas: "iQuit."
00:12:58 ◼ ► "iLeave" and "iQuit." We had many more suggestions for these; I thought these were the top three. "iLeave" and "iQuit" are cute, but I kind of like "Axit" because it's the closest to Brexit, and the axe thing.
00:13:09 ◼ ► The picture has an EU-themed Superman holding an axe and an apple, and yes, it does look AI-generated.
00:13:15 ◼ ► It's interesting how, due to the way the various AI models that we're familiar with have been trained, most people can now look at an image and identify it immediately as AI-generated based on the shading and the weirdness of hands and all sorts of other stuff.
00:13:30 ◼ ► It is kind of strange how quickly that happened. But anyway, I kind of like Axit, but I don't think we get to pick this name.
00:13:42 ◼ ► Well, within our little circle of podcasts, yes, but I don't see the New York Times running with "Axit" or "iQuit."
00:13:57 ◼ ► Alright, someone anonymously wrote in with regard to CarPlay audio. We were wondering how CarPlay audio worked, especially with the new CarPlay, and whether or not it was more like AirPlay 2, where it sends a big buffer or whatnot.
00:14:11 ◼ ► And so Anonymous writes, "Audio in wireless CarPlay is always over Wi-Fi. Buffered audio for CarPlay is basically AirPlay 2. Buffered audio is available without doing next-gen CarPlay."
00:14:22 ◼ ► Yeah, this was news to me, because I had speculated that it seemed like all CarPlay audio was always going over Bluetooth.
00:14:29 ◼ ► Wireless CarPlay, I think, actually creates a little ad hoc Wi-Fi network between the car and the phone.
00:14:34 ◼ ► And wired CarPlay sends all that stuff over the wire. I kind of assumed... wired CarPlay, it seemed like it does audio and video over the wire.
00:14:42 ◼ ► Wireless, it seemed like it was doing Wi-Fi for the audio, or for the video signal rather, but Bluetooth for the audio.
00:14:50 ◼ ► And apparently, this person wrote in who I think would know such things, and they said, "Nope, it's always over Wi-Fi."
00:14:57 ◼ ► So that, to me, first of all, is kind of good news in the sense that you can have improved responsiveness, you can have better reliability for the audio, because it's already going over Wi-Fi.
00:15:08 ◼ ► And so you can do all that with current CarPlay tech. You don't have to use the new CarPlay system.
00:15:13 ◼ ► The kind of sad and frustrating part is, then why do wireless CarPlay implementations out there in the world so often have just massively long buffers that make it really laggy and annoying? That's frustrating.
00:15:29 ◼ ► Alright, Kirk Northrup points us to a New York Times article with regard to using AirPods Pro as hearing protection. This is kind of a lot to read, but I think it's worth it, because this really distills down the summary, and we'll put a link in the show notes if you want to read it for yourself.
00:15:43 ◼ ► Reading from the article, "As you can see in the results, any claim that the AirPods Pro's 'Adaptive Transparency' or 'hear-through' mode limits sound to 85 decibels does not prove true in our testing. The earbuds did bring the 105-decibel sound down to 95 decibels, which is a big improvement over using no hearing protection at all.
00:16:00 ◼ ► But that's adequate for only about 45 minutes of exposure under our simulated conditions. Keep in mind that noise guidelines are designed with the assumption that a person has no other loud noise exposure throughout the day.
00:16:10 ◼ ► If you were previously exposed to loud noise levels through your work or hobbies, you would likely want to be even more careful when attending a concert on the same day.
00:16:16 ◼ ► The 'hear-through' mode in the Bose QuietComfort Earbuds 2, which Bose calls the 'aware' mode, did a little better in our tests, limiting the sound to 91 decibels, a level of volume reduction that might be adequate for a two-hour concert.
00:16:28 ◼ ► As we swapped the earbuds for ear plugs and switched back and forth between the earbuds, 'hear through' and 'noise cancelling' modes, we were surprised to hear how much more enjoyable the show was when we used the AirPods Pro earbuds as hearing protection.
00:16:39 ◼ ► Using the AirPods Pro's adaptive transparency gave us, in essence, a quieter version of the unattenuated live sound. The guitars, drums, and vocals all sounded surprisingly clear, and our enjoyment of the sound wasn't lessened at all.
00:16:49 ◼ ► However, as our measurements predicted, it was still too loud. After about 10 minutes of listening, our ears grew fatigued.
00:16:54 ◼ ► Yeah, this is interesting. So what the Wirecutter did was run a test setup with an artificial ear that they can put these earbuds into and measure what gets sent through them.
00:17:10 ◼ ► I question whether these results are universal because, again, as somebody who's now watched, I think, three concerts, four concerts maybe, with AirPods Pro as my earplugs, I know what it feels like to have my ears blown out from a concert, like how it feels during and afterwards.
00:17:28 ◼ ► And when I use the AirPods Pro, it doesn't feel that way at all. It feels just like using earplugs, which is what I was doing before using the AirPods Pro.
00:17:37 ◼ ► When the Apple Watch measures the sound pressure hitting my ears, like what it indicates when you're wearing the AirPods Pro, it caps at 85 decibels when they're being used this way.
00:17:48 ◼ ► And I have found, for whatever it's worth, like I have an SPL meter, because of course I do, and I have found the Apple Watch's sensitivity to be pretty accurate, although obviously it wouldn't be using the watch's built-in mic when you have AirPods Pro in.
00:18:02 ◼ ► How would it possibly be measuring the sound on the inside of your ear? Is there a microphone that's facing the inside of your ear?
00:18:11 ◼ ► So anyway, the point is, my experience actually using them, it really does not feel like I'm hearing a 95-decibel concert for three hours. It feels like the 85 it says.
00:18:25 ◼ ► Well, how loud was the concert outside of the ear? From your seat, did you look at the decibel meter? If I had nothing on, what would the level be?
00:18:32 ◼ ► Yes. So I did a couple of times where I would take the AirPods out and put them away so they turn off, and see how the watch measures the concert fully.
00:18:43 ◼ ► I don't remember exactly, but I remember it was somewhere in the high 90s, I think. So not quite as loud as theirs, so maybe the difference is that they were coming from 105 decibels down to 95, and I was coming from somewhere in the 90s down to 85, so maybe that's the cause.
00:19:00 ◼ ► Or it could just be differences in fit. I don't know exactly how good is the seal with their artificial ear setup compared to my actual ear. I don't know, there's no good way to know that.
00:19:10 ◼ ► So I think the conclusion to draw here is, first of all, what we kind of already knew, which is they provide some protection, suitable for occasional concert goers, not suitable if you're going to be working in a factory every single day.
00:19:25 ◼ ► There's different degrees of protection that you might need, this is not everyday protection, but also it probably varies a little bit between both fit and between what exactly you're actually listening to, like how loud is your environment.
00:19:39 ◼ ► Maybe it can't bring down 105 decibels, but maybe it can bring down 95 decibels. So obviously there are other variables here, so I think the advice that I would give remains the same, which is if you have really serious hearing protection needs, or very frequent hearing protection needs, get real hearing protection.
00:19:59 ◼ ► If you are an occasional concert goer like me, and you want basic hearing protection for occasional concerts, this is probably fine unless you are standing directly next to the giant PA speaker. Maybe you might need a little bit more protection.
00:20:13 ◼ ► But this seems fine to me, and every time I've used them I feel great afterwards, and my ears don't ring at all, and there's no fatigue, so it seems to be working. So maybe it just has a limit to how much it can work.
00:20:26 ◼ ► Apple is apparently using Google Cloud infrastructure to train and serve AI. This is from HPC Wire.
00:20:34 ◼ ► Apple has two new homegrown AI models, including a 3 billion parameter model for on-device AI, and a larger LLM for servers with resources to answer more queries.
00:20:44 ◼ ► The ML models developed with TensorFlow were trained on Google's TPU. John, remind me what TPU stands for?
00:20:50 ◼ ► Tensor Processing Unit? Something like that? We talked about the actual hardware on a past show, and how many billions of computations or whatever they do, and how many different operands are in each operation.
00:21:01 ◼ ► But yeah, I think it's like a Tensor Processing Unit or something like that. It's basically, so Google doesn't buy its GPUs from Nvidia and put them in. It makes its own silicon to do machine learning. It has for many, many years. This is not a new thing.
00:21:13 ◼ ► They're called TPUs, and that's what they're currently using to train Gemini and stuff. And if you pay them, just like you pay AWS or whatever, you pay Google Cloud, I believe they will rent you their TPUs and you can train your models on it. And that's what Apple did.
00:21:26 ◼ ► Indeed. Apple's AXLearn AI framework, used to train the homegrown LLMs, creates Docker containers that are authenticated to run on GCP. What is that? Google Cloud Platform?
00:21:46 ◼ ► Anyway, to run on the GCP infrastructure, AXLearn supports the Bastion Orchestrator, which is supported only by Google Cloud. This is a quote from their GitHub documentation.
00:21:57 ◼ ► "While the Bastion currently only supports Google Cloud Platform Jobs, its design is cloud-agnostic, and in theory it can be extended to run on other cloud providers," Apple stated on its AXLearn infrastructure page on GitHub.
00:22:11 ◼ ► Yeah, so this is, I mean, we didn't put this in the notes, but the rumors are that the deal between Apple and Google to use Gemini as part of iOS 18, as an option alongside ChatGPT, that deal is reportedly getting closer. But this is from the past, of like, "Hey, Apple's got these models: the various ones that are going to be running on people's phones, which are smaller, and the big ones, which are going to be running on their Private Cloud Compute."
00:22:35 ◼ ► And these are Apple's own models, and they train them themselves. And how do they train them? They pay Google to use TPUs to train their models.
00:22:43 ◼ ► And so I feel like this is interesting in that Google, Apple's unfriendly relationship, let's say, with Nvidia continues, and their friendly relationship with Google continues.
00:22:56 ◼ ► It's kind of a surprise that Google didn't do the deal. Maybe the rumors are, I think we talked about this on a past show, that nobody's paying anybody for the OpenAI thing, whereas maybe Google wanted to be paid, so we'll see how this works out.
00:23:07 ◼ ► But yeah, there seems to be a cozy relationship between Apple and Google, because apparently Apple either doesn't have yet or doesn't plan to have fleets of massively parallel machine learning silicon that they can train their models on, but Google does.
00:23:22 ◼ ► We are brought to you this episode by Photon Camera, the ultimate manual shooting app for iPhone photography enthusiasts.
00:23:31 ◼ ► Whether you're a seasoned pro or just getting started with photography, Photon Camera is designed to give you all the tools you need to elevate your iPhone camera experience to new heights.
00:23:41 ◼ ► Photon Camera is a beautifully designed, easy to use, manual camera app for iPhone, perfect for both beginners and professionals.
00:23:48 ◼ ► You can say goodbye to confusing buttons and hidden gestures. Photon Camera is very intuitive and comes with a comprehensive manual to help you learn the basics of photography.
00:23:57 ◼ ► They've also just launched Photon Studio, which lets you use your iPad or a monitor connected to a spare iPhone for a big screen preview while you shoot.
00:24:05 ◼ ► It also allows you to favorite or delete images in real time, view metadata, and even zoom in to inspect details closely.
00:24:12 ◼ ► Photon Enhance is the new powerful photo editor for iPad and Mac that's also now available.
00:24:18 ◼ ► Both Photon Studio and Photon Enhance are included free with your Photon Camera subscription.
00:24:23 ◼ ► And here of course is the best part. For our listeners, Photon Camera is offering an exclusive deal.
00:24:35 ◼ ► That's photon.cam/atp. Go there, photon.cam/atp to claim your discount and start exploring the power of manual photography on your iPhone today.
00:24:56 ◼ ► John, I hear that you have asked Apple for help and they have said, "You know what you need? You need a Mac Studio." Because why would anyone need a Mac Pro?
00:25:06 ◼ ► This went around, I think a week or two ago. Apple's got a page, apple.com/mac/best-mac.
00:25:13 ◼ ► And the title of the page is "Help Me Choose, Answer a Few Questions to Find the Best Mac for You."
00:25:18 ◼ ► And when this was going around, the first thing I did was launch this page and I wanted to go through the little wizard and answer a bunch of questions to see if I could reach the win condition, which is having this tool recommend the Mac Pro.
00:25:36 ◼ ► And the answer was very clear. And I was mostly telling the truth, but occasionally I would exaggerate to make sure I go on the Mac Pro path.
00:25:43 ◼ ► And I did not end up at a Mac Pro. It recommended a Mac Studio to me and a bunch of other people tried.
00:25:56 ◼ ► If you look at the source code, you can see that there's like a JSON file that defines the options for the endpoints.
00:26:04 ◼ ► And that JSON, it's not a JSON file, but a bunch of JSON. That JSON does not contain the Mac Pro.
00:26:10 ◼ ► It contains pretty much every other Mac that Apple sells, but there is no way to get to the Mac Pro because the Mac Pro is not one of the options.
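Purely as a hypothetical illustration of what that embedded data might look like (these field names are invented for the example; this is not Apple's actual schema), the point is simply that the list of possible outcomes contains every current Mac except the Mac Pro:

```json
{
  "recommendations": [
    { "product": "macbook-air" },
    { "product": "macbook-pro" },
    { "product": "imac" },
    { "product": "mac-mini" },
    { "product": "mac-studio" }
  ]
}
```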
00:26:22 ◼ ► No, this is Apple telling you that literally nobody wants this computer and nobody should have it.
00:26:26 ◼ ► We all agree on this show that the current Mac Pro is not a great computer, but it is a computer that exists.
00:26:31 ◼ ► And on top of that, there is at least one very specific reason why someone might want to use it.
00:26:40 ◼ ► If one of the questions had asked, "Hey, do you have a bunch of PCI Express cards that you need to use?"
00:26:45 ◼ ► If the answer to that is yes, it's literally the only computer Apple sells that you can do that on.
00:27:05 ◼ ► Now, again, I can understand saying, "Well, this is not a great computer and really honestly no one should really buy it."
00:27:11 ◼ ► But when you make a "help me choose" tool on your website, you should have all of the things as endpoints.
00:27:43 ◼ ► Everyone on Mastodon was saying, "Well, the people who need the Mac Pro know it and so they don't need to use this tool."
00:27:50 ◼ ► You could say the same thing about, "Well, the people who need an iMac know they want an all-in-one thing,
00:27:58 ◼ ► But the tool exists to lead you to whichever product that Apple sells is best suited for you.
00:28:29 ◼ ► Like, have a million options that regular people will click and they will lead them off that path and say, "You shouldn't buy this."
00:28:34 ◼ ► But if the person says three, or any number other than zero, you have to lead them to the Mac Pro.
00:28:44 ◼ ► But so I did the whole quiz trying to get to the Mac Pro before you said it wasn't an option.
00:29:11 ◼ ► Well no, because the question it asks is, "Do you do all your work in a single location or do you need to be portable?"
00:29:18 ◼ ► I said like the one desk option. The very top option where it's like I do everything at the same place on a desk.
00:29:29 ◼ ► Like, I didn't just get the Mac Studio. I got recommended the Mac Studio and the MacBook Pro.
00:29:33 ◼ ► Oh, I also got two computers. The MacBook Pro $4,000 configuration and the MacBook Pro $3,500 configuration.
00:29:41 ◼ ► Yeah, I don't know how you didn't end up with desktop because there must have been some question that's differentiating portability.
00:29:47 ◼ ► Obviously if you mention you ever need to take it somewhere, they're not going to recommend a desktop.
00:29:55 ◼ ► I like their comparison ones, like for the phones where it does columns and you can list all the features and scroll and see how they are different from each other.
00:30:01 ◼ ► This doesn't do that, but I do think it's very strange to not have a single one of your computers in there.
00:30:09 ◼ ► Remember when they were selling the trash can for years and years and really nobody should be buying that, right?
00:30:13 ◼ ► But if you needed whatever GPUs it came with, for a while it still did have the most powerful GPUs you could buy in an Apple computer.
00:30:21 ◼ ► And if you needed those GPUs and they had a tool that was asking you a bunch of questions, they should have had a question that said,
00:30:27 ◼ ► "Do you use Maya at Pixar and need this much GPU power and it will lead you to the trash can?"
00:30:32 ◼ ► I don't know. It's weird. Anyway, if someone at Apple knows why the Mac Pro is omitted from this tool, please tell us.
00:30:38 ◼ ► I'm sure it's the obvious reason, which is like, "Nah, no one should buy that." And we kind of agree, but you're selling it, so put it in the tool.
00:30:47 ◼ ► Even the very first day this Mac Pro came out, nobody should be buying it, let alone now.
00:30:52 ◼ ► It's not nobody. It is the only computer with slots. That's not a great reason for it to exist and it's not a reason for you to pay twice as much as the Mac Studio.
00:31:01 ◼ ► But especially since they don't support, I believe they don't support at all anymore, the PCI Express breakout boxes like they used to on the Intel things,
00:31:10 ◼ ► it's literally your only choice if you have cards. And that's one of the reasons they should continue to make it and do continue to make it, and they just never ask about that.
00:31:18 ◼ ► Yeah, it made me laugh quite a bit that nobody was coming up with the Mac Pro. I don't know. Maybe that's a feature, not a bug. Just saying.
00:31:27 ◼ ► Alright, for the main topic this week for your main course, we have a plethora of different AI-related topics.
00:31:37 ◼ ► I'm going to try to take us on a journey. We'll probably fail and that's okay, but basically this next section is AI.
00:31:45 ◼ ► Huh. That's a thing, isn't it? And so we start on the 17th of June, for what it's worth, with our friend John Voorhees at MacStories,
00:31:54 ◼ ► which is them saying, "Hey," the article is entitled, "How we're trying to protect MacStories from AI bots and web crawlers, and how you can too."
00:32:03 ◼ ► And it seems like both John and Federico are getting very wrapped around the axle with regard to AI stuff.
00:32:12 ◼ ► And I don't mean to imply that they're wrong or that's bad, but they are getting ever more perturbed about what's going on with AI crawlers.
00:32:21 ◼ ► And I mean, to a degree, I get it. So that was on the 17th of June. John says, "Here's how you can protect yourself from crawling."
00:32:27 ◼ ► And then on the 21st of June, Business Insider writes and says, "Oh, ha! OpenAI and Anthropic seem to be ignoring robots.txt."
00:32:36 ◼ ► And if you're not familiar, if you have a web page or website, I guess I should say, where you control the entire domain,
00:32:44 ◼ ► you can put a file called robots.txt at the root of the domain. So, you know, it would be marco.org/robots.txt.
00:32:51 ◼ ► And any self-respecting and ethically clear crawler will start crawling marco.org, or whatever the case may be,
00:33:06 ◼ ► And if so, there's a schema, if you will, by which the robots.txt will dictate who or really what crawlers should or should not be allowed to crawl that site.
00:33:18 ◼ ► And it's by path. They can say everything in this directory, you shouldn't crawl, everything here you can crawl it,
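The per-path, per-crawler rules being described can be exercised with Python's standard-library robots.txt parser. This is a sketch with made-up rules and a made-up site, not any real publisher's file; the GPTBot entry is just an illustrative example of blocking one named crawler while restricting everyone else to public paths:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: block one named crawler entirely,
# and keep all other crawlers out of a single directory.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The named crawler is disallowed everywhere; everyone else is
# disallowed only under /private/.
print(rp.can_fetch("GPTBot", "https://example.com/article"))    # False
print(rp.can_fetch("SomeBot", "https://example.com/article"))   # True
print(rp.can_fetch("SomeBot", "https://example.com/private/x")) # False
```

Note that `can_fetch` only reports what the file *asks* — as discussed next, nothing in the protocol enforces that a crawler consults it at all.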
00:33:36 ◼ ► Well, actually, very quickly, I apologize, I gave you the green light, now I'm giving you the yellow light.
00:33:41 ◼ ► Just very quickly, it's important to note that robots.txt has never been enforced in any meaningful way.
00:33:55 ◼ ► but there's never been any real wood behind the arrow, or whatever the turn of phrase is.
00:34:03 ◼ ► It is a scheme that people who agree to that scheme can use that scheme to collaborate and work together,
00:34:24 ◼ ► It is a website saying, please maybe follow these rules if you would, you know, but it is not a legal contract,
00:34:40 ◼ ► It is also not universally used and respected, and so, and I can tell you, I operate crawlers of a sort,
00:34:55 ◼ ► I just crawl the URL as the users have entered them or as they have submitted them to iTunes/Apple Podcasts.
00:35:01 ◼ ► What robots.txt advisories were originally for was not like, hey, search engines, don't crawl my entire site.
00:35:12 ◼ ► What they were for was mostly to prevent like runaway crawls on parts of a site that were potentially infinitely generatable.
00:35:21 ◼ ► So things like if you had like a web calendar, and you can just click that next month, next month, next month button forever if you wanted to.
00:35:28 ◼ ► And so a web crawler that like, you know, indexes a page and then follows every link on that page,
00:35:33 ◼ ► if it is hitting like a web calendar, it can generate basically infinite links as it goes forward or backwards in time.
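The runaway-calendar problem can be sketched in a few lines. The URLs and the link structure here are hypothetical, but the shape is the point: every month's page links to the next month, so a naive link-follower never runs out of links and only terminates because of an explicit page cap:

```python
def links_on(page: str) -> list[str]:
    # Simulated infinite site: every calendar page links to the
    # following month, forever.
    month = int(page.rsplit("=", 1)[1])
    return [f"/calendar?month={month + 1}"]

def crawl(start: str, max_pages: int = 100) -> list[str]:
    # Breadth-first link-follower with a hard cap on pages fetched.
    seen, queue = [], [start]
    while queue and len(seen) < max_pages:
        page = queue.pop(0)
        if page in seen:
            continue
        seen.append(page)
        queue.extend(links_on(page))
    return seen

# Without the max_pages cap, this loop would never finish: the
# "site" generates a fresh, never-before-seen URL on every page.
print(len(crawl("/calendar?month=1")))  # → 100
```

A robots.txt `Disallow` on the calendar path was the original, cooperative way to advise crawlers away from exactly this kind of trap.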
00:35:40 ◼ ► So the main purpose of robots.txt was to kind of advise search engines, and it was specifically for search engines.
00:35:50 ◼ ► It was to advise them areas of the site that crawlers should not crawl, mostly for technical reasons, occasionally for some kind of privacy or restriction reasons,
00:35:59 ◼ ► but usually it was just like technical, like, you know, hey, don't get into an infinite loop, which was largely unnecessary
00:36:05 ◼ ► because the web crawlers eventually kind of figured out like how to limit things on certain sites
00:36:09 ◼ ► and they eventually made themselves more advanced and that wasn't really necessary anymore, even for that case.
00:36:18 ◼ ► Like I think any decent crawler is not going to get into an infinite loop, but keep this out of your search index was, you know,
00:36:32 ◼ ► But the thing is like the whole idea of like, well, I don't want any bot to crawl this, it was so based on assumptions about search engines in particular, web search engines.
00:36:46 ◼ ► The current drama around trying to apply it to AI training, I think it's missing a lot of that context that when this kind of unofficial standard was developed,
00:36:59 ◼ ► it was all about web search engines and when you think about like how the web search engine dynamic has always worked with web publishers,
00:37:07 ◼ ► there was never really any official contract between anybody that said like, hey, Google, Bing, all the other search engines, you know, that have come and gone over the years,
00:37:16 ◼ ► crawl my page, go ahead, index it, go ahead, even though technically that is making a copy in your server's memory and might be some kind of copyright violation,
00:37:24 ◼ ► doesn't really matter because the purpose of this is going to help me, it's going to make people able to find my page through your search engine
00:37:32 ◼ ► and will direct people to my page and I will be able to have them there, make money, maybe have them subscribe to my site in their browser or whatever.
00:37:40 ◼ ► So there was that implied symbiotic trade-off that, okay, I actually as a site owner, I want search engines to mostly index my site
00:37:53 ◼ ► And so robots.txt was entirely in that context. It was never anything that was some kind of like legal contract that said you must obey my rules.
00:38:05 ◼ ► That really has never been tested until fairly recently. Like that was never really something that really ever came up.
00:38:11 ◼ ► I mean there were a couple of things here and there with like Google News and news publishers in certain countries and stuff,
00:38:15 ◼ ► but for the most part the basic idea of robots.txt was really just please, like that's it. Please do this or don't do this.
00:38:25 ◼ ► And even then it was often used in ways that harmed the actual customers using things or did things that weren't expected.
00:38:35 ◼ ► This is why I don't use it for Overcast's feed crawlers because if you publish an RSS feed and submit it to Apple podcasts,
00:38:44 ◼ ► I'm pretty sure you intend for that to be a public feed. And so I feel like it is not really my place to then put up an alert to my users to say,
00:38:53 ◼ ► "Hey, this person's robots.txt file actually says disallow star on this one path that this feed is in and so I actually can't do this for you."
00:39:08 ◼ ► And second of all, because of its intention and context as a standard for search engines, which I'm not, this doesn't really apply to me in my use.
00:39:17 ◼ ► And there were all sorts of things over the years too, like you could specify certain user agents like, "Alright, Googlebot, do this; Yahoo's bot, do this."
00:39:25 ◼ ► And that was also problematic over the years too because it disadvantaged certain companies if you just had bad behavior once.
00:39:33 ◼ ► Or if a site owner just had one bad thought about one of these companies once and then never revisited it or whatever, then that company was allegedly disallowed from crawling this site. Why?
00:39:46 ◼ ► Well, I mean it's not even that. It's like, for people who don't know the technology behind it, don't allow Googlebot.
00:39:53 ◼ ► The way you identify Googlebot is by the user agent string, which is part of the HTTP request, and anybody can write anything there.
00:39:59 ◼ ► And so all someone had to do was say, "I'm Googlebot," and then just write a script that slams a site.
00:40:04 ◼ ► And people are like, "Oh my god, my site's being slammed by Googlebot." No, it's not. It's being slammed by a thing that put that string into the user agent header.
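This is easy to demonstrate with nothing but the standard library. The request below is only constructed, never sent, and the Googlebot string is just whatever the sender chooses to write; real Google separately documents verifying genuine Googlebot traffic by reverse-DNS lookup of the crawler's IP, precisely because the header alone proves nothing:

```python
import urllib.request

# Any HTTP client can claim to be any crawler: the User-Agent is an
# ordinary request header filled in by the sender, with no
# authentication behind it.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"},
)

# A server receiving this request sees exactly this string and has no
# way, from the header alone, to tell who actually sent it.
print(req.get_header("User-agent"))
```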
00:40:12 ◼ ► It's just, there's no security, no authentication, and it's like email. And people forget this about email all the time.
00:40:25 ◼ ► I know in email there are technologies to try to make this better, but with HTTP headers, the user agent string, there's no security behind that.
00:40:32 ◼ ► So if you're making any decision based on the user agent, whether it's a decision to allow something with a particular user agent string or disallow something,
00:40:39 ◼ ► or you try to make decisions about, "I'm getting all these hits and I look at the user agent string and the user agent string is X, therefore it must be Google, therefore Google is bad,"
00:40:46 ◼ ► you have no idea who or what that is, especially with proxies and things bouncing around the web or whatever.
00:40:53 ◼ ► So just like robots.txt, it's all just sort of a politeness agreement and convention that only works when all parties involved are being honest and acting in good faith,
00:41:09 ◼ ► Right. And when you're looking at legalities or copyright issues, as I was saying earlier, none of this has really ever been tested
00:41:18 ◼ ► because the way it was being used, the deal between search engines and publishers, was mutually beneficial.
00:41:26 ◼ ► And publishers, for the most part, who were not bad business people, for the most part publishers really wanted for search engines to index their public content.
00:41:37 ◼ ► And their private content shouldn't be accessible to the crawlers, it shouldn't be exposed to the public internet if they want it to be private.
00:41:44 ◼ ► And so using robots.txt to try to say, "I want you to only use the content on my site for this purpose but not that purpose,
00:41:55 ◼ ► but I'm going to keep serving it publicly and making it available to any bot that comes around publicly, you just have to maybe be polite about it,"
00:42:09 ◼ ► Like right now, again, we haven't really had much of an agreement between publishers and search engines and other big aggregators before.
00:42:16 ◼ ► I think there have been legal cases about it, especially in the early days, because in the early days of the search engines,
00:42:21 ◼ ► and the idea that you would go to a website that's not yours and type in a search string and see text that came from your website on someone else's website,
00:42:29 ◼ ► on the Google search results page, I believe there were legal cases about that, and I think the result was that Google is allowed to run a search engine.
00:42:42 ◼ ► Especially the old style search engines, where what you'd see is a series of links that are search results and maybe a summary below them,
00:42:48 ◼ ► before Google started doing the thing where it's like, "Actually, I'm just going to give you an entirely unattributed snippet at the top of the page
00:42:54 ◼ ► that tries to give you the answer you were looking for without sending you to any site," and of course that snippet is now powered by their large language models, but before it wasn't.
00:43:01 ◼ ► That is still up for grabs, and we'll talk about that in a little bit, but the basic idea of a search engine that indexes the web and allows you to get links to the things that it has indexed,
00:43:12 ◼ ► I believe actually has been tested in court, and either way, whether or not it has been tested in court in your country or in the US or whatever,
00:43:19 ◼ ► practically speaking, I don't think there's much disagreement about the utility of that.
00:43:26 ◼ ► People like having traffic sent to them by Google. There's arguments of Google being too big and there should be competition in the search space, but the concept,
00:43:33 ◼ ► conceptually, a web search engine, I think we all agree, is a good thing that is necessary and should exist and helps everybody.
00:43:41 ◼ ► Sure, but if you're going to start making qualifications of like, "Alright, well, here's how you have to use my content or not use my content,"
00:43:49 ◼ ► robots.txt is not the way to do that. That is not any kind of legal binding, that is not any kind of technical restriction.
00:43:56 ◼ ► I would even question whether it's even a good idea to even still have those files these days and to expect anything from them.
00:44:04 ◼ ► Well, I mean, I think what people are expecting is, we'll read this thing from the Perplexity CEO in a second, but like,
00:44:09 ◼ ► I think what people are expecting is for the ostensibly good faith actors to do what the existing ones do. Google honors robots.txt,
00:44:20 ◼ ► and so do the other things, Apple honors it with their Apple bot thing, so do the other things that crawl the web, right?
00:44:26 ◼ ► Nobody has to follow it, but the good faith actors do, and so I think most of the pushback here is,
00:44:32 ◼ ► "Hey, I thought you weren't just a random, you know, fly-by-night company or a bunch of script kiddies or whatever,
00:44:39 ◼ ► I thought you were a big, important, serious company, and you have a crawler that crawls the web,
00:44:46 ◼ ► and you should use a user agent that looks like you, right? And we won't ban your user agent when someone fakes it and spams our site with it,
00:44:55 ◼ ► but we'll just say, 'Here are the rules for you. You can't crawl these URLs, you can't crawl any of our URLs, you can't crawl these or whatever.'"
00:45:02 ◼ ► Like, I think this is a reasonable tool for that job, provided you understand that the tool only works if the people on the other end agree and say,
00:45:11 ◼ ► "Yes, we will honor your robots.txt," and I think part of the anger is the AI companies are not behaving the way the search engine companies did,
00:45:32 ◼ ► There are nice brick paths between the buildings. Those are the company-owned devices, IT-approved apps, and managed employee identities.
00:45:44 ◼ ► Those are unmanaged devices, shadow IT apps, and non-employee identities like contractors.
00:45:50 ◼ ► Most security tools only work on those happy brick official paths, but a lot of security problems take place on the shortcuts.
00:45:58 ◼ ► 1Password Extended Access Management is the first security solution that brings all of these unmanaged devices, apps, and identities under your control.
00:46:07 ◼ ► It ensures that every user credential is strong and protected, every device is known and healthy, and every app is visible.
00:46:14 ◼ ► 1Password Extended Access Management solves the problems traditional IAM and MDM can't touch.
00:46:23 ◼ ► It's available now to companies with Okta, and it's coming later this year to Google Workspace and Microsoft Entra.
00:46:47 ◼ ► So we have a roundup from Michael Tsai that we'll link in the show notes that talks about all this.
00:46:55 ◼ ► Perplexity CEO Aravind Srinivas responds to the plagiarism and infringement accusations.
00:47:07 ◼ ► "We don't just rely on our own web crawlers, we rely on third-party web crawlers as well."
00:47:20 ◼ ► So reading from the post, and this is a direct quote from the post but not from Aravind,
00:47:26 ◼ ► Srinivas said, "The mysterious web crawler that Wired identified was not owned by Perplexity,
00:47:35 ◼ ► Srinivas would not say the name of the third-party provider, citing a nondisclosure agreement.
00:47:40 ◼ ► Asked if Perplexity immediately called the third-party crawler to tell them to stop crawling Wired content,
00:47:52 ◼ ► Anyways, Srinivas also noted that the Robots Exclusion Protocol, in other words, robots.txt,
00:48:00 ◼ ► He suggested that the emergence of AI requires a new kind of working relationship between content creators or publishers and sites like his.
00:48:06 ◼ ► So this is actually something that a bunch of AI CEOs and other bigwigs have been doing,
00:48:17 ◼ ► And it's like, come on, this is like CEO 101. Yeah, you outsource lots of things, right?
00:48:31 ◼ ► it's like saying, you know, "We outsourced to some company that makes our bread for us at our sandwich shop,
00:48:44 ◼ ► Or are you saying, you explicitly said, "It's okay if you put a little glass in the bread."
00:48:52 ◼ ► And so, and it's not just the Perplexity CEO, I've seen like three or four stories where an AI CEO says,
00:49:02 ◼ ► It's like, "Wait, what? Like, just own it. Just say, 'We've decided we're not going to honor robots.txt.'"
00:49:07 ◼ ► Because everyone knows you're not doing it, and you can't try to blame it on a third-party thing, whatever,
00:49:14 ◼ ► And like I said, I think the pushback is not like, you know, they're legally required to do this or whatever,
00:49:28 ◼ ► And it's clear that because you're like an AI startup, you're like, "Yee-haw, cowboy time, it's a wild west,
00:49:35 ◼ ► you can't fence me in, we're not acting like Google because we don't have to, so tough luck."
00:49:42 ◼ ► There's no legal argument here, there's no like, it's just a decision that they're making.
00:49:47 ◼ ► And by the way, Marco, on your decision not to do it, I would say the closest analog of Overcast is that you're a web browser.
00:49:53 ◼ ► Web browsers don't honor robots.txt. If you type a URL into the address bar of your web browser,
00:50:05 ◼ ► So if you are a web client used by an individual user, like a user loads an RSS feed in a podcast,
00:50:12 ◼ ► that is a single person using a client application to browse the web, you know, to get an RSS feed, right?
00:50:18 ◼ ► That is very different than an automated crawler that is crawling all over the entire web and following links, right?
00:50:31 ◼ ► Overcast should not look at robots.txt, because it's not a robot and it's not being used by a robot.
00:50:39 ◼ ► Well, but if you look at what Perplexity is doing, it's, I think, a lot closer to a browser than a search index.
00:50:54 ◼ ► do people go there to get links out to other places or do they go there to get the answer that you attempt to attribute?
00:51:02 ◼ ► And I think people get angry with Perplexity when they provide an answer but then don't say where this answer came from.
00:51:06 ◼ ► And even if they do say where this answer came from, they're like, you provided too much content.
00:51:10 ◼ ► This is the same problem people are beginning to have with Google, is like, you're supposed to be sending me traffic.
00:51:17 ◼ ► By either giving an answer that's synthesized from a website and not telling them the source,
00:51:21 ◼ ► or basically inlining my entire webpage, for example, and saying, you don't need to go to that website.
00:51:35 ◼ ► But just at the crawling stage, people are seeing their website crawled, and they're going to Perplexity's service and saying,
00:51:43 ◼ ► oh, I can find my content there, and I put you in robots.txt, and you shouldn't be crawling this, Perplexity.
00:51:48 ◼ ► It's like, we don't have to look at robots.txt because that's just an advisory thing and we've chosen to ignore it.
00:51:56 ◼ ► Well, and I think there's going to continue to be more and more applications over time of technologies like AI summarization and action models and things like that,
00:52:08 ◼ ► where some fancy bot, basically, is going to be browsing and operating a webpage on behalf of a user.
00:52:17 ◼ ► That is kind of like a browser, but it's a very different form that I think breaks all those assumptions with publishers.
00:52:28 ◼ ► Instapaper would save the text of a webpage to read later and only the text, not all the ads and the images and everything like that.
00:52:34 ◼ ► I was very careful, though, to not make features that would enable somebody to get the text of a page without having first viewed the page in a browser or a browser-like context.
00:52:53 ◼ ► And then they could save what they were seeing, and then part of that would be saved to Instapaper and shown to them later.
00:52:59 ◼ ► And that was always a very tense balance to try to maintain, because what I didn't want was widespread scraping of people's text without loading their ads,
00:53:12 ◼ ► but I figured that seemed like an okay trade-off, because that was literally just saving what was already sent to the browser and what the user was already looking at.
00:53:20 ◼ ► But a lot of these new technologies... First of all, I probably wouldn't attempt that today.
00:53:37 ◼ ► Suppose it's one of these action models where you're saying, "Alright, book me a flight."
00:53:42 ◼ ► This stupid book me a trip thing that all of these AI demos from these big companies keep trying to do, even though nobody ever wants that.
00:53:50 ◼ ► Suppose you have a book me a trip kind of thing with an AI model, and the idea is that model will go behind the scenes and will go operate Expedia or Orbitz behind the scenes for you,
00:54:02 ◼ ► and manipulate things back there to find the best flights and hotels and whatever else.
00:54:06 ◼ ► Well, those sites make some of their money via ads and affiliate things and sponsor placements on those pages.
00:54:14 ◼ ► If you have some bot operating the site for you, kind of clicking links for you behind the scenes in some kind of AI context,
00:54:20 ◼ ► that bot is not going to see those ads, it's not going to click those affiliate links, it's not going to pick the sponsor listing,
00:54:26 ◼ ► it's going to just kind of get the raw data and that's it, and that will be violating those sites' business models if that happened.
00:54:37 ◼ ► So this really has not been challenged, this really has not been legally tested that much, this really has not been worked out,
00:54:43 ◼ ► like what are the standards, what are the laws, what are the legal precedents, how much of this is fair use versus not.
00:54:49 ◼ ► You know, for the most part, until very recently, we could pretty much just say, "Alright, if you serve something publicly via public URLs,
00:54:59 ◼ ► and anybody can just download it, then nothing bad would really happen to you and your business model for the most part
00:55:06 ◼ ► if some bot came by sometimes and parsed that page for some other purpose." It wasn't a big deal.
00:55:18 ◼ ► Now, with a lot of these AI products, and with Google search itself, you know, increasing over time and then more recently rapidly increasing,
00:55:27 ◼ ► what we're seeing now is full out replacement of the need for the user to ever look at that page.
00:55:33 ◼ ► That's a pretty big difference, and it's really bad for web publishers, and kind of, you know, then consequently, really bad for the web in general.
00:55:42 ◼ ► We have a pretty serious set of challenges on the web already, even before this new wave of LLMs came by to further destroy the web,
00:55:52 ◼ ► we already had a pretty bad situation for web publishers for lots of other reasons over the years.
00:55:58 ◼ ► To have something that removes the need for many people to visit a page at all, that is going to crush publishers.
00:56:07 ◼ ► And so it does make sense why everyone's freaking out about this. It makes a lot of sense.
00:56:11 ◼ ► I do caution people though, I don't think it's a very good business move, or a very good technology move to say,
00:56:29 ◼ ► And you can't actually block them anyway. Like, when it comes down to it, technically speaking, you literally can't stop them.
00:56:36 ◼ ► Unless you stop everyone from viewing your website, in which case you don't have a website.
00:56:39 ◼ ► Right. So I think it is wise to focus on trying to prevent uses of your content that remove the need to visit your page.
00:56:53 ◼ ► I don't think it's wise to say, "I don't want any AI training or any AI visibility of my page."
00:57:00 ◼ ► That, I think, is probably short-sighted, and probably a bit too much of a blanket statement.
00:57:07 ◼ ► And that, I don't think it's good for any party involved to have that kind of blanket ban on it.
00:57:15 ◼ ► I know. Well, what people want though, what people, the publishers in particular, want is, they want an ecosystem of members who do agree to some rules of politeness.
00:57:27 ◼ ► And say, "Look, we should agree on a system that lets me tell you that you shouldn't do X, Y, and Z on my site, and you should agree to it, and we'll feel better about you if you do that."
00:57:36 ◼ ► And part of the reason I think Instapaper, your example, was not a particularly big problem is, like you said, scale.
00:57:44 ◼ ► And anything with AI in the name these days, people flip out about it and think, "This is going to be as big as Google."
00:57:50 ◼ ► Instapaper was not as big as Google. Right? It did not have billions and billions and billions of users.
00:57:56 ◼ ► If it did, if Instapaper had Google scale, I bet there would have been a hell of a lot more scrutiny on even the very conservative things that you did.
00:58:03 ◼ ► But because it was small, it's not a big deal. That's part of the ecosystem of the web, is there's all sorts of small things that don't have particular big scale.
00:58:11 ◼ ► They're doing all sorts of weird stuff. Nobody cares about them. We allow them to exist. It's fine.
00:58:16 ◼ ► But now, these big names in AI, "AI is the next big thing," "You're an AI company," "You have a lot of funding."
00:58:21 ◼ ► Everyone looks at them and thinks, "That could be the next Google." "That could be the next thing with billions and billions of users."
00:58:27 ◼ ► So we better take whatever weird stuff they're doing way more seriously than we would take overcast.
00:58:31 ◼ ► Even with Google, the current giant in the world of search, and they're trying to replace sites and giving answers on the site or whatever.
00:58:40 ◼ ► Nilay Patel coined a term, I think it was his, about this called "Google Zero," which is the point at which publisher websites get zero traffic from Google search.
00:58:48 ◼ ► Because it's been going down and down over the years because, "Hey, you'd type a Google search and look, the answer to my question that I typed into Google, it's right on the Google results page."
00:58:56 ◼ ► It's unattributed, and if it was attributed, I don't have to click on any link to get to it because the answer is right there.
00:59:02 ◼ ► And so Google has been sending less and less traffic to websites, and Google Zero is when you notice, "Hey, you know what? You know how much traffic we're getting from Google searches? Zero."
00:59:09 ◼ ► I don't know if it's absolutely zero for everybody, but it's sure going down, and it's a scary world to have what was once the massively largest source of your traffic to your website disappear.
00:59:22 ◼ ► But yeah, whether or not it is wise to try to ask to be excluded from whatever AI crawler thing, from whatever OpenAI, Perplexity, or whatever, I think most publishers just simply want that choice.
00:59:38 ◼ ► And to have that choice, the crawlers need to agree, because again, there is no technical way to stop this short of putting your entire site behind a paywall, and even that's not going to stop them because they'll just pay and have their crawler go through it.
00:59:49 ◼ ► That's the thing about publishing on the web. It's like DRM. You want people to see your movie. You can't make it impossible to see your movie. You have to give the viewer an ability to see your movie.
01:00:01 ◼ ► But once you give the viewer the ability to see your movie, they can see your movie. But what if they see it but also record it? I want them to see it but not be able to see it. Can I do that? And the answer is no.
01:00:12 ◼ ► So if you're publishing on the web, it's like anything else. That's why Marco was right to call this a legal thing. Things are published all the time. They were published on paper, in books or whatever.
01:00:23 ◼ ► It's like, "But I can take the book and look at it. I can see all the letters in it. Ha ha, the book is mine!" Well, no, actually we have laws about the stuff that's in that book.
01:00:31 ◼ ► We have this thing called copyright. And even though you can technically read it and you can technically copy it increasingly more easily over time with technology, we have laws surrounding it to control what you can do with it.
01:00:42 ◼ ► And robots.txt, people who think of robots.txt as some kind of technological bank vault, it's no more of a bank vault than you could put on a book. You do want people to read it and you can't stop them from being able to copy it.
01:00:53 ◼ ► And these days it's really easy to copy a book, especially if it's an ebook, setting aside the whole DRM thing. What you want is some kind of, either in a sort of polite society, an agreement among the large parties that actually are significant to get along.
01:01:09 ◼ ► And then failing that, you want laws to provide whatever protections you think are due to you. And yeah, the Google search stuff has, I feel like, been hashed out probably back in the AltaVista days, but who knows.
01:01:19 ◼ ► The AI stuff has not yet been hashed out. And so moving on to this next one, because we have a lot of these items, Microsoft, at least someone in Microsoft, has a very interesting notion of what the deal is on the web and potentially what the laws should be surrounding it.
01:01:35 ◼ ► So this is a post on The Verge by Sean Hollister, who writes, "Microsoft AI boss Mustafa Suleyman incorrectly believes that the moment you publish anything on the open web, it becomes quote unquote freeware that anyone can freely copy and use.
01:01:49 ◼ ► When CNBC's Andrew Ross Sorkin asked him whether AI companies have effectively stolen the world's IP, Mustafa said, "I think that with respect to content that's already on the open web, the social contract of that content since the 90s has been that it is fair use.
01:02:03 ◼ ► Anyone can copy it, recreate it, reproduce with it. That has been freeware, if you like, and that's been the understanding.
01:02:13 ◼ ► Microsoft is currently the target of multiple lawsuits alleging that it and open AI are stealing copyrighted online stories to train generative AI models, so it may not surprise you to hear Microsoft exec defended as perfectly legal."
01:02:25 ◼ ► I just didn't expect them to be so very publicly and obviously wrong. "I'm not a lawyer," writes Sean, and that's also true for me. But I can tell you that the moment you create a work, it is automatically protected by copyright in the US.
01:02:36 ◼ ► You don't even need to apply for it. You certainly don't void your rights just by publishing it on the web. In fact, it's so difficult to waive your rights that lawyers had to come up with special web licenses to help.
01:02:46 ◼ ► This is so gross. I'm not as riled up as a lot of people about these AI bots crawling my website. Sitting here now, I don't find it that off-putting. I don't love it, but whatever. This, though? This is disgusting.
01:03:03 ◼ ► This is such a weird statement because everybody knows how copyright works. I'm sure this person knows as well. But to say that once you put it on the web, it's freeware, which is a term that mostly applies to software. But the point is you can recreate it, reproduce it, copy it. No, no, no. Those are specifically the things we actually do have laws around.
01:03:24 ◼ ► We don't have laws around the more complicated things like, "Can I train AI on it?" or whatever. We'll get to that in a little bit. But it's such a weird thing to say that, "Oh, as everyone knows, since the '90s, once you put it on the web, you forfeit all ownership." That's not true at all.
01:03:38 ◼ ► One of the things that's great about the web is, "Oh, it's just like books. It's printed word." Especially in the beginning, it was just a bunch of words. And we already have laws surrounding that. And that's why there were cases about search engines. Are search engines copying it? Because we've got this whole giant library of laws about copying text. My website has text on it, and Google's copying it. And they've had to duke it out and say, "Actually, what Google's doing is fine within these parameters, blah, blah, blah."
01:04:05 ◼ ► But that fight was fought because it was an example of copying. But yeah, this... I mean, obviously, Microsoft's AI leadership, this guy is not a lawyer either. But that's not how you should defend this. You shouldn't defend it by saying, "Everything on the web is a free-for-all." Because that's never the way it's been, and it's not the way it is now.
01:04:27 ◼ ► This is yet another foot-in-the-mouth problem from Microsoft. I'm not sure what's going on over there, but they really need to take a lesson from Apple and maybe try to speak with one voice instead of having individual lieutenants make really terrible statements to the press.
01:04:41 ◼ ► So Louie Mantia writes with regard to permissions on AI training data, quoting John Gruber from the 22nd of June: "It's fair for public data to be excluded on an opt-out basis rather than included on an opt-in one."
01:04:57 ◼ ► And then Louie continues, "No, no it's not. This is a critical thing about ownership and copyright in the world. We own what we make the moment we make it. Publishing text or images on the web does not make it fair game to train AI on. The public in public web means free to access. It does not mean free to use.
01:05:15 ◼ ► Also, whether reposting my content elsewhere is in good faith or not, it is now up to someone other than me to declare whether or not to disallow AI training web crawlers in their robots.txt file. To add insult to injury, that person may not have the knowledge or even the power to do so if they're posting content they don't own on a site that they also don't own, like social media."
01:05:36 ◼ ► So this is so close to getting to the crux of this. In the first little paragraph here, he's basically declaring that training AI on your data is exactly the same as copying and reproducing it. And that is not something that the world agrees on.
01:05:52 ◼ ► Louie's opinion is that it is. The courts have not yet weighed in. I think the average person would say, "Are those the same things? Because they seem like they might be a little bit different." Kind of in the same way that indexing your content in Google is a little bit different than just literally copying it and reposting it on the website, right?
01:06:10 ◼ ► But anyway, if you agree that it's the same as copying, then yeah, sure. But then the second bit is getting to even more of the heart of it here, which is like, okay, so let's say we do agree that it's the same. Not proven yet, but anyway.
01:06:23 ◼ ► What about when somebody posts a link to your site on a social media network and on that website they do a little embedding, inlining of the first paragraph or whatever?
01:06:34 ◼ ► Like what if someone copies and pastes a paragraph of your thing on another website, right? Even if you had absolute, somehow magical technical control to stop AI crawlers crawling your website, if people can read your website and quote from it or embed little portions of it or a screenshot or whatever on other websites, of course you don't control those other websites.
01:06:54 ◼ ► And so if they allow crawling, your stuff's going to end up in the Google search index, in the AI training model or whatever, even though you disallowed it from your website.
01:07:04 ◼ ► And I would say that for the most part, that we also have laws covering can someone take a portion of the thing that you made and quote it elsewhere.
01:07:13 ◼ ► There's a whole legal framework for deciding whether that is fair use or not, and it's complicated, and the law is not a deterministic machine, as Nilay Patel, who I mentioned before, is always fond of saying.
01:07:23 ◼ ► But we do have a legal framework to determine, can I copy and paste this paragraph from this thing on this person's site and quote it on my site so I can comment on it?
01:07:31 ◼ ► Yeah, in general you can. Can I make a parody of this article on my website? Yeah, you can. There's a whole bunch of things around that have been fought out in court that we have a system for dealing with.
01:07:44 ◼ ► But all of those things, the court determines, you sue them and they say actually this person was allowed to quote that snippet, you lost your fair use case because it's pretty open and shut, that's fine.
01:07:54 ◼ ► That just got indexed by an AI training bot because that person's website allows them. That's the polite AI bots, and never mind, again, that you can't stop the impolite ones.
01:08:04 ◼ ► That's just the nature of publishing. No matter what, you do not have absolute control over every single character that you made.
01:08:14 ◼ ► You do have control over the entire work and the reproduction of the entire work, but you don't have control over other examples of fair use.
01:08:21 ◼ ► And Louie's saying, oh, it shouldn't be like, I shouldn't have to opt out, the default should be that nobody can crawl me.
01:08:27 ◼ ► I mean, not only is that technically impossible, but it's just not the way the web has ever worked.
01:08:36 ◼ ► It has always been, we're going to crawl you unless you tell us don't. And even the polite ones, you know, they'll read the thing that you said not to do it, but by default, they're going to crawl you.
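As a concrete aside, the opt-out mechanism the hosts are describing is the robots.txt file, and the check a polite crawler performs can be sketched with Python's standard library. The `GPTBot` user-agent token below is illustrative of an AI training crawler; site owners should check each crawler's own documentation for the real tokens.

```python
from urllib import robotparser

# A hypothetical robots.txt that opts out of an AI training crawler
# by user-agent while leaving ordinary bots allowed by default.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A polite crawler calls can_fetch() before requesting a page;
# an impolite one simply never asks.
print(parser.can_fetch("GPTBot", "https://example.com/post"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/post"))  # True
```

This illustrates the asymmetry in the discussion: the default answer is "yes, crawl me," and honoring the file is entirely voluntary, which is why the web has always been opt-out rather than opt-in.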
01:08:46 ◼ ► And I think asking for a world where everything you publish on your website is not only not crawlable by the things you don't want crawling it, but also not able to be quoted by other people, is clawing back rights that we've already decided belong to other people through fair use.
01:09:11 ◼ ► We talked about this before of like, hey, Louie Mantia doesn't want people crawling his website. What can he do about it? He's just one person.
01:09:17 ◼ ► The music industry, they have a lot of money. They have a lot of IP. This is where the stuff really starts going down.
01:09:25 ◼ ► Yeah. So reading from Ars Technica on the 24th of June: Universal Music Group, Sony Music, and Warner Records have sued AI music synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models.
01:09:40 ◼ ► The lawsuits filed in federal courts in New York and Massachusetts claim that the AI companies' use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.
01:09:58 ◼ ► And quoting from the labels' statement: "These are straightforward cases of copyright infringement involving unlicensed copying of sound recordings on a massive scale. Suno and Udio are attempting to hide the full scope of their infringement rather than putting their services on a sound and lawful footing."
01:10:18 ◼ ► Mikey Schulman, the CEO of Suno, says that the company's technology is "transformative" and designed to generate completely new outputs, not to memorize and regurgitate pre-existing content.
01:10:34 ◼ ► Reading from the lawsuit, "The use here is far from transformative, as there is no functional purpose for Suno's AI model to ingest the copyrighted recordings other than to spit out new, competing music files. That Suno is copying the copyrighted recordings for commercial purpose and is deriving revenue directly proportional to the number of music files it generates further tilts the fair use factor against it."
01:10:57 ◼ ► Andy Baio writes, "404 Media pulled together a video montage of some of the AI-generated examples provided in the two lawsuits that sound similar to famous songs and their recording artists."
01:11:07 ◼ ► Then finally, we'll put a link in the show notes to a Verge article that discusses what the RIAA lawsuits mean for AI and copyright.
01:11:15 ◼ ► You know, I saw somebody say this a few days ago. I don't remember who exactly it was, but what's going on if the RIAA are suddenly the good guys?
01:11:27 ◼ ► This is the tricky bit with this. We talked about this with image generators. This is significant because they're big, rich companies and you have to take them seriously when they bring a lawsuit.
01:11:35 ◼ ► Who can stop open AI and Google and whatever? Well, it's Clash of Titans. You need other Titans in here to be duking it out.
01:11:43 ◼ ► I think this needs to be fought out in a court in some way. I say that before we see what the result will be because maybe the result is not what we want to happen.
01:11:56 ◼ ► But as with the image things, these companies that you type in a string and they produce a song for you, these models are trained on stuff.
01:12:07 ◼ ► And these record labels say, yeah, you trained them on all our music. It gets back to the question, is AI training something? How does that relate to copying?
01:12:16 ◼ ► Is it just like copying? Is it not like copying at all? Is it somewhere in the middle? Do any of our existing laws apply to it?
01:12:22 ◼ ► And we've discussed this on past episodes as well, especially when the company doing the training then has a product that they make money on.
01:12:32 ◼ ► And as I said with the image training, these models that make songs are worthless without data to train them on.
01:12:39 ◼ ► The model is nothing without the training data. This company that wants to make money, you pay us X dollars, you can make Y songs. That's their business model.
01:12:47 ◼ ► They can make zero songs if they have not trained their model on songs. So the question is, where do those songs come from?
01:12:54 ◼ ► If they've licensed them from somebody, if they made the songs themselves, no problem. Again, Adobe training their image generation models entirely on content they either own or licensed.
01:13:06 ◼ ► Nobody's angry about that. That's the thing you're doing. You own a bunch of images, you license them from a stock photo company or whatever, you train your models on them, you put the feature into Photoshop, you charge people money for Photoshop, they click a button, it generates an image.
01:13:20 ◼ ► Whether people like that feature, whatever, the legality seems fine. These other situations are like, hey, we crawled your site because we don't care about your robots.txt.
01:13:29 ◼ ► We trained our models on your data, on your songs, on your whatever. And by the way, we have no idea if these companies actually paid for all the songs. Let's just assume they did.
01:13:38 ◼ ► They bought all the songs from Sony Music, Warner Records or whatever, or they paid for a training service. They got all the songs, they trained their model, and then they're charging people to use their model.
01:13:48 ◼ ► Just like the image processing, I've always thought that if you have a business that would not be able to exist without content from somebody that you did not pay anything for, that is very different than, oh, we trained an AI model for research purposes, or we trained it for some purpose that is not literally making money off of you.
01:14:11 ◼ ► This particular case is like, okay, not just that they're making money, but the thing they're providing is "not transformative." They keep using that word because that's one of the tests for fair use. Is the work transformative?
01:14:22 ◼ ► Have they taken the thing that existed and made something new out of it? They'll argue in court about whether it is or isn't transformative.
01:14:30 ◼ ► And also, is it a substitute? This is another one of the fair use tests. Is it a substitute for the product? Is someone not going to buy a Drake album because fake Drake sounds just as good and they just listen to fake Drake?
01:14:44 ◼ ► Is it a substitute for it? Doesn't mean does this sound exactly like it. That's a whole other sad area of law of like, does song A sound too much like song B and they have to pay them whatever when they're all made by humans?
01:14:54 ◼ ► This is like, would someone pay for this instead of paying for this? Is one a substitute for the other? And that's what they'll be duking it out about.
01:15:05 ◼ ► But I think at its root, it is sort of like, where does the value of this company come from? Every company has to take inputs from somewhere.
01:15:16 ◼ ► They manufacture something and they sell it to you. Or they have a service, they wrote the software for it, they pay someone to run the servers and they sell it. There's sort of a value chain there.
01:15:24 ◼ ► And a lot of these companies are like, we would make more money if we don't have to pay for the things that make our product valuable.
01:15:32 ◼ ► So we don't want to have to license all the music in the world, but we do want to train an AI model on all the music in the world so that we can make songs that sound as good as all the music in the world, but we don't want to have to pay for any of that.
01:15:44 ◼ ► And that seems to be not a good idea from my perspective. And there's different ways you can look at this. Moral, ethical, legal. I think one of the frameworks that I'm falling back on a lot is practical.
01:16:00 ◼ ► For any given thing, say, if we allowed this to happen, would it produce a viable, sustainable ecosystem? Would it produce a market for products and services? Would it be a rising tide that lifts all boats? Or would it burn the forest to the ground and leave one tree left in the middle?
01:16:22 ◼ ► That practical approach, people like to jump on it, like we talked about before with Viticci and MacStories and everything, they want to go to the moral and ethical thing. They're stealing from us. It's our stuff. They have no right.
01:16:33 ◼ ► And even when I was saying before, they don't want to pay for this stuff, but they want to make money off of it or whatever. But practically speaking, and this is not the way the law works, but this is the way I think about it, practically speaking, I'm always asking myself, if this is allowed to fly, what does this look like? Fast forward this. Is this viable?
01:16:50 ◼ ► If everyone's listening to fake Drake, is the next Drake not able to make any money? Does making music as a human being become an unviable business, and all this is just an increasingly gray soup of AI-generated stuff that loops in on itself over and over again?
01:17:07 ◼ ► We have the same thing with publishing on the web. Does Google destroy the entire web because no one needs to go to websites anymore and they just go to Google? Unfortunately, when these cases go to court, no one is thinking that. That's not how the law works again. The law is going to be, is this fair use? Does Congress pass new laws related to this or whatever?
01:17:27 ◼ ► But what I really hope is that the outcome of all these things and the thing I'm always rooting for is, can we get to a point where we have an ecosystem that is sustainable?
01:17:37 ◼ ► Which means it's probably not whatever they're suing for; I think they want $150,000 for every song or something. That is not a sustainable solution. You can't train an AI model if you pay $150,000 for each song that you trained it on, because you need basically all the songs in the world.
01:17:51 ◼ ► That's a big number. That's stupid. We do want AIs that can make little songs. I think that is a useful thing to have. So we need to find a way where we can have that but also still have music artists who can make money making actual music, setting aside the fact that the labels take all the money and the artists get barely anything anyway.
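To put a rough figure on why the per-song number can't be a licensing model, here's the back-of-envelope arithmetic. The $150,000 is the statutory-damages ceiling per work mentioned above; the training-set size is an assumed round number for illustration, not a figure from the lawsuit.

```python
# Back-of-envelope math, not a legal estimate.
PER_SONG = 150_000       # USD: statutory-damages ceiling per infringed work
CATALOG = 100_000_000    # assumed number of recordings in a training set

total = PER_SONG * CATALOG
print(f"${total:,}")     # $15,000,000,000,000 (fifteen trillion dollars)
```

At roughly the size of the entire world economy's annual output, the number makes the point: a per-song statutory price can kill the product, but it can't license it.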
01:18:22 ◼ ► I really hope that the outcome of this is some kind of situation where there's something sustainable. I keep using ecosystem but it's like you have to have enough water, the whole water cycle, this animal eats that animal, it dies, it fertilizes the plant.
01:18:39 ◼ ► The whole sustainable ecosystem where everything works and it goes all around in a circle and everything is healthy and there's growth but not too much and not too cancerous and it's not like everything is replaced by a monoculture and only one company is left standing and all that good stuff.
01:18:53 ◼ ► But right now the technology is advancing in a way that if we don't do something about it, the individual parties involved are not motivated to make a sustainable ecosystem, let's say. That's kind of what the DMA is about in the EU and these AI companies definitely are not motivated to try to make sure they have a sustainable ecosystem.
01:19:14 ◼ ► They just want to make money and if they can do it by taking the world's music and selling the ability for you to make songs that sound like it without paying anything to the music that they ingested, they're going to try to do that.
01:19:24 ◼ ► It's all just so weird and gross, and it's hard because I don't want to be the old man who shakes his fist at clouds, right? And it seems like AI, for all the good and bad associated with it, is a thing. It's certainly the hot thing right now, but I get the feeling that, where blockchain and Bitcoin and all that sort of stuff
01:19:53 ◼ ► was very trendy but anyone with a couple of brain cells to rub together would say, "Eh, that's all going to fade" or "It's certainly not going to work the way it is today."
01:20:03 ◼ ► I think there's a little of that here but I get the feeling that this is going to stick around for a lot longer and I think that there needs to be some wrangling done, some legal wrangling.
01:20:15 ◼ ► I get the move fast and break things mentality of these startups that are doing all this but it just feels kind of wrong. Again, I'm not nearly as bothered by it as some of our peers are but it just doesn't feel right.
01:20:31 ◼ ► It definitely doesn't feel sustainable, practically speaking. Regardless of how you feel about right or wrong, if we just let them do this and these models get better and better and produce more and more acceptable content, then, regardless of how this lawsuit with the record labels ends up, you can see that it is taking value away from human beings making music and pushing that value to models making music.
01:20:57 ◼ ► But those models are absolutely worthless without that human-generated music, at least initially. Again, maybe in the future there will be models trained entirely on model-generated music but then you have to trace it back to where that model got trained.
01:21:09 ◼ ► In the end, these models are trained on human-created stuff and there may not be enough officially licensed human-created stuff to train them on at this point.
01:21:19 ◼ ► I think we want these tools. They are useful for doing things. Even if you think, "Oh, they make terrible music," sometimes people need terrible music. Sometimes people just need a little jingle. They can describe it. They want it to be spit out.
01:21:37 ◼ ► They do useful things. Unlike cryptocurrency, which does a very, very small number of useful things and is not general-purpose, the AI models do tons of useful things. Apple is building a bunch into their operating systems. People use them all the time. They do tons of useful things.
01:21:53 ◼ ► We should find a way for them to do those things without destroying the ecosystem. I think we can find a way for that to happen. If you look at the awful situation with Spotify and record labels and music artists, that's a pretty bad version of this.
01:22:10 ◼ ► And yet still it is better than Spotify saying, "We're going to stream all these songs for free and not pay anybody," right? I wish I could find that article for the notes. I'll try to look it up.
01:22:19 ◼ ► But even that is better than the current situation with AI, which is like, "We're just going to take it all for free. Come sue us." And they say, "Okay, we are suing you," and they'll battle it out in court.
01:22:30 ◼ ► Either way this decision goes with the music thing, it could go bad in both directions. Because if they say, "Oh, you're totally copying this music. All AI training is illegal." That's terrible. That's bad. We don't want that, right?
01:22:41 ◼ ► And if they say, "No, it's fine. It's transformative. You can take anything you want for free." That's also bad. So both extremes of the potential decision that a court could make based on this lawsuit are really bad for all of us for the future.
01:22:54 ◼ ► So that's why I hope we find some kind of middle ground. Like, again, with Spotify, they came up with a licensing scheme where they can say, "We want to stream your entire catalog of music. Can we figure out a way to exchange money where you will allow that to happen legally?"
01:23:09 ◼ ► And they came up with something. It's not a great system they came up with. Again, if I can find that article, you can read it and see how bad it is. But they didn't just take it all for free. And the music labels didn't say, "Okay, but every time someone streams one of these songs, it's 150 grand."
01:23:23 ◼ ► That's also not sustainable. So obviously, they're staking out positions in these lawsuits, and they're trying to put these companies out of business with big fees or whatever. But yeah, it's scary. It's scary when titans clash. And I do worry about what the results of these cases are going to be.
01:23:39 ◼ ► But I think we either have to have these cases, or, and I know this is ridiculous in our country, we have to make new laws to address this specific situation, which is different enough from all the things that have come before it that we should have new laws to address it.
01:23:52 ◼ ► And it would be better if those laws weren't created by court decisions. But our ability and track record for creating technology-related laws for new technology is not great in this country. So there's that.
01:24:06 ◼ ► Yeah, and then it continues because Figma, a popular, I don't know how to describe this, like a user interface generation tool. Yeah, design tool, thank you. They pulled their AI tool after criticism that it blatantly ripped off Apple's weather app. So this is The Verge by Jay Peters.
01:24:26 ◼ ► Figma's new tool, Make Designs, lets users quickly mock up apps using generative AI. Now it's been pulled after the tool drafted designs that looked strikingly similar to Apple's iOS weather app. In a Tuesday interview with Figma CTO Kris Rasmussen, I asked him point blank if Make Designs was trained on Apple's app designs. His response? He couldn't say for sure.
01:24:45 ◼ ► Figma was not responsible for training the AI models it used at all. Who knows who trained it? It's just our model. We don't know who trained it. Does anyone know who trained it? We just found it on our doorstep and it's just a model.
01:24:58 ◼ ► Yep, will the real trainer please stand up? "We did no training as part of the generative AI features," Rasmussen said. The features are "powered by off-the-shelf models and a bespoke design system that we commissioned," which appears to be the underlying issue. So if you commissioned it, then you should know. We had someone else do it and they gave it to us and we just took it, and we're like, we didn't ask too many questions. It's fine. Whatever you got, just give it. It's probably fine.
01:25:22 ◼ ► The key AI models that power Make Designs are OpenAI's GPT-4o and Amazon's Titan Image Generator G1, according to Rasmussen. If it's true that Figma didn't train its AI tools, but they're spitting out Apple app lookalikes anyway, that could suggest that OpenAI's or Amazon's models were trained on Apple's designs.