00:06:59
◼►
This would then replace the Siri that we have. You know, so like the Siri we
00:07:07
◼►
have now, plus any improvements that we get, this system will just straight up replace that.
00:07:15
◼►
It also is referenced in this article that to do this, Apple may need a model powerful enough that Google is going to have to run it themselves for them, which is fascinating.
00:07:30
◼►
The system that is being referred to will be integrated with Apple's apps.
00:07:35
◼►
It can analyze and control and open windows.
00:07:41
◼►
Anything that is in one of Apple's apps, it should be able to go and control.
00:07:45
◼►
There's a lot of conversation about, like, it being a chatbot, but there's also, in Mark's article, there seems to be some arguments inside of Apple as to whether they would actually have it as an app that you could use.
00:07:59
◼►
Like, it may have all of the, or the majority of the features that a chatbot like ChatGPT, Claude, would have, but it may not be like a dedicated icon on a home screen that you launch.
00:08:29
◼►
Two completely different experiences coming out this year.
00:08:32
◼►
So, in the spring, basically Apple making good on their promise from 2024 for a more personalized, contextually aware Siri based on voice mode.
00:08:45
◼►
So, a voice-only Siri that runs on a version of Google Gemini running on Apple Silicon with private cloud compute that is going to be able to answer world knowledge questions.
00:08:59
◼►
So, search the web, summarize, use citations.
00:09:01
◼►
But also, take data from your apps and perform actions inside apps.
00:09:09
◼►
This is going to be a more powerful voice-based Siri with on-screen awareness, personal context, performing actions inside apps, running on Apple's cloud.
00:09:23
◼►
That it's not going to be a chatbot with persistent chats, persistent conversations, or memory, right?
00:09:31
◼►
And then Mark is saying, later in 2026, and this is going to be their big announcement at WWDC, they're going to have a proper chatbot.
00:09:39
◼►
You will be able to ask text questions, have persistent conversations, upload file attachments, so images, maybe documents, and have your regular ChatGPT-like experience.
00:09:52
◼►
And this is going to be based on a more powerful version of Gemini.
00:09:56
◼►
And Apple is considering running this on Google's TPUs.
00:10:00
◼►
So, the TPUs are the tensor processing units.
00:10:02
◼►
This is the Google chip made for AI, the latest gen.
00:13:24
◼►
Like, it's creating an isolated space on the Google TPUs.
00:13:28
◼►
And I'm going to read you from the announcement.
00:13:30
◼►
Private AI Compute is a secure, fortified space for processing your data that keeps your data isolated and private to you.
00:13:36
◼►
It processes the same type of sensitive information you might expect to be processed on device.
00:13:42
◼►
Within its trusted boundary, your personal information, unique insights, and how you use them are protected by an extra layer of security and privacy in addition to our existing AI safeguards.
00:14:35
◼►
Like, now, if this was a brand new thing, like, completely new, it maybe wouldn't be as weird.
00:14:43
◼►
But, like, you think over time, you know, they're like, we're taking more and more out of the cloud to be more on device because that's even more private.
00:14:51
◼►
You know, like, even from us, that's more private.
00:14:53
◼►
And then it's like, we've gone to these great lengths of explaining and creating this technology called private cloud compute.
00:14:59
◼►
And it's going to, you know, it happens on our servers, but we can never see it.
00:15:03
◼►
And then, like, well, now it's just Google's going to do it.
00:15:06
◼►
And, like, even though it sounds like it's the same kind of system, it is still weird.
00:15:13
◼►
And it's also weird to imagine them doing something like this when we're also, like, trying to believe or be led to believe that Apple is working on their own technology.
00:15:25
◼►
You know, like, even in Gurman's article, he's saying, like, oh, this system is designed to be replaced.
00:15:30
◼►
But, like, it just feels like it gets harder and harder to do that if you're doing less and less of it.
00:15:37
◼►
Another area where I think Apple was caught flat-footed, so to speak.
00:15:43
◼►
Why do you think that the major companies in AI right now, both NVIDIA and OpenAI, are investing in startups that run large language models, not on GPUs?
00:16:01
◼►
So in case you're not familiar with this, NVIDIA took a very weird deal that they announced just before the end of the year, I believe.
00:16:11
◼►
NVIDIA took a majority stake or something like that in this startup called Groq, not the Elon Musk thing.
00:16:20
◼►
Groq, with a Q, is an American company that developed their own chip for running inference.
00:16:29
◼►
So running large language models with a completely different architecture from your traditional GPU.
00:16:36
◼►
Right now, most of the AI industry is predicated upon NVIDIA.
00:16:44
◼►
And NVIDIA making GPUs to run AI models on.
00:16:50
◼►
GPUs became popular for running large language models because they are more powerful than CPUs, obviously, and they have more addressable space.
00:17:00
◼►
And, you know, certain instructions that you want to run when it comes to the parameters and the layers of a large language model, they perform better on a GPU.
00:17:08
◼►
But still, GPUs are not ideal for running extremely large language models.
00:17:14
◼►
That's why NVIDIA recognized this and invested in Groq.
00:17:19
◼►
At the same time, OpenAI, just a couple of weeks ago, announced we are teaming up with Cerebras, another American company that created this very large, physically large chip.
00:17:32
◼►
That is a custom-created, wafer-scale chip for running AI models on.
00:17:39
◼►
All this to say, Google saw this coming hundreds of miles away, years ago, when they created the TPUs.
00:17:48
◼►
So, specific chips that are designed for AI.
00:17:51
◼►
Google recognized many, many years ago that you couldn't possibly expect that, 10 years from now, you would still be bound to the structure of a GPU to run a large language model.
00:18:05
◼►
That's why they invested in their proprietary silicon just for AI.
00:18:09
◼►
Now, NVIDIA, which is, you know, NVIDIA has everything to lose at this point, right?
00:18:14
◼►
And they're also saying, well, we're going to take a majority stake in Grok.
00:18:17
◼►
Because they know that the GPU architecture has probably hit a threshold when it comes to running AI.
00:18:26
◼►
It's funny, really, if you don't mind me, just a quick interjection on this.
00:18:30
◼►
Like, I remember the time, as I'm sure you do, when processing moved from CPU to GPU, because it was more efficient and more powerful doing certain tasks.
00:19:17
◼►
They were not able to deliver on that.
00:19:19
◼►
They are two years late to their voice-only assistant features that they announced two years ago.
00:19:26
◼►
And now they're realizing that they need to do a chatbot because people like using them because they've made plenty of progress in the span of two years.
00:19:33
◼►
And they're realizing, oh, well, potentially we're not going to be able to run this chatbot with this version of Gemini 3.
00:19:43
◼►
It's potentially not going to run on Apple Silicon because we don't have the infrastructure.
00:19:46
◼►
And so I think this move from the GPU to dedicated hardware for...
00:19:58
◼►
Specifically designed, to very much simplify, for the three stages of AI, which are training, and inference, which means when the model runs.
00:20:12
◼►
And even arguably you can consider the post-training stuff that comes before you ship an LLM to consumers.
00:22:33
◼►
You can go to the EU and say, look, we're going to do a deal with Mistral and we're going to have Mistral in Apple Intelligence.
00:22:54
◼►
And also potentially, by the way, something that Apple can monetize over time.
00:23:00
◼►
I mean, at this point, if you are becoming a model provider, I could realistically imagine a scenario where you have your Siri AI, whatever the product is going to be called.
00:23:12
◼►
And there's going to be a subscription for Siri AI plus.
00:23:14
◼►
And it's going to give you reasoning models.
00:23:16
◼►
It's going to give you extended thinking times, more features.
00:23:20
◼►
At least it goes into iCloud plus, right?
00:23:44
◼►
They use Microsoft and Amazon, I think, right?
00:23:46
◼►
They use Azure and AWS to serve some of their files.
00:23:49
◼►
I don't know exactly how it breaks down.
00:23:52
◼►
But, like, you know, I'm sure that they still make the exact same privacy claims, because they have very specific ways that they do things, right?
00:24:01
◼►
Maybe there is a scenario where they can still feel comfortable to call it private cloud compute, but they don't host it.
00:24:11
◼►
Maybe I'm overthinking it and it just won't matter.
00:24:14
◼►
The other interesting point is that I can also imagine a scenario in which this two-phase rollout is actually based on two different Gemini models.
00:24:27
◼►
Based off what we're seeing right now, we have Gemini 3 Flash and Gemini 3 Pro.
00:24:34
◼►
3 Pro is the smarter, slower, but more capable model.
00:24:39
◼►
Gemini 3 Flash, based on the same Gemini 3 family of models, but more efficient, doesn't think as long.
00:24:48
◼►
All of the Gemini 3 models are reasoning models.
00:24:50
◼►
That's something that Google is doing behind the scenes.
00:24:52
◼►
But Gemini 3 Flash is faster and more efficient.
00:26:32
◼►
But I wouldn't be shocked to see this kind of scenario where voice mode means efficiency: still intelligent, but faster and more efficient.
00:26:41
◼►
And chatbot interaction means a little bit slower, but more intelligent and more capable.
00:28:25
◼►
But you know what I think is really smart here?
00:28:27
◼►
Is that there's a potentially really funny scenario on the horizon where you get a much more integrated, Gemini-based experience on Apple platforms than you do on Android.
00:28:47
◼►
It's not a lot of Android phones, you know?
00:28:51
◼►
Because, like, Gemini 3 is capable of calling external tools and external integrations.
00:28:58
◼►
But the ecosystem, like, the Gemini app experience, is absolutely terrible right now.
00:29:05
◼►
They don't expose integrations with third-party connectors, save for maybe literally two of them.
00:29:14
◼►
Right now, I think it's WhatsApp and Spotify. They don't have an ecosystem of integrations.
00:29:18
◼►
And Apple has the potential to come in and say, well, we're taking this model from Google.
00:29:22
◼►
We're opening it up to the vibrant ecosystem of developers on Apple platforms.
00:29:27
◼►
And it would be so funny if you get a much more integrated Gemini with your operating system on multiple platforms, from the desktop to, I don't know, a phone, right, that works with your apps, works with your windows, works with the file system.
00:29:43
◼►
And meanwhile, on Android, you get none of that.
00:30:25
◼►
Trying to grep through them and line them up with traces and dashboards just to understand one issue, that ain't the move.
00:30:32
◼►
Sentry, S-E-N-T-R-Y, has logs, but they've made them usable.
00:30:36
◼►
Sentry's logs are trace-connected and structured, so you can follow the request flow and filter by whatever matters.
00:30:42
◼►
And because Sentry surfaces the context right where you're debugging, the trace, relevant logs, error, and even the session replay all land in one timeline.
00:30:51
◼►
No need for timestamp matching or tool hopping.
00:30:54
◼►
Front-end, mobile, back-end, whatever you're debugging, Sentry gives you the context that you need so you can fix the problem and move on.
00:31:02
◼►
More than 4.5 million developers use Sentry, including teams like Anthropic and Disney+.
00:31:09
◼►
Try it for free today at S-E-N-T-R-Y.io and tell them that we sent you.
00:31:14
◼►
They have a free dev plan, and listeners of this show can use the code CONNECTED26, and they'll get $100 in Sentry credits.
00:31:22
◼►
That's S-E-N-T-R-Y.io and the code CONNECTED26.
00:31:26
◼►
Our thanks to Sentry for their support of this show and Relay.
00:31:31
◼►
So we were talking about, you know, I think an idea of what this year is going to be, and I think this year is going to be potentially supremely weird, but also fascinating to talk about, which is, you know, I personally can't ask for anything more than that in my career.
00:31:46
◼►
So I wanted to get kind of a vibe check with you, and we can kind of go through this together, about like how we're feeling about Apple for this year in 2026.
00:32:40
◼►
But I think Apple is now coming out from this slumber, so to speak, you know, and with the new HomePod or home product and the AI features and the foldable and maybe new accessories.
00:32:55
◼►
I don't know, it just feels like they, and obviously, you know, maybe the changing of the leadership, like it's a time for change.
00:33:03
◼►
And as the person who, you know, embraces change, that's inherently exciting for me to see Apple get out of their comfort zone a little bit, you know, try new stuff.
00:33:15
◼►
So yes, that is, my vibe is excited right now.
00:33:18
◼►
I mean, from, I agree with you, like, I think WWDC 2024 was like a real problem.
00:33:28
◼►
You know, like, I think for me, that was a marker, which was not great.
00:33:35
◼►
Like, I was thinking about this yesterday, because I was thinking about WWDC and how I want to think about covering it this year.
00:33:43
◼►
And I remember that in 2024, obviously, I was at home, and Jason was busy, he had like briefings and meetings and stuff.
00:33:56
◼►
So we ended up recording the next day.
00:33:58
◼►
And I was always so thankful for that, because I was completely shell-shocked after WWDC 24 had ended.
00:34:06
◼►
Because we all had really different expectations for Apple's attempts at AI than what we got.
00:34:12
◼►
You know, like, I was so disappointed with the image generation stuff.
00:34:18
◼►
And still, I kind of can't believe that they still continue to ship Image Playground.
00:34:27
◼►
Just, I think that was just a real mistake, that I really do believe they regret, but they're kind of stuck in it now.
00:34:34
◼►
And just like, you know, having that time to reflect on it was helpful for me, because I wouldn't have known what to say if me and him were going to record straight away.
00:34:45
◼►
And so, like, I was thinking back to that moment, and just like, even the things that, like, the things that were the saving grace of that presentation don't exist, nearly two years later.
00:34:56
◼►
And I really feel like that was a fork in the road for them, or kind of like, you know, like a good timeline, bad timeline kind of scenario.
00:35:09
◼►
And I think there are some aspects of Apple that slipped into the bad timeline from that point.
00:35:14
◼►
Because I am, I definitely entertain the conspiracy theory that Liquid Glass came out earlier than it was supposed to.
00:35:23
◼►
Like, I believe that they were working on a redesign and had been working on it for a while.
00:35:29
◼►
But I can absolutely understand a scenario where they were like, boy, we should do that this year.
00:35:37
◼►
I don't know if it happened, but I can imagine a scenario very easily where that got moved up a little bit.
00:35:42
◼►
Because then that was like a whole thing, you know, that like, just dealing with this past year.
00:35:46
◼►
So these last two years on the software side, from the WWDC presentation, it's just not been a good time.
00:35:53
◼►
Like, it's just been an overall bad time.
00:35:56
◼►
And I am very hopeful for WWDC 26, that we get things to be tidied up in general across the operating system, and for them to have a cohesive, sensible, thought through, and useful AI strategy.
00:36:13
◼►
Where there's a ton of stuff happening on device, and the stuff that isn't is happening on good architecture in the cloud that has privacy, and is actually useful to me as a consumer.
00:36:27
◼►
Because I would love, you know, like, I would love to get to a scenario where I don't ever think about what app am I using, where should I be doing this, and that like, I just talk to my phone and it's just handled for me.
00:36:43
◼►
Like, it sounds like a great scenario.
00:36:45
◼►
And in theory, they should be able to do it.
00:36:48
◼►
I still have some, like, a lot of question marks. You know, you referenced developers.
00:36:54
◼►
I don't know how developers are going to take to any of this, because I just think, in general, they all hate Apple.
00:37:01
◼►
And so, you know, but maybe if it's good enough, you know, like, maybe if the features are good enough for developers,
00:37:11
◼►
like, there is a scenario where they may want to jump on board and do some stuff with it, you know, like, I've actually been, I would say, heartened to see how many indie developers are using even the current version of the on-device models to do interesting things in their apps.
00:37:29
◼►
And maybe if the technology is good enough, it could incentivize more of them to jump on board and integrate, and that could be really interesting.
00:37:39
◼►
But even outside of that, like, you know, you mentioned a bunch of things, but the iPhone lineup, again, could be super strong again this year, which would be amazing, right?
00:37:51
◼►
If we get, like, if we get, like, a weird folding phone and some really good, like, you know, like, all the rumors are pointing towards a semi-redesign for the pro phone with, you know, in-display face ID.
00:38:06
◼►
That's just like a, I don't even know if it's just something I want, but it'll be interesting.
00:38:10
◼►
And then maybe we get a bunch of laptops this year, right?
00:38:15
◼►
The cheap MacBook, the, I mean, I'm very hopeful that we get the OLED touchscreen MacBook Pro this year.
00:38:31
◼►
And then the, you know, the looming executive change, like, so, yeah, I am both hopeful and excited for 2026, I think, in a way that maybe I didn't feel at the beginning of 2025.
00:38:51
◼►
I think we could be in for an interesting year as well as a year that has lots of good products from a software and hardware perspective, you know?
00:38:59
◼►
I agree, I agree, and I think the tide will shift when it comes also to the developer narrative around Apple.
00:39:14
◼►
Because what I think is happening now, I'm obviously a very online person.
00:39:24
◼►
And I spend time maybe more than I should on all the social media platforms.
00:39:31
◼►
Something that I think is very unfortunate is that a lot of people are still using X. There are literally hundreds of interesting people who never stopped posting on X.
00:39:47
◼►
They never even considered other platforms.
00:39:50
◼►
But what I think is so fascinating is that, so, set aside for a second the many, many, many political problems with X.
00:40:00
◼►
Just take a look at the people posting there, right?
00:40:08
◼►
But as a result of that, we, I think in the Apple community, we have become, many of us, very insular in terms of the takes and the opinions that we read about Apple.
00:40:18
◼►
If you just spend time on Mastodon, you would think that Apple is absolutely doomed.
00:40:23
◼►
That they're doing illegal things, that they are a corrupted company.
00:40:29
◼►
And I'm not saying, by the way, that Apple, you know, has not done questionable and some despicable things.
00:43:30
◼►
And, like, it is an example of an app which doesn't look or act like really any podcast app that you've seen before, but is really nice looking and is doing some really interesting stuff.
00:44:19
◼►
I think that I am in a place in my life where I am paying less attention to social media takes than I ever have before.
00:44:32
◼►
Like, I think I am just trying to develop more of my own opinions about things, where I think in the past I have been, like many of us, susceptible to more external input.
00:44:51
◼►
And I'm not saying, like, one is right or the other.
00:44:53
◼►
I just, like, I'm, I'm a little bit tired of social media in general.
00:44:58
◼►
And so, like, I'm trying to think more and spend more time reading and trying to find things a little bit more objective and less, like, takey.
00:45:12
◼►
I just don't use social media as much as I used to, which it's not a, I'm not saying anyone should live any particular way.
00:45:22
◼►
I just, I'm just drawn to it less and less over time.
00:45:24
◼►
But, yeah, I, so the people that you're seeing, this, the newer generation of builders, do you think that they're going to want to build on Apple's platforms?
00:46:16
◼►
But I think there's still, and I mean, obviously, like, who knows what's going to happen with vibe coding.
00:46:25
◼►
And there's also that to consider, right?
00:46:28
◼►
That potentially, you know, in the not so distant future, potentially anybody could just become a quote-unquote developer just by prompting an app into existence.
00:46:44
◼►
And if you think that's ridiculous, not only is it happening now, there are already some solutions to turn your prompt into something that's on the App Store.
00:46:55
◼►
But I don't think that's, here's another potential hot take.
00:46:59
◼►
I don't think that's necessarily a bad thing.
00:47:03
◼►
And in the history of, of human progress, this has always happened.
00:47:10
◼►
And in more recent memory, you know, say that I wanted to start MacStories in 1995 instead of 2009.
00:47:22
◼►
I would have had to know HTML or the basics of...
00:48:34
◼►
Yeah, I've been spending some time thinking about this too recently.
00:48:37
◼►
And I think that, you know, I know that there might be a lot of people, including you, that may be like, ha ha, poor naive Mikey doesn't know.
00:48:48
◼►
But like, I see what people are doing with these tools and it is fascinating, you know, people can just build apps.
00:48:56
◼►
But I don't think, I struggle to imagine this in a world where this is what everybody does, right?
00:49:05
◼►
Like, I think it would be a nerdy thing that people do, but I struggle to imagine the vast majority of users or even a majority of users just being like, I'll just, I'll ask the AI to make me an app, you know?
00:49:21
◼►
And so that's one part, like, I think it will happen and there will be a lot of people that would do it, but I don't think it's going to be a large portion of people.
00:49:31
◼►
But then we get into the, but the software that these people are using, who made it, right?
00:49:37
◼►
Like the apps that are in the app store, who coded them?
00:49:42
◼►
And I am coming around to developers with these tools.
00:49:49
◼►
Like, I struggle to imagine a world, at the moment, where, say, me, a person who does not know development, can start shipping apps just because I used Claude Code.
00:50:10
◼►
Like, I know you have been doing this stuff, but you, you have a grasp of development.
00:50:19
◼►
More than the majority, I would say, of people interested in technology.
00:50:25
◼►
Like, I know you are not like a developer, but you are, you have built knowledge, right?
00:50:49
◼►
And now we're saying, yeah, you can do it, but like, it's accelerating.
00:50:56
◼►
But, you know, I think of it more of like a, what Photoshop does for design, you know, or like what computers have done for all jobs, right?
00:51:12
◼►
Where it's like, it's not like we don't have accountants anymore, right?
00:51:16
◼►
But, accountants can be much more productive than they were before, because now they have, instead of trying to do everything by hand on paper, they use Excel.
00:51:29
◼►
And there are other skills that accountants can learn to further round out their role, to be more useful to their customers.
00:51:39
◼►
And like, I think we may start moving more towards that, where it's like, the role of a developer, let's just focus on indie developers, right?
00:51:49
◼►
The role of an indie developer is, can you create a good idea that you can craft with the LLM to make a good product, as opposed to, can you write every line of code?
00:52:53
◼►
So that anybody can get started and produce something at the base level, but also they're giving the experienced folks much, much higher-level tasks to be able to perform.
00:53:07
◼►
Like, they're not a sponsor of this episode.
00:53:10
◼►
Let's just, I just want Squarespace, right?
00:55:06
◼►
I feel, and I think something that we can all agree upon is I think we're all confused right now.
00:55:13
◼►
We're all just, no matter where you look, even the most positive person about AI and the most negative person about AI, we're all just confused because we don't know what's happening.
00:55:21
◼►
And these things are changing so quickly.
00:55:23
◼►
So we can all agree upon the fact that we are confused right now, because it's happening.
00:55:27
◼►
Everything is changing from under our feet so quickly and it's creating new possibilities and it's displacing people, displacing jobs, but also creating new ones.
00:55:36
◼►
And also, lots of people are lying as well, which adds to the confusion.
00:55:54
◼►
This episode is brought to you by Ecamm.
00:55:57
◼►
If you're a Mac user who creates videos or podcasts, you need Ecamm, because Ecamm Live is the all-in-one studio built exclusively for the Mac.
00:56:05
◼►
It looks, feels, and performs like all good native pro apps should.
00:56:10
◼►
Whether you're live streaming, recording a podcast, or producing training videos, Ecamm gives you broadcast-level control with drag-and-drop simplicity.
00:56:18
◼►
You can switch cameras, share your screen, cue overlays and control audio, all without ever leaving your Mac.
00:56:25
◼►
With Ecamm, you can brand your show with titles, graphics, and lower thirds.
00:56:30
◼►
You can also pull in guests via interview mode or record multi-track audio for perfect post-production.
00:56:36
◼►
If you're into automation, Ecamm works beautifully with tools and apps like Stream Deck and Loopback, as well as many Mac tools you already know and love.
00:56:44
◼►
That, for me, is one of the absolute key things about why Ecamm is great because it is an actual native pro Mac app so you can integrate with other software and services that you're using on your Mac, as well as feeling at home.
00:57:00
◼►
You don't feel like you're using a piece of open source software that's been weirdly skinned.
00:57:05
◼►
Like you feel like you're using something from people who know how to make great Mac apps.
00:57:09
◼►
You can upgrade to Pro and unlock Ecamm for Zoom that lets you feed your polished setup straight into Zoom meetings or webinars.
00:57:17
◼►
You can share Zoom comments directly on screen and even capture each participant's audio and video separately for easy post-production work.
00:57:26
◼►
Jason Snell on Upgrade has spoken a bunch of times about using this for recording podcasts and Dungeons & Dragons stuff that they do for video versions and also getting good audio out of it.
00:57:38
◼►
To get 15% off, go to ecamm.com slash connected.
00:57:42
◼►
That's E-C-A-M-M dot com slash connected and use the code connected15.
00:57:47
◼►
That's 15% off at ecamm.com slash connected with the code connected15.
00:57:53
◼►
Our thanks to Ecamm for their support of this show and all of Relay.
00:57:58
◼►
You wrote one of the classic big Federico articles, like, a couple of days ago, where it's, like, not a review, but it's like you've got a bunch of stuff to say about something that you're using.
00:58:10
◼►
And it is essentially a piece of open source software that, I assume, you install on your Mac, and that then allows you to talk to it from a messaging app. You're using Telegram.
00:59:27
◼►
So the idea is that you can talk to an assistant powered by an LLM.
00:59:32
◼►
You get to choose which model provider you want to use, and you get to choose the messaging app of your choice to talk to it.
00:59:40
◼►
I believe they're working on a native app experience for iOS and the Mac.
00:59:46
◼►
But right now, you choose one of the existing messaging apps.
00:59:49
◼►
And they have these, they're called gateway connectors for Telegram, WhatsApp, iMessage, even Signal, Discord, Slack, all the messaging services.
01:00:00
◼►
So, and you may say, well, what's the big deal?
01:00:02
◼►
Like, how is it different from ChatGPT or Claude?
01:00:05
◼►
The difference here is that it's running on your computer, and, because of that, given the right permission authorizations, it can do a lot more. In fact, today a new version of it came out with a permission system similar to the one seen in Shortcuts, where you can allow or always allow commands, stuff like that.
01:00:27
◼►
But because it's running on your computer, it gets access to your computer.
01:00:34
◼►
And that means that it can work with all of your documents, all of your files, can work with your entire file system, and it can run terminal commands, and it can basically improve itself.
01:00:46
◼►
If a feature that you want doesn't exist, you can just ask Clawdbot to give itself a feature that it doesn't have by default.
01:01:00
◼►
Well, first of all, it's going to look into the community-made skills and integrations.
01:01:06
◼►
Peter himself creates so many open source projects on GitHub, stuff like integration with Google, integration with Sonos speakers, with Spotify, with you name it.
01:01:18
◼►
Peter is an extremely prolific open source contributor and programmer.
01:01:24
◼►
But then, if it doesn't find anything popular that already exists, or if it doesn't find anything in the community hub, it'll just say, well, if you want, I can write my own feature for that.
01:01:55
◼►
And all it asked me to do, it literally said this, like... Clawdbot has a bit of a funny personality.
01:02:02
◼►
That's part of the system prompt that Peter gave it.
01:02:05
◼►
It said, I want you to get off your couch and go press the button on your Philips Hue bridge, and when you do that, I will enable the integration.
01:02:11
◼►
And I was like, wait, am I really talking to an AI that says these things?
01:02:34
◼►
So I want to, you know, I've been using Claude Code, obviously, a bunch.
01:02:39
◼►
And I've also been using Codex, which is the Claude Code of OpenAI, essentially.
01:02:44
◼►
Same terminal thing, but made by OpenAI.
01:02:48
◼►
And it turns out that those two services, they're not apps, but they save their conversations, the conversation that you have with them locally on your file system.
01:03:01
◼►
The problem is that they save a complete log of their conversation in JSON documents.
01:03:10
◼►
Like, if you open them up and you preview them with QuickLook or something, it's just a wall of text.
01:03:16
◼►
Thousands and thousands of plain text characters.
01:03:19
◼►
And so I was like, okay, I wonder if I can build a little app for me that gives me a nice messaging-like UI to let me read my previous sessions.
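To make that concrete, here is a minimal sketch, in Python, of the kind of parsing such a viewer would involve, assuming the session logs are JSONL files where each line is a JSON object with role- and content-style fields. The field names and fallbacks are illustrative guesses, not the actual Claude Code or Codex schema, and this is not the tool Federico describes having built.

```python
#!/usr/bin/env python3
"""Sketch: render a JSONL agent-session log as a readable transcript.

Assumed (hypothetical) schema: one JSON object per line, with something
like {"role": ..., "content": ...} fields. Adjust to the real logs.
"""
import json
import sys
from pathlib import Path


def render_session(log_path: Path) -> None:
    for line in log_path.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines instead of crashing
        # Field names below are assumptions, not the documented format.
        role = entry.get("role") or entry.get("type") or "event"
        content = entry.get("content") or entry.get("text") or ""
        if isinstance(content, list):  # some logs nest content blocks
            content = " ".join(
                block.get("text", "") for block in content if isinstance(block, dict)
            )
        if not isinstance(content, str) or not content:
            continue
        print(f"[{role}] {content[:500]}")  # truncate long blocks for readability


if __name__ == "__main__":
    render_session(Path(sys.argv[1]))
```

Something like `python render_session.py path/to/session.jsonl` would then print the back-and-forth in order, which is roughly the "wall of JSON becomes a readable thread" idea being described.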
01:03:32
◼►
And so I talked to Clawdbot and I was like, hey, I know that this open source thing exists for parsing those logs.
01:03:40
◼►
Can you take that and modify it for me so that it's got a Liquid Glass design and it supports Codex in addition to Claude Code and has some features that the open source thing doesn't have?
01:03:54
◼►
Like, can you use that as a starting point and then modify it for me?
01:04:19
◼►
And so it went there and it looked and basically 10 minutes later, and I'm staring at it right now in Google Chrome.
01:04:25
◼►
It built a web app that is basically like a fancy little JavaScript-based web page that runs on my local network and that has two tabs for Claude and Codex and lets me open one.
01:04:39
◼►
And I can take a look at the entire conversation, my messages are blue, the assistant messages are purple, and I see the entire flow of the conversation presented as if it was, I don't know, like a Slack thread, much more readable UI.
01:04:51
◼►
I see every single bit of code, every single tool call in a UI that makes sense.
01:04:57
◼►
This thing didn't exist, and then it did.
01:05:00
◼►
And for example, you know, I can take a look at this conversation, and then I said, oh, this is great, but these conversations are too long.
01:05:08
◼►
Can you create an integration with Google Gemini so that I can summarize an entire session with Claude Code using Google Gemini as the summarizer?
01:05:19
◼►
Go into the macOS keychain and securely save your Google Gemini API key.
01:05:26
◼►
And once you did that, come back to me and say that you did it, and I will start using Gemini.
01:05:31
◼►
Five minutes later, it built a Gemini integration so that it can take a look at a session and summarize it for me.
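For a rough sense of the pattern being described here, reading a secret out of the macOS keychain and handing a long session log to Gemini for a summary, here is a hedged Python sketch. The keychain service and account names and the model name are placeholders, it relies on the macOS `security` command-line tool and the `google-generativeai` package, and it is obviously not Clawdbot's actual code.

```python
"""Sketch: summarize a session log with Gemini, reading the API key
from the macOS keychain. Service/account/model names are placeholders.
Requires: pip install google-generativeai
"""
import subprocess
import sys
from pathlib import Path

import google.generativeai as genai


def keychain_password(service: str, account: str) -> str:
    # `security find-generic-password -w` prints only the stored secret.
    return subprocess.run(
        ["security", "find-generic-password", "-s", service, "-a", account, "-w"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()


def summarize(log_path: Path) -> str:
    # "gemini-api" / "me" are made-up keychain names; use whatever you stored.
    genai.configure(api_key=keychain_password("gemini-api", "me"))
    # Any Gemini model you have access to; this one is just illustrative.
    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = "Summarize this coding session log in a few bullet points:\n\n"
    # Truncate very large logs rather than sending everything.
    return model.generate_content(prompt + log_path.read_text(encoding="utf-8")[:100_000]).text


if __name__ == "__main__":
    print(summarize(Path(sys.argv[1])))
```

To stash the key in the first place, something like `security add-generic-password -s gemini-api -a me -w YOUR_KEY` in Terminal would do it; again, those names are placeholders, not anything the tool in question actually prescribes.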
01:05:38
◼►
And then I was like, ah, this is nice, but I kind of wonder if you can create a table of contents for it so that I can jump to a specific point of the conversation.
01:05:46
◼►
Like, all this back and forth, basically created the thing I wanted in 30 minutes, right, on my computer.
01:05:53
◼►
And I could give you more examples, but I won't, because otherwise it's going to be a long monologue.
01:06:02
◼►
I have built, I don't know, 10, 15 different integrations of this kind, things that didn't exist and that Clawdbot coded itself and gave to itself.
01:06:17
◼►
Which is obviously not something that you can do with ChatGPT or Claude.
01:06:20
◼►
Sure, now you have skills and integrations, but it's limited, right?
01:06:24
◼►
Because it's not like you can go to claude.ai right now and say,
01:06:28
◼►
Claude, I want you to give yourself a dark theme and to speak to me with a thick British accent.
01:06:35
◼►
Claude will say, well, that's a nice idea, but I can only do things that my developers at Anthropic give me.
01:06:42
◼►
And this is a different type of experience with Clawdbot.
01:06:46
◼►
It's similar in that you can use the popular models: ChatGPT, well, GPT, Claude, Gemini.
01:06:57
◼►
You can run it with local AI if you have a powerful enough computer.
01:07:02
◼►
You can run the entire thing locally with no cloud provider.
01:07:07
◼►
But, because it has access to your computer and your data, it's scary, obviously.
01:07:13
◼►
There's tons of permission dialogues, tons of permission settings.
01:07:17
◼►
But if you're comfortable with it, it's basically like putting a...
01:07:22
◼►
At this point, like, for me, and I'm not exaggerating,
01:07:26
◼►
I felt for the past 10 days or so that I've been playing around with this thing,
01:07:30
◼►
the same way that I felt the first time I tried ChatGPT or the first time I tried Claude,
01:07:35
◼►
it was like your brain connected a bunch of things that you didn't think were possible before, right?