
Under the Radar

212: A Watchsmith Adventure

 

00:00:00   Welcome to Under the Radar, a show about independent iOS app development.

00:00:04   I'm Marco Arment.

00:00:06   And I'm David Smith. Under the Radar is never longer than 30 minutes, so let's get started.

00:00:10   So for today, I want you to just sit down in a comfortable chair,

00:00:14   lean back, relax, close your eyes, and just come along with me on a journey.

00:00:18   This is a David Smith story hour, or half hour,

00:00:22   and we're going to talk about an experience, a journey I've just recently concluded going on,

00:00:28   building a feature out in WatchSmith.

00:00:31   And I feel like a couple, several months ago, I did a similar kind of episode where

00:00:36   I feel like sometimes it's fun to just talk through the process that I go through,

00:00:41   or Marco has done a couple of these too, where it's this funny thing of like,

00:00:45   "How did we actually build this feature and some of the weird things we have to deal with along the way?"

00:00:50   And I just went through one of these things where I thought,

00:00:52   "Oh, this would be a pretty straightforward feature," and turned out, "No, no, no."

00:00:55   It was the opposite of that.

00:00:58   So the feature that I've been working on is I wanted to add photo complications to WatchSmith.

00:01:06   So if you imagine, so in Widgetsmith, the most popular widget by far is the photo widget.

00:01:11   That is what people love to use, and I can understand why,

00:01:14   and it seems like an obvious sort of inclusion for me to add to WatchSmith

00:01:18   as part of this big update I'm working on for WatchSmith.

00:01:21   And so I want to be able to allow the user to, on their phone, pick a picture,

00:01:26   adjust what it looks like, getting it just right, and then essentially hit save

00:01:32   and have that appear as a complication on their watch.

00:01:35   It seems relatively straightforward. It's just a picture.

00:01:37   It's going to be shown at a very small resolution,

00:01:40   but it turned out the process of getting from that idea to actually having it work was anything but straightforward.

00:01:46   I finally got it to work yesterday, after many, many days of banging my head against the wall

00:01:51   and lots of pain and suffering.

00:01:53   And so that's the journey I'm going to take you on now,

00:01:56   and that's the end goal that we're trying to get to.

00:01:58   We're trying to get to a place where I can pick a picture on a phone and have it show up on a watch.

00:02:03   This is one of those things that programmers so often use the wonderful phrase,

00:02:08   "How hard could it be?"

00:02:10   "How hard could it be?"

00:02:12   "We're about to find out."

00:02:14   And when I'm describing the feature, I'm like, "Well, that doesn't sound like it would be that bad,

00:02:17   especially since you've already done it for WidgetSmith."

00:02:20   Yeah. So first thing you need to know about watch programming

00:02:26   is that everything on the watch is just a little bit harder.

00:02:28   It's just a little bit more sort of challenging.

00:02:31   And so the first thing you have to deal with,

00:02:33   and this is one of those things where it is completely trivial in WidgetSmith

00:02:37   and it is very difficult on the watch,

00:02:39   is how do you get that picture from the phone to the watch?

00:02:44   And on the phone, I just add it to the application container,

00:02:49   and it's in a shared space, and both the widget and the main app can read it.

00:02:54   But for the watch, you're going to have to somehow get it over there.

00:02:58   And so the obvious answer is to use watch connectivity.

00:03:02   Though I will say...

00:03:05   What? What could possibly go wrong with sending files over watch connectivity?

00:03:10   Exactly. What could possibly go wrong?

00:03:12   And these aren't big files. It's probably fair to say, too.

00:03:15   The ultimate result for a graphic circular complication is 94 pixels by 94 pixels.

00:03:22   And the extra large circular one, which is the biggest image that I can send,

00:03:27   is 264 pixels by 264 pixels.

00:03:30   These are things that an old, original Macintosh could probably display.

00:03:37   These are not big things, but you think, okay...

00:03:40   I will say, I did have the thought of,

00:03:42   I wonder if I should upload these to a server and then have the watch download them directly

00:03:46   just so I could avoid watch connectivity.

00:03:48   That's how much I have had problems with watch connectivity over the days.

00:03:52   Though in the end, I decided, no, that's silly.

00:03:54   Then there's all kinds of problems with doing it that way.

00:03:56   Is it silly? I mean, that's literally... I just built my entire watch app from scratch

00:04:00   to have its own direct-to-the-network sync engine,

00:04:03   mostly to avoid the problems I've had for years with watch connectivity.

00:04:07   So that way the watch has to talk to the phone as little as possible.

00:04:11   Yes. And I did think about it, and I did kind of...

00:04:16   The problem that I wanted to avoid is that then it requires an internet connection

00:04:21   to set up your complications,

00:04:23   which maybe is a fine assumption to make,

00:04:26   but it seemed like it was a slightly awkward thing.

00:04:29   If you were out... It's like, imagine you have this scenario where you have to be on good networking,

00:04:36   and it gets especially weird because even if you're out and you have a cellular connection on your phone,

00:04:42   the watch proxying its networking through that cellular connection is often just as problematic as watch connectivity.

00:04:48   So, anyway, I decided watch connectivity it is.

00:04:52   So I'm going to use watch connectivity to get these teeny tiny files across, I thought.

00:04:56   So the first thing you think, "Oh, these are files, so maybe I should use the transfer file API in watch connectivity."

00:05:02   Nope. It doesn't work. For whatever reason, I find that the transfer file thing...

00:05:07   When it works, it works flawlessly, but most of the time it doesn't work.

00:05:10   And you hit transfer file, and it's just like transferring,

00:05:13   and you have no idea when it's going to happen.

00:05:16   So you think, "Oh, maybe I should use the message data protocol,

00:05:19   which is a different version of watch connectivity.

00:05:22   It's designed for more interactivity, but turns out because I'm initiating these transfers from the phone,

00:05:28   you can't actually use that side of the protocol unless the watch app is active.

00:05:34   Otherwise it just fails. So I can't use that.

00:05:37   So I'm down to user info, which is the third kind of way to do watch connectivity.

00:05:42   And that seems to work okay for what I'm doing.

00:05:46   But initially I started running into all kinds of "payload too large" errors with user info.

00:05:55   And so it's like, "Okay, so apparently I'm sending images over this API.

00:05:59   It's not necessarily intended for this. It's probably... You're supposed to be sending data dictionaries back and forth.

00:06:04   So there's a limit somewhere. What is the limit? Who knows?"

00:06:07   So you start googling around, and I find a Stack Overflow comment that's like,

00:06:10   "According to some private symbols, the WC payload size limit user info symbol

00:06:15   says that it's 65,000 bytes."

00:06:18   And so that's what they said. And it's like, "Okay, I need to somehow make sure that all of my images

00:06:22   that I send over this need to stay under 65,000 bytes, which should be fine in some ways."

00:06:28   And so the first thing I do at this point is I make a test project,

00:06:32   and I progressively make the payload larger and larger until I sort of cross over the threshold where I get this error back.

00:06:41   And it turned out whoever this person was on this random Stack Overflow forum, they were exactly right.

00:06:45   It's about 65,000 bytes. So yay for Stack Overflow, and now I have a goal that I need to work against.

00:06:52   And so now what I need to do is this weird problem where I take the image,

00:06:56   and I progressively use JPEG compression to make it more and more compressed

00:07:02   until it fits in the space that I have available to me.
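
A minimal sketch of what that loop might look like, assuming an already-resized UIImage and using the roughly 65,000-byte figure from that Stack Overflow answer; the function names and the dictionary keys here are made up for illustration, not Watchsmith's actual code:

```swift
import UIKit
import WatchConnectivity

// The ~65,000-byte ceiling comes from the undocumented limit discussed above,
// so treat it as an observation, not an API contract.
let approximateUserInfoLimit = 65_000

/// Re-encode an already-resized UIImage as JPEG, lowering the quality
/// step by step until the data fits under the payload limit.
func compressedPayload(for image: UIImage, limit: Int = approximateUserInfoLimit) -> Data? {
    var quality: CGFloat = 0.9
    while quality > 0.05 {
        if let data = image.jpegData(compressionQuality: quality), data.count <= limit {
            return data
        }
        quality -= 0.1
    }
    return nil
}

// Queue the image for delivery with transferUserInfo, which survives the
// watch app being inactive (unlike sendMessageData).
func send(image: UIImage, slotID: String) {
    guard WCSession.default.activationState == .activated,
          let data = compressedPayload(for: image) else { return }
    WCSession.default.transferUserInfo([
        "slotID": slotID,          // hypothetical key naming
        "imageData": data
    ])
}
```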

00:07:06   So I start off with it being good and high quality, and if for some reason...

00:07:09   I've actually done this same logic on my servers for years for thumbnailing.

00:07:14   I have the exact same problem, because if you send an image that's too large to iOS devices,

00:07:21   they might crash while decoding it or trying to hold it in memory.

00:07:25   And so I try to put as much data on the server side as possible, and my thumbnailer actually has different sizes,

00:07:31   and it has different byte-size thresholds for each size, and it will attempt to maintain 100% quality,

00:07:38   first with PNG, and then if it can't, it will go to high-quality JPEGs and then do the same thing,

00:07:42   slowly lower the quality level until it is below that threshold.

00:07:46   However, I have right away, because I have done that process,

00:07:52   I know of a gotcha that might be getting you, which might be a future part of the story that I don't necessarily want to ruin,

00:07:58   but were you removing things like color profiles from the images first,

00:08:03   because those alone can balloon it past that threshold.

00:08:07   And you can keep the image compressing further and further down to quality zero,

00:08:11   and it will still be like 85k, and you're like, "Why is it this big? What could possibly be taking up all that space?"

00:08:17   And it's basically crap in the EXIF data, like large blobs in the EXIF data.

00:08:21   So I haven't run into that yet, though I am writing that down as something that inevitably will come up and bite me at some point,

00:08:28   and I'll need to deal with.

00:08:30   So I think, though, because of something else that I'm doing,

00:08:35   I will be fine, because ultimately I don't take the raw image and try to compress that down.

00:08:41   I have to resize it first, and I think because I'm resizing it, there's no profile,

00:08:46   because the version that I'm making is just a very simple, basic UIGraphicsImageRenderer image.

00:08:52   I don't think it has any of that overhead to it, but I will investigate.

00:08:57   As long as it can't carry forward any of the EXIF data from the original, then you should be okay.

00:09:01   Yeah, I think it's fine because I'm rendering, you know, I'm doing a UIGraphicsImageRenderer context,

00:09:07   and ultimately I'm just moving the pixels back and forth, so I think I should be fine,

00:09:12   but I will, that is a good thing to investigate.

00:09:15   But anyway, so I have this method now where I can take these images

00:09:19   and progressively compress them down until they are small enough that watch connectivity will move them.

00:09:26   So now I can add a watch companion app to this test project, and I can send these over watch connectivity,

00:09:34   and then whenever it receives one, it just throws it into a complication.
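
On the watch side, the receiving half of that test setup could plausibly look like this; the dictionary key, file location, and complication template are illustrative assumptions rather than anything confirmed in the episode:

```swift
import ClockKit
import WatchConnectivity

// Stash the incoming image data, then reload the complication timeline
// so the new picture shows up.
final class PhoneLink: NSObject, WCSessionDelegate {
    func session(_ session: WCSession,
                 activationDidCompleteWith activationState: WCSessionActivationState,
                 error: Error?) {}

    func session(_ session: WCSession, didReceiveUserInfo userInfo: [String: Any] = [:]) {
        guard let data = userInfo["imageData"] as? Data else { return }
        // Persist the data somewhere the complication data source can read it...
        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("photo.jpg")
        try? data.write(to: url)

        // ...then ask ClockKit to rebuild the timeline.
        let server = CLKComplicationServer.sharedInstance()
        server.activeComplications?.forEach(server.reloadTimeline(for:))
    }
}

// And in the complication data source, something roughly like:
// let provider = CLKFullColorImageProvider(fullColorImage: image)
// let template = CLKComplicationTemplateGraphicCircularImage(imageProvider: provider)
```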

00:09:37   And it actually worked.

00:09:39   So this was the point of this project where I was like, okay, this is viable.

00:09:43   This is something that theoretically should be possible now, that I've gone through the proof of concept phase,

00:09:49   because for a while I was like, I don't even know if this is possible, that there may be some weird limitation,

00:09:53   and before I went too far down the road with this, I didn't want to invest all the time

00:09:58   only to find out that I couldn't even move the images between my phone and my watch,

00:10:01   that it just wasn't possible.

00:10:03   But now I know it is possible. It's just tricky.

00:10:06   So now comes the question of, I need to get the images,

00:10:11   and I need to allow you to be able to orient and adjust the images so that they make sense in a tiny little complication.

00:10:17   Because you're talking about, you know, you may have a big, you take a nice image with your iPhone,

00:10:21   and you have this beautiful, what is it, 12 megapixel image,

00:10:24   and you're compressing it down into something that's like 48 pixels, or 96 pixels by 96 pixels.

00:10:30   So I don't want to just smush it down, because that would be pointless.

00:10:33   You wouldn't be able to see anything. It wouldn't make sense.

00:10:36   So I need you to be able to sort of zoom and pan inside of an image.

00:10:41   And this is one of those things where at first, it's like, okay, how do I do this in a way that makes sense?

00:10:48   And this is where using SwiftUI for this application definitely came back to bite me a little bit,

00:10:53   because I feel like in UIKit, this would have been a much, much easier thing,

00:10:57   because the gesture recognizers inside of UIKit are so good.

00:11:02   So that's a little bit of foreshadowing, but I start going down the road of doing this using the SwiftUI gestures,

00:11:08   and I get it vaguely to work, like in terms of, you know, you pick a picture,

00:11:14   which as a side note is using the UIImagePickerController API just wrapped in a SwiftUI view,

00:11:23   because that's the only way that I have found to be able to do this in a reasonable way.

00:11:28   And I love using UIImagePickerController, because then I don't have the PhotoKit

00:11:33   photo authorization issues, and having to ask people for permission to see all of their pictures,

00:11:38   which I don't want to get into. So I just want to use an image picker, and it works.
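
The wrapper being described is presumably something along these lines, a UIViewControllerRepresentable around UIImagePickerController; the type names here are hypothetical:

```swift
import SwiftUI
import UIKit

// Exposes UIImagePickerController to SwiftUI. Because the picker runs out
// of process, it doesn't require photo library authorization.
struct ImagePicker: UIViewControllerRepresentable {
    @Binding var selectedImage: UIImage?
    @Environment(\.presentationMode) private var presentationMode

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        let parent: ImagePicker
        init(_ parent: ImagePicker) { self.parent = parent }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            parent.selectedImage = info[.originalImage] as? UIImage
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
```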

00:11:42   But I get the image, and it's giant, and so I need to zoom in and pan around,

00:11:46   and I get it working in SwiftUI. But it's one of those funny things,

00:11:52   and this is just an interesting note that I've been discovering in my work with SwiftUI and from UIKit,

00:11:59   is that there are a lot of these things in UIKit where, I imagine,

00:12:03   there are these little coefficients built into things like UIScrollView

00:12:09   and the way UIPinchGestureRecognizer works, where I just could never get,

00:12:14   with a magnification gesture and the pan gesture inside of SwiftUI, things to behave like you would expect them to.

00:12:21   And it was hard to explain, because it was working correctly, in the sense of,

00:12:26   when I move my fingers out, it changes size, but it didn't change at the right speed

00:12:32   or handle the ramp-up and the ramp-down. All these things just felt off.

00:12:38   And so, in the end, I just had to get rid of all the native SwiftUI stuff,

00:12:43   and I ended up wrapping some UIGestureRecognizers inside of a SwiftUI view

00:12:49   and using that from inside of SwiftUI.
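
A bare-bones version of that fallback might look like this: an invisible UIView exposed through UIViewRepresentable, with UIKit pinch and pan recognizers writing back into SwiftUI bindings. Real code would also need to track gesture start values and clamp the results; this is only a sketch with illustrative names:

```swift
import SwiftUI
import UIKit

struct ZoomPanOverlay: UIViewRepresentable {
    @Binding var scale: CGFloat
    @Binding var offset: CGSize

    func makeUIView(context: Context) -> UIView {
        let view = UIView()
        view.addGestureRecognizer(UIPinchGestureRecognizer(
            target: context.coordinator, action: #selector(Coordinator.pinched(_:))))
        view.addGestureRecognizer(UIPanGestureRecognizer(
            target: context.coordinator, action: #selector(Coordinator.panned(_:))))
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject {
        var parent: ZoomPanOverlay
        init(_ parent: ZoomPanOverlay) { self.parent = parent }

        @objc func pinched(_ gesture: UIPinchGestureRecognizer) {
            parent.scale *= gesture.scale
            gesture.scale = 1   // reset so each callback reports an incremental change
        }

        @objc func panned(_ gesture: UIPanGestureRecognizer) {
            let t = gesture.translation(in: gesture.view)
            parent.offset.width += t.x
            parent.offset.height += t.y
            gesture.setTranslation(.zero, in: gesture.view)
        }
    }
}
```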

00:12:53   And that worked, except for all kinds of weird, just sort of like,

00:12:58   getting the state management of that working correctly just drove me insane.

00:13:02   There was probably a good four-hour period where everything seemed like it was working,

00:13:06   but nothing would update correctly.

00:13:08   And it turned out it was one of these weird things that had happened to me many times in SwiftUI,

00:13:12   where the wrong part of my view hierarchy's state data was getting updated.

00:13:19   And so, some of the parts were getting these change events.

00:13:24   So when I'm saying, "Move the image to the left, move the image to the right,"

00:13:27   some parts of it were getting it and some parts weren't.

00:13:30   And so it was just breaking in weird ways.

00:13:32   And so a little pro tip that I came up with with this, which is how I ultimately saved my sanity,

00:13:37   was I added an extension on Color that you could just ask for a random color.

00:13:43   So I just say Color.random, and it just gives me a random color every time.

00:13:46   And I would wrap a bunch of my components at different levels of the view hierarchy

00:13:50   with .border(Color.random).

00:13:53   So essentially, all the elements have these random colors, and if everything's changing correctly,

00:13:58   as I swipe my finger across, everything should go crazy and there should be these rainbow colors everywhere.

00:14:04   Every time it updates, it gets a new color.

00:14:07   And that would let me identify which part of the view hierarchy wasn't getting updated,

00:14:10   because its border would stay the same and wouldn't have the rainbow party.

00:14:15   So pro tip, if you ever need to debug SwiftUI view refresh stuff, that's a great way to do it.
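
The trick, roughly, assuming the extension lives on SwiftUI's Color type:

```swift
import SwiftUI

// A random color every time the view body is re-evaluated, so any subview
// that isn't refreshing keeps its old border color and stands out.
extension Color {
    static var random: Color {
        Color(red: .random(in: 0...1),
              green: .random(in: 0...1),
              blue: .random(in: 0...1))
    }
}

// Usage while debugging:
// Text("Preview")
//     .border(Color.random)
```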

00:14:25   I wish I had thought of it earlier in the process rather than just banging my head against the wall for a long time.

00:14:32   But in the end, I was able to get it to work.

00:14:36   I was able to get the pan and zoom system to work reasonably well, and I ended up having to fall back to UIKit,

00:14:47   which I feel kind of sad about. There's part of me that wishes I didn't have to go back to UIKit, but I did, and so it's fine.

00:14:52   And in general, it worked reasonably well. There are a few areas where it was a bit funny,

00:14:58   because the number of different coordinate systems that I'm having to deal with in this app right now is too many.

00:15:05   That's just the reality, because I'm dealing with the UIView's coordinates and coordinate space,

00:15:12   and then I need to translate that into the image's coordinate space, which is slightly different than the SwiftUI image space,

00:15:21   because then I need to ultimately, and this is the one that really, really hurt my head,

00:15:26   and this is where it made me laugh a little bit, because ultimately I have this image, I have it in a SwiftUI view,

00:15:32   and I can scale it up and down and I can move it around, so I can orient it onto a different part of the image.

00:15:36   And then I need to ultimately render that into a smaller, essentially the cropped version of that image,

00:15:42   is what I'm ultimately needing to render. But trying to get the coordinate system transform and the things like,

00:15:49   I need to essentially take the image and I need to actually be trying to render it at an origin that is outside of the frame that you can see.

00:15:56   So if you imagine you zoom in on a picture, the actual origin of that picture that I'm rendering is way off to the top left, conceptually.

00:16:04   And that way you can see just the little part in the middle in the window that you actually can see.

00:16:09   And trying to get that logic right, it made me laugh because it was essentially like machine learning programming,

00:16:14   is what I felt like I was doing, where I had no idea the right combination of negatives and positives,

00:16:21   and adding offsets and subtracting offsets, and all the things that I needed to do,

00:16:25   and when I need to multiply by the difference in aspect ratio between the two things, and all this.

00:16:30   I had no idea what I was doing at some point, and so I just kept trying. I just tried every possible version,

00:16:35   and then just like, once I got whichever version seemed slightly better, I would sort of like dial down into that,

00:16:41   and it's like this genetic algorithm I'm running where I just keep trying every possible solution of plus and minus,

00:16:46   and add and subtract, and divide and multiply, until it worked.

00:16:49   And eventually it worked, and it sort of is not a very viable way to do programming,

00:16:54   but in this particular case it actually sort of worked.

00:16:57   And that was just one of those funny, like, okay, once it worked, I'm like, great, it worked.

00:17:03   I don't actually know all the math that I did to get there.

00:17:07   I don't really understand why that's the particular math that I need to do, but I can do it.

00:17:10   I can take the giant image, you've moved it around with your pan and zoom gesture, and I can render it now into an image.
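
One plausible shape for that final render step: draw the full image at a scaled size and a negative origin so that only the visible window lands in the output bitmap. The exact math here is illustrative, not the combination Watchsmith actually settled on:

```swift
import UIKit

// scale and offset would come from the pan-and-zoom state.
func renderCrop(of image: UIImage,
                outputSize: CGSize,
                scale: CGFloat,
                offset: CGPoint) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: outputSize)
    return renderer.image { _ in
        let drawSize = CGSize(width: image.size.width * scale,
                              height: image.size.height * scale)
        // Once you've zoomed in, the image's origin sits off the top-left
        // of the visible frame, which is why it ends up negative.
        let origin = CGPoint(x: -offset.x, y: -offset.y)
        image.draw(in: CGRect(origin: origin, size: drawSize))
    }
}
```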

00:17:17   Hooray. It worked. But that's not the end of the story.

00:17:21   Well, we'll hear the shocking conclusion next, but first we are brought to you this week by Pingdom.

00:17:28   Do you have a website? Does your website have things like a shopping cart, or registration forms, or contact pages?

00:17:34   If you answered yes to any of these questions, you need Pingdom.

00:17:37   Nobody wants their critical website transactions to fail, because that means bad experiences for your users,

00:17:42   and could mean lost business for you.

00:17:44   Pingdom can monitor these transactions, in addition to monitoring your website uptime in general.

00:17:50   So, transaction monitoring will alert you when things like cart checkout, or form submissions, or login pages fail,

00:17:56   before they affect too many of your customers and therefore your business.

00:18:00   Pingdom will let you know the moment any of these fail in whatever way is best for you.

00:18:04   You can customize how you're alerted, and who's alerted, depending on the outage conditions and severity.

00:18:09   And Pingdom cares about your users having the smoothest site experience possible.

00:18:14   If disaster strikes, you will be the first person to know, so you can go fix it.

00:18:19   It is super easy to get started on Pingdom. I personally have used Pingdom for a long time,

00:18:23   and I'm using it like literally right now, this morning.

00:18:26   I keep checking Pingdom because I'm having a network connectivity issue with some of my servers,

00:18:30   and I can look at Pingdom and I can see latencies of all the checks from all around the world that they're checking it from.

00:18:36   So I can see like, "Hey, this server's okay when it's checked from North America,

00:18:39   but right now when it's being checked from Europe, it's seeing, you know, unusual latency."

00:18:43   And I can actually go be notified of stuff like that and go fix it.

00:18:46   It is great. Go to Pingdom.com/RelayFM right now for a 30-day free trial with no credit card required.

00:18:55   Pingdom.com/RelayFM. When you sign up, use code RADAR at checkout to get a huge 30% off your first invoice.

00:19:03   Once again, Pingdom.com/RelayFM, code RADAR for 30% off your first invoice.

00:19:08   Thank you to Pingdom from SolarWinds for their support of this show and RelayFM.

00:19:12   Okay, so when we last left our hero, he had just finally gotten the basics of the system to work.

00:19:19   And so what does he do? Does he celebrate? Does he say, "Good job, Dave," and move on?

00:19:23   No. He decides, "Let's make this problem 10 times harder."

00:19:26   So then I had the thought that most people, I think, who are going to use this feature

00:19:31   are going to take a picture probably of someone they care about, and they want to show that person on their watch face.

00:19:37   So what do I do? Oh, I'm going to use the Vision framework to identify faces in those images

00:19:43   and have the ability for you to zoom in automatically to the person.

00:19:48   If I detect people in a picture, I want you to be able to automatically zoom into those people's faces

00:19:53   and have that be the center point of the complication.

00:20:01   Seems like that should be reasonable, right? Like there's a whole framework for this.

00:20:04   Every year Apple talks about how amazing their machine learning and stuff is, and I've never had a reason to use it.

00:20:10   And I was like, "I finally have a reason to use machine learning kind of stuff in my application, so let's try it."

00:20:16   First thing I discovered, which broke my heart, was that the Vision framework doesn't work in the simulator,

00:20:21   which doesn't make any sense to me because I have an M1 Mac, so it's Apple Silicon,

00:20:27   and I don't understand why it doesn't work there, but apparently it just doesn't.

00:20:34   So that is always just a bit discouraging because a lot of this stuff, when I'm doing this iteration,

00:20:39   when I'm working kind of rapidly and trying to just work through stuff, and as we'll get into,

00:20:43   there's a whole other round of my machine learning-based programming where I'm just flailing wildly trying to get something to work.

00:20:49   It's so much nicer when it's in the simulator and not on the device, but that's not the case.

00:20:54   So I get on the device, and the nice thing is that the Vision framework, for what I'm trying to do with it,

00:20:58   is actually relatively straightforward. You give it an image;

00:21:01   it has this really weird call structure where you create a request, and then the request has to be passed to a request handler,

00:21:10   and there's all kinds of weird error states, but once I worked through that, I could get it so I could reliably say,

00:21:14   "Here's a UI image. Give me the rectangles where there's a face, if you see one, or nothing, if there was no face,

00:21:22   if it's a picture of a non-face, anyway." But the strange thing there is, of course,
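
That call structure, in sketch form: a VNDetectFaceRectanglesRequest whose results come back through a VNImageRequestHandler. The function name and the dispatch details are assumptions for illustration:

```swift
import UIKit
import Vision

func detectFaces(in image: UIImage, completion: @escaping ([CGRect]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        let boxes = (request.results as? [VNFaceObservation])?
            .map(\.boundingBox) ?? []          // still normalized (0...1) here
        completion(boxes)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])        // error handling collapsed for brevity
    }
}
```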

00:21:28   But the strange thing there is, of course, that I have to add in one more coordinate system.

00:21:35   The Vision framework does something that I've never seen anywhere else, which completely broke my brain for many hours:

00:21:43   it gives all of the rectangles back in a normalized form,

00:21:52   where all of the values go from zero to one, proportional to the width and the height of the image.

00:22:01   before you can deal with it. And in a weird way, the y-axis seemed like it was flipped in a weird way,

00:22:11   and so even just trying to know if I was using it, which always drives me crazy when I'm working with something like this,

00:22:17   I wanted to see if I'm using it correctly. And so I give it an image, and I get back a rectangle,

00:22:23   and the rectangle is from 0.268, 0.563, and then it's 0.1 and 0.1. And it's like,

00:22:32   "What do I do with this? How do I know if that number is correct, if I'm using the framework correctly,

00:22:36   if the zigs are good?" And it took me forever to be able to reconvert that back into something that I could then pass to SwiftUI,

00:22:42   to then overlay an image, and then move that image inside of its geometry reader so that it was in the right place.
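
The denormalization step being described can be done with Vision's own helper, plus a y-flip, since Vision's normalized rectangles use a bottom-left origin while UIKit uses top-left; a small sketch:

```swift
import UIKit
import Vision

// Convert a normalized Vision bounding box into UIKit-style image coordinates.
func imageRect(forNormalized boundingBox: CGRect, in image: UIImage) -> CGRect {
    let width = image.size.width
    let height = image.size.height
    var rect = VNImageRectForNormalizedRect(boundingBox, Int(width), Int(height))
    rect.origin.y = height - rect.origin.y - rect.height   // flip to top-left origin
    return rect
}
```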

00:22:48   And eventually I got there. Eventually I didn't lose my mind, but there were a few touch-and-go moments there

00:22:55   where trying to get to that point turned out to just be so tricky. And I get why, in some ways, they're doing it that way,

00:23:03   but at a certain level, I'm like, "Why are you doing this? Why are you inventing this totally new..."

00:23:07   I've never seen any other UI framework in all of iOS, and I've been doing this for a long time,

00:23:12   where it's all normalized. It's a CGRect, but it isn't actually a CGRect, because it doesn't represent x and y coordinates.

00:23:20   Essentially it represents percentages of the image. But anyway, I finally got down there, and it was one of those things.

00:23:28   You know it's a good feature when the first time you do it, it just works perfectly, and it just does exactly what you want,

00:23:34   where I load up a picture of me and my family, and immediately it just zooms right in on our faces,

00:23:42   and I have a little preview window that shows how it would look on the watch, and it's just like, "Yep, that's perfect."

00:23:47   It's really cool when it happens, where the idea is relatively straightforward, and I didn't strictly need to do it,

00:23:54   but once I've done it, it's like any other version of this is completely insufficient, because trying to get it exactly zoomed in

00:24:01   and panned around and look just right, if I can do it automatically, that's so much better, and it makes the feature so much better.

00:24:08   Other than having to make it so that I zoom out slightly, because the funny thing about the vision framework is it's face detection,

00:24:15   not head detection, and so it gives me the rectangle that is essentially eyebrow to just below your lips, mid-chin to eyebrows,

00:24:25   and so the first version zoomed to there, and so everyone looks a little bit crazy, but I could zoom out a little bit,

00:24:32   and I just had a buffer to it, and finally it works, and then I had to convert that into zoom and pan calls,

00:24:39   as though it had happened to the pan and zoom thing, so that the UIPinchGestureRecognizer, which is deeply wrapped way down inside of a SwiftUI view,

00:24:47   can have the correct starting point when it starts to move again, but it all worked, and that was the end of this fun,

00:24:54   and it was interesting because, and at this point I'm still working entirely inside of a test project,

00:25:01   because I will say I highly recommend using test projects for this kind of thing, where I didn't worry about all the overhead that you have to do

00:25:10   if you actually incorporate this into your main app, because the number of times I just had to keep build and run, build and run,

00:25:17   because it was only building essentially one file, the build process was super fast, the run process was super fast,

00:25:24   it was easy to change the entry point into the application, so that it was always exactly what I was trying to test at a particular time,

00:25:30   and now I'm just going through the process of actually backporting essentially this test project into the main WatchSmith app,

00:25:36   and that so far has been going reasonably well, I've worked out most of the problems,

00:25:42   and now come the only things that are going to be slightly annoying, which is always the thing when you're prototyping something:

00:25:49   you make a lot of assumptions, you make a lot of guesses, things that you don't necessarily have to worry about,

00:25:54   where in this version I need to make sure that I'm not creating files, sending them over to the watch,

00:26:00   and then never deleting them if they get replaced, for example, so I have to do that kind of bookkeeping and cache management

00:26:06   to make sure that I'm not just slowly gobbling up all the user space over time, and things like that which I didn't have to worry about in the test project,

00:26:13   but overall, that's where we are, a few more gray hairs as a result of this feature, but I finally got there.

00:26:19   Wow. Yeah, watch connectivity and trying to build apps on the watch is such that I'm kind of surprised that you have any non-gray hairs left,

00:26:30   given your career choices of focusing so much on the watch. I feel like this is a great example of, A, something that seems like it should be fairly straightforward,

00:26:42   and then realizing doing it at all, let alone doing it well, has some surprising pitfalls that you might not have thought of back in the beginning when you think,

00:26:52   "How hard could it be?" But also, I think this is a good example of how tackling things like this that are somewhat hard to do correctly is a pretty good business model to attempt,

00:27:06   because here's something that, it seems easy, and it seems like a lot of people want this, so there's demand for this,

00:27:14   but doing it right is a little bit beyond what most iOS programmers are willing to tackle or able to tackle.

00:27:22   And if you make a career out of doing, you know, mostly easy enough things so you can handle it as an indie,

00:27:31   but occasionally tackling something that's kind of tricky like this and doing a really good job with it,

00:27:36   you can build a business on that, because there will be way less competition that is willing and able to go through all this hassle and get this right,

00:27:45   compared to, you know, you being willing, just wanting this to exist so badly that you're willing to go through all these challenges to get there.

00:27:52   And so, that actually, as long as the market wants whatever you're doing, that's actually a pretty reasonable business strategy.

00:27:59   Yeah, and I think it's also, I feel like it's one of these opportunities to grow as a developer. Like, I really,

00:28:06   as much as I joke about how sort of like crazy making it was, I'm really glad that I know how the vision framework works now.

00:28:14   Like, I've never used that before, and it's really cool. And I do think it is kind of one of these funny things where I do think it puts a moat around my app,

00:28:20   because there are so many steps that you would have to go through in order to recreate this feature that it just makes it that much harder.

00:28:27   Not that I think anyone else is doing this, because I think in general, with complications,

00:28:34   I'm pushing complications way beyond what I think they were intended for with WatchSmith.

00:28:41   But from a business perspective, I do think it is a good thing. And hopefully it's a good thing.

00:28:45   Hopefully people like this feature. I really have enjoyed having pictures of my kids on my watch, and hopefully the other people like it too.

00:28:51   But the process of putting it there turned out to be a really uncomfortable journey.

00:28:58   Which is, hopefully, yeah, a good business move, and it makes me think of some of the choices you make in Overcast,

00:29:07   with your voice processing and the way that you've kind of held the line on, there's been several points that there was the easy way,

00:29:14   and then there was the right way of building the feature. And you could have decided long ago,

00:29:18   "Oh, I'm not going to do any voice processing of things on the watch." And it's like, you decide to know,

00:29:22   everywhere you listen in Overcast, you want it to sound good. And as a result, that makes your life harder, but it makes the app better.

00:29:30   And I feel like that balance between the hard thing and the right thing, in this case, is definitely something that is not an easy choice always,

00:29:39   but it's usually a rewarding choice, and so I can certainly recommend it nevertheless.

00:29:43   Thanks for listening everybody, and we'll talk to you in two weeks.

00:29:46   Bye.
