I am starting a new series of articles about Appium on my iOS content site, App-o-Mat.
In the first one, I cover the first step of getting the most out of GUI Testing: setting accessibility identifiers.
When I was in High School, I had a job in the accounting department of a non-profit. This was the mid-eighties, so a spreadsheet was a giant piece of green paper with cells drawn on them. The department had a mainframe and used computers to process invoices, but a lot of accounting work was still done on paper. I helped file that paper.
At some point, they got a couple of PCs and they installed Lotus 123 on them, but no one was really using them. They knew that I was into computers, and so they asked me to transcribe the paper spreadsheets to Lotus 123 as more of a data-entry clerk.
This work was a seed which grew and grew. At some point I discovered SUM and showed it to someone. I learned more about formulas and built spreadsheets that could recalculate automatically when the underlying numbers changed. You know, spreadsheet stuff.
By the time I left, my spreadsheets made extensive use of macros and more complex formulas.
In a very real sense, those spreadsheets were programs, and learning how to build them alongside learning how to program made me better at both.
I think that a lot of people that are good at Excel could learn how to write conventional applications if they wanted to. Even if they aren’t currently writing macros, the main ideas of cells, formulas, range processing, grid-layout, and styling map pretty cleanly onto application software concepts.
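To make that mapping concrete, here is a rough sketch of my own (not from those original spreadsheets) of how a couple of familiar Excel formulas correspond to ordinary Swift:

```swift
import Foundation

// A spreadsheet column is just an array of values.
let sales: [Double] = [120.0, 95.5, 210.25, 80.0]

// =SUM(A1:A4) is a fold over a range.
let total = sales.reduce(0, +)

// =AVERAGE(A1:A4) is the same idea, divided by the count.
let average = total / Double(sales.count)

// =IF(A1 > 100, "big", "small") is a conditional expression, applied per cell.
let labels = sales.map { $0 > 100 ? "big" : "small" }
```

Someone comfortable with range formulas is already thinking in terms of collections and transformations; the syntax is the smaller leap.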
If you follow a programming tutorial, at the end you will have some working code. You will hopefully have some idea of how it works.
But, you haven’t learned how to make this code. Programming is different from woodworking or playing a guitar in this way. Those skills require training your hands, so watching and repeating what you see helps you learn and improve.
In programming, it’s good to type fast, but most of what is happening when you make new code is in your head, and tutorials don’t train that part.
This is why I think Programming Tutorials Should be Vaguer. Programming is taking a nebulous problem and breaking it down, understanding it, trying to find building blocks, and then building up something that solves the problem.
The final form is concrete, but it is made by going through multiple phases where you took unclear ideas and made them clearer.
We need tutorials that ask more questions and provide fewer answers.
One of the advantages of podcasts is that they are audio-only, so you can listen to them while you do other things. I listen while doing chores, running, cooking, and driving. I mentioned in Soundtracks for Books that it’s hard to listen to something while you are doing effortful thinking.
But, just like a book soundtrack, you can pay attention to audio if the thing you are doing is really the same thing—something integrated with it.
I haven’t seen this much, but I think a big area for podcasts could be in guiding “learning by doing” tasks.
There are lots of video and text tutorials for learning programming. The issue is that you can be tempted to consume them without really doing the tasks yourself. And since you have to look at the book/video/blog etc, it’s hard to also be looking at your editor.
This kind of podcast could not just be played in your regular playlists though—you need to be at your computer ready to listen. It could guide you in a vague way, so that you have to think in order to do the tasks, not just listen while you exercise.
I’m not sure that a programming guide podcast is a good fit for very new learners, who still need a lot of help with the syntax. It would be better for learners who can write syntactically correct lines of code from a short-hand description.
Like I said in Vague Tutorials Would Help with Coding Interviews, I think getting good at taking spoken directions would help in coding interviews.
I wrote in Robotic Pair Programmers:
If search engines ever get eclipsed, I think it will be by something in the environment that just brings things to your attention when you need them. I want this most when I code, like a pair programmer that just tells me stuff I need to know at exactly the right time.
Kite, a code editing plugin, seems to be trying to go down this route. They have “AI powered code completions” for 16 languages in 16 code editors. Unfortunately, they don’t support Swift in Xcode yet. But, they do support Python, HTML, CSS, TypeScript, and JavaScript in VSCode, Sublime, and all of the JetBrains editors, so I could use it to work on App-o-Mat, which is a Django-based site.
In addition to code-completion, Kite also offers Copilot, which is a documentation pane that is synced to your cursor. Xcode already does this—the issue is that a lot of Apple’s documentation isn’t very complete. Kite only supports this for Python right now, but one addition to the standard docs is they link out to open-source projects that use the type or method you are editing.
Unfortunately, Kite doesn’t work on Apple Silicon, yet. It uses TensorFlow, which uses a particular instruction set that isn’t supported by Rosetta. Apple seems to be working on getting TensorFlow ported to M1.
So, I’ll have to wait to try it out. Very promising though.
The NERD Summit is going to be virtual again this year. It’s the weekend of March 19-21. There are tons of great sessions for beginners, so if you want to get into programming, you should take a look.
I spoke at the conference in 2017 about how to practice iOS development. As part of the talk, I open-sourced an app that could be used for conferences, which I forked into the conference app for NERD Summit. You can download it here (it’s been updated for 2021).
The source code for the conference app is on GitHub. Feel free to fork it for your conference. It’s easy to adapt — it uses a couple of google sheets as a data-source, so if you update the URLs to sheets in your account (make them publicly readable), you can show your conference events instead.
Note: If you got here because you googled “WCErrorCodePayloadUnsupportedTypes” I made a page called How to fix WCErrorCodePayloadUnsupportedTypes Error when using sendMessage which explains it better.
I’m working on an app to help me stay on an intermittent fasting routine. I wrote about it a little in Icon-first Development.
Fast-o-Mat is an iPhone app, but I want an Apple Watch complication to give me quick access to when my fast begins or ends. To do that, I need to get data from the phone to the watch.
I had never done this before, and I didn’t have the first idea of how it is done in modern iOS/watchOS development.
Here was my process:
Then, at this point, all I do is look for the import and the basic classes I need and see how far I get from just basic iOS knowledge.
This tutorial is good at facilitating that.
So, this is very unlike my idea for vague tutorials, but I am not really a new learner.
There isn’t a new concept here for me to learn on my own—I understand the concept of asynchronous message sending. I just need to know what framework and classes to use for this specific task.
The issue is that this same tutorial is what a new learner would find as well.
I believe they would get this all working by following the instructions step-by-step, but would they have learned it beyond that? Could they troubleshoot?
One thing that is not clear from the API or this tutorial is that `Any` doesn’t really mean `Any` in the `message` parameter to `sendMessage`:

func sendMessage(_ message: [String : Any], replyHandler: (([String : Any]) -> Void)?, errorHandler: ((Error) -> Void)? = nil)
I decided to just use one of my types there. It’s a struct with two `TimeInterval` properties.
The documentation says:

“A dictionary of property list values that you want to send. You define the contents of the dictionary that your counterpart supports. This parameter must not be `nil`.”
And running it says:
errorHandler: NO with WCErrorCodePayloadUnsupportedTypes
And now I see that “property list” values are things that you can store in a plist (so, not my struct; just simple types or `NSArray` or `NSDictionary`). And yada yada yada, it’s a little more complicated when you want to do this for real.
This is all to say, sometimes you just want the code (like me) and sometimes you are trying to learn a new concept from first principles, and the same tutorial can’t deliver both (nor should it try).
Having a driving question helps me find interesting, less-mainstream content that can help shape my thinking. One question I am exploring is how game-design can drive app-design (not via gamification). The presentation, Building a Princess Saving App, describes it perfectly.
This talk is about building learning and fun into your applications. If you’ve ever wondered how games work and how they can help you build better apps, this is the talk for you. Along the way, we’ll discuss a new philosophy of interaction design that is already creating a major competitive advantage for innovative application developers.
The slides describe how typical enterprise and Web 2.0 application designers might approach the problem of saving a princess vs. how Super Mario Brothers does it.
Along the way you learn the language of game-design and the STARS model. Well worth a read if you are interested in this topic.
If search engines ever get eclipsed, I think it will be by something in the environment that just brings things to your attention when you need them. I want this most when I code, like a pair programmer that just tells me stuff I need to know at exactly the right time.
When I’m in Xcode, there are so many times when I need information I don’t have. To get that information, I need to initiate a search. It breaks my flow to do this.
What I want is that information to just be in the environment.
One way this already happens is with code comments. In my source, I trust all of the editors, so I would like to see all of their comments and commit messages. This is actually possible if I turn on the Authors sidebar in Xcode.
But, what more could I get? Let’s say I index every Xcode project in GitHub, every iOS tutorial, every iOS question in Stack Overflow. Could that be distilled somehow and then shown to me at the right time?
One way that seems fruitful to me is rare API calls. There will be times when I am using an API that appears very infrequently in the corpus or my own repositories. In that case, it should infer that I probably need more help than usual and offer up a tutorial or the top Stack Overflow questions.
Another trigger might be my new comments. If I comment before I code, then it should be interpreted as a search query:
// Parse the JSON I get back from the data task
That should bring up links to likely API classes in the help pane (just like it would if I already knew the class). Maybe offer up imports to auto-add. Maybe offer a snippet. In Xcode it would be similar to the auto-suggested fixes for compiler errors.
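For that particular comment, the assistant might offer a snippet along these lines: a generic `URLSession`/`Codable` sketch of mine, not an actual Xcode feature, with a made-up response type and URL:

```swift
import Foundation

// Hypothetical response shape; a real suggestion would infer it from context.
struct Workout: Codable {
    let name: String
    let durationMinutes: Int
}

let url = URL(string: "https://example.com/workouts.json")!

// Parse the JSON I get back from the data task
let task = URLSession.shared.dataTask(with: url) { data, _, error in
    guard let data = data, error == nil else { return }
    do {
        let workouts = try JSONDecoder().decode([Workout].self, from: data)
        print(workouts)
    } catch {
        print("decoding failed: \(error)")
    }
}
task.resume()
```

Even a boilerplate suggestion like this would save a search, as long as it shows up without being asked for.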
This is just the beginning, and we can do a lot more. Whatever we do, we need to make sure that nearly every suggestion is useful, because we risk knocking the developer out of flow. Conserving flow should be the driver for how this works.
A few days ago, I wondered about soundtracks for books. I had an aside where I mentioned that game soundtracks are synchronized with the player’s actions (like a book’s would have to be).
That is also true of a non-game app. If an app had a soundtrack, then it would also be synchronized with behavior, state, situations, etc.
Apps often use system sounds. So, if you do something that isn’t allowed, you could get an error beep. Alerts similarly come with a sound. That’s been around probably since the first GUIs. Those aren’t soundtracks.
But, I’ve been trying to think about all apps as potentially games, or having game design drive the app design. So, that means sound design has to be part of it. In fact, I think it’s a tell that not being able to conceive of sound design for an app means that it isn’t using game-driven design. What’s the soundtrack to MS Word?
In Pokémon Go vs. Apple Workouts, I said that game-design doesn’t drive the Workouts app—it’s slapped on. And even though I play music while doing a workout, that’s not a soundtrack either.
But every workout app could have a sound layer to let you know what is going on. In Sprint-o-Mat, I give you a ding when it’s time to start sprinting. Apple workouts have pace alerts.
But, what more could you do? In AR opens up playability, I said that even mundane apps could become games with AR (e.g. Grocery List vs. Zombies). We have a kind of AR right now with just headphones, so maybe the right sound-design (more than music or system sounds) could make a workout more like a game.
So, if the game in Sprint-o-Mat is a race, here are some ideas for a game-design driven soundtrack: