Category Archives: Software Development

Habits 2.01 – Start Small

When I announced Habits 2.0, a fellow Western MA Hackathoner, Molly McLeod, reminded me of BJ Fogg and his Tiny Habits method.

Only three things will change behavior in the long term.

Option A. Have an epiphany
Option B. Change your environment (what surrounds you)
Option C. Take baby steps

I had first learned of BJ from Ramit Sethi’s interview with him. The moment I remember most clearly was his method to start to floss. He suggested that you only commit to flossing one tooth each day — if you did that to start and internalized that that was success, you’d start flossing more eventually. I started doing this, and while I’m not a perfect flosser, I do floss most of the time. That convinced me that baby steps were a real thing. If you have any interest in this, sign up for a (free) week-long tiny habits session with BJ.

So, with Habits 2.0 out the door, I am going to plan 2.01, a baby step improvement of 2.0 by just doing a very small amount of work each day on it.  I joined BJ’s tiny habits for this week and he recommends adding a 30-second behavior triggered by something you will definitely do each day. I decided that once I put my dinner plate in the dishwasher, I will sit at my desk and run the Simulator. Then, I will celebrate that as a success (and mark it done in Habits, of course).

I have been doing that for about 6 days, and each day when I run the simulator, I usually test Habits out a little, and write up a Trello card or write a small test.  BJ’s advice is to keep it completely pain-free and small and to not worry about building on the tiny behavior. Still, in this time I have managed to make a bunch of small improvements to Habits, which I look forward to sharing soon.

Apply Jobs-to-be-Done (JTBD) to Recruiting

Jobs-to-be-done (JTBD) is a theory for what causes us to buy things. The quick description is: jobs arise in our lives and then we hire products or services to do them. The key insight is that the job attributes should be used to guide product development, not the customer attributes. Here is Clay Christensen describing the concept if you haven’t heard it before.

[Embedded video: Clay Christensen on Jobs-to-be-Done, starting at 25:20]

This theory was publicly introduced in his book, The Innovator’s Solution, but is being popularized by Bob Moesta and Chris Spiek in Switch Workshops and soon at the Business of Software conference.

I was lucky enough to participate in some training with Bob and Chris, and so I often think of Jobs theory whenever I wonder why people do anything, and recently I’ve been thinking about recruiting (yes, Atalasoft has a job opening for a software developer in our marketing department to help evangelize our products).

Now, when hiring, we naturally think of the job we need done, and of course, we are explicit about that when writing the ad, evaluating resumes, interviewing, and ultimately hiring someone. The job has hiring criteria, and we use them.

But, at the same time, potential applicants also have a job-to-be-done in their lives, and they are judging us with their own hiring criteria. This is where we usually fall apart. We try to write job descriptions that sell ourselves too, but, frankly, I’m not sure they actually address the applicant’s criteria.

In their workshops, Bob and Chris teach how to find out why people switch from one product to another by interviewing people that have done it already. How many of us have interviewed our recent hires to find out why they switched from their old job to ours, how they found out about it, what happened in their lives to cause them to want to switch jobs? If we did that, I think we’d find that we’re advertising in the wrong places, not emphasizing the right strengths, and generally not making the applicants know that we meet their hiring criteria.

I’m sorry to say that I haven’t done this, so I don’t really know what needs to change.

In any case, I’m going to be thinking and posting more about this — hopefully trying it in practice. In the meantime, if you are a web programmer (preferably in .NET or Java), have at least 5 years of experience, and you want to work in a developer tools company’s marketing department, creating technical content (demos, blogs, articles, sample code, tutorials, brochures, etc) to help developers learn more about our products, get in touch with me. At Atalasoft, you’ll work with smart and hard-working colleagues, where we have an enormous amount of respect and trust in each other. Some of the best programmers in Western MA have chosen to work here, and we can’t wait to meet you.


Habits 2.0

Back in 2008, I made a simple iPhone app called Habits to help me remember to do recurring tasks that weren’t on a regular schedule. I made a few updates early on, but it basically did what I needed it to do, so it’s been a while since I have looked at it.

A couple of weeks ago, I decided to refresh its look in anticipation of iOS 7. Unfortunately, an app compiled for iOS 6 doesn’t automatically pick up the new look — at the very least, you need to recompile. Rather than just recompiling, I decided to design something custom that would look good now and feel at home on iOS 7. While I was at it, I updated the icon using the iOS 7 app icon grid.

You can see some screen shots on the Habits documentation page, and if you want to buy it, Habits is 99 cents on the App Store.

Here’s a full list of everything that I had to do for 2.0 in case you’re a developer with an older app and want to see what you might be in for.

  • Converted to an ARC app
  • Moved lots of properties to auto-synthesize
  • Updated deprecated APIs to iOS 6.0 versions
  • Skinned the tables, mostly with custom cells
  • Added a pan gesture to the front-page cells (try moving them to the left for a short-cut)
  • Supported local notifications and badges (requiring a new settings page)
  • Made a new icon
  • Updated my Google Toolkit unit testing to Xcode’s built-in unit testing (which was, thankfully, very easy) — the main issue is dealing with unit testing’s idea of the document folder
  • Updated all button and default images
  • Updated in-app help
  • Converted my svn repository to git
  • Added database migration to support the settings (this app uses sqlite API directly)
  • Refactored a lot of code, mostly in the database, view controllers, and custom cells, to share more code
  • Fixed a bug in the calendar to better support the iPhone 5 screen size
  • Updated App Store listing, web page, made this post, etc.
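
For apps that use the sqlite API directly, the migration step above usually amounts to tracking a schema version and applying whatever statements the database hasn’t seen yet. Here’s a minimal Python sketch of that pattern, using sqlite’s user_version pragma — the table and column names are hypothetical, not from Habits itself:

```python
import sqlite3

# Each entry advances the schema by one version. Appending new
# statements here is all a future release needs to do.
MIGRATIONS = [
    # version 1: original schema
    "CREATE TABLE habit (id INTEGER PRIMARY KEY, name TEXT)",
    # version 2: column added for the new notification settings
    "ALTER TABLE habit ADD COLUMN notify_hour INTEGER DEFAULT NULL",
]

def migrate(conn):
    # PRAGMA user_version starts at 0 on a fresh database.
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, statement in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(statement)
        conn.execute(f"PRAGMA user_version = {version}")
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)   # brings a fresh database to version 2
migrate(conn)   # safe to call again: already-applied steps are skipped
```

Because the version is stored in the database itself, users upgrading from any older release get exactly the statements they’re missing.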

Don’t assume ARC solves all of your memory problems

You should absolutely be using ARC in your iOS projects, and if the project predates ARC, go ahead and use the refactoring tool to get it to ARC. It really doesn’t take long and you’ll end up with a more stable app that will be easier to maintain.

That being said, you can’t completely ignore memory management. You can still get EXC_BAD_ACCESS, Zombies, leaks, etc., even in ARC projects. Here are some things you should know:

  1. ARC is not garbage collection. It statically analyzes your code and then puts in release and retain calls where they are needed. It’s still susceptible to a retain-cycle — two objects with references to each other. You can still have references to dead objects.
  2. If you have a retain-cycle, a common way to deal with that is to make one of the properties weak (which you should probably do), but now that reference is susceptible to becoming a Zombie. A weak property will not call retain on the object, so when the object is deallocated, it would then refer to a dead object. If you have weak properties and get EXC_BAD_ACCESS, go reproduce it under the Zombie instrument.
  3. Under ARC, you cannot call autorelease any more, but calls into non-ARC libraries still can (and do, especially the iOS frameworks). This means that you sometimes need your own autorelease pool. Under ARC, use the @autoreleasepool keyword to wrap areas where autoreleased objects are created that you need released before you return to the thread’s main pool. If you see leaks in Instruments of objects that you never alloc or hold onto, and you use threads, add in @autoreleasepool blocks.
  4. Don’t use non-ARC code in your project by copying the source in. Build it in its own Xcode project and then use the resulting .framework or .a in your project. It’s likely it wouldn’t compile anyway, but just in case. (If you happen to be on MRC, then really don’t copy ARC source into your project — it will usually compile, but it will leak like crazy.)
  5. Test your code under the Zombie and Leaks instruments — especially if you use bridging, weak references, or are in any way managing retain-cycles (breaking them yourself without weak).
  6. It’s rare, but I ran into a bug in the iOS framework that didn’t correctly retain Storyboard-created gestures in a tabbed app. It was my first big ARC project, and it didn’t even occur to me to check for Zombies, but that would have pinpointed the issue right away. Rule of thumb: the underlying code is still normal retain/release/autorelease based — debug it the same way you would have under Manual Reference Counting.
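
Points 1 and 2 can be sketched in Python rather than Objective-C (an analogy only, not the actual ARC mechanics): weakref plays the role of a weak property, breaking the cycle at the cost of being able to outlive its target. The class and variable names here are hypothetical.

```python
import gc
import weakref

class Parent:
    def __init__(self):
        self.child = None              # strong forward reference

class Child:
    def __init__(self, parent):
        # A strong back-reference here would form a retain-cycle,
        # so the back-pointer is made weak instead.
        self.parent = weakref.ref(parent)

p = Parent()
p.child = Child(p)
child = p.child

assert child.parent() is p             # resolves while the parent is alive

del p                                  # drop the only strong reference
gc.collect()                           # CPython frees it immediately anyway

print(child.parent())                  # None -- the target is gone
```

The weak side never kept its target alive, so once the last strong reference goes away, dereferencing it finds nothing — the same shape of bug that the Zombie instrument helps you catch in Objective-C.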

Further Reading:

Get iPhone programming tips in your inbox with my Beginner iPhone Programming Tips newsletter.


Man of Steel Review: Kryptonian Display Technology

SPOILERS for Man of Steel ahead

Display technology is a recurring theme in my limited perspective reviews. For Oz, I wrote about live display on smoke, and for the original Alien, I came up with a theory for why you’d have such high DPI green screens in the future.

Man of Steel offers a similar conundrum, as the display technology for Krypton is a flying pinscreen. It’s monochrome, it’s extremely low DPI, and of course it’s designed to look good on the screen, especially in 3D.

I have looked for some stills of this, but can’t find any. If you’ve seen the movie, the technology I’m talking about is what showed Kal-El in-utero, flew next to Jor-El in his escape, showed Lara’s head when she warned Jor-El to look behind him, and presented Artificially Intelligent (AI) Jor-El’s history of Krypton to Superman.

Like I said, the driving force behind this technology being in the movie is undoubtedly because it looks good on screen. But, as always, my review of the in-movie technology is based on the fictional world that we are presented with, not what it means to us as viewers of the movie.

Here’s what we know about this technology:

  • It needs to work in totally wireless flying displays, perhaps even in military contexts
  • It appears to be two-way as Lara can see what is happening on the other side
  • There seems to be a range of quality and size

It makes sense if this started as a military-use display. When Jor-El is riding the flying mount, and the display is flying next to him, it feels like a common use-case. A display made of metal “not on our periodic table” would be a nice piece of battle-hardened equipment.

This use-case also explains the low-DPI and monochrome you get in this context — this device needs to conserve power and be real-time in low-bandwidth situations. When power is readily available and you are connected (like in Jor-El’s “History of Krypton” presentation), the display becomes very high-quality.

And, true to Clay Christensen’s innovation theory, we expect an innovative product to be worse on conventional measures, but better on new ones. This display shows 3D to the in-movie characters with no glasses or tricks because the display is actually in three dimensions.

The movie has another display, a perfect 3D projection (used to show AI Jor-El), but it seems to be available only inside ships, so it’s not appropriate for outdoor military use-cases. I’m assuming that Jor-El didn’t use it for his presentation because Krypton also has advanced presentation theory and realized that the graphic/old-school look added a certain something.

The Seed Bank Hackathon Story

I spent last weekend at UMass with over a hundred fellow hackers to work on local challenges as part of the National Day of Civic Hacking. The event (Hack for Western Mass) itself was masterfully run by an amazing group that I hope to work with again.

The team I was on successfully completed our goal to get the Hilltown Seed Saving Network a webapp to manage their decentralized seed bank.

Here’s what worked for us:

We started before the weekend. Using the event’s wiki, Rosemary (a member of the network who knows HTML, but needed help on the back-end), Beryl (a CIT professor at Elms College) and I connected. We had a thorough discussion in the week before the event, and it became clear that Beryl and I could collaborate in python using Django to do this. Beryl knew python and did Django tutorials to prepare, and I have a few small Django backed sites.

We were a small team with complementary skills. At the event, we picked up Sheila, a new team member who set up our Facebook and Twitter pages and trained Rosemary on Hootsuite so that she could manage interactions. During the event, a seed swap was executed using Facebook — talk about a minimum viable product! Beryl and I worked on the site, and Rosemary worked on our HTML template. We were a small, but efficient and effective team.

Rosemary came with wireframes. Here are the login and the add seed wireframes. She brought about a half-dozen more. Here’s the production version of login and add seed.

We set up a system to keep going. By Saturday night an 80% functional app was in the hackforwesternmass/seednetwork repo on GitHub. Since the event there have been dozens of small commits. It’s been really fun working with this team, and we have systems set up to work together.

We got to production as fast as we could. One of the organizers, Andrew, gave me a crash course in Heroku, and I had it done Sunday evening (Hilltown Seed Saving Network seed bank production site). Next time, I would do this as soon as we had anything ready to go.

We stayed focused. By Sunday, with only four hours or so before the presentations, there were a lot of possible distractions (incorporate maps? go on community access TV? prepare our presentation!). We tried to keep the end-goal in mind — get the Hilltown Seed Saving Network a functional app. We might revisit some of those ideas later, but having a clear goal made it possible to ignore everything else.


Introducing fishbike: Pure procedural programming

As an aspiring FP connoisseur, I spend my free time thinking about higher-order functions, data-ing all the things, and hating on blub. One of my favorite new Twitter follows is @jessitron (seriously, just go watch everything you can find, especially Functional Principles for OO Development).

In any case, she recently tweeted:

Which got me thinking — what if the procedure was pure? Then, you could replace it with nothing! Talk about power.

On my way to work today, I banged out a prototype, and so, I proudly introduce fishbike — pure procedural programming. Admittedly, uses are limited.

You can use the fbc command as either an interpreter or a compiler, so first create a text file called primes.fb (you can use touch). This is a fishbike program to find all primes not divisible by themselves. Since this is a procedure, it can’t return anything. So, if it finds any, it will do some side-effect (update your database, tweet the answer, email your mom, etc.). Run it with:

fbc primes.fb

Or compile it with

fbc primes.fb > primes
chmod +x primes
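
For the curious, here’s what a conventional (impure) Python program would have to compute. Every prime is divisible by itself, so the set of “primes not divisible by themselves” is empty — which is exactly why a pure procedure that reports them can be replaced with nothing. (This is my own illustration; fbc is, of course, the joke.)

```python
def is_prime(n):
    # Trial division up to sqrt(n).
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# n % n is always 0, so the filter below can never pass.
weird_primes = [n for n in range(2, 10_000)
                if is_prime(n) and n % n != 0]
print(weird_primes)  # []
```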


Oz: Review of wizard projection technology

This is the latest in a series of limited perspective movie reviews. These reviews, inspired by a Letterman bit, look at only one narrow aspect of a movie, related to software engineering or software business.

SPOILER ALERT: These reviews assume you have seen the movie.

Here’s Inception, Breaking Dawn, Star Trek, and Alien.

Oz is set in 1905, so it doesn’t have much advanced technology. I researched the novels of L. Frank Baum, and I was surprised to learn that he wrote a novel that introduced the idea of augmented reality. Also, in the Wizard of Oz novel, the Emerald City isn’t made of emerald (as it is in the original movie and this one); instead, Dorothy and her companions are given glasses to wear that make it look as if it is. Unfortunately, since my review is limited to Oz, the movie, I can’t write up a comparison between that and Google Glass.

Instead, I’ll concentrate on the one big use of advanced technology, the wizard’s “ghostly head” effect. If you remember the Wizard of Oz, we first see the wizard as a ghostly head appearing in smoke.

Wizard of Oz

Later, this is revealed to be a projection from a complex machine operated by a man behind a curtain.

Wizard revealed

In Oz, we get a little insight into this machine. It’s implied that it was inspired by the Projection Praxinoscope.

Projection Praxinoscope

He combined the idea with Phantasmagoria, which is the projection onto smoke.

The thing is, in both of these technologies, you need to put the image on a transparent slide to shine light through. In the wizard’s case, he was able to project his live head (not a series of still images). I see no reference to this technique anywhere, but it’s a plot point that he’s being helped by The Master Tinkerer (who, in the novels, created the Tin Man), so we can assume that he could invent something.

The only thing I can think of is using a one-way mirror somehow. When I researched that, I found Pepper’s ghost:

Pepper's ghost

In this case, you are looking through a view-port (red rectangle) at an angled one-way mirror (green rectangle). The left area is out of view, and shows up in the mirror when it is lit.  You make the ghost appear and disappear by raising and dimming the light in the off-stage area as compared to the area through the mirror.

So, it’s not a projection, but I could imagine that a master tinkerer and Oz (who is himself a master of prestidigitation) could figure out some way to combine these technologies (all available in the late 1800s) into a new live smoke-projection effect.

What I can’t imagine is how the older wizard is able to upgrade this to show the green, bald head instead of his own.

App.net is Bring Your Own Back-end (BYOBE) for mobile apps

When I read this about Climber (a new iPhone app that lets you post short videos to App.net):

Our video pages simply rely on post data to retrieve links to video files contained in personal file storage. If a user chooses to delete their post, or even just delete the video file in their file storage, then it can no longer be viewed on our website.

It struck me that the full cost of the back-end of this app was being paid for by the user. Until this, developers typically paid for a back-end or relied on free services.

When the developer paid, they had to deal with their app having a low, fixed life-time value, but unbounded costs. They either added a premium subscription service (Evernote), in-app currency (mostly games), or ads. Another common solution was to get acquihired before it imploded and have the new owner assume the costs (Instagram) or shut down the app (nearly everything else, but recently, Summly).

If they relied on free services, then they risked the service going away. Mobile RSS readers that treated Google Reader as a sync service now have to either create their own or see what develops. Twitter client developers got hit with limited user-tokens. App.net is offering another choice — Bring Your Own Back-end (BYOBE) — where they sell the user on the benefits of a backend and deliver less value to the developer (for a lower cost).

BYOBE isn’t new with App.net. Salesforce users have Apex, where they can find 3rd-party apps that run on Salesforce servers. Apex app developers do not have to build out infrastructure for their applications if they can stay within Apex guidelines. In some sense, Evernote premium is also BYOBE for 3rd-party applications. In fact, any app developer who solves their business-model issue with a subscription service can also transition to being a BYOBE provider. I’d put Github, Dropbox, and many others into this category.

But, App.net is nearly a pure-play BYOBE and is planning on something more general purpose than what we’ve seen. In the founding documents of App.net, Dalton Caldwell wrote:

As I understand, a hugely divisive internal debate occurred among Twitter employees around this time. One camp wanted to build the entire business around their realtime API. In this scenario, Twitter would have turned into something like a realtime cloud API company. […] I think back and wish the pro-API guys won that internal battle.

The price for developers is a flat $100/year. App.net gets its variable revenue from the apps’ users, as they must subscribe in order to use any app on top of the infrastructure. It’s tempting to think that they are paying for Alpha (App.net’s Twitter clone), but that’s just an app built on the infrastructure — they are paying for API usage, not Alpha. App.net is not a paid service for mobile application developers — App.net is a paid service for mobile application users. The closest thing I can compare it to is iCloud, where users pay for iTunes Match or more storage, but developers get a syncing API to build on (or will eventually, when it works).

The benefits to users are clear (they are now paying, so they accrue some value):

  • They control their data
  • They aren’t sold to advertisers
  • They can plan on some sort of longevity and consistency

Developers, who now have lower costs, also get less value:

  • They give up control of the user and user data
  • It would probably be harder to independently charge the user another subscription

They do get a service they can count on, but they could have that with Parse, Kinvey, AWS, or any number of paid back-end as a service companies. Most of the developer benefits are over free services, which I don’t think make sense to build on.

I certainly see why I’d want to be an App.net user based on this — but, since I don’t want Alpha (or message-feed-based services), it will make more sense to me when the app market is more diverse (not just a bunch of Alpha clients).

As a developer, this model appeals to me because I think of this space more like a hobby — I would definitely forgo control to not have to think about or pay for the back-end. It would be even more interesting if App.net had a payments option or revenue share. Rather than Twitter’s limit of 100,000 user tokens, App.net is incented to reward a developer who gets that kind of traction. This brings costs to developers even lower (perhaps negative), and transfers value to App.net and users (App.net keeps control of user payment data, and users have one bill).

I certainly don’t think this is the right mix of value and cost for every user — the key will be whether users think of themselves as having paid for an app and then getting the services (and the other apps) along with it. Then, each different app category brings in different users. For example, it feels like Alpha costs $36/year, but what if you paid $36/year for your (let’s say) project management app, and just got Alpha for free? If Alpha is ever to have an enormous user-base (who all still pay for App.net), it has to be because they think they are paying for apps.

If there’s any kind of model for this, it’s the fact that I buy a data-plan for my phone and bring it to my apps. App developers do not have to sell me a data-plan. I buy electricity for my toaster, water for my shower, gas for my oven — consumers are no strangers to buying infrastructure.

Come to think of it — is this the natural order? The main alternative is turning out to be infrastructure subsidized by ads.

Alien Movie Review: Display Technology

On my work blog I have a series of limited perspective movie reviews. These reviews, inspired by a Letterman bit, look at only one narrow aspect of a movie.

SPOILER ALERT: These reviews assume you have seen the movie.

Here’s Inception, Breaking Dawn, and Star Trek.

I watched Alien with a friend recently and was struck by [the ship computer] Mother’s display. This (SPOILER) review has the best (SPOILER) image I could find — unfortunately, green text on black is particularly susceptible to JPEG compression artifacts. In the movie, the actual display is extremely high-res (perhaps “retina”), with no pixels visible on my friend’s large screen showing the Blu-ray edition. On the other hand, the display is a green screen and fairly small. It’s an odd vision of the future.

My guess for the choice is that this was the best they could do in 1979, but I’ll try to make sense of it in the context of the movie, which is made quite a bit harder by Prometheus. In Prometheus, there are full-fledged holograms, so I’m guessing they have color displays. It was only 29 years earlier, so it’s hard to explain a display regression that wouldn’t also affect space travel technology. The only explanation is conscious choice on the part of the ship builder.

Here are our clues:

  1. The computer doesn’t seem to be able to display anything other than text.
  2. It can’t receive any input other than keyed-in text.
  3. It has a natural language interface.
  4. The computer is offline with respect to Earth.

My guess: actual Earth computers of this era are 100% speech-driven, with no displays. This disruptive innovation has decimated the display market.

Like now, the voice recognition requires a connection to a server farm to pull off. As a hack for when you have to go offline, they give you some of the AI client-side, but it can’t understand speech anymore, so they slap on a display.

There’s, of course, no such thing as a display with less than retina quality as that bar was passed a while ago. However, since displays aren’t used by mainstream tech any more, they had to use a batch of small displays from some niche supplier — perhaps a line of hipster, retro digital alarm clocks.