Category Archives: Software Development

Risks Could Have Mitigations That Also Have Risks

Most software development teams keep track of risks and mitigations throughout their project lifecycle. I think we often stop there and don’t really think about what it would mean to deploy the mitigation.

It would be frustrating for a project risk to manifest and then find that the mitigation is impossible to deploy because it needed prior planning or approval.

For example, let’s say you have a project with two broad phases: (1) a planning/architecture phase done by a very small team, and (2) a long, mechanical phase that requires little coordination and is very parallelizable. An example of this might be converting a large front-end from Angular to React.

There is a risk that you underestimated how long it would take. Possible mitigations are:

  1. Add more engineers to the project (we are stipulating that this would help, since coordination requirements are low).
  2. De-prioritize any other work your engineers might be assigned.
  3. Ask engineers to work longer hours and delay large blocks of time off.

These are if-then mitigations that you only deploy if the risk materializes. If-then mitigations seem good, but the problem is that you are free to list them, and it seems like you have done something about the risk when you really haven’t.

Let’s say we are in month 8 of 12 of the planned work, and you know you are going to slip to 16 months. If the work is important enough to care about that date and you want to deploy the mitigations, you must

  • Get budget approval
  • Find a source of engineers (contractors/internal)
  • Onboard the engineers to the project
  • Have management agree that other projects can be de-prioritized
  • Be prepared for engineers to leave the company

If-then mitigations have their own risks that must also be mitigated. For example:

  • Get budget pre-approval
  • Keep track of other projects engineers are on and keep in constant communication with their stakeholders.
  • Ask engineers to plan for time off well in advance so that it can be accounted for in schedules.
  • Create project onboarding documents and plans.
  • Pre-identify engineers on other projects that would be moved to yours.
  • Pre-negotiate with an outsourcing company, and possibly include them in the project at the beginning of the parallelizable phase.

The benefit of these mitigations is that they can be deployed immediately, and so whatever risk they have should manifest early, thus ending the cycle of Risk->Mitigation->Risk.

Rewrite of “Mitigating the Mitigations” based on “Old, Flawed Work is the Jumping Off Point”.

In Space, No One Can Grow Your Debt

Yesterday, I mentioned that Voyager’s software isn’t lost yet and is still running. I had explored some of the reasons for that a couple of years ago in Long-lived Computational Systems. Since today is the 54th anniversary of the lunar landing, I’ll stay on the space theme.

In Tech Debt Happens to You, I wrote that

[…] the main source of tech debt [is] not intentional debt that you take on or the debt you accumulate from cutting corners due to time constraints. [It’s] the debt that comes with dependency and environment changes.

So, maybe that’s why Voyager could run for so long. Its hardware is in space, so it’s shielded from environmental concerns that would eventually corrode unmaintained metal. And no one can change anything else about its hardware.

I don’t know how much control NASA has over Voyager software updates, but with such low bandwidth and high latency, it can’t change much. This affects debt in two ways. The first is that you can’t add more, but the second has to do with how your current debt grows.

Since the cost of software debt is the interest payment that comes when you try to change the software, if you never change anything, your debt has no additional cost.

Note: The title of this post is a play on the promotional tagline for Alien. For more thoughts see Alien Movie Review: Display Technology.

Software is Losable

When I listed what I thought of as the Great Works of Software, I also offhandedly listed some great works of art: The David, Beethoven’s 5th, and Pride and Prejudice. One of the reasons I think of them as great is that they are old and still relevant. We haven’t lost interest in them, and also, we haven’t lost them.

Pride and Prejudice can be reproduced exactly as it was when it was first written in all the ways we care about, but we could have lost it early on if Jane Austen hadn’t published it. We still have The David after 500 years, but we could possibly lose it forever. We have already lost the first performance of the 5th, but we could make something similar from the music sheets.

In this way, software is most like music. We have source documents, but the real thing is inextricably tied to hardware. We play recordings or perform music just like we run programs—on hardware. The hardware and the performance are a big part of what we think the software is. A book is also delivered on hardware, but we don’t consider the paper or the Kindle or the Audible file part of the work.

All of this makes software losable, because if we lose the ability to run the software, we have lost it.

On my list of the five great works of software, three of them are in active maintenance. We still have the other two (browsers and spreadsheets), but the originals by Tim Berners-Lee and Dan Bricklin are frozen in time and harder to run.

VisiCalc was written for the Apple II in 1979. I don’t know if you can run that version, but Dan Bricklin claims that the PC executable will still run under DOS on modern Windows (because Microsoft believes New Versions Should be Substitutable). I am sure you could get a DOS emulator even if a modern version of Windows can’t run it anymore. We haven’t lost it yet.

There is a version of Berners-Lee’s WorldWideWeb client on GitHub; it was written for a NeXT machine. I don’t know how to run it, but there is at least one NeXT emulator that might be able to, so it’s also possibly not lost yet.

It’s not a “great work”, but I have run My First Real Program on a PET emulator because it was small enough to commit to memory. Almost all of the programs I wrote between then and when I started working professionally are lost because they were tied to floppies that I didn’t keep.

Voyager, possibly the longest continuously running software, is very far away, but not lost yet.

Nebulas July 2023 Update

A couple of years ago, I wrote about designing a game that is the opposite of Asteroids. Last week, I found raylib, a C library for making games, and decided to try building it (I also renamed the game Nebulas).

Here’s what I have so far.

  • You are a triangle ship in the center of a starfield, which can help with navigation.
  • You can rotate (LEFT and RIGHT) and thrust (UP).
  • Space is infinite in all directions. You are always at the center of the screen (see the sketch after this list).
  • There are nebulas moving in the space around you.
  • You can go into them and suck their energy into your ship.
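
Keeping the ship centered is mostly a camera trick. Here is a minimal sketch of one way to do it with raylib’s Camera2D (an illustration with made-up numbers, not the actual Nebulas code): the camera’s target tracks the ship’s world position, and its offset is the screen center, so the ship always renders in the middle while the world moves past it.

#include "raylib.h"

int main(void)
{
    InitWindow(800, 450, "centered-ship sketch");
    SetTargetFPS(60);

    Vector2 ship = { 0, 0 };    // ship position in (unbounded) world coordinates

    Camera2D camera = { 0 };
    camera.offset = (Vector2){ 400, 225 };  // the center of the screen
    camera.zoom = 1.0f;

    while (!WindowShouldClose())
    {
        if (IsKeyDown(KEY_UP)) ship.y -= 2;  // stand-in for real thrust physics
        camera.target = ship;                // the camera always tracks the ship

        BeginDrawing();
            ClearBackground(BLACK);
            BeginMode2D(camera);
                // Everything here is drawn in world coordinates, so it
                // slides past the centered ship as the ship moves.
                DrawCircle(100, 100, 40, PURPLE);    // a stand-in nebula
                DrawTriangle((Vector2){ ship.x, ship.y - 20 },
                             (Vector2){ ship.x - 12, ship.y + 14 },
                             (Vector2){ ship.x + 12, ship.y + 14 },
                             RAYWHITE);              // the ship
            EndMode2D();
        EndDrawing();
    }

    CloseWindow();
    return 0;
}

Because the ship’s world position is unbounded, infinite space falls out for free; only what is near the camera ends up on screen.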

Here’s a video of my progress so far:

Next

  • The color of the nebula is a clue to what power your ship gets from it. (e.g. Red is thrust)
  • The color of the ship is which gas you are currently using.
  • I will show tanks for each color so you can see how much you have of each gas.

The game is to survive as long as possible, conserving your gases. As long as you survive, you can explore.

A Little Linear Algebra Helps to Make Games

I started playing with raylib to make a game that is the opposite of Asteroids. Just to start, I need to be able to draw a triangle that is rotated by some angle around its center, and that’s simple if you understand transformations. raylib does include a raymath module for basic vector math, but not specifically what I needed.
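
For illustration, here is the kind of transformation I mean, as a sketch (my own reconstruction, not the game’s code): translate each vertex so the triangle’s center is at the origin, apply the 2D rotation matrix, and translate back.

#include "raylib.h"
#include <math.h>

// Rotate point p around center c by angle radians: translate so c is
// the origin, apply the rotation matrix [cos -sin; sin cos], translate back.
Vector2 RotateAround(Vector2 p, Vector2 c, float angle)
{
    float s = sinf(angle);
    float co = cosf(angle);
    float dx = p.x - c.x;
    float dy = p.y - c.y;
    return (Vector2){ c.x + dx * co - dy * s,
                      c.y + dx * s + dy * co };
}

// Draw a ship triangle rotated by heading around its center.
// (raylib's DrawTriangle expects counter-clockwise vertex order.)
void DrawShip(Vector2 center, float heading)
{
    Vector2 nose  = RotateAround((Vector2){ center.x, center.y - 20 }, center, heading);
    Vector2 left  = RotateAround((Vector2){ center.x - 12, center.y + 14 }, center, heading);
    Vector2 right = RotateAround((Vector2){ center.x + 12, center.y + 14 }, center, heading);
    DrawTriangle(nose, left, right, RAYWHITE);
}

If your version of raymath includes Vector2Rotate, you can get the same effect by composing it with Vector2Subtract and Vector2Add.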

So, again, I’m feeling The Unreasonable Effectiveness of Mathematics in Programming. I’m not sure you can make any reasonable game without some transformations.

raylib First Impressions

I just ran into raylib (because I’ve been reading a ton of HN blogs) and it’s making me dream of Programming With the Joy of a Thirteen Year-Old. All I wanted to do when I was 13 was make games, and raylib looks like a fun way to do it.

I read the homepage, played a few games, and then read their source. Here’s what I love:

  1. You write in “easy” C. I searched for pointers and signs of dynamic memory and found none in the simple games I read. I’m sure they show up in more complex games, but you aren’t forced to use pointers just to get started.
  2. It runs in the browser. It’s C, so it’s expected that it would be cross-platform, but it can also compile to something that runs in the browser.
  3. No external dependency philosophy. Dependencies are just future tech debt.
  4. No (or very little) magic. It’s just a library. Games are mostly a loop of reading the controller, updating some state, and rendering that state. That logic is very clear in raylib code (see the sketch after this list).
  5. Simple games are simple. I played three classic games and then read their source. They were each one file and followed similar logic.
  6. There is more to it. Once you progress from the simple stuff, it looks to be full-featured, with other things you might want in a game library. But you don’t need to use any of it at first.
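
To make point 4 concrete, here is a hypothetical minimal program (a sketch, not one of the example games) showing the read-the-controller, update-some-state, render-that-state loop:

#include "raylib.h"

int main(void)
{
    InitWindow(800, 450, "minimal raylib loop");
    SetTargetFPS(60);

    Vector2 pos = { 400, 225 };              // the game state

    while (!WindowShouldClose())
    {
        // 1. Read the controller
        if (IsKeyDown(KEY_RIGHT)) pos.x += 2;
        if (IsKeyDown(KEY_LEFT))  pos.x -= 2;

        // 2. Update state (nothing more to do in this sketch)

        // 3. Render the state
        BeginDrawing();
            ClearBackground(BLACK);
            DrawCircleV(pos, 10, RAYWHITE);
        EndDrawing();
    }

    CloseWindow();
    return 0;
}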

I might play around and see where I can get with Nebulous.

Thinking Fast with Keyboard Shortcuts

The book Thinking, Fast and Slow [amazon affiliate link] by Daniel Kahneman describes our brain as having two thinking systems, a fast one and a slow one. The fast system is automatic and multitasking, while the slow system is methodical and single-focussed. When you are doing something complex, you are usually concentrating your slow system on the main problem, while the fast system can be doing several related (and even unrelated) tasks.

This is the main reason I try to memorize and practice several keyboard shortcuts for the programs I use every day. I am trying to get as much of the mechanics of code editing as possible into my automatic, fast thinking system.

There is a common belief that mousing is faster than keyboard shortcuts, which probably originated with this AskTog article:

We’ve done a cool $50 million of R & D on the Apple Human Interface. We discovered, among other things, two pertinent facts:

  • Test subjects consistently report that keyboarding is faster than mousing.
  • The stopwatch consistently proves mousing is faster than keyboarding.

This is probably true in the general case and when the user is learning a new UI. But the study Comparison of Mouse and Keyboard Efficiency suggests that:

[…] for heavily-used interfaces, keyboard shortcuts can be as efficient as toolbars and have the advantage of providing fast access to all commands.

For me, the key is that shortcuts must be deployable with no conscious thought.

Programming often requires you to keep several interrelated thoughts in your head until you get the code written and working. For example, even for simple web UI blocks, you have to think of the semantic structure of the tags, the layout, and the style. To write that code, you may have to jump through a few files and to different parts of those files. So, to keep from adding more cognitive overhead, you want to make the manipulation of the editor as automatic as possible.

This is something I think can never be accomplished with the mouse, but it is possible for a small set of shortcuts. That set should be the most common actions that happen while you are actively programming. The goal is to keep your slow system and short-term memory focussed on the programming task at hand.

For me, that is:

  1. Cut, Copy, Paste, Undo, Redo
  2. In-file navigation with arrows and modifiers (including using Shift for selection)
  3. Find and multi-file Find
  4. Show the current file in the project navigator
  5. Open another file, tab cycling
  6. Jump to the definition of the identifier under the cursor
  7. Comment (or uncomment) the current selection

I use these commands all of the time, and I often need to string a series of them together. Doing the equivalent without them risks engaging your slow thinking system and breaking you out of flow.

Observations on the MIT Study on GitHub Copilot

I just saw this study on GitHub Copilot from February. Here is the abstract:

Generative AI tools hold promise to increase human productivity. This paper presents results from a controlled experiment with GitHub Copilot, an AI pair programmer. Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible. The treatment group, with access to the AI pair programmer, completed the task 55.8% faster than the control group. Observed heterogenous effects show promise for AI pair programmers to help people transition into software development careers.

The researchers report benefits to less experienced developers, which is at odds with this other study I wrote about and with my own intuition. However, all of the participants were experienced JavaScript developers, not people literally learning to program, which is where I think the more detrimental effect would be.

Using Zeno’s Paradox For Progress Bars

When showing progress, if you have a list of a known length and processing each item takes about the same time, you can implement it like this pseudocode:

for (int i = 0; i < list.length; ++i) {
    process(list[i]);
    // notifyProgress takes a numerator and denominator to
    // calculate percent of progress; after this iteration,
    // i + 1 items are done
    notifyProgress(i + 1, list.length);
}

One common problem is not knowing the length beforehand.

A simple solution would be to pick a value for length and then make sure not to go over it.

int lengthGuess = 100;
for (int i = 0; list.hasMoreItems(); ++i) {
    process(list.nextItem());
    notifyProgress(min(i + 1, lengthGuess), lengthGuess);
}
notifyProgress(lengthGuess, lengthGuess);

This works OK if the length is near 100, but if it’s much smaller, the progress will have to jump at the end, and if it’s much bigger, it will get to 100% way too soon.

To fix this, we might adjust lengthGuess as we learn more:

int lengthGuess = 100;
for (int i = 0; list.hasMoreItems(); ++i) {
    process(list.nextItem());
    if (i > 0.8 * lengthGuess) {
        lengthGuess = 2*i;
    }
    notifyProgress(i, lengthGuess);
}
notifyProgress(lengthGuess, lengthGuess);

In this last example, whenever i gets to 80% of lengthGuess, we set lengthGuess to 2*i. This has the effect that the progress bounces back and forth between 50% and 80% and then jumps to the end. This won’t work.

What I want is:

  1. The progress bar should be monotonically increasing.
  2. It should get to 100% at the end and not before.
  3. It should look as smooth as possible, but it can jump.

An acceptable effect would be to progress quickly to 50%, then slow down until 75% (50% of the way from 50% to 100%), then slow down again at 87.5% (halfway between 75% and 100%), and so on. If we keep doing that, we’ll never get to 100% in the loop and can jump to it at the end. This is like Zeno’s Dichotomy paradox (from Wikipedia):

Suppose Homer wants to catch a stationary bus. Before he can get there, he must get halfway there. Before he can get halfway there, he must get a quarter of the way there. Before traveling a quarter, he must travel one-eighth; before an eighth, one-sixteenth; and so on.

To do that, we keep a factor that scales how much progress each processed item adds, and we shrink that factor each time we get partway to the end (playing around with it, I found that using a factor of 1/3 rather than 1/2 was more pleasing).

int lengthGuess = 100;
double begin = 0;               // progress value at the last slowdown
double end = lengthGuess;
double iBase = 0;               // value of i at the last slowdown
double iFactor = 1.0;           // progress added per processed item
double factorAdjust = 1.0/3.0;
for (int i = 0; list.hasMoreItems(); ++i) {
    process(list.nextItem());
    double progress = begin + (i - iBase) * iFactor;
    // Once we get a third of the way from begin to the end,
    // slow down: restart from here at a third of the speed.
    if (progress > begin + (end - begin) * factorAdjust) {
        begin = progress;
        iBase = i;
        iFactor *= factorAdjust;
    }
    notifyProgress(progress, lengthGuess);
}
notifyProgress(lengthGuess, lengthGuess);

The choice of lengthGuess is important; I think erring on the small side is your best bet. You don’t want it to be exact, because we slow down once we get 1/3 of the way (factorAdjust) toward the goal. The variables lengthGuess and factorAdjust could be passed in and determined from whatever information you have about the length of the list.

How to fix WCErrorCodePayloadUnsupportedTypes Error when using sendMessage

If you are sending data from the iPhone to the Apple Watch, you might use sendMessage.

func sendMessage(_ message: [String : Any], replyHandler: (([String : Any]) -> Void)?, errorHandler: ((Error) -> Void)? = nil)

If you do this and get the error WCErrorCodePayloadUnsupportedTypes, it’s because you put an unsupported type in the message dictionary.

The first parameter (message) is a dictionary of String to Any, but the values cannot really be of any type. If you read the documentation, it says that message is

A dictionary of property list values that you want to send. You define the contents of the dictionary that your counterpart supports. This parameter must not be nil.

“property list values” means values that can be stored in a property list (plist). This means you can use simple types like Int, Bool, String, Date, and Data, and you can also use arrays and dictionaries as long as they contain only those simple types (e.g. an Array of Ints).

I ran into this issue because I tried to use a custom struct in the message dictionary, which is not supported.
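
If you need to send a custom struct anyway, one common workaround is to encode it to Data, which is a property list type, and decode it on the other side. Here is a sketch; the Stats struct and sendStats function are hypothetical:

import Foundation
import WatchConnectivity

// A hypothetical custom type. Putting a Stats value directly into the
// message dictionary fails with WCErrorCodePayloadUnsupportedTypes.
struct Stats: Codable {
    var score: Int
    var name: String
}

func sendStats(_ stats: Stats, via session: WCSession) throws {
    // Encode the struct to Data, which is a property list type
    let data = try JSONEncoder().encode(stats)
    session.sendMessage(["stats": data], replyHandler: nil) { error in
        print("send failed: \(error)")
    }
}

// On the receiving side, decode it back:
// let stats = try JSONDecoder().decode(Stats.self, from: message["stats"] as! Data)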

Note: I made this post because Google is sending people to Programming Tutorials Need to Pick a Type of Learner, which mentions WCErrorCodePayloadUnsupportedTypes incidentally but isn’t really about that.