Category Archives: Software Development

Pre-define Your Response to the Dashboard

A few days ago, I wrote about using Errors Per Million (EPM) instead of success rate to get better intuition on reliability. I also recently said that Visualizations Should Generate Actions. Sometimes it’s obvious what to do, but if not, you can think through the scenarios and pre-define what actions you would take.

Here’s an example. This is a mock-up of what a dashboard showing EPM over time might look like. The blue line is the EPM value on each date:

The three horizontal lines set levels of acceptability. Between Green and Yellow is excellent, between Yellow and Red is acceptable, and above Red is unacceptable. When we did this, we thought about using numbered severity levels (like in the Atlassian incident response playbook), but we decided to use Green/Yellow/Red for simplicity and intuition.

We also pre-defined the response you should have at each level. It was something like this:

Green: None.
Yellow: There must be at least one item in the current sprint with high priority to address this until the level is back to Green. It can be deployed when the current sprint is deployed.
Red: At least one person must be actively working to resolve the issue and doing hot fix deploys until the level is back to Yellow.
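
In code form, that mapping might look something like the sketch below. The threshold values here are hypothetical stand-ins for the Green/Yellow/Red lines on the real dashboard:

```python
# A sketch of the level/response mapping above. The thresholds are
# hypothetical; use whatever your team negotiates.

YELLOW_THRESHOLD = 1_000  # EPM at or above this is no longer Green (made up)
RED_THRESHOLD = 5_000     # EPM at or above this is Red (made up)

RESPONSES = {
    "Green": "None.",
    "Yellow": "Keep at least one high-priority item in the current sprint "
              "until the level is back to Green.",
    "Red": "At least one person actively works on it, with hot fix deploys, "
           "until the level is back to Yellow.",
}

def level(epm: float) -> str:
    if epm < YELLOW_THRESHOLD:
        return "Green"
    if epm < RED_THRESHOLD:
        return "Yellow"
    return "Red"

today = level(2_000)
print(today, "->", RESPONSES[today])
```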

The advantage of this was that these actions were all pre-negotiated with management and product managers. This meant that we could just go ahead and fix things (at a certain level) instead of items getting lost in the backlog.

When we created this dashboard, we were in the Red, but we knew that going in. We worked to get ourselves to Green, and in practice we were rarely not Green. This is another reason to pre-define your response: it’s too hard to remember how to handle situations that rarely happen.

When 99.8% Success Wasn’t Very Good

One of my projects at Trello was looking into and fixing a reliability problem we were having. Even though it seemed solid in testing, we were getting enough support tickets to know that it must be worse than we thought. We collected data on its success rate and found out that it was successful 99.8% of the time. To move forward with doing more work on it, I had to convince our team and management that 99.8% was bad.

At the time, there was a company-wide push at Atlassian to improve reliability, and there was a line manager assigned to oversee it across Trello teams, so I spoke to him. I showed him the data, but also told him that there was an overall feeling on our team that it was affecting our customers more than it seemed.

He suggested that I flip the ratio and instead look at the data as Errors Per Million. When you do that, with the same data, you get 2,000 errors per million attempts. This particular thing happened around 2 million times per day, so that was 4,000 errors per day. That partially explained the issue, because it wouldn’t take much for the support tickets to get out of control. Luckily, not all 4,000 were of the same severity, and many were being retried. Still, that number is too high.
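
The conversion itself is just arithmetic. Here is the same calculation as a quick sketch, using the numbers above:

```python
# 99.8% success at roughly 2 million attempts per day, expressed as EPM.
success_rate = 0.998
attempts_per_day = 2_000_000

failure_rate = 1 - success_rate                    # 0.2% failures
epm = failure_rate * 1_000_000                     # ~2,000 errors per million
errors_per_day = failure_rate * attempts_per_day   # ~4,000 errors per day

print(f"{epm:,.0f} EPM, {errors_per_day:,.0f} errors per day")
```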

The other thing I found after more analysis was that the errors were not evenly spread across the user base. They tended to cluster among a smaller cohort, which was experiencing a much worse EPM than the rest of the users.

With that in hand, we greenlit a project to address this by targeting the most severe problems. We found several bugs. Eventually we got the success rate to 99.95%.

Looking at the percentages, it’s not obvious that 99.95% is four times better than 99.8%, but the equivalent EPM after the fixes is 500 (as opposed to 2,000 before). What surprised me was how different a reaction a high EPM got compared to the equivalent numbers. 2,000 EPM is literally the same as a 99.8% success rate and a 0.2% failure rate, but the latter two seem fine. Even if I say we get 2 million attempts per day, it’s hard to intuitively understand what that means.

When I said 2,000 EPM, we instinctively felt that we failed 2,000 users. When I said we get 2 million attempts per day, everyone knew to double the EPM to get 4,000 incidents per day, and we felt even worse. That simple change in reporting made all the difference in our perspective.

How I Use JIRA and Trello Together

I started using JIRA for issue tracking when I worked at Trello (at Atlassian), and I still use it now. JIRA does everything I need in managing software projects, but I never send people outside of my team to JIRA because it’s not easy for casual users. For that I use Trello.

I have a Trello board for each project I am managing that is meant to be a high-level summary of that project. It is useful for onboarding and getting its current status easily. It has links to JIRA, Confluence (for specifications), Atlas (for status) and Figma.

This Trello board is the first place I send a new team member to help with onboarding. If someone has a question in Slack about the project, I make sure the answer is something you can find on the board and then link them to it there. The board is a kind of dashboard and central hub for the project.

These hub boards are curated, so I don’t try to use any automations to bring things over. If I think you need more information, I send you directly to the source.

JIRA is useful to the people that work on the project every day. I use Trello for those that just check in weekly or monthly.

Visualizations Should Generate Actions

Yesterday, I shared a heatmap visualization that I used to target manual testing time. I chose a heatmap to show this data because you can tell what you need to do just by looking at it.

In this example, the heat map shows the test status of iOS devices across different features in an app:

It’s pretty clear that you should get an iPhone 13 with iOS 15 on it and start testing everything. You could also explore board creation on all devices. If the entire heatmap were green, you would know that you had probably covered most areas.

It would be easy to write a program that took this same data and generated a to-do list instead. Maybe that would be preferable, but people like visual dashboards, and it’s easier to see the why behind the task if you have a sense of the underlying data.
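
For illustration, here’s a minimal sketch of what that program might look like. The data shape and the 0-to-1 coverage score are assumptions, not the real dashboard’s format:

```python
# Turn heatmap data into a to-do list: any cell below the cutoff becomes a task.
# The (feature, device) keys and coverage scores here are made up.

coverage = {
    ("Board creation", "iPhone 13 / iOS 15"): 0.05,
    ("Card editing",   "iPhone 13 / iOS 15"): 0.10,
    ("Board creation", "iPhone 12 / iOS 15"): 0.20,
    ("Card editing",   "iPhone 12 / iOS 15"): 0.90,
}

UNDER_TESTED = 0.3  # hypothetical cutoff for a "red" cell

todo = [
    f"Test '{feature}' on {device}"
    for (feature, device), score in sorted(coverage.items(), key=lambda kv: kv[1])
    if score < UNDER_TESTED
]
print("\n".join(todo))
```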

But, that’s a big clue to whether your dashboard visualization works. If you could easily generate a to-do list just by looking at it, then it probably works. If you look at your dashboard and have no response, it might look pretty, but it’s not doing its job.

Use Heatmaps for iOS Beta Test Coverage

At Trello, I built a simple visualization for understanding coverage of our app during Beta periods. We used Mode to analyze data, and so I used their Heatmap.

Here’s a recreation in Google Sheets:

Along the top was each device family and OS. Individual devices were grouped based on how likely they were to be similar in testing (based on size, version, etc). I used this list of Apple device codes (which were logged with analytic data).

Along the left side were the most important screens and features. It was a much longer list that was generated from analytic categories.

The center of the visualization was a heat map based on how much usage each feature got on each device (at that intersection) during the beta, normalized against how much usage it got in production. So, if a cell was green, it meant that it was tested a lot compared to how much it was used in production. If a cell was red, it meant it was under-tested.

Often, entire columns would be nearly all red because that device/OS combination wasn’t used much by our beta testers. So, we could direct our own efforts toward those devices and turn an entire column from red to green.

We also made sure new features would get their own row. These could also be red because beta testers might not know about them. We could similarly target those areas on all devices. These features could not be normalized against production usage (since they were not in production yet), so we used a baseline usage as a default.
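
Putting that normalization into code, a sketch might look like the following. The event counts, data shape, and baseline value are all hypothetical; the real version was built in Mode against our analytics data:

```python
# Beta usage share divided by production usage share, per (device, feature) cell.
# All numbers below are made up for illustration.

beta_usage = {  # (device/OS, feature) -> events logged during the beta
    ("iPhone 13 / iOS 15", "Board creation"): 12,
    ("iPhone 13 / iOS 15", "Card editing"): 150,
    ("iPhone 12 / iOS 15", "Board creation"): 80,
}
prod_usage = {  # same cells, events logged in production
    ("iPhone 13 / iOS 15", "Board creation"): 40_000,
    ("iPhone 13 / iOS 15", "Card editing"): 900_000,
    ("iPhone 12 / iOS 15", "Board creation"): 60_000,
}

NEW_FEATURE_BASELINE = 0.001  # assumed production share for features not shipped yet

beta_total = sum(beta_usage.values())
prod_total = sum(prod_usage.values())

def coverage(cell):
    """Near or above 1.0 means the beta exercised this cell about as much as
    production uses it (green); well below 1.0 means it is under-tested (red)."""
    beta_share = beta_usage.get(cell, 0) / beta_total
    prod_share = prod_usage.get(cell, 0) / prod_total
    if prod_share == 0:  # new feature: not in production yet
        prod_share = NEW_FEATURE_BASELINE
    return beta_share / prod_share

for cell in sorted(set(beta_usage) | set(prod_usage)):
    print(cell, round(coverage(cell), 2))
```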

Mode kept snapshots of the heatmaps over time. We could watch it go from nearly all red at the beginning of the beta period to more green by the end. I can’t say we could get the entire heatmap to be green, but we could at least make sure we were testing efficiently.

PR Authors Have a Lot of Control Over PR Idle Time

Getting a pull request reviewed quickly is often under the author’s control. This is great news because, according to DevEx, you should reduce feedback loops to increase developer productivity. There are other feedback loops that a developer experiences, but pull requests happen all of the time. You can have a big impact on productivity if they happen faster, and a big reason they don’t is the commits in the PR.

At Trello, during a hackathon, someone did an analysis on all of the PRs on all of the teams to see if they could get some insights. At the time, we probably had about 10 teams of about 7-10 developers each.

One thing they looked at was the median time to approve a PR by team, and there were two teams that were far outliers (with much shorter waits for a PR to be approved). They went further and looked at the PRs themselves and noticed that they generally had fewer commits and the commits themselves were smaller. The number of pull requests per developer-week was also much higher than on other teams.
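
For a rough idea of that kind of analysis, the per-team medians could be computed like this. The records and field names here are made up, not what the hackathon project actually used:

```python
import statistics
from collections import defaultdict

# Hypothetical PR records; the real analysis covered every PR on every team.
prs = [
    {"team": "A", "commits": 3, "hours_to_approve": 1.5},
    {"team": "A", "commits": 2, "hours_to_approve": 0.8},
    {"team": "B", "commits": 9, "hours_to_approve": 26.0},
    {"team": "B", "commits": 7, "hours_to_approve": 14.0},
]

by_team = defaultdict(list)
for pr in prs:
    by_team[pr["team"]].append(pr)

for team, team_prs in sorted(by_team.items()):
    median_wait = statistics.median(p["hours_to_approve"] for p in team_prs)
    median_commits = statistics.median(p["commits"] for p in team_prs)
    print(f"Team {team}: median approval {median_wait}h, median commits {median_commits}")
```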

I was on one of those teams, and the other one was very closely aligned with us (meaning we had a lot of shared processes and rituals). When we were very small (5 total developers), we were basically one team with a shared lead. The style of PR on these teams was very intentional. When I was onboarded, I was given very specific instructions on how to make a PR.

The essence of what we did was to completely rewrite the commits before making a pull-request to “tell a story”. I wrote about the details in Construct PRs to Make Reviewing Easy.

For reviewers, this made approvals very easy to do, and we could fit them in at almost any time. With all of us doing this, many PRs were approved within an hour and most within a few hours. A really good time to do some reviewing was right after submitting a PR, which kept throughput at a steady state. The PR list was rarely very long.

I worked in this style for the 6+ years I was on this team and know that it contributed to a high level of personal work satisfaction. Even though I had to wait for others to approve my work, I felt that my own productivity was largely under my control.

Related: Metrics that Resist Gaming

Just Started a New Software Engineering Job? Fix Onboarding

If you are about to start a new job as a software engineer, the way to have a big impact from day one is to go through onboarding with the intention of generating a list of improvements that you can work on over time.

Here are some things to look for:

  1. Incorrect or outdated information. Just fix these as you find them.
  2. Missing entry-point documentation. Even teams with good documentation often do not have a document that is useful if you know nothing; they rarely go back and make a good “start here” kind of document.
  3. Manual steps that could be scripted. Don’t go overboard, but if you see some quick wins to automate the dev setup steps, it’s a good first PR. It’s a tech debt payoff that is timed perfectly.
  4. Dev setup automation bug fixes. If anything goes wrong while running the scripts to set up your machine, fix the bug or add detection (or better error messages) that would have helped diagnose the issue (a sketch of this kind of check follows this list).
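
As an illustration of items 3 and 4, here’s a minimal sketch of the kind of setup check I mean. The tool names are placeholders for whatever your project actually requires:

```python
#!/usr/bin/env python3
# Fail early with a useful message instead of letting setup break halfway through.

import shutil
import subprocess
import sys

REQUIRED_TOOLS = {  # placeholder tools; swap in your project's real prerequisites
    "git": "https://git-scm.com",
    "node": "https://nodejs.org",
}

def main() -> int:
    missing = [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]
    for tool in missing:
        print(f"error: '{tool}' is not on your PATH. Install it from {REQUIRED_TOOLS[tool]}")
    if missing:
        return 1
    # Print versions so a stale toolchain is easy to spot in a support thread.
    for tool in REQUIRED_TOOLS:
        result = subprocess.run([tool, "--version"], capture_output=True, text=True)
        print(f"{tool}: {result.stdout.strip()}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```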

There is usually a lot of tech-debt in onboarding code and documents because no one really goes through them. Sometimes underlying things just change and tech debt happens to you. You are in a unique position to make it better for the next person and have some impact right away.

Invest 10% of a team (not of each dev) to pay back tech debt

If you budget 10% of your team’s time to paying down technical debt, there are a few ways you could do it.

  1. Make sure 10% of the story points of each sprint are technical debt related
  2. Assign every other Friday (10% of 2 weeks) to everyone paying down technical debt (see this article for a story about Tech Debt Friday)
  3. Assign 10% of the team to spend 100% of their time paying down technical debt (rotating who this is every quarter or so, or at project boundaries)

I’ve done some variation on all of these ways, and in my experience, #3 works the best. When I was at Trello, my team allocated more like 30%, but tech debt was lumped together with anything “engineering driven”, which was more than just debt payoff (e.g. tooling).

The main reason #3 works better is how companies typically review and reward developers. Something that is 10% of your work is never going to show up on your review. Over time this is generally a disincentive to do it. But, if you are supposed to spend 100% of your time on something, then it has to show up on your review.

Making this someone’s full-time job for a quarter means that they can plan bigger projects with more impact. With the every-other-Friday approach, it’s hard to get a PR done in that one day, so it takes about a month to get anything deployed at all. When you work on it full-time, you can deploy much more frequently. When I was on a tech-debt project, I would use the first week to deploy some extra monitoring that could measure impact or catch problems.

It allows devs to get into the zone, which is really helpful in giant refactoring or restructuring/rewrite slogs. If you only get one day every two weeks, you have to reacquaint yourself with anything big, which eats into that one day quickly.

It will also make it more likely that this debt paydown is localized. This makes it easier to test that it hasn’t caused regressions.

Finally, (for managers) it’s easier to measure that you are actually spending your budget correctly because you don’t have to monitor individual stories over time. You just need to track how developers were allocated over time.

Knowing Assembly Language Helps a Little

I can’t say I recommend learning assembly, and I never really had to write much professionally, but knowing it has been helpful in giving me a mental model of what is happening inside a computer.

I started with assembly soon after I started programming in BASIC. In the eighties, all of the computer magazines published assembly program listings because that was the only way to do some things. Jim Butterfield’s Machine Language for the C64 [amazon affiliate link] was a classic.

In college, I used assembly in a few classes. In Computer Architecture we had to write a sort algorithm in VAX assembly, and in my Compilers course, we had to generate assembly from C (and then we were allowed to use an assembler to make the executable).

This was the last time I wrote any significant amount of assembly, but in all of the time I worked in C, C++, Java, C#, and Objective-C, I found myself needing to read the generated assembly or bytecode on many occasions. There were some bugs that I probably could only have figured out this way. Knowing how different calling conventions work in C on Windows was part of my interview at Atalasoft (and it was actually important to know that on the job).

So, if you have any interest in it, I would try it out. The main issue is that modern instruction sets are not optimized for humans to write. But, I learned 6502 assembler on a C64, and if you learn that then you can get into the wonderful world of C64 Demos.

Moore’s Law of Baseball

For almost my entire life (and before that all the way back to the dawn of baseball), the stats on the back of a baseball card were unchanged. If you got the box scores for your favorite player, you could calculate their stats yourself with a pencil. That’s not necessarily good. These stats were simple and misleading.

For example, it was clear in the 90s that on-base percentage was more important than batting average. This got expanded on in the Moneyball era. Computers were brought in to analyze players, and so analyzing players was now subject to Moore’s Law, which can be simplified to say that we double computer power every 18 months. We’ve had about 20 doublings since then.

What’s the Moore’s Law of baseball? The number of stats is doubling every 18 months, all enabled by modern compute power.

There’s a stat called WAR, or Wins Above Replacement, which tries to tell you how many wins a player adds to their team relative to a replacement-level player at their position (who has a WAR of 0). To calculate WAR for a single player, you need every outcome from every player. It’s so complex that we can’t agree on the right way to do it, so we have a dozen variants of it.

Stats like Exit Velocity, Launch Angle, Spin Rates, Pitch Tunneling, and Framing are only possible to know because of high-speed cameras and advanced vision processing enabled by Moore’s law. We’re not limited to describing what has happened already—some broadcasts put pitch-by-pitch outcome predictions on the screen.

Even with all this advancement, it sometimes feels like we’re still at the dawn of this era. As a fan, these don’t feel like the right stats either. No one will be put in the Hall of Fame because they hit the ball hard a lot of times.

Just need a few more doublings, I guess.