In the Useful Authors group, I learned to test my book’s content by teaching it first. To some extent, I did that on this blog. But that doesn’t work as well as doing it in a way where you can get a reaction. This lets you figure out if what you plan to write will resonate with your audience.
With Swimming in Tech Debt, my main way of teaching the book was to talk about tech debt with my clients and my developer friends. I would also make LinkedIn posts with excerpts from what I was writing. So much of the book, though, is based on past conversations I had had for decades at work. Some of those conversations were meant as “teaching” via training, feedback, or mentorship. A lot of it was just figuring it out as a group.
I also shared chapters online in places where it was easy to give feedback (like helpthisbook.com). Some readers have invited me to speak to groups inside of their companies. Part 4 of the book (for CTOs) started as a presentation. I was also asked for an excerpt by the Pragmatic Engineer. His audience’s reaction in comments and emails helped shape the book. It let me know which parts were the most useful and worth expanding on.
One thing I didn’t do early enough was to turn my pre-book notes into conference pitches. I finally did that after the first draft was done, and next week I’ll be sharing it with QA professionals at STARWEST.
In all of these cases, you are the proxy for your book before you write it. You just tell people the things you plan to write. You are hoping that it leads to a conversation where you learn if your ideas are worth writing about.
I am sharing this in the spirit of posts like this, this, and this that give insight into what it’s like to have a popular Show HN post. Like those posts, I have stats, but I didn’t make my post very strategically, so I don’t have much advice about that. What happened to me was more accidental (as I will describe), but I have some ideas on what worked.
Timeline
On September 6th, I woke up to find that I had pre-sold 50 copies of Swimming in Tech Debt overnight. For context, at this point I had sold about 125 copies, mostly to people on my mailing list, my personal network, and through LinkedIn. That had started on August 16th with most of the sales in the first week. My daily sales were in the single digits, so the 50 was a big surprise.
My first instinct was that there was some kind of problem. But, I did a quick search and saw the Hacker News post, so I clicked it.
I could see that it had a lot of upvotes and discussion, which was surprising because I had posted the “Show HN” four days prior. It had gotten a few votes and no comments, and had scrolled off the bottom of the Show HN front page. I had forgotten about it.
I noticed two things about this post immediately: (1) it had a new date, and (2) the “Show HN” had been removed from the post title. The post was still attributed to me, but I had not reposted it. I don’t know how this happened, but my post history shows two posts. The newer one eventually had the “Show HN” put back in, but not by me.
I went into the discussion and saw some good and bad feedback on the book (and also on HelpThisBook.com (HTB), the site I was using to host the book). To be honest, my initial reaction to the bad feedback was defensiveness. But, I replied to everything the way I would want to be addressed: answering questions, thanking people, and explaining my point of view to those with criticism.
Stats
I am not privy to all of the statistics because I don’t run HelpThisBook (HTB), which I get access to as a benefit of being in the Useful Authors Group started by Rob Fitzpatrick, author of Write Useful Books. (Note: our group is running a free 6-week writing sprint starting on September 18th. Hope to see you there.)
Here’s what I can see in the data I have access to:
There have been 23,000 clicks to this version of the book. I don’t have referral information, but the vast majority have to be from HN (and the various HN mirrors).
On HTB, I can see readers progressing through the book. A few people finish every day (maybe they buy, I don’t know), and several more are finding it and starting to read each day. They can highlight and give feedback, which they are doing. I used this feature a lot while developing the book (at a much smaller scale) to help make sure the book was engaging readers.
There is a huge drop-off at the first chapter. Perhaps this is due to the HTB UX (it was somewhat criticized in the HN comments). It is also undoubtedly because of the content itself (and is normal, IMO).
On the Amazon KDP site, I can see that over 100 books were sold on the first day. As of now, the total since then is almost 300, with daily sales more like 10-20.
My personal site statistics had a bump compared to the four weeks prior. So far, that has been sustained (but I am also sending more email).
My mailing list subscribers increased too (the tall bar is 24 new subscribers). I am sending excerpts from the next part each day, which is causing some unsubscribes, but if they don’t like the e-mail, then they definitely won’t like the book. I want to make sure that they have every chance of getting the book at $0.99 if they want it.
These are modest, but they are very meaningful to me.
What Makes a Good Show HN Post
In my experience reading Show HN, the most important thing is having something worth showing. I hope that’s the main reason this post did well. But, I can’t deny that something happened (either a glitch or a moderator change) that boosted this post’s chances.
I also think that early comments (good and bad) helped it get traction. When I first went to the post, the top comment was a very funny response about writing and tech debt. There were a few very negative comments, which I engaged with respectfully. Since I had already gotten 50 sales, I knew that the book had at least resonated with some readers. Tech debt is a topic that people have strong feelings about—I think that drove early comments.
You can’t control any of that, but what you can do is be ready when it happens. Having something for people to do (sign up for a newsletter or buy a book) helps you make something out of the traffic beyond just hits to your blog. Although HTB was a great choice for gathering feedback from beta readers, if I were posting finished work, I might choose a simpler option where I would have more control over the experience and access to the stats.
What’s Next
I just made the final version of the EPUB for Amazon and set the release date to September 16th. My plan is to leave it at $0.99 for a few days as a kind of soft launch. I don’t want to raise the price until it has reviews.
Then, I will work on the print book. I hope it will be done in October. If you want to be notified when it is ready, the best way is to sign up for my mailing list. You will also get immediate access to some of the content from Part 3 (HTB only has Parts 1 and 2).
In my book, Swimming in Tech Debt, I write that I don’t think we (engineers) should be explaining tech debt to our non-engineering peers. But that only applies to our tech debt (because it’s boring). Now that they are vibe coding, I do want them to understand their own.
I talk to a lot of vibe coders who are running into the problems caused by tech debt in their projects. They don’t and can’t read code, so my definition of tech debt is hard to convey to them. But, I’ve come up with an analogy that I think works.
Imagine that I “vibe design” a concert poster. I go to DALL-E, give it a prompt and it generates an image for me. I look at it and think it’s 80% of the way there, but I want to make changes. So, I prompt again with more details and it gets closer. I try again, and again, and again, but as I go on, I start to see that some of the things that were right in early versions are gone now. I think to myself, maybe I should take the best version and try to fix it myself in a design tool.
But, then I run into a problem. DALL-E generated pixels, not a design file. It doesn’t have layers. It’s not even using fonts and text components. I just want to rotate a background shape a few degrees and fix a typo, but that’s not possible. Or what if, instead of an InDesign file, it could only generate PageMaker files? They would be organized perfectly, but in an older technology that I can’t use.
Changes that should be easy are hard (or impossible). Choices that were sane don’t make sense today. All of those aspects of this digital file that are hard to change are very similar to what coders experience with tech debt. It only matters if you want to make changes. It’s the resistance you feel when you try.
The irony is that the same things that make it hard for us make it hard for the AI too. I can’t tell it to rotate a red triangle in the background because there is no triangle there, just a bunch of pixels. It can’t fix the typo because there aren’t any letters. If it had generated a sane representation, we wouldn’t need to look at it ourselves, because the AI might have been able to make the change for us.
I’ve been writing on this blog for over 20 years. I’ve also released some open-source projects and a few apps. You have probably never heard of them.
But, when I decided to write a book in January 2024, I joined the Useful Books community, which stresses doing marketing and product design (on your book) up front. It’s paid off.
I opened Swimming in Tech Debt for pre-sales a week ago. On Monday, I woke up to being #1 in my category on Amazon.
In retrospect, these were the most important marketing moves I made:
Pick an audience (tech team leads) and then pick a conversation about a problem that they regularly have (tech debt) and write the book that would be your solution to that problem (what you would say in that conversation). The goal is to be recommended by your readers when the topic comes up.
Write in public and share it. I started in January 2024 and shared what I had in February and March. If I had not done that, the book would be 50 pages and finished in June 2024. It wouldn’t be as good and no one would have heard of it (see my previous projects).
Increase the surface area of luck. I posted my chapters in all of my communities to get feedback. Gergely Orosz happened to see it and asked me to pitch for his newsletter that reaches more than one million readers (many in my target audience).
Build an e-mail list. I used Kit (formerly ConvertKit). That list is the reason I reached #1 in my category today. They have been reading chapters and giving feedback all along, so I am very encouraged that they bought the book (because they know it best).
In Tech Debt Detectors and Use Your First Commit to Fix CRAP, I explained the concept of combining low code coverage with high code complexity to highlight functions that are risky to change. I mostly do this in my IDE to warn me before I change code, but it’s also useful for a global search for risky functions.
My main project is in TypeScript and uses Jest and ESLint. Here’s how I automated a search for risky functions.
Step 1: Get a list of high complexity functions
Note: by complexity, I am referring to the number of independent paths through a function, which is calculated by counting branches, loops, and boolean expressions.
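For example, counting that way, this small made-up function has a complexity of 4: one for the base path, plus one each for the loop, the if, and the &&.
function riskScore(values: number[], threshold: number): number {
  let score = 0;
  for (const v of values) {              // +1 (loop)
    if (v > threshold && v % 2 === 0) {  // +1 (branch), +1 (boolean expression)
      score += v;
    }
  }
  return score;
}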
ESLint has a rule that lets you set a maximum complexity. In my package.json, I call ESLint via yarn this way:
"lint": "eslint \"**/*.{ts,tsx}\""
I added a line that does the same thing, but with a complexity rule.
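It could look something like this (the script name and the limit of 10 are just examples; ESLint’s built-in complexity rule takes the maximum as an option):
"lint:complexity": "eslint --rule '{\"complexity\": [\"warn\", 10]}' \"**/*.{ts,tsx}\""
Running yarn lint:complexity then warns on every function over the limit, which gives me a list of function names to check against coverage.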
What I need now is a way to find the coverage of a function given its name. If I run Jest via yarn like this:
yarn test --coverage
it will generate a JSON file called coverage/coverage-final.json that has all of the coverage data. It’s a complex file, but if you install jq via brew, you can use a short jq script to see whether a function’s coverage is lower than 80% (credit: ChatGPT).
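If you’d rather skip jq, a small Node/TypeScript script can do a similar check. This is only a rough sketch (the file name risky-coverage.ts and the 80% threshold are arbitrary): it assumes the Istanbul format that Jest writes (a statementMap, fnMap, and s section per file) and treats a function’s coverage as the fraction of statements inside its line range that were executed.
// risky-coverage.ts: report functions whose statement coverage is below a threshold.
import { readFileSync } from "fs";

type Loc = { start: { line: number }; end: { line: number } };
type FileCoverage = {
  statementMap: Record<string, Loc>;                  // statement id -> location
  fnMap: Record<string, { name: string; loc: Loc }>;  // function id -> name and body location
  s: Record<string, number>;                          // statement id -> execution count
};

const THRESHOLD = 80;           // percent; pick whatever bar you use
const wanted = process.argv[2]; // optional: only report this function name

const coverage: Record<string, FileCoverage> = JSON.parse(
  readFileSync("coverage/coverage-final.json", "utf8")
);

for (const [file, cov] of Object.entries(coverage)) {
  for (const fn of Object.values(cov.fnMap)) {
    if (wanted && fn.name !== wanted) continue;
    // Statements whose start line falls inside the function's body.
    const ids = Object.keys(cov.statementMap).filter((id) => {
      const line = cov.statementMap[id].start.line;
      return line >= fn.loc.start.line && line <= fn.loc.end.line;
    });
    if (ids.length === 0) continue;
    const covered = ids.filter((id) => cov.s[id] > 0).length;
    const pct = (100 * covered) / ids.length;
    if (pct < THRESHOLD) {
      console.log(`${pct.toFixed(0)}%\t${fn.name}\t${file}`);
    }
  }
}
Run it with something like npx ts-node risky-coverage.ts someFunctionName (or with no argument to list every function under the threshold), and cross-reference the names against the complexity output from step 1.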
Once you have a list of risky functions, there are a few ways to use it.
Onboard a new developer: To fix a risky function, they need to refactor and unit-test it. They can likely do this without knowing much about your system, which lets them concentrate on learning your PR process.
Identify risky estimates: Anyone creating an estimate of a project that will change code should see if the files and functions they intend to change are risky.
Plan tech debt remediation projects: In my book, Swimming in Tech Debt, I outline a process for building and managing a tech debt backlog. You could use a list like this to build backlog items to tackle.
Build a dashboard: It would be nice to show the rest of the org that the number of risky functions you have is decreasing over time.
My swimming workouts are in a pool, so each lap starts with me pushing off the pool wall, kicking underwater for a bit, and then turning that momentum into a freestyle swim until I get to the opposite wall and start again. The speed of my lap is determined by the efficiency of my strokes, but the push and kicks overcome the water resistance and generate the initial momentum. That push-off is analogous to how I incorporate tech debt payments into my work and is the core idea in my book, Swimming in Tech Debt.
In a single lap, most of the distance is covered by swimming, and that’s the same in my programming. Most of what I do will be directly implementing the feature or fixing the bug, but I start with a small tech debt payment to get momentum. That small payment is improving the area I am about to change, which makes it easier and faster to do that change.
After the push comes underwater kicking, which is so effective that its use is limited to 15 meters in competitions. After that, the swimmer must begin normal strokes. The same principle applies to tech debt payments. They are effective, but they are not the goal. If all you do is pay down debt, you won’t deliver anything of real value. Paying tech debt makes me happy, so I have to limit how much time I spend on it and get back to my task.
Finally, while I am swimming, no matter how tired I am or how slow I am going, I know I’ll get to the other side eventually. When I do, I get to push and kick again to get some extra momentum. Similarly, when I am stuck on a coding task, I sometimes switch to an easy and productive task (like adding a test) while my brain works on the problem in the background. I know I will do this if I have to, so I keep coding on the main problem for as long as I can. I finish my lap.
Then, I push and kick to start a new lap. That cadence of pushes, kicks, and then a nearly full lap of coding is how I finish the task at hand but leave a series of tech debt payments in my wake.
My onboarding peer-mentor at Trello described a good pull request as telling a story. In practice this meant that you would edit and order the commits after you were done so that the reviewer could go commit-by-commit and understand the change you made in steps.
So, for example, let’s say you were working on a feature. In your second commit, you introduce a bug, but then in your fifth commit, you find and fix that bug. Instead of just PR’ing that, you would edit the commits so that the bug was never introduced.
This is analogous to sending a document with superfluous text deleted, not crossed out. If you don’t edit the commits, you will waste the reviewer’s time because they might see the error in the second commit, make a comment, and then have to go back and amend their comment in the fifth. If you did this a lot, they might not even finish reviews before rejecting them (which is what I suggest you do to PRs with obvious problems).
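Mechanically, this editing is usually done with an interactive rebase before opening the PR. As a sketch (the branch name, hashes, and messages are invented), running git rebase -i origin/main opens a todo list; moving the fix up and marking it as a fixup melds it into the commit that introduced the bug, so the bug never appears in the history the reviewer sees:
pick  a1b2c3d Add config-file parsing
fixup e4f5a6b Fix off-by-one in config parsing
pick  7c8d9e0 Wire parsed settings into the UI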
I like the story frame, but I have started to think of a PR as more of an argument for its own correctness. I am trying to teach the reviewer the details of the problem and convince them through evidence that the new code is correct.
In a PR, I might start by introducing a unit test into the area I intend to change. Then, to make things clearer, I might commit a small refactoring (that isolates the change). It’s now possible to add more tests, and possibly a failing one that shows what my intended fix will address. My small code clean-up commits are in service of that argument. Without them, it’s hard to tell whether my fix will break something else. With them, the fix feels like a natural and inevitable conclusion.
As in a philosophical argument, I anticipate and address the cases the reviewer might think of. But it’s not enough to handle a case in the code; your whole PR needs to make it clear that you anticipated and addressed it (with tests, comments, screenshots, or any other evidence you can think of).
But the most important reviewer to convince is myself, of course, and doing the work to write the argument gives me confidence that my code is correct.
Code coverage by itself is a hard metric to use because it can be gamed, so it suffers from Goodhart’s Law, which is summarized as “When a measure becomes a target, it ceases to be a good measure.” Goodhart’s Law observes that if you put pressure on people to hit a target, they will hit it, but maybe not in the way you wanted.
This happens with code coverage because we can always increase coverage with useless tests, tests of trivial functions, or tests of less valuable code.
I use these metrics in combination with coverage to make it harder to game:
Code Complexity: The simplest way to measure it is to count the branches in a function. I use extensions in my code editor to bring complex code to my attention. If coverage of the function is also low, I know that I can make the code less risky to change if I test it (or refactor it).
Usage analytics: If you tag your user analytics with the folder containing the code that generates them, you can later build reports that tie back to your coverage reports. See Use Heatmaps for iOS Beta Test Coverage. In that post, I used the technique to direct manual testing, but it would work for code coverage as well.
Recency of the code: To make sure that my PRs have high coverage, I use diff_cover. This makes it more likely that my tests are finding bugs in code that is going to be QA’d soon and has already been deemed valuable to write. Very old code is more likely to be working fine, so adding tests to it might not be worth it. If you find a bug in old code worth fixing, it will generate a PR (and become recent code).
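For reference, a typical invocation looks something like this (diff_cover is a separate Python tool, installed with pip install diff_cover; the branch name and the 80% threshold are just examples). Jest can emit a Cobertura report that diff-cover understands:
yarn test --coverage --coverageReporters=cobertura
diff-cover coverage/cobertura-coverage.xml --compare-branch origin/main --fail-under 80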
Generally, the way to make a metric harder to game is to combine it with another metric that would get worse if the first were gamed in ways you can predict (or have seen).
A couple of years ago, I wrote about a testing technique that I had learned, but didn’t remember the name of, so I called it code perturbance. I mentioned this technique in my book, and a helpful beta reader let me know the real name: mutation testing.
The idea is to intentionally change the code in a way that introduces a bug, but is still syntactically correct (so you can run it). Then, you run your test suite to see if it can find the problem. This augments code coverage, which will only let you know that code was run in a test, not if an assertion was made against it.
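Here is a tiny made-up illustration in TypeScript with Jest: mutate a boundary by hand and see whether the suite notices.
export function canVote(age: number): boolean {
  return age >= 18; // mutation to try by hand: change >= to > (still compiles, still runs)
}

// This test produces 100% coverage of canVote but would NOT catch the mutation:
test("adults can vote", () => {
  expect(canVote(30)).toBe(true);
});

// This test kills the mutant because it pins down the boundary:
test("18-year-olds can vote", () => {
  expect(canVote(18)).toBe(true);
});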
Now that I know the name, I can find out more about it on Google. For example, there are tools that can do it for you automatically. The one that I’m most interested in is Stryker-mutator, because it supports TypeScript. I’ll report back when I try it.