My life used to be easy. I was an iOS developer from 2014-2021. To do that, I just needed to know Objective-C and then Swift. Apple provided a default way to make UI and it was fine.
But, in 2021, when I went independent, I decided to abandon iOS and move to a React/Node stack for web applications (which I wanted to be SPAs). I chose React and React Native. It was fine, but I have to move on.
The main reason is the complexity. For my application, which is simple, there is an insane number of dependencies (which are immediate tech debt, IMO). Hello World in TypeScript/Node/React will put 36,000 files (at last count) in node_modules. That pile of dependencies has become a prime target for hackers, who are using it as a vector for supply chain attacks. It’s clear that the Node community is not prepared for this, so I have to go.
This is a major shift for me, so I am rethinking everything. These were my criteria:
Minimal dependencies
No build for JS or CSS
Protection against malicious or compromised dependencies
Bonus if I already know how to do it
The first, and easiest, decision was that I was going to move all development from my Mac to Linux. I’ll talk about this tomorrow in Part II.
In October, I committed to writing a blog post every day in November as a kind of NaNoWriMo, but geared to what I usually write. Of course, when November 1 came, I forgot.
Luckily, I am ok with backdating blog posts, so today (November 5), I wrote for November 1-4. I looked through my drafts and picked a few to finish, but it doesn’t look like I have any other drafts good enough to develop.
Here are some things that have been going on with me recently (that I plan to write about).
Last September, I spoke about enhancing code coverage at STARWEST. My talk was based on ideas that I introduced in Metrics that Resist Gaming and some related posts.
The key points are that metrics should be:
able to drive decisions.
combined to make them multi-dimensional.
based on leading indicators that will align to lagging indicators.
And then I applied that to code coverage. I combined it with code complexity, the location of recent code changes, and usage analytics, and then I stress-tested the tests behind that coverage using mutation testing. The idea is that you should care more about coverage when the code is hard to understand, was just changed, or users depend on it more. And since coverage is only half of what you need to do to test (i.e. you also need to assert), mutation testing will find where you have meaningless coverage.
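To make that concrete, here’s a minimal sketch of what a combined score could look like. This is hypothetical code of my own, not the tooling from the talk: the `FileSignals` shape, the weights, and the `coverageRisk` formula are all assumptions you would tune against your own coverage, complexity, git-churn, and analytics data.

```typescript
// Hypothetical per-file signals gathered from a coverage report,
// a complexity analyzer, git history, and product analytics.
interface FileSignals {
  path: string;
  coverage: number;      // 0..1: fraction of lines covered
  complexity: number;    // e.g., highest cyclomatic complexity in the file
  recentCommits: number; // commits touching the file in the last 30 days
  usageWeight: number;   // 0..1: how much users depend on this code
}

// Score how much the *missing* coverage matters: uncovered code
// counts for more when it is complex, recently changed, and heavily used.
function coverageRisk(f: FileSignals): number {
  const uncovered = 1 - f.coverage;
  const churn = Math.log2(1 + f.recentCommits); // dampen hot-file outliers
  return uncovered * f.complexity * churn * (0.5 + f.usageWeight);
}

// The files to look at first are the ones with the highest risk score.
function worstFiles(files: FileSignals[], n: number): FileSignals[] {
  return [...files]
    .sort((a, b) => coverageRisk(b) - coverageRisk(a))
    .slice(0, n);
}
```

The point of multiplying the signals rather than averaging them is that a file only ranks high when several of them agree: 60% coverage on a complex, recently changed, user-facing file should outrank 90% on a stable utility.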
As a bonus fifth enhancement, I talked about making sure you were getting the business results of better testing. For that, I spoke about DORA and specifically the metrics that track failed deployments and the mean time to recovery from those failures.
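Those two metrics are easy to compute if you keep a log of deployments. Here’s an illustrative sketch under the same caveat as above: the `Deployment` record is a made-up shape, not a real DORA or CI API.

```typescript
// Hypothetical deployment log entry; `restoredAt` is set once the
// incident caused by a failed deployment is resolved.
interface Deployment {
  deployedAt: Date;
  failed: boolean;
  restoredAt?: Date;
}

// Change failure rate: the fraction of deployments that failed.
function changeFailureRate(deploys: Deployment[]): number {
  if (deploys.length === 0) return 0;
  return deploys.filter((d) => d.failed).length / deploys.length;
}

// Mean time to recovery: average hours from a failed deployment
// until service was restored.
function meanTimeToRecoveryHours(deploys: Deployment[]): number {
  const restored = deploys.filter((d) => d.failed && d.restoredAt);
  if (restored.length === 0) return 0;
  const totalMs = restored.reduce(
    (sum, d) => sum + (d.restoredAt!.getTime() - d.deployedAt.getTime()),
    0,
  );
  return totalMs / restored.length / (1000 * 60 * 60);
}
```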
I recently read an online debate between Bob Martin and John Ousterhout about the best way to write (or re-write) the prime number algorithm in The Art of Computer Programming by Donald Knuth. Having read the three implementations, I think Knuth’s original is the best version for this kind of code and the two rewrites lose a lot in translation.
My personal coding style is closer to Ousterhout’s, but that’s for application code. Algorithmic code, like a prime number generator, is very different. For most application code, the runtime performance will be fine for anything reasonable, and the most important thing to ensure is that the code is easy to change, because it will change a lot. Algorithmic code rarely changes, and the most likely thing you would do is a total rewrite to a better algorithm.
I have had to maintain a giant codebase of algorithmic code. In the mid to late 2000’s, I worked at Atalasoft, which was a provider of .NET SDKs for Photo and Document Imaging. We had a lot of image processing algorithms written in C/C++.
In the six or so years I was there, this code rarely changed. It was extensively tested with a large database of images to make sure its output didn’t change when we updated dependencies or the compiler. The two main reasons we would change this code were to (a) fix an edge case or (b) improve performance.
The most helpful thing this code could have was a lot of documentation. It was very unlikely that the developer making a change would know what the code was doing. It would probably have been years since its last change, and unless it was our CTO making the change, there was no way anyone could understand it quickly just from reading the code. We needed this code to run as fast as possible, so it probably used C performance-optimization tricks that obfuscated the code.
Both Ousterhout and Martin rewrote the code in ways that would probably make it slower at the extremes, which is not what you want to do with algorithmic code. Martin’s penchant for decomposition is especially unhelpful here.
Worse than that, they both made the code much harder to understand by removing most of the documentation. I think they both admitted that they didn’t totally understand the algorithm, so I’m not sure why they thought reducing documentation would be a good idea.
As Knuth wrote: “If my claims for the advantages of literate programming have any merit, you should be able to understand the following description more easily than you could have understood the same program when presented in a more conventional way.”
In general, I would not like to have to maintain code with Knuth’s style if the code needed to be changed a lot. But, for algorithmic code, like in his books or in an image processing library, it’s perfect.
I understand why tech debt is often conflated with “taking a shortcut”, but in my experience, the hardest tech debt problems are caused by doing everything right and having wild success.
In 2006, I joined Atalasoft, which, at that point, was the leading provider of .NET Photo and Document Image SDKs. That success was driven by two early architectural decisions.
The first was to embrace .NET managed code interfaces and not try to wrap Windows COM DLLs. This meant that every product was packaged as a .NET Assembly with no external dependencies; legacy, C-based image libraries were statically linked, which meant we had to write the wrappers in Managed C++. This turned out to be the optimal choice in 2002 (as opposed to pure .NET or wrapped COM).
Another (correct) choice was to assume that our web-based code was embedded in an ASP.NET web application, which allowed us to offer advanced viewing capabilities that worked seamlessly inside of it.
The company was successful, and that success led to tech-debt-related issues, which we could mostly handle. But when we were acquired, our core assumptions were invalidated.
In 2011, the company (and our codebase) was over 10 years old, and we had already started exploring how to move beyond our initial market to support pure managed code and non-.NET web apps. Then Kofax acquired us and gave us 2.5 years to create a cross-platform port that ran on Java VMs under Linux, well beyond what our architecture supported at the time, but in line with our direction and what we had already started to do.
When you’re just starting out, shortcuts are not the biggest problem—over-engineering is. You have no users and your goal is to get some, and the best way to do that is to deploy something and get feedback. It’s fine if your solutions don’t scale because scale isn’t part of the spec.
In our case, if we had started with making generic web components that worked in Java and .NET, it would have taken a lot longer and been worse for .NET developers. We might not have been able to convince them to try us. Instead, by going fast, Atalasoft was first to market with .NET imaging components and gained a lot of early mindshare. By the time the incumbent Microsoft component makers ported to .NET, we were already established as .NET-native (and they were wrapped COM components).
But, more instructive was that some new entrants tried a pure .NET approach (rather than our hybrid approach). That architecture is, to be clear, “correct”, but it didn’t work at all in 2002. Those entrants mostly did not survive, and we ourselves went pure .NET in about 2009.
From what I have seen, pure vibe coding isn’t good enough to produce production software that is deployed to the public web. This is hard enough for humans. Even though nearly every major security breach or outage was caused by people, it’s clear that that’s just because we haven’t been deploying purely vibe-coded programs at scale.
But, it’s undeniable that vibe coding is useful, and it would be great if we could take it all the way to launch. Until then, it’s up to the non-programming vibe coder to level up and close the gap. Luckily, the same tools they use to make programs can also be used to make them into programmers.
Here’s what I suggest: try asking for very small updates and then reading just that diff. In Replit, you would go to the git tab and click the last commit to see what changed. Then read what the agent actually said about what it did. See if you can make a closely related change yourself: for example, getting spacing exactly right, or experimenting with different colors by updating the code yourself.
Do this to get comfortable reading the diffs and, eventually, to be able to read the code. The next step is being able to notice when code is wrong, which is most of what I do these days.
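For example, a first self-made change can be as small as editing a couple of style constants. This is a hypothetical snippet of the kind you might find in a vibe-coded app, not anything Replit generates specifically:

```typescript
// Hypothetical style constants in a vibe-coded app. Tweaking values
// like these by hand (and reloading) is a safe way to start editing
// code yourself instead of prompting for every change.
const styles = {
  accentColor: "#e63946", // try "#457b9d" and see what changes
  cardPadding: 12,        // bump to 16 for more breathing room
  headingSpacing: 8,      // adjust until the spacing looks exactly right
};

export default styles;
```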
In the Useful Authors group, I learned to test my book’s content by teaching it first. To some extent, I did that on this blog. But that doesn’t work as well as doing it in a way where you can get a reaction. This lets you figure out if what you plan to write will resonate with your audience.
With Swimming in Tech Debt, my main way of teaching the book was to talk about tech debt with my clients and my developer friends. I would also make LinkedIn posts with excerpts from what I was writing. So much of the book, though, is based on past conversations I had had for decades at work. Some of those conversations were meant as “teaching” via training, feedback, or mentorship. A lot of it was just figuring it out as a group.
I also shared chapters online in places where it was easy to give feedback (like helpthisbook.com). Some readers have invited me to speak to groups inside of their companies. Part 4 of the book (for CTOs) started as a presentation. I was also asked for an excerpt by the Pragmatic Engineer. His audience’s reaction in comments and emails helped shape the book. It let me know which parts were the most useful and worth expanding on.
One thing I didn’t do early enough was turn my pre-book notes into conference pitches. I finally did that after the first draft was done, and next week, I’ll be sharing that material with QA professionals at STARWEST.
In all of these cases, you are the proxy for your book before you write it. You just tell people the things you plan to write. You are hoping that it leads to a conversation where you learn if your ideas are worth writing about.
I am sharing this in the spirit of posts like this, this, and this that give insight into what it’s like to have a popular Show HN post. Like those posts, I have stats, but I didn’t make my post very strategically, so I don’t have much advice about that. What happened to me was more accidental (as I will describe), but I have some ideas on what worked.
Timeline
On September 6th, I woke up to find that I had pre-sold 50 copies of Swimming in Tech Debt overnight. For context, at this point I had sold about 125 copies, mostly to people on my mailing list, my personal network, and through LinkedIn. That had started on August 16th with most of the sales in the first week. My daily sales were in the single digits, so the 50 was a big surprise.
My first instinct was that there was some kind of problem. But, I did a quick search and saw the Hacker News post, so I clicked it.
Even though I could see that it had a lot of upvotes and discussion, it was surprising to me because I had posted the “Show HN” four days prior. It had gotten a few votes, no comments, and had scrolled off the bottom of the Show HN front page. I had forgotten about it.
I noticed two things about this post immediately: (1) it had a new date, and (2) the “Show HN” had been removed from the post title. The post was still attributed to me, but I had not reposted it. I don’t know how this happened, but my post history shows two posts. The newer one eventually had the “Show HN” put back in, but not by me.
I went into the discussion and saw some good and bad feedback on the book (and also on HelpThisBook.com (HTB), the site I was using to host the book). To be honest, my initial reaction to the bad feedback was defensiveness. But I replied to everything in the way I would want to be addressed: answering questions, thanking people, and explaining my point of view to those with criticism.
Stats
I am not privy to all of the statistics because I don’t run HelpThisBook (HTB), which I get access to as a benefit of being in the Useful Authors Group started by Rob Fitzpatrick, author of Write Useful Books. (Note: our group is running a free 6-week writing sprint starting on September 18th. Hope to see you there.)
Here’s what I can see in the data I have access to:
There have been 23,000 clicks to this version of the book. I don’t have referral information, but the vast majority have to be from HN (and the various HN mirrors).
On HTB, I can see readers progressing through the book. A few people finish every day (maybe they buy, I don’t know), and several more are finding it and starting to read each day. They can highlight and give feedback, which they are doing. I used this feature a lot while developing the book (at a much smaller scale) to help make sure the book was engaging readers.
There is a huge drop-off at the first chapter. Perhaps this is due to the HTB UX (it was somewhat criticized in the HN comments). It is also undoubtedly because of the content itself (and is normal, IMO).
On the Amazon KDP site, I can see that in the first day, there were over 100 books sold, and as of now, the total since that day is almost 300, with the daily sales being more like 10-20.
My personal site statistics had a bump compared to the four weeks prior. So far, that has been sustained (but I am also sending more email).
My mailing list subscribers increased too (the tall bar is 24 new subscribers). I am sending excerpts from the next part each day, which is causing some unsubscribes, but if they don’t like the e-mail, then they definitely won’t like the book. I want to make sure that they have every chance of getting the book at $0.99 if they want it.
These are modest, but they are very meaningful to me.
What Makes a Good Show HN Post
In my experience reading Show HN, the most important thing is having something worth showing. I hope that’s the main reason this post did well. But I can’t deny that something happened (either a glitch or moderator changes) that boosted this post’s chances.
I also think that early comments (good and bad) helped it get traction. When I first went to the post, the top comment was a very funny response about writing and tech debt. There were a few very negative comments, which I engaged with respectfully. Since I had already gotten 50 sales, I knew that the book had at least resonated with some readers. Tech debt is a topic that people have strong feelings about; I think that drove early comments.
You can’t control any of that, but what you can do is be ready when it happens. Having something for people to do (sign up for a newsletter or buy a book) helps you make something out of the traffic beyond just hits to your blog. Although HTB was a great choice for gathering feedback from beta readers, if I were posting finished work, I might choose a simpler option where I would have more control over the experience and access to the stats.
What’s Next
I just made the final version of the EPUB for Amazon and set the release date to September 16th. My plan is to leave it at $0.99 for a few days as a kind of soft launch. I don’t want to raise the price until it has reviews.
Then, I will work on the print book. I hope it will be done in October. If you want to be notified when it is ready, the best way is to sign up for my mailing list. You will also get immediate access to some of the content from Part 3 (HTB only has Parts 1 and 2).
In my book, Swimming in Tech Debt, I write that I don’t think we (engineers) should be explaining tech debt to our non-engineering peers. But that only applies to our tech debt (because it’s boring). Now that they are vibe coding, I do want them to understand their own.
I talk to a lot of vibe coders who are running into the problems caused by tech debt in their projects. They don’t and can’t read code, so my definition of tech debt is hard to convey to them. But, I’ve come up with an analogy that I think works.
Imagine that I “vibe design” a concert poster. I go to DALL-E, give it a prompt and it generates an image for me. I look at it and think it’s 80% of the way there, but I want to make changes. So, I prompt again with more details and it gets closer. I try again, and again, and again, but as I go on, I start to see that some of the things that were right in early versions are gone now. I think to myself, maybe I should take the best version and try to fix it myself in a design tool.
But, then I run into a problem. DALL-E generated pixels, not a design file. It doesn’t have layers. It isn’t even using fonts and text components. I just want to rotate a background shape a few degrees and fix a typo, but that’s not possible. Or what if, instead of an InDesign file, it could only generate PageMaker files? They would be organized perfectly, but in an older technology that I can’t use.
Changes that should be easy are hard (or impossible). Choices that were sane don’t make sense today. All of those aspects of this digital file that are hard to change are very similar to what coders experience with tech debt. It only matters if you want to make changes. It’s the resistance you feel when you try.
The irony is that the same things that make it hard for us make it hard for the AI too. I can’t tell it to rotate a red triangle in the background because there is no triangle there, just a bunch of pixels. It can’t fix the typo because there aren’t any letters. If it had generated a sane representation, we might not have needed to look at it at all, because it might have been able to make the change for us.
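Here’s the same idea in code, as a hypothetical sketch of my own: a structured representation keeps the triangle and the text around as things you can still address, while a flat bitmap throws that information away.

```typescript
// A "sane" representation: the triangle and the text still exist as
// objects, so rotating one or fixing a typo is a one-line change.
interface Shape {
  kind: "triangle" | "text";
  color?: string;
  rotationDegrees?: number;
  content?: string;
}

const poster: Shape[] = [
  { kind: "triangle", color: "red", rotationDegrees: 0 },
  { kind: "text", content: "Freinds of Jazz, Sat 8pm" }, // note the typo
];

poster[0].rotationDegrees = 5; // easy: the triangle is addressable
poster[1].content = poster[1].content?.replace("Freinds", "Friends"); // easy: letters exist

// What an image generator hands you instead: just pixels. There is no
// triangle and no letter "e" to find in here, only RGBA byte values.
const pixels = new Uint8ClampedArray(1024 * 768 * 4);
```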
I’ve been writing on this blog for over 20 years. I’ve also released some open-source and a few apps. You have probably never heard of them.
But, when I decided to write a book in January 2024, I joined the Useful Books community, which stresses doing marketing and product design (on your book) up front. It’s paid off.
I opened Swimming in Tech Debt for pre-sales a week ago. On Monday, I woke up to being #1 in my category on Amazon.
In retrospect, these were the most important marketing moves I made:
Pick an audience (tech team leads), then pick a conversation about a problem that they regularly have (tech debt), and write the book that would be your solution to that problem (what you would say in that conversation). The goal is to be recommended by your readers when the topic comes up.
Write in public and share it. I started in January 2024 and shared what I had in February and March. If I had not done that, the book would be 50 pages and finished in June 2024. It wouldn’t be as good and no one would have heard of it (see my previous projects).
Increase the surface area of luck. I posted my chapters in all of my communities to get feedback. Gergely Orosz happened to see it and asked me to pitch for his newsletter that reaches more than one million readers (many in my target audience).
Build an e-mail list. I used Kit (formerly ConvertKit). That list is the reason I reached #1 in my category today. They have been reading chapters and giving feedback all along, so I am very encouraged that they bought the book (because they know it best).