Category Archives: Tech Debt

Moats and Fast Follow For Vibe Coded Projects

I wrote about how, at Atalasoft, I told my engineers to Be Happy When It’s Hard. Be Worried When It’s Easy. We competed against open-source and in-house solutions. When we found valuable problems that were hard to solve, I was relieved. The same is true for vibe coded solutions.

If you can create a valuable app in two weeks, then so could a competitor. If your secret sauce is your idea, that's hard to protect if you want people to use your app. We don't even know if AI-generated code is copyrightable, and it's very unlikely to be patentable (inventors must be human).

Here are three things you could do:

  1. Keep building on the idea – right now, someone following you has the benefit of seeing your solution and feeding that to the AI. So, it helps if you can keep building on the idea and hope they can’t keep up. If you do the minimum, the bar is too low.
  2. Build on secret data – once you have a working system, the biggest moat you have is the data inside the system. AI can’t see that or reproduce it from scratch. Build new (valuable) features that require secret data to work. This doesn’t need to be used as training data. This is like a network effect, but more direct and long-lasting.
  3. Use your unique advantages – If your app is a simple UI on CRUD operations, then it can be reproduced by anyone. But, let’s say, you have personal branding in a space. Can you make an app that extends it? Do you have access to hard-to-win customers? A mailing list, subscribers, etc.? Fast followers might be able to recreate your software, but if your audience trusts only you, they won’t care.

Of these, I am relying mostly on the last one. The software I am working on is an extension of Swimming in Tech Debt. It takes the spreadsheet that I share in Part 3 and builds on it with better visualizations than the built-in ones. Someone could clone this, I guess, but probably they would need to reference my book in order to explain it. I am indifferent to whose software they use if this is true.

It’s not Debt if You Don’t Care About the User

I recently read Are consumers just tech debt to Microsoft? by Birchtree, where they say:

Microsoft just does not feel like a consumer tech company at all anymore. Yes, they have always been much more corporate than the likes of Apple or Google, but it really shows in the last few years as they seem to only have energy for AI and web services. If you are not a customer who is a major business or a developer creating the next AI-powered app, Microsoft does not seem to care about you.

Their thesis is that Microsoft’s share of the consumer market will plummet because the consumer is tech debt to them. I think of the user as a facet of tech debt, not the debt itself.

In Swimming in Tech Debt, I present eight questions you should answer about tech debt. One of them, called “Regressions”, asks how likely it is that you will break working code for users you care about. The more likely that is, I believe, the more you should leave this code alone (or be very careful with it).

But, if you don’t care about the users, or they don’t care about the features the indebted code provides, then it’s likely that you can just rewrite it with impunity. You can change it without risk. You might be able to delete it. If so, it’s hardly a debt.

If you do value a market and change code radically, the consequences can be fatal (see Sonos). But if you don’t, then doing the minimum is rational.

Workshop: Eight Questions to Ask About Your Tech Debt

In Part 3 of my book, Swimming in Tech Debt, I write about how teams should plan projects to address larger technical debt issues. The bulk of the chapters in the section explain how to manage a tech debt backlog.

Drawing on the practices of product managers and how they manage feature backlogs, I propose a scoring system to drive the discussion.

The scoring system breaks down the costs and benefits of paying debt (or not paying it) and gives you a way to compare items to each other. It starts with this diagram:

Diagram showing Pay and Stay Forces

The benefits of paying debt and the costs of not paying (staying) drive the Pay Force. Inversely, there are benefits to staying and costs to paying that indicate you should leave the debt alone. These eight dimensions are scored by answering a related question:

  1. Visibility: If this debt were paid, how visible would it be outside of engineering? 
  2. Misalignment: If this debt were paid, how much more would our code match our engineering values?
  3. Size: If we knew exactly what to do and there were no coding unknowns at all, how long would the tech debt fix take?
  4. Difficulty: What is the risk that work on the debt takes longer than represented in the Size score because we won’t know how to do it?
  5. Volatility: How likely is the code to need changes in the near future because of new planned features or high-priority bugs?
  6. Resistance: How hard is it to change this code if we don’t pay the debt?
  7. Regression: How bad would it be if we introduced new bugs in this code when we try to fix its tech debt?
  8. Uncertainty: How sure are we that our tech debt fix will deliver the developer productivity benefits we expect?

If you have bought my book and would like me to talk to your team about this process, get in touch. It would be a 45-minute presentation with 15 minutes for Q&A.

In the presentation, I score 3 backlog items from my career and then show how the scoring drives the decision making of what to do. I encourage you to record it and then go through the presentation with a couple of examples from your backlog.

This workshop is free. Write me on LinkedIn or through my contact page.

After taking the workshop, reach out if you would like me to facilitate your technical debt backlog planning sessions. The book has agendas, scoring guides, and a catalog of score-driven debt remediation ideas, but I’m happy to tailor them to your needs.

Using Fuzzy Logic for Decision Making

In the ’90s, I read a book about fuzzy logic that would feel quaint now in our LLM-backed AI world. The hype wasn’t as big, but the claims were similar. Fuzzy logic would bring human-like products because it mapped to how humans thought.

Fuzzy logic is relatively simple. The general idea is to replace the True and False of Boolean logic with a real number between 0 (absolutely false) and 1 (absolutely true). Think of these values as a degree of certainty.

Then, we define operations that map to AND, OR, and NOT. Generally, you’d want ones that act like their Boolean versions for the absolute cases, so that if you set your values to 1 and 0, the Fuzzy logic gates would act Boolean. You often see min(x, y) for AND and max(x, y) for OR (which behave this way). The NOT operator is just: fuzzy_not(x) => 1.0 - x.
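Sketched in Python, the whole system fits in a few lines (a minimal illustration of the min/max/complement operators described above, not code from any particular library):

```python
# Fuzzy operators, assuming truth values are floats between
# 0.0 (absolutely false) and 1.0 (absolutely true).

def fuzzy_and(x: float, y: float) -> float:
    return min(x, y)

def fuzzy_or(x: float, y: float) -> float:
    return max(x, y)

def fuzzy_not(x: float) -> float:
    return 1.0 - x

# At the extremes, the gates behave like their Boolean versions:
assert fuzzy_and(1.0, 0.0) == 0.0  # True AND False -> False
assert fuzzy_or(1.0, 0.0) == 1.0   # True OR False -> True

# In between, they produce graded truth values:
print(fuzzy_and(0.8, 0.6))  # 0.6
print(fuzzy_or(0.8, 0.6))   # 0.8
print(fuzzy_not(0.25))      # 0.75
```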

If you want to see a game built with this logic, I wrote an article on fuzzy logic for Smashing Magazine a few years ago that showed how to do this with iOS’s fuzzy logic libraries in GameplayKit.

I thought of this today because I’m building a tool to help with decision making about technical debt, and I’m skeptical about LLMs because I’m worried about their non-determinism. I think they’ll be fine, but this problem is actually simpler.

Here’s an example. In my book I present this diagram:

Diagram showing Pay and Stay Forces

The basic idea is to score each of those items and then use those scores to make a plan (Sign up to get emails about how to score and use these forces for tech debt).

For example, one rule in my book is that if a tech debt item has high visibility (i.e. customers value it), but is low in the other forces that indicate it should be paid (i.e. low volatility, resistance, and misalignment), but has some force indicating that it should not be paid (i.e. any of the stay forces), then this might just be a regular feature request and not really tech debt. The plan should be to put it on the regular feature backlog for your PM to decide about.

A boolean logic version of this could be:

is_feature = visible && !misaligned && !volatile && !resistant &&
             (regressions || big_size || difficult || uncertain)

But if you do this, you have to pick a threshold for each value. For example, on a scale of 0-5, a visible tech debt item might be one scored 4 or 5. But that’s not exactly right, because even an item scored 3 for visibility should be treated this way depending on its scores in the other values. You could definitely write a more complex logical expression that took all of this into account, but it would be hard to understand and tune.

This is where fuzzy logic (or some kind of probabilistic approach) works well. Unlike LLMs, this approach is deterministic, which allows for easier testing and tuning (not to mention, it’s free).

To do it, you replace the operators with their fuzzy equivalents and normalize the scores to a 0.0-1.0 scale. In the end, instead of a boolean is_feature, you get something more like a probability that the recommendation is appropriate. If you build up a rules engine with a lot of these, you can use the probabilities to sort the responses.
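A fuzzy version of the feature-request rule could be sketched like this (a hypothetical translation; the score names, normalization, and min/max gates are my assumptions, not the book’s implementation):

```python
# A hypothetical fuzzy version of the is_feature rule. All scores are
# assumed to be normalized to 0.0-1.0. min acts as AND, max as OR, and
# (1.0 - x) as NOT.

def is_feature(scores: dict) -> float:
    # Any stay force being high is enough (fuzzy OR).
    stay_force = max(scores["regression"], scores["size"],
                     scores["difficulty"], scores["uncertainty"])
    # High visibility AND low misalignment/volatility/resistance
    # AND some stay force (fuzzy AND).
    return min(scores["visibility"],
               1.0 - scores["misalignment"],
               1.0 - scores["volatility"],
               1.0 - scores["resistance"],
               stay_force)

# A highly visible, stable item with a big size score rates high:
result = is_feature({"visibility": 0.8, "misalignment": 0.2,
                     "volatility": 0.1, "resistance": 0.2,
                     "regression": 0.3, "size": 0.9,
                     "difficulty": 0.4, "uncertainty": 0.2})
print(result)  # 0.8
```

No thresholds to pick: an item scored 3-out-of-5 for visibility simply contributes 0.6 to the chain instead of being forced to True or False.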

Fuzzy logic also allows you to play with the normalization and gates to accentuate some of the values over others (for tuning). You could do this with thresholds in the boolean version, but with fuzzy logic you end up with simpler code and smoother response curves.

Dev Stack 2025, Part VII: SQLite

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

Since Django uses an ORM, switching between databases is relatively easy. I usually pick MySQL, but I’m going to see how far I can get with SQLite.
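In Django, the swap is mostly a settings change. A sketch (the ENGINE strings are Django’s built-in backends; the file name is just a conventional default):

```python
# settings.py sketch: switching databases behind Django's ORM is
# mostly a matter of changing ENGINE (plus connection details).
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",  # or .mysql / .postgresql
        "NAME": BASE_DIR / "db.sqlite3",
    }
}
```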

The project I am working on is to make a better version of the tech debt spreadsheet that I share in my book (sign up for the email list to get a link and a guide for using it). The app is very likely to be open-source and to start out as something you host yourself. So, I think SQLite will be fine, but if it ever gets to the point where it won’t work, switching to MySQL or Postgres shouldn’t be that hard. My DB needs are simple and well within the Django ORM’s capabilities.

Even if I decide to host a version, I might decide on a DB-per-tenant model, which might be OK for SQLite. Another possibility is that it would be something in the Jira Marketplace, and in that case, I’d have to rewrite the backend to use Jira for storage. That wouldn’t be that bad because (given the Jira data model) I only need to add some custom fields to an issue. Most of the app at that point would be the visualizations and an expert system.

One nice thing about SQLite is that it’s trivial to host. It’s just a few files (with WAL mode). It’s also trivial to run unit tests against during development. You can do it in-memory, which is what Django testing does by default. I can also run those test suites against more powerful databases to make sure everything works with them, too.
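As a sketch of both points: WAL mode is a one-line pragma on a file-backed database, and an in-memory database is just a special filename (using Python’s built-in sqlite3 module; the paths and table are made up for illustration):

```python
import os
import sqlite3
import tempfile

# WAL mode on a file-backed database; SQLite reports the active
# journal mode back, and will keep -wal/-shm files next to the DB.
path = os.path.join(tempfile.mkdtemp(), "app.sqlite3")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # "wal"

# An in-memory database, which is what Django's test runner
# defaults to for SQLite:
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE debt (id INTEGER PRIMARY KEY, score REAL)")
```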

One portability issue is that if I get used to running against SQLite, I will probably not notice performance issues. Since SQLite is just some local files, it’s incredibly fast. You can do lots of little queries to service a request and not notice any latency. The same style over a network, potentially to a different datacenter, won’t work as well.

But I have seen enough evidence of production SaaS products using SQLite that I think I can get to hundreds of teams without worrying too much. I would love to have a performance problem at that point.

In my book, I talk about how technical debt is the result of making correct decisions and then having wild success (which invalidates those choices). I don’t like calling these decisions “shortcuts” because that word is used as a pejorative in this context. Instead, I argue that planning for the future might have prevented the success. If this project is successful, it’s likely that SQLite won’t be part of it anymore, but right now it’s enabling me to get to a first version, and that’s good enough.

Changing my Dev Stack (2025), Part I: Simplify, Simplify

My life used to be easy. I was an iOS developer from 2014 to 2021. To do that, I just needed to know Objective-C and then Swift. Apple provided a default way to make UI, and it was fine.

But in 2021, when I went independent, I decided to abandon iOS and move to a React/Node stack for web applications (which I wanted to be SPAs). I chose React/React Native. It was fine, but I have to move on.

The main reason is the complexity. For my application, which is simple, there is an insane number of dependencies (which are immediate tech debt, IMO). Hello World in TypeScript/Node/React will put 36,000 files (at last count) in node_modules. This reality has become a prime target for hackers, who are using it as a vector for supply chain attacks. It’s clear that the Node community is not prepared for this, so I have to go.

This is a major shift for me, so I am rethinking everything. These were my criteria:

  1. Minimal dependencies
  2. No build for JS or CSS
  3. Security protection against dependencies
  4. Bonus if I already know how to do it

The first, and easiest decision, was that I was going to move all development from my Mac to Linux. I’ll talk about this tomorrow in Part II.

Code Coverage Talk at STARWEST

Last September, I spoke about enhancing code coverage at STARWEST. My talk was based on ideas that I introduced in Metrics that Resist Gaming and some related posts.

The key points are that metrics should be:

  • able to drive decisions.
  • combined to make them multi-dimensional.
  • based on leading indicators that align with lagging indicators.

And then I applied that to code coverage. I combined it with code complexity, the location of recent code changes, analytics, and then I stress tested the covered tests using mutation testing. The idea is that you should care more about coverage when the code is hard to understand, was just changed, or users depend on it more. And since coverage is only half of what you need to do to test (i.e. you also need to assert), mutation testing will find where you have meaningless coverage.
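One way to sketch that combination (the function name, inputs, and equal weighting here are hypothetical illustrations, not the scoring used in the talk):

```python
# A hypothetical priority score: missing coverage matters more where
# the code is complex, recently changed, or heavily used. All inputs
# are assumed to be normalized to 0.0-1.0.

def coverage_priority(coverage: float, complexity: float,
                      churn: float, usage: float) -> float:
    risk = (complexity + churn + usage) / 3.0
    return (1.0 - coverage) * risk

# Uncovered, complex, hot code ranks highest:
print(coverage_priority(coverage=0.1, complexity=0.9, churn=0.8, usage=0.9))
# Well-covered, stable code ranks near zero:
print(coverage_priority(coverage=0.95, complexity=0.2, churn=0.1, usage=0.3))
```

Mutation testing then acts as a check on the coverage input itself: a line that is covered but survives every mutant should arguably count as uncovered.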

As a bonus fifth enhancement, I talked about making sure you were getting the business results of better testing. For that, I spoke about DORA and specifically the metrics that track failed deployments and the mean time to recovery from that failure.

Tech Debt is Caused by Correct Behavior

I understand why tech debt is often conflated with “taking a shortcut”, but in my experience, the hardest tech debt problems are caused by doing everything right and having wild success.

In 2006, I joined Atalasoft, which, at that point, was the leading provider of .NET Photo and Document Image SDKs. That success was driven by two early architectural decisions.

The first was to embrace .NET managed code interfaces and to not try to wrap Windows COM DLLs. This meant that every product was packaged as a .NET Assembly with no external dependencies; legacy, C-based image libraries were statically linked, which meant we had to code the wrappers in Managed C++. This turned out to be the optimal choice in 2002 (as opposed to pure .NET or wrapped COM).

Another (correct) choice was to assume that our web-based code was embedded in an ASP.NET web application, which allowed us to have advanced viewing capabilities that worked seamlessly inside of them.

The company was successful, and that success led to tech debt related issues, which we could mostly handle, but when we were acquired, our core assumptions were invalidated.

In 2011, the company (and our codebase) was over 10 years old, and we had already started exploring how to move from our initial market to support pure managed code and non-.NET web apps. Then, Kofax acquired us and gave us 2.5 years to create a cross-platform port that ran on Java VMs under Linux, well beyond what our architecture currently supported, but in line with our direction and what we had started to do.

When you’re just starting out, shortcuts are not the biggest problem—over-engineering is. You have no users and your goal is to get some, and the best way to do that is to deploy something and get feedback. It’s fine if your solutions don’t scale because scale isn’t part of the spec.

In our case, if we had started with making generic web components that worked in Java and .NET, it would have taken a lot longer and been worse for .NET developers. We might not have been able to convince them to try us. Instead, by going fast, Atalasoft was first to market with .NET imaging components and gained a lot of early mindshare. By the time the incumbent Microsoft component makers ported to .NET, we were already established as .NET-native (and they were wrapped COM components).

More instructive was that some new entrants tried a pure .NET approach (rather than our hybrid one). That architecture is, to be clear, “correct”, but it didn’t work at all in 2002. They mostly did not survive, and we went there ourselves in about 2009.

Teaching Your Book Before You Write It

In the Useful Authors group, I learned to test my book’s content by teaching it first. To some extent, I did that on this blog. But that doesn’t work as well as doing it in a way where you can get a reaction. This lets you figure out if what you plan to write will resonate with your audience.

With Swimming in Tech Debt, my main way of teaching the book was to talk about tech debt with my clients and my developer friends. I would also make LinkedIn posts with excerpts from what I was writing. So much of the book, though, is based on past conversations I had had for decades at work. Some of those conversations were meant as “teaching” via training, feedback, or mentorship. A lot of it was just figuring it out as a group.

I also shared chapters online in places where it was easy to give feedback (like helpthisbook.com). Some readers have invited me to speak to groups inside of their companies. Part 4 of the book (for CTOs) started as a presentation. I was also asked for an excerpt by the Pragmatic Engineer. His audience’s reaction in comments and emails helped shape the book. It let me know which parts were the most useful and worth expanding on.

One thing I didn’t do early enough was to turn my pre-book notes into conference pitches. I finally did that after the first draft was done, and next week, I’ll be sharing that with QA professionals at STARWEST.

In all of these cases, you are the proxy for your book before you write it. You just tell people the things you plan to write. You are hoping that it leads to a conversation where you learn if your ideas are worth writing about.

My “Show HN” Follow-Up for “Swimming in Tech Debt”

I am sharing this in the spirit of posts like this, this, and this that give insight into what it’s like to have a popular Show HN post. Like those posts, I have stats, but I didn’t make my post very strategically, so I don’t have much advice about that. What happened to me was more accidental (as I will describe), but I have some ideas on what worked.

Timeline

On September 6th, I woke up to find that I had pre-sold 50 copies of Swimming in Tech Debt overnight. For context, at this point I had sold about 125 copies, mostly to people on my mailing list, my personal network, and through LinkedIn. That had started on August 16th with most of the sales in the first week. My daily sales were in the single digits, so the 50 was a big surprise.

My first instinct was that there was some kind of problem. But, I did a quick search and saw the Hacker News post, so I clicked it.

Even though I could see that it had a lot of upvotes and discussion, it was surprising to me because I had posted the “Show HN” four days prior. It had gotten a few votes, no comments, and had scrolled off the bottom of the Show HN front page. I had forgotten about it.

I noticed two things about this post immediately: (1) it had a new date, and (2) the “Show HN” had been removed from the post title. The post was still attributed to me, but I had not reposted it. I don’t know how this happened, but my post history shows two posts. The newer one eventually had the “Show HN” put back in, but not by me.

I went into the discussion and saw some good and bad feedback on the book (and also for HelpThisBook.com (HTB), the site I was using to host the book). To be honest, my initial reaction to the bad feedback was defensiveness. But I replied to everything in the way I would want to be addressed: answering questions, thanking people, and explaining my point of view to those with criticism.

Stats

I am not privy to all of the statistics because I don’t run HelpThisBook (HTB), which I get access to as a benefit of being in the Useful Authors Group started by Rob Fitzpatrick, author of Write Useful Books. (Note: our group is running a free 6-week writing sprint starting on September 18th. Hope to see you there.)

Here’s what I can see in the data I have access to:

There have been 23,000 clicks to this version of the book. I don’t have referral information, but the vast majority have to be from HN (and the various HN mirrors).

On HTB, I can see readers progressing through the book. A few people finish every day (maybe they buy, I don’t know), and several more are finding it and starting to read each day. They can highlight and give feedback, which they are doing. I used this feature a lot while developing the book (at a much smaller scale) to help make sure the book was engaging readers.

There is a huge drop-off at the first chapter. Perhaps this is due to the HTB UX (it was somewhat criticized in the HN comments). It is also undoubtedly because of the content itself (and is normal, IMO).

On the Amazon KDP site, I can see that in the first day, there were over 100 books sold, and as of now, the total since that day is almost 300, with the daily sales being more like 10-20.

My personal site statistics had a bump compared to the four weeks prior. So far, that has been sustained (but I am also sending more email).

My mailing list subscribers increased too (the tall bar is 24 new subscribers). I am sending excerpts from the next part each day, which is causing some unsubscribes, but if they don’t like the e-mail, then they definitely won’t like the book. I want to make sure that they have every chance of getting the book at $0.99 if they want it.

These are modest, but they are very meaningful to me.

What Makes a Good Show HN Post

In my experience in reading Show HN, the most important thing is having something worth showing. I hope that that’s the main reason this post did well. But, I can’t deny that something happened (either a glitch or moderator changes) that boosted this post’s chances.

I also think that early comments (good and bad) helped it get traction. When I first went to the post, the top comment was a very funny response about writing and tech debt. There were a few very negative posts, which I engaged with respectfully. Since I had already gotten 50 sales, I knew that the book had at least resonated with some readers. Tech debt is a topic that people have strong feelings about, and I think that drove early comments.

You can’t control any of that, but you can be ready when it happens. Having something for people to do (sign up for a newsletter or buy a book) helps you make more out of the traffic than just hits to your blog. Although HTB was a great choice for gathering feedback from beta readers, if I were posting finished work, I might choose a simpler option where I had more control over the experience and access to the stats.

What’s Next

I just made the final version of the EPUB for Amazon and set the release date to September 16th. My plan is to leave it at $0.99 for a few days as a kind of soft launch. I don’t want to raise the price until it has reviews.

Then, I will work on the print book. I hope it will be done in October. If you want to be notified when it is ready, the best way is to sign up for my mailing list. You will also get immediate access to some of the content from Part 3 (HTB only has Parts 1 and 2).