Author Archives: Lou Franco

Technical Debt Typology Research Paper

A few months ago, I got an email from Mark Greville that included a link to a research paper he coauthored, called A Triple Bottom-line Typology of Technical Debt: Supporting Decision-Making in Cross-Functional Teams.

In the paper, the authors identify several categories of tech debt. One category is internal vs. external effects. In my book, I also identified the external category, which I call visibility. The paper treats the entire business as the "internal" part, but I think of the team itself as the internal part. My separation is driven by the difference in how the engineering team communicates with itself vs. with the rest of the business. Customers and other public stakeholders would likely be similar to the non-engineering business teams.

Since a lot of my book is about how tech debt affects developer productivity, I break internal debt down into the various ways it could reduce productivity. I use misalignment to describe tech debt that doesn't meet the documented standards of the team. When the code is hard to change, for example if there's messy code all over the codebase (Marbleized Code Fat), I call that resistance. If the code does what it's supposed to do (so no external effects) and customers depend heavily on its behavior, I warn about the risk of regressions.

Another pair they describe is whether the tech debt is taken on knowingly or unknowingly. This is useful from a taxonomy perspective and might contribute to tech debt avoidance, but in my book, I write:

I don’t think of tech debt as the result of an intentional shortcut borrowed from the future. Some debt starts that way, but the reality is that lots of tech debt happens because the world changes. Even if your system represents your best ideas of how to solve the problem at hand, your ideas will get better, and the problems will change. You can do everything right and still have bad code, so it doesn’t help to judge the decisions that got us there. Learn from them, but it’s counter-productive to dwell on them.

My chapters on these dimensions focus on using them to decide what to do about the debt, and I don’t think intention is a factor in deciding what to do next.

The paper is worth a look and has quite a good bibliography if you are interested in research on tech debt. Since the methodology included a literature review, the list of reviewed references is another treasure trove.

Adam Tornhill on Tech Debt’s Multiple Dimensions

In the research for my book on technical debt, I ran into this talk by Adam Tornhill:

Adam has a similar perspective to mine: technical debt is multi-faceted, and the right strategy should address its various dimensions.

One of his examples is combining a code complexity metric with data from your source code repository to define low code health hotspots—areas where code is both complex and frequently changed. To find the hotspots, he built a tool to calculate this metric and visualize it. In the video he shows data from big open-source projects (like Android and .NET core) and pinpoints areas that would benefit from work to pay down debt.
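To get a rough feel for the idea, here is a minimal shell sketch (not Adam's tool, which is far more sophisticated) that ranks files by recent change count multiplied by line count, using line count as a crude stand-in for complexity:

# Rough hotspot sketch: rank files by (recent change count x line count).
# Line count is only a crude complexity proxy; a real tool uses better metrics.
git log --pretty=format: --name-only --since=3.months | sort | uniq -c | sort -rg | head -50 |
while read count file; do
  [ -f "$file" ] && echo "$(( count * $(wc -l < "$file") )) $file"
done | sort -rg | head -10

Whatever floats to the top is both large-ish and frequently touched, which is roughly the hotspot intuition.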

Similarly, in my book, I identify eight dimensions of debt. Complexity is something I consider part of resistance, which is how hard or risky it is to change the code. I also incorporate low test coverage into resistance, as well as subjective criteria. Adam says that complexity is a good estimate of how many tests you need, which is true, but I give credit for the tests you already have. I am mostly concerned with complex code that is undertested.

Like Adam, I believe that bad code only matters if you plan to change it. He believes that the repository history of changes is a good indication of future change, which I agree with, but to a lesser degree. In my book, I recommend looking at the history, and I shared this git log one-liner as a starting point:

git log --pretty=format: --name-only --since=3.months | sort | uniq -c | sort -rg | head -10

That line will show you the most edited files. To find the most edited folders, I use this:

git log --pretty=format: --name-only --since=3.months | sed -e 's/^/\//' -e 's/[^\/]*$//' | sort | uniq -c | sort -rg | head -10

This data contributes to a dimension I call volatility. It's meant to be forward-looking, so I would mostly base it on your near-future roadmap. However, it is probably correlated with the recent past. In my case, this data is misleading because I just reorganized my code to pull out a library shared between the web and mobile versions of my app. But, knowing this, I could modify the time period, or perhaps check various time periods to see if there's a stable pattern.
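For example, here is a minimal sketch of that check: run the same one-liner over a few windows and see whether the same files keep showing up.

# Compare the most-edited files across several time windows to look for a stable pattern.
for window in 1.month 3.months 6.months 12.months; do
  echo "== last $window =="
  git log --pretty=format: --name-only --since="$window" | sort | uniq -c | sort -rg | head -10
done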

Generally, my opinions about tech debt and prioritization are very aligned with what’s in this video, especially the multi-faceted approach.

Marbleized Code Fat?

I go back and forth on whether the name “Tech Debt” is the most useful term. In my book, I decided I can’t fight the term, so I use it, but in my opening paragraphs I make an argument that the problem we call tech debt isn’t like other debt.

The most common form of tech debt I've seen is the kind that's just everywhere. It's not limited to a specific old module; it pervades the codebase like marbleized fat in a good steak, but not as delicious.

Follow-up on my Tech Debt Article in The Pragmatic Engineer Newsletter

About two weeks ago, The Pragmatic Engineer published an early excerpt from a book I’m writing on Tech Debt. This week, he published a follow-up that’s available for free. Read it here:

https://newsletter.pragmaticengineer.com/p/paying-down-tech-debt-further-learnings

If you missed the original article, it’s here: 

https://newsletter.pragmaticengineer.com/p/paying-down-tech-debt

After that article came out, there were a few LinkedIn posts with good conversations about tech debt. This one by Ellery Jose has an interesting table that breaks down how to treat technical debt as a financing decision:

https://www.linkedin.com/posts/ejose_tpm-pmo-engineering-activity-7237750795348164610-5mKq/

Also, Guillaume Renoult sent me this post he wrote:

https://medium.com/elmo-software/how-to-sort-out-tech-debt-and-speed-up-development-9a1d27fdd39e

There is a section of my book with ideas similar to these two posts, particularly about teams managing debt using evidence and data. I'm hoping to have a draft of that part done in a few weeks. If you want to see an early version, sign up to receive updates.

My Guest Article on Tech Debt for the Pragmatic Engineer

I started writing a book on Tech Debt in January and posted a few chapters in the spring. A few months later, Gergely Orosz reached out to ask me to write a guest post for The Pragmatic Engineer newsletter. It just went up today.

https://newsletter.pragmaticengineer.com/p/paying-down-tech-debt

If you want updates about the book, sign up here: Swimming in Tech Debt

There are a lot of posts on this site about tech debt. Here are a few:

LLMs Talk Too Much

The biggest tell that you are chatting with an AI and not a human is that it talks too much.

Yesterday, to start off a session with ChatGPT, I asked a yes/no question, and it responded with a "Yes" and 325 more words. No person would do this.

Another version of this same behavior is when you ask an LLM a vague question—it answers it! No questions. Just an answer. Again, people don’t usually behave this way.

It’s odd because in the online forums that were used to train these models, vague questions are often called out. It’s nice that the LLM isn’t a jerk, but asking a clarifying question is basic “intelligent” behavior and they haven’t mastered it yet.

LLMs Can’t Turn You Into a Writer

In Art & Fear, the authors write:

To all viewers but yourself, what matters is the product, the finished artwork. To you and you alone, what matters is the process, the experience of shaping that artwork. The viewers’ concerns are not your concerns. Their job is whatever it is, to be moved by art, to be entertained by it, whatever.

Your job is to learn to work on your work.

If you use LLMs to write for you, you will end up with writing. You’ll pass your class, get by at work, or get some likes on a post. But you will not become a better writer.

It might even be hard to become a better judge of writing. If you do, it won't be by reading what LLMs bleat out. It won't be by reading their summaries of great work. "Your job is to learn to work on your work," and to do that you need to do your own writing and limit your reading to those who do the same.

Policies that reduce dependencies

Over time I have become skeptical of most dependencies. I wrote in Third-party Dependencies are Inherently Technical Debt:

[…] any third-party dependency is technical debt. Third-party? Here’s how I am defining it for the purposes of this article. You are the first party, the platform you are writing on (e.g. iOS, Android, React, NodeJS) is the second party. Everyone else is third party.

[Third-party dependencies are debt because] you need to constantly update them, […] they introduce breaking changes, […] they become unsupported, […] your platform adds their own incompatible implementation, [… and] they don’t update on the same schedule as the [platform].

To that end, I like policies that tend to reduce the number of dependencies you have. Here are a few that I have seen work.

  1. Become a committer on any third-party dependency you take on. To be fair, you kind of owe that to the project.
  2. Donate to any third-party dependency you take on where you won’t become a committer.
  3. Fork the dependency and bring in updates carefully (a rough sketch of this workflow is below).

Seems like extra work, right? The extra work is why they work.
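For the forking policy in particular, here is a minimal sketch of what bringing in updates carefully can look like (the remote URL and branch name are just placeholders):

# One-time setup: point a remote at the original project (example URL).
git remote add upstream https://github.com/example/somelib.git

# Periodically: fetch upstream, read what changed, then merge deliberately.
git fetch upstream
git log --oneline HEAD..upstream/main    # review incoming changes first
git merge upstream/main                  # or cherry-pick only what you need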

Identity is a Powerful Habit Totem

In my post, Environment Hacking, I wrote about a problem I ran into with BJ Fogg’s Behavior Model:

One of BJ Fogg’s insights is that you already have habits that are completely automatic, so he suggests using those as prompts for a new habit you are trying to build. You repeat to yourself, “After I do [some automatic habit], I will do [some tiny version of a new habit]”. For example, “after I brush my teeth, I will floss one tooth”. In this case, the environment is your existing habit.

This works great, but I have some habits that I can't easily tie to an existing one (or at least I haven't been successful at it yet). For these kinds of habits, I have been thinking of "habit totems" I can put into the environment to prompt me.

In that article, the idea was to place objects in the world that remind you to do things when you see them. I called those objects “Habit Totems”. A Habit Totem works because it’s always present at the place where you want to do the habit, and it prompts you to do it.

Your identity can be a powerful Habit Totem because it is always present, and it changes how you perceive the environment such that it turns everything into a prompt.

I have an identity as a programmer. I program all of the time without prompts, because everything reminds me about programming. I also have an identity as someone who eats plant-based food; no prompt is needed at the grocery store or a restaurant because it's just part of who I am.

But I struggle with my aspirational identities. I want to be a writer, but I still haven’t assumed that identity enough for it to be a driver on its own. I want to make my plant-based identity more specific by becoming “whole-food plant-based”, but I haven’t made a lot of progress.

What I am trying now is to internalize that I have already become the identity I aspire to. I walked through the grocery store last weekend repeating "I only eat whole plant-based food" and managed to leave with a cart more aligned to that identity. I am writing this post because "I am a writer," and a writer writes. When I thought about my experience at the grocery store, I felt more compelled to write about it.

It’s only been a few days, but it feels very powerful so far.

Humans Have Been Upgraded by LLMs

If, twenty years ago, you took the most sophisticated AI thinkers in the world and set up a Turing test with a modern AI chatbot, they would all think they were chatting with a human. Today, that same chatbot would have a hard time fooling most people who have played around with one.

In a sense, humans have been upgraded by the existence of LLMs. If we had no idea that they existed, we would be fooled by them. But, now that we’ve seen them, we’ve collectively learned how they come up short.