Author Archives: Lou Franco

Interleaved Reading

Since graduating college, my book choices have been self-determined, and for a long time, I read them serially. This is weird because up until then I was always reading more than one book at a time (by necessity).

But, a few years ago, while researching learning methods and incorporating spaced-repetition (i.e. flash cards) into my life, I saw a recommendation to interleave reading. The idea is simple: You read more than one book at a time. This might be obvious to some, but it wasn’t to me. It was liberating.

One problem of reading a book all the way through is that you don’t have time to consider a chapter before moving on. When you interleave books, you have time to ruminate on them in the background. Better than that, you can use spaced-repetition to build flash cards that you practice before moving on. Then, while you are reading, the older chapters are periodically shown to you, reinforcing the whole book as you read.

Another exercise is to write your own synthesis of the chapter, applying it to your personal interests. This kind of note can be a blog post or a page in your Digital Zettelkasten. Over time, these original thoughts might build up to something bigger. For me, it was my book.

Finally, I read a lot of books that are meant to be used, not just read. They offer their own exercises. For example, here’s a post about the way I use The Artist’s Way, a quintessentially useful book, which encourages you to read a chapter a week and then do some work.

Before I did this, retention was hard. Now, it’s effortless (a few minutes a day in Anki) or the effort is welcome (it generates a blog post). But, it does mean I put books down intentionally, and so I need a different one to read while I work on retaining the first one.

(I meant to write about what I’m currently reading today, but I thought it would be good to write about this first so I can reference it. I’ll get to my current reading list tomorrow.)

My Antilibrary

I buy books as soon as I think I will read them (usually from a recommendation), but it might take time for me to get to them. I used to lament this, but then I read this take. In The Black Swan: The Impact of the Highly Improbable [affiliate], Nassim Nicholas Taleb writes:

You will accumulate more knowledge and more books as you grow older, and the growing number of unread books on the shelves will look at you menacingly. Indeed, the more you know, the larger the rows of unread books. Let us call this collection of unread books an antilibrary.

Here’s what’s menacing me now from my antilibrary and what I’ll probably read soon:

  • Write a Must-Read [affiliate] by AJ Harper: This was recommended in my writer’s accountability group as something to read before you write a book. It’s too late for that, but I have a general interest in the topic. I’ve only had this for a week.
  • Software Productivity [affiliate] by Harlan D. Mills. I found a reference to this book in a reread of Peopleware [affiliate] almost exactly a year ago. It’s out of print, but I found a cheap used copy and got it. It’s been on my desk almost since then.
  • Vibe Coding [affiliate] by Gene Kim and Steve Yegge. I got this when it went on presale a few months ago. I better read this soon, because it will age quickly.
  • The Real Play Revolution [affiliate] by Ash Perrin. Ash is a clown who travels worldwide to refugee camps to entertain children. I saw him speak at PINC in Sarasota two years ago and bought the book there (mostly to support his efforts). There’s another PINC coming in a few weeks (Dec 11-13), which I highly recommend if you are in the area.

I usually read multiple books at the same time, picking up the one I have energy for at any given time. I try to keep them different from each other, but they are usually all non-fiction. I’ll write about that tomorrow.

Moats and Fast Follow For Vibe Coded Projects

I wrote about how, at Atalasoft, I told my engineers to Be Happy When It’s Hard. Be Worried When It’s Easy. We competed against open-source and in-house solutions. When we found valuable problems that were hard to solve, I was relieved. The same is true for vibe coded solutions.

If you can create a valuable app in two weeks, then so could a competitor. If your secret sauce is your idea, then that’s hard to protect if you want people to use your app. We don’t even know if AI-generated code is copyrightable, so it’s very unlikely to be patentable (i.e. inventors must be humans).

Here are three things you could do:

  1. Keep building on the idea – right now, someone following you has the benefit of seeing your solution and feeding that to the AI. So, it helps if you can keep building on the idea and hope they can’t keep up. If you do the minimum, the bar is too low.
  2. Build on secret data – once you have a working system, the biggest moat you have is the data inside the system. AI can’t see that or reproduce it from scratch. Build new (valuable) features that require secret data to work. This doesn’t need to be used as training data. This is like a network effect, but more direct and long-lasting.
  3. Use your unique advantages – If your app is a simple UI on CRUD operations, then it can be reproduced by anyone. But, let’s say, you have personal branding in a space. Can you make an app that builds on it? Do you have access to hard-to-win customers? A mailing list, subscribers, etc.? Fast-followers might be able to recreate your software, but your audience won’t care if you’re the only one they trust.

Of these, I am relying mostly on the last one. The software I am working on is an extension of Swimming in Tech Debt. It takes the spreadsheet that I share in Part 3 and builds on it with better visualizations than the built-in ones. Someone could clone this, I guess, but probably they would need to reference my book in order to explain it. I am indifferent to whose software they use if this is true.

It’s not Debt if You Don’t Care About the User

I recently read Are consumers just tech debt to Microsoft? by Birchtree, where they say:

Microsoft just does not feel like a consumer tech company at all anymore. Yes, they have always been much more corporate than the likes of Apple or Google, but it really shows in the last few years as they seem to only have energy for AI and web services. If you are not a customer who is a major business or a developer creating the next AI-powered app, Microsoft does not seem to care about you.

Their thesis is that Microsoft’s share of the consumer market will plummet because the consumer is tech debt to them. I think of the user as a facet of tech debt, not the debt itself.

In Swimming in Tech Debt, I present eight questions you should answer about tech debt. One of them, called “Regressions”, asks how likely it is that you will break working code for users you care about. The more likely that is, the more I believe you should avoid touching this code (or be very careful with it).

But, if you don’t care about the users, or they don’t care about the features the indebted code provides, then it’s likely that you can just rewrite it with impunity. You can change it without risk. You might be able to delete it. If so, it’s hardly a debt.

If you do value a market and change code radically, the consequences can be fatal (see Sonos). But if you don’t, then doing the minimum is rational.

Workshop: Eight Questions to Ask About Your Tech Debt

In Part 3 of my book, Swimming in Tech Debt, I write about how teams should plan projects to address larger technical debt issues. The bulk of the chapters in the section explain how to manage a tech debt backlog.

Drawing on the practices of product managers and how they manage feature backlogs, I propose a scoring system to drive the discussion.

The scoring system breaks down the costs and benefits of paying debt (or not paying it) and gives you a way to compare items to each other. It starts with this diagram:

Diagram showing Pay and Stay Forces

The benefits of paying debt and the costs of not paying (staying) drive the Pay Force. Inversely, there are benefits to staying and costs to paying that indicate you should leave the debt alone. These eight dimensions are scored by answering a related question:

  1. Visibility: If this debt were paid, how visible would it be outside of engineering? 
  2. Misalignment: If this debt were paid, how much more would our code match our engineering values?
  3. Size: If we knew exactly what to do and there were no coding unknowns at all, how long would the tech debt fix take?
  4. Difficulty: What is the risk that work on the debt takes longer than represented in the Size score because we won’t know how to do it?
  5. Volatility: How likely is the code to need changes in the near future because of new planned features or high-priority bugs?
  6. Resistance: How hard is it to change this code if we don’t pay the debt?
  7. Regression: How bad would it be if we introduced new bugs in this code when we try to fix its tech debt?
  8. Uncertainty: How sure are we that our tech debt fix will deliver the developer productivity benefits we expect?
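
As a rough illustration of how this can work (a sketch for this post, not the scoring guide from the book), you can total an item’s pay-side and stay-side answers on a 0-5 scale and compare the two forces:

# Sketch only: total the eight answers (0-5 each) into Pay and Stay forces
# so backlog items can be compared side by side.
from dataclasses import dataclass

@dataclass
class DebtScores:
    # Forces that push toward paying the debt
    visibility: int
    misalignment: int
    volatility: int
    resistance: int
    # Forces that push toward leaving the debt alone (staying)
    size: int
    difficulty: int
    regression: int
    uncertainty: int

    @property
    def pay_force(self) -> int:
        return self.visibility + self.misalignment + self.volatility + self.resistance

    @property
    def stay_force(self) -> int:
        return self.size + self.difficulty + self.regression + self.uncertainty

# A visible, volatile item that is small and low-risk to fix
item = DebtScores(visibility=4, misalignment=3, volatility=5, resistance=4,
                  size=2, difficulty=1, regression=1, uncertainty=2)
print(item.pay_force, item.stay_force)  # 16 vs. 6 -> leans toward paying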

If you have bought my book and would like me to talk to your team about this process, get in touch. It would be a 45-minute presentation with 15 minutes for Q&A.

In the presentation, I score 3 backlog items from my career and then show how the scoring drives the decision about what to do. I encourage you to record it and then go through the presentation with a couple of examples from your backlog.

This workshop is free. Write me on LinkedIn or through my contact page.

After taking the workshop, reach out if you would like me to facilitate your technical debt backlog planning sessions. The book has agendas, scoring guides, and a catalog of score-driven debt remediation ideas, but I’m happy to tailor them to your needs.

How I Learned Pointers in C

I learned C in my freshman year of college, where we used K&R as our textbook. This was 1989, so that text and our professor were my only sources of information.

But, luckily, I had been programming for about six years on a PET, TRS-80, and Commodore 64. It was on that last computer that I learned 6502 Assembly. I had been experimenting with sound generation, and I needed more performance.

This was my first instance where Knowing Assembly Language Helps a Little.

When we got to pointers in the C class, the professor described a pointer as the memory address of a variable. That’s all I needed to know. In Assembly, memory addresses are a first-class concept. I had a book called Mapping the Commodore 64 that told you what was at each ROM address. Doing pointer arithmetic is a common Assembly coding task. You can’t do anything interesting without understanding addresses.

So, I guess that I learned about C pointers at some point in my learning of 6502 Assembly. Since C maps closely to Assembly, by the time we got to it, it felt natural to me. If you are having trouble with the concept, I’d try writing simple Assembly programs. Try a 6502 emulator, not, for example, something modern. Modern instruction sets are not designed for humans to code by hand, but older ones took that into account a little more.

Using Fuzzy Logic for Decision Making

In the ’90s, I read a book about fuzzy logic that would feel quaint now in our LLM-backed AI world. The hype wasn’t as big, but the claims were similar: fuzzy logic would bring human-like products because it mapped to how humans thought.

Fuzzy logic is relatively simple. The general idea is to replace True and False from Boolean logic with a real number between 1 (absolutely true) and 0 (absolutely false). We think of these values as something like a degree of certainty.

Then, we define operations that map to AND, OR, and NOT. Generally, you’d want ones that act like their Boolean versions for the absolute cases, so that if you set your values to 1 and 0, the Fuzzy logic gates would act Boolean. You often see min(x, y) for AND and max(x, y) for OR (which behave this way). The NOT operator is just: fuzzy_not(x) => 1.0 - x.
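
In Python, a minimal version of these operators (just a sketch; the function names are mine) looks like this:

# Fuzzy truth values are floats in [0.0, 1.0].
def fuzzy_and(x: float, y: float) -> float:
    return min(x, y)   # acts like Boolean AND at 0.0 and 1.0

def fuzzy_or(x: float, y: float) -> float:
    return max(x, y)   # acts like Boolean OR at 0.0 and 1.0

def fuzzy_not(x: float) -> float:
    return 1.0 - x     # acts like Boolean NOT at 0.0 and 1.0

# Spot-check the absolute cases
assert fuzzy_and(1.0, 0.0) == 0.0
assert fuzzy_or(1.0, 0.0) == 1.0
assert fuzzy_not(0.0) == 1.0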

If you want to see a game built with this logic, I wrote an article on fuzzy logic for Smashing Magazine a few years ago that showed how to do this with iOS’s fuzzy logic libraries in GameplayKit.

I thought of this today because I’m building a tool to help with decision making about technical debt, and I’m skeptical about LLMs because I’m worried about their non-determinism. I think they’ll be fine, but this problem is actually simpler.

Here’s an example. In my book I present this diagram:

Diagram showing Pay and Stay Forces

The basic idea is to score each of those items and then use those scores to make a plan (Sign up to get emails about how to score and use these forces for tech debt).

For example, one rule in my book is that if a tech debt item has high visibility (i.e. customers value it), but is low in the other forces that indicate it should be paid (i.e. low volatility, resistance, and misalignment), but has some force indicating that it should not be paid (i.e. any of the stay forces), then this might just be a regular feature request and not really tech debt. The plan should be to put it on the regular feature backlog for your PM to decide about.

A boolean logic version of this could be:

is_feature = visible && !misaligned && !volatile && !resistant && 
              (regressions || big_size || difficult || uncertain)

But if you did this, you’d have to pick some threshold for each value. For example, on a scale of 0-5, a visible tech debt item might be one with a 4 or 5. But that’s not exactly right, because even an item scored as a 3 for visibility should be treated this way depending on the specific scores it got for the other values. You could definitely write a more complex logical expression that took this all into account, but it would be hard to understand and tune.

This is where fuzzy logic (or some kind of probabilistic approach) works well. Unlike LLMs, though, this approach is deterministic, which allows for easier testing and tuning (not to mention, it’s free).

To do it, you replace the operators with their fuzzy equivalents and normalize the scores to a 0.0-1.0 scale. In the end, instead of is_feature, you get something more like a probability that this recommendation is appropriate. If you build up a rules engine with a lot of these, you could use the probability to sort the responses.
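
As a sketch (using min/max/1-x from above, with scores already normalized to 0.0-1.0), the fuzzy version of the rule could look like this:

# Fuzzy version of the is_feature rule; every argument is a 0.0-1.0 score.
def is_feature(visible, misaligned, volatile, resistant,
               regressions, big_size, difficult, uncertain):
    stay_force = max(regressions, big_size, difficult, uncertain)  # fuzzy OR
    return min(visible,            # fuzzy AND across all of the terms
               1.0 - misaligned,   # fuzzy NOT
               1.0 - volatile,
               1.0 - resistant,
               stay_force)

# A very visible item with weak pay forces but a large size
print(is_feature(0.9, 0.2, 0.1, 0.2, 0.3, 0.8, 0.4, 0.3))  # 0.8 -> likely a feature request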

Fuzzy logic also allows you to play with the normalization and gates to accentuate some of the values over others (for tuning). You could do this with thresholds in the boolean version, but with fuzzy logic you end up with simpler code and smoother response curves.

Moving from React to HTMX

Until very recently, I had been building web applications in React, but now I’ve moved to HTMX. Here are the main differences so far. For reference, my React style was client-side-only web applications.

  1. Client State: In React, I used Redux, which held almost all of the client state. Some state that I needed to be sticky was put into cookies. There was some transient state in useState variables in components. Redux was essentially a cache of the server database. In HTMX, the client state is mostly in the HTML, in a style called Hypermedia As the Engine of Application State (HATEOAS). Right now, I do cheat a little, sending a small JavaScript object in a script tag for my d3 visualizations to use.
  2. Wire Protocol: To feed the React applications, I had been using a GraphQL API. In HTMX, we expect HTML from REST responses and even over the web-socket.
  3. DOM Updates: In React, I used a classic one-way cycle. Some event in React would trigger an API call and an optimistic update to Redux. The Redux change would trigger React component re-renders. If the API call failed, it would undo the Redux update and then re-render the undo and any additional notifications. In HTMX, the HTML partials sent back from the REST request or websocket use the element’s id and HTMX custom attributes to swap out parts of the DOM (see the sketch after this list).
  4. Markup Reuse: In React, the way to reuse snippets of markup is by building components. In HTMX, you do this with whatever features your server language and web framework provide. I am using Django, so I use template tags or simple template inclusions. Aesthetically, I prefer JSX over the {% %} syntax in templates, but it’s not a big deal. There are other affordances for reuse in Django/Python, but those are the two I lean on the most.
  5. Debugging: In React, I mostly relied on browser developer tools, but they required me to mentally map the markup to my source. This was mostly caused by my reliance on component frameworks, like React Native Paper and Material. In HTMX, the source and the browser page are a very close match because I am using simpler markup and Bulma to style it. It’s trivial to debug my JavaScript now because it’s all just code I wrote.
  6. UI Testing: In React, I used React Testing Library to test the generated DOM. In Django, I am using its testing framework to test Django views and the HTML generated by templates. Neither does a snapshot to make sure it “looks right”. These tests are more resilient than that and make sure the content and local state are correct. I could use Playwright for both (to test browser rendering) and it would be very similar.
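
Here is a minimal sketch of the HTMX side of item 3: a Django view that returns just an HTML fragment for HTMX to swap into the page. The view, template path, and model are placeholders for illustration, not my actual code.

from django.shortcuts import get_object_or_404, render

from .models import DebtItem  # hypothetical model, for illustration only

def debt_row(request, item_id):
    # Look up the row being changed and re-render only its partial template.
    item = get_object_or_404(DebtItem, pk=item_id)
    # HTMX swaps the returned fragment into the page according to the
    # hx-target/hx-swap attributes on the element that made the request.
    return render(request, "partials/debt_row.html", {"item": item})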

Also, in general, I should mention that the client-server architecture of my projects is quite different. In React, I was building a fat-client, mobile-web app that I was designing so that it could work offline. The GraphQL API was purely a data-access and update layer. All application logic was on the client side. All UI updates were done optimistically (i.e. on the client side, assuming the API update would succeed), so in an offline version, I could queue up the server calls for later.

My Django/HTMX app could never be offline. There is essentially no logic in the client side. The JavaScript I write is for custom visualizations that I am building in d3. They are generic and are fed with data from the server. Their interactions are not application specific (e.g. tooltips or filters).

This difference has more to do with what I am building, but if I needed a future where offline was possible, I would not choose HTMX (or server rendered React).

Early Thoughts on HTMX

I found out about HTMX about a year ago from a local software development friend whose judgement I trust. His use case was that he had inherited a large PHP web application with a very standard request/response, page-based UI, and he needed to add some Single Page Application (SPA) style interactions to it. He was also constrained (by the user base) from changing the application drastically.

HTMX is perfect for this. It builds on what a <form> element already does by default: it gathers up inputs, creates a POST request, and then expects an HTML response, which it renders.

The difference is that, in HTMX, any element can initiate any HTTP verb (GET, POST, DELETE, etc) and then the response can replace just that element on the page (or be Out of Band and replace a different element). This behavior is extended to websockets, which can send partial HTML to be swapped in.

To use HTMX on a page, you add a script tag. There is a JavaScript API, but mostly you add custom “hx-*” attributes to elements. It’s meant to feel like HTML. I would say more, but I can’t improve on the HTMX home page, which is succinct and compelling.

My app is meant to allow users to collaboratively score and plan technical debt projects. My intention is to improve on a Google Sheet that I built for my book. So, to start, it needs to have the same collaborative ability. Every user on a team needs to see what the others are doing in real-time. HTMX’s web socket support (backed by Django channels) makes this easy.

Since the wire protocol of an HTMX websocket is just HTML partials, I can use the same template tags from the page templates to build the websocket messages. Each HTML partial has an id attribute that HTMX will use to swap it in. I can send over just the elements that have changed.
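
As a rough sketch (the consumer, group name, and template are placeholders, not my actual code), a Django Channels handler can render the same partial and push it down the socket:

# Sketch: push a rendered partial over the websocket; the HTMX ws extension
# swaps it in by the id on the fragment's root element.
from channels.generic.websocket import AsyncWebsocketConsumer
from django.template.loader import render_to_string

class ScoreBoardConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.channel_layer.group_add("scoreboard", self.channel_name)
        await self.accept()

    async def item_updated(self, event):
        # Reuse the same template partial the page uses, so the page and
        # the websocket messages stay in sync.
        html = render_to_string("partials/score_row.html", {"item": event["item"]})
        await self.send(text_data=html)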

Tomorrow, I’ll compare this to React.

Dev Stack 2025, Part X: networking

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

This last part about my new dev stack environment will be about how my machines are set up to work together. As I mentioned in the introduction to this series, my primary concern was to get development off of my main machine, which is now all on a Framework desktop running Ubuntu.

Before this, my setup was simple. I had a monitor plugged into my laptop over Thunderbolt, and my keyboard and mouse were attached via the USB hub the monitor provided (keyboard) or Bluetooth (mouse). When I introduced the Framework, I moved to a USB mouse plugged into the hub, and then I could switch my whole setup from Mac to Framework by unplugging/plugging in one USB-C cable.

But I had a few development use cases that this didn’t support well:

  1. I sometimes need to code with someone over Zoom. My webcam, mic, and headphones are staying connected to the Mac.
  2. I regularly program outside of my office in co-working environments.
  3. I need to support programming while traveling.
  4. I want to be able to go back and forth between the machines while working at my desk.

To start with, I tried using remote desktop. There’s an official client for Mac made by Microsoft, and it’s built into Ubuntu. As I mentioned in my Linux post, I was surprised at how hard this was to troubleshoot. The issue is that you can’t RDP to a Linux box unless it is actively connected to a monitor. So, at first, I just left it plugged in while taking the laptop outside. But this was not ideal.

There are a few solutions for this, but the easiest for me was just buying a virtual HDMI plug. They are cheap and fool the machine into thinking it has a monitor.

To even get RDP to work at all, though, I needed some way for the two machines to see each other. Even in my home office, I put them on different networks on my router. But I would also need to solve this for when I’m using my laptop outside of my network. This is what Tailscale was made for.

Tailscale is a VPN, but what sets it apart is its UX. You install it on the two machines, log them in to Tailscale, and now they are on a virtual private subnet. I can RDP at my desk or from a café. I can share the Mac “space” that is running RDP over Zoom. The setup was trivial.

So far this has been fine. I don’t even notice the VPN when I am coding at home. When I am outside, it’s a little sluggish, but fine. AI coding makes it more acceptable, since I don’t have to type and navigate code as much.