Author Archives: Lou Franco

Dev Stack 2025: Part III, Django

I learned Django in 2006 and used it as my main way to make web applications for side projects until 2021, when I decided to move to node/express for my backend. It’s time to go back.

As I mentioned yesterday, my stack changes are driven by the prevalence of supply chain attacks and my interest in Agentic AI software development. NPM/Node seems especially vulnerable to these attacks, which is why I am leaving that ecosystem. I considered Rails and Django. In the end, even though I think Rails may be doing more things right, I already know Python, use it for other projects, and Django is close enough.

To me, the main reason to pick Rails or Django in 2025 is that they provide good defaults that can act as constraints on AI. When I see vibe-coded projects, the AI they use prefers node/express, which lets it do anything, including things that it shouldn’t do. It doesn’t seem to impose or learn any patterns. These constraints also help me avoid messing things up, and they make it easier to notice when the AI is making mistakes.

In my Django app, authentication and an admin panel are built-in. I don’t need to rely on the AI to build it for me. This also means that we (the AI and I) can’t mess it up.
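As a sketch of what that means in practice, here is a minimal `urls.py` from a default Django project (this is generic boilerplate, not my app’s actual code):

```python
# urls.py — the admin panel and auth views ship with Django itself;
# nothing here is code an AI (or I) had to write and could get wrong.
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path("admin/", admin.site.urls),                         # built-in admin panel
    path("accounts/", include("django.contrib.auth.urls")),  # login/logout/password views
]
```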

I have also decided to move away from React (which I will really miss), but again, its dependency story is too scary for me. I am going with HTMX and server-based UI (something I have been trying to return to). I’ll tell you why tomorrow.
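To show what I mean by server-based UI, here’s a minimal sketch of the HTMX pattern: the server renders an HTML fragment and HTMX swaps it into the page. The click-counter endpoint is hypothetical, and I’m using plain Python (no framework) to keep it self-contained:

```python
# A minimal sketch of HTMX-style server-rendered UI. The hx-get attribute
# tells HTMX to fetch this URL on click and swap the returned fragment in
# place of the button — no client-side framework or JS build step needed.

def render_counter(count: int) -> str:
    # The server is the source of truth for the UI state; each response
    # embeds the next request's URL.
    return (
        f'<button hx-get="/counter?n={count + 1}" hx-swap="outerHTML">'
        f"Clicked {count} times</button>"
    )
```

In Django, this would just be a view returning that string as an `HttpResponse`.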

Dev Stack 2025: Part II – Linux

I have been developing on a Mac full-time since 2013, but I’m rethinking how I do everything. Switching to Linux was the easiest choice.

To me, software development is becoming too dangerous to do on my main machine. Supply chain attacks and agentic AI are both hacking and data-destruction vectors and need to be constrained. Given that, I decided to build a machine that was truly like factory equipment: I would only do development on it and give it very limited access.

I wanted it to be a desktop to maximize the power per dollar, and since I don’t do any iOS development anymore, there was no reason not to pick Linux.

Mac laptops are perfect for my needs as a consumer computer user. The battery life and trackpad are unmatched. My MacBook Air is easy to transport and powerful enough. As a desktop development machine, though, Macs are good but not worth the money for me. I decided to try a Framework instead, which I might be able to upgrade as it ages.

When I got it, I tried an Arch variant first, but it was too alien to me, so I reformatted the drive and installed Ubuntu. I spend almost all of my time in an IDE, browser, and terminals, and Ubuntu is familiar enough.

Having not used a Linux desktop before, here’s what struck me:

  1. Installing applications from a .deb or through the App Center is more fraught than you’d think. It seems easy to install something malicious through typos or untrusted developers in the App Center. Say what you want about the App Store, but apps you install there must be signed.
  2. Speaking of the App Center: its UI flickers quite a lot. Hard to believe they shipped this.
  3. Generally, even though I had never used Ubuntu Desktop before, it was intuitive. The default .bashrc was decent.
  4. I like the way the UI looks, and I’m confident that if I didn’t, I could change it. I need that now that my taste and Apple’s are starting to diverge.
  5. I still use a Mac a lot, so getting used to CTRL (on Ubuntu) vs. CMD (on Mac) is a pain.
  6. I was surprised that I need to have a monitor attached to the desktop in order to Remote Desktop to it (by default).

In any case, I set up Tailscale, so using this new desktop remotely from my Mac is easy when I want to work outside of my office.

My next big change was to go back to Django for web application development (away from node/express). I’ll discuss why tomorrow.

Changing my Dev Stack (2025), Part I: Simplify, Simplify

My life used to be easy. I was an iOS developer from 2014-2021. To do that, I just needed to know Objective-C and then Swift. Apple provided a default way to make UI and it was fine.

But, in 2021, when I went independent, I decided to abandon iOS and move to a Node stack for web applications (which I wanted to be SPAs). I chose React/React Native. It was fine, but I have to move on.

The main reason is the complexity. For my application, which is simple, there is an insane number of dependencies (which are immediate tech debt, IMO). Hello World in TypeScript/Node/React will put 36,000 files (at last count) in node_modules. This reality has become a prime target for hackers, who are using it as a vector for supply chain attacks. It’s clear that the node community is not prepared for this, so I have to go.

This is a major shift for me, so I am rethinking everything. These were my criteria:

  1. Minimal dependencies
  2. No build for JS or CSS
  3. Security protection against dependencies
  4. Bonus if I already know how to do it

The first, and easiest, decision was that I was going to move all development from my Mac to Linux. I’ll talk about this tomorrow in Part II.

NaBloWriMo 2025

In October, I committed to writing a blog post every day in November as a kind of NaNoWriMo, but geared to what I usually write. Of course, when November 1 came, I forgot.

Luckily, I am ok with backdating blog posts, so today (November 5), I wrote for November 1-4. I looked through my drafts and picked a few to finish, but it doesn’t look like I have any other drafts good enough to develop.

Here are some things that are going on with me recently (that I plan to write about).

  1. I have gone all-in on Python for web development.
  2. I started a project to implement the ideas in my book, Swimming in Tech Debt.
  3. I am about five months into learning Spanish.
  4. I’m starting on my experiments that will become my 2026 Yearly Theme.
  5. I am almost done with the print version of Swimming in Tech Debt, which should go on sale soon.
  6. I found a mutation tester I like (mutmut) and made a PR to it to add a feature I needed.

Code Coverage Talk at STARWEST

Last September, I spoke about enhancing code coverage at STARWEST. My talk was based on ideas that I introduced in Metrics that Resist Gaming and some related posts.

The key points are that metrics should be:

  • able to drive decisions.
  • combined to make them multi-dimensional.
  • based on leading indicators that will align to lagging indicators.

And then I applied that to code coverage. I combined it with code complexity, the location of recent code changes, analytics, and then I stress tested the covered tests using mutation testing. The idea is that you should care more about coverage when the code is hard to understand, was just changed, or users depend on it more. And since coverage is only half of what you need to do to test (i.e. you also need to assert), mutation testing will find where you have meaningless coverage.
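As an illustration of that last point (a made-up example, not from the talk): a test can execute every line of a function, earning 100% coverage, while asserting nothing, and mutation testing is what exposes it.

```python
def apply_discount(price: float, pct: float) -> float:
    return price * (1 - pct / 100)

# This test executes every line of apply_discount (100% coverage) but makes
# no assertion, so a mutant that flips "-" to "+" survives unnoticed.
def test_weak():
    apply_discount(100, 10)

# This test kills that mutant: with "+" the result would be 110.0, not 90.0.
def test_strong():
    assert apply_discount(100, 10) == 90.0
```

A tool like mutmut makes these small edits automatically and reports which ones no test catches.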

As a bonus fifth enhancement, I talked about making sure you were getting the business results of better testing. For that, I spoke about DORA and specifically the metrics that track failed deployments and the mean time to recovery from that failure.

Algorithmic Code Needs A Lot of Comments

I recently read an online debate between Bob Martin and John Ousterhout about the best way to write (or re-write) the prime number algorithm in The Art of Computer Programming by Donald Knuth. Having read the three implementations, I think Knuth’s original is the best version for this kind of code and the two rewrites lose a lot in translation.

My personal coding style is closer to Ousterhout’s, but that’s for application code. Algorithmic code, like a prime number generator, is very different. For most application code, the runtime performance will be fine for anything reasonable, and the most important thing to ensure is that the code is easy to change, because it will change a lot. Algorithmic code rarely changes, and the most likely thing you would do is a total rewrite to a better algorithm.

I have had to maintain a giant codebase of algorithmic code. In the mid to late 2000’s, I worked at Atalasoft, which was a provider of .NET SDKs for Photo and Document Imaging. We had a lot of image processing algorithms written in C/C++.

In the six or so years I was there, this code rarely changed. It was extensively tested with a large database of images to make sure it didn’t regress when we updated dependencies or the compiler. The two main reasons we would change this code were to (a) fix an edge case or (b) improve performance.

The most helpful thing this code could have is a lot of documentation. It was very unlikely that the person changing it would know what the code was doing. It would probably have been years since its last change, and unless it was our CTO making the change, there is no way anyone could understand it quickly just from reading the code. We needed this code to run as fast as possible, so it probably used C performance-optimization tricks that obfuscated the code.

Both Ousterhout and Martin rewrote the code in ways that would probably make it slower at the extremes, which is not what you want to do with algorithmic code. Martin’s penchant for decomposition is especially not useful here.

Worse than that, they both made the code much harder to understand by removing most of the documentation. I think they both admitted that they didn’t totally understand the algorithm, so I’m not sure why they think reducing documentation would be a good idea.

To be fair, this code is just not a good candidate to apply either Ousterhout’s or Martin’s techniques, which are more about API design. Knuth described the goal of his programming style this way:

If my claims for the advantages of literate programming have any merit, you should be able to understand the following description more easily than you could have understood the same program when presented in a more conventional way.

In general, I would not like to have to maintain code with Knuth’s style if the code needed to be changed a lot. But, for algorithmic code, like in his books or in an image processing library, it’s perfect.
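To make the contrast concrete, here is a sketch of my own (not Knuth’s program, nor either rewrite) of what heavily commented algorithmic code looks like: a trial-division prime generator where the comments carry the reasoning the code alone can’t.

```python
def first_primes(n: int) -> list[int]:
    """Return the first n primes by trial division against known primes."""
    primes: list[int] = []
    candidate = 2
    while len(primes) < n:
        # Any composite c has a prime factor no larger than sqrt(c), so we
        # only need to test primes p with p * p <= candidate. Testing only
        # primes (rather than all smaller integers) is what keeps this fast.
        is_prime = True
        for p in primes:
            if p * p > candidate:
                break  # no remaining prime can divide candidate
            if candidate % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(candidate)
        candidate += 1
    return primes
```

Delete the comments and the early `break` becomes a puzzle; that is the kind of knowledge the rewrites discarded.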

Tech Debt is Caused by Correct Behavior

I understand why tech debt is often conflated with “taking a shortcut”, but in my experience, the hardest tech debt problems are caused by doing everything right and having wild success.

In 2006, I joined Atalasoft, which, at that point, was the leading provider of .NET Photo and Document Image SDKs. That success was driven by two early architectural decisions.

The first was to embrace .NET managed code interfaces and to not try to wrap Windows COM DLLs. This meant that every product was packaged as a .NET Assembly with no external dependencies; legacy, C-based image libraries were statically linked, which meant we had to code the wrappers in Managed C++. This turned out to be the optimal choice in 2002 (as opposed to pure .NET or wrapped COM).

Another (correct) choice was to assume that our web-based code was embedded in an ASP.NET web application, which allowed us to have advanced viewing capabilities that worked seamlessly inside of them.

The company was successful, and that success led to tech debt related issues, which we could mostly handle, but when we were acquired, our core assumptions were invalidated.

In 2011, the company (and our codebase) was over 10 years old, and we had already started exploring how to move from our initial market to support pure managed code and non-.NET web apps. Then, Kofax acquired us and gave us 2.5 years to create a cross-platform port that ran on Java VMs under Linux, well beyond what our architecture currently supported, but in line with our direction and what we had started to do.

When you’re just starting out, shortcuts are not the biggest problem—over-engineering is. You have no users and your goal is to get some, and the best way to do that is to deploy something and get feedback. It’s fine if your solutions don’t scale because scale isn’t part of the spec.

In our case, if we had started with making generic web components that worked in Java and .NET, it would have taken a lot longer and been worse for .NET developers. We might not have been able to convince them to try us. Instead, by going fast, Atalasoft was first to market with .NET imaging components and gained a lot of early mindshare. By the time the incumbent Microsoft component makers ported to .NET, we were already established as .NET-native (and they were wrapped COM components).

But, more instructive was that some new entrants tried a pure .NET approach (rather than our hybrid approach). That architecture is, to be clear, “correct”, but it didn’t work at all in 2002. They mostly did not survive, and we moved there ourselves in about 2009.

Make a Programmer, Not a Program

From what I have seen, pure vibe coding isn’t good enough to produce production software that is deployed to the public web. This is hard enough for humans. Even though nearly every major security incident or outage was caused by people, it’s clear that that’s just because we haven’t been deploying purely vibe-coded programs at scale.

But, it’s undeniable that vibe coding is useful, and that it would be great if we could take it all of the way to launch. Until then, it’s up to the non-programming vibe coder to level up and close the gap. Luckily, the same tools they use to make programs can also be used to make them into programmers.

Here’s what I suggest: Try asking for very small updates and then reading just that difference. In Replit, you would go to the git tab and click the last commit to see what changed. Then, read what the agent actually said about what it did. See if you can make a very related change yourself. For example, getting spacing exactly right or experimenting with different colors by updating the code yourself.

Do this to get comfortable reading the diffs and, eventually, to be able to read the code. The next step is being able to notice when code is wrong, which is most of what I do these days.

Teaching Your Book Before You Write It

In the Useful Authors group, I learned to test my book’s content by teaching it first. To some extent, I did that on this blog. But that doesn’t work as well as doing it in a way where you can get a reaction. This lets you figure out if what you plan to write will resonate with your audience.

With Swimming in Tech Debt, my main way of teaching the book was to talk about tech debt with my clients and my developer friends. I would also make LinkedIn posts with excerpts from what I was writing. So much of the book, though, is based on past conversations I had had for decades at work. Some of those conversations were meant as “teaching” via training, feedback, or mentorship. A lot of it was just figuring it out as a group.

I also shared chapters online in places where it was easy to give feedback (like helpthisbook.com). Some readers have invited me to speak to groups inside of their companies. Part 4 of the book (for CTOs) started as a presentation. I was also asked for an excerpt by the Pragmatic Engineer. His audience’s reaction in comments and emails helped shape the book. It let me know which parts were the most useful and worth expanding on.

One thing I didn’t do early enough was to turn my pre-book notes into conference pitches. I finally did that after the first draft was done, and next week, I’ll be sharing that with QA professionals at STARWEST.

In all of these cases, you are the proxy for your book before you write it. You just tell people the things you plan to write. You are hoping that it leads to a conversation where you learn if your ideas are worth writing about.

My “Show HN” Follow-Up for “Swimming in Tech Debt”

I am sharing this in the spirit of posts like this, this, and this that give insight into what it’s like to have a popular Show HN post. Like those posts, I have stats, but I didn’t make my post very strategically, so I don’t have much advice about that. What happened to me was more accidental (as I will describe), but I have some ideas on what worked.

Timeline

On September 6th, I woke up to find that I had pre-sold 50 copies of Swimming in Tech Debt overnight. For context, at this point I had sold about 125 copies, mostly to people on my mailing list, my personal network, and through LinkedIn. That had started on August 16th with most of the sales in the first week. My daily sales were in the single digits, so the 50 was a big surprise.

My first instinct was that there was some kind of problem. But, I did a quick search and saw the Hacker News post, so I clicked it.

Even though I could see that it had a lot of upvotes and discussion, it was surprising to me because I had posted the “Show HN” four days prior. It had gotten a few votes, no comments, and had scrolled off the bottom of the Show HN front page. I had forgotten about it.

I noticed two things about this post immediately: (1) it had a new date, and (2) the “Show HN” had been removed from the post title. The post was still attributed to me, but I had not reposted it. I don’t know how this happened, but my post history shows two posts. The newer one eventually had the “Show HN” put back in, but not by me.

I went into the discussion and saw some good and bad feedback on the book (and also for HelpThisBook.com (HTB), the site I was using to host the book). To be honest, my initial reaction to the bad feedback was defensiveness. But, I replied to everything in the way I would want to be addressed: answering questions, thanking people, and explaining my point of view to those with criticism.

Stats

I am not privy to all of the statistics because I don’t run HelpThisBook (HTB), which I get access to as a benefit of being in the Useful Authors Group started by Rob Fitzpatrick, author of Write Useful Books. (Note: our group is running a free 6-week writing sprint starting on September 18th. Hope to see you there.)

Here’s what I can see in the data I have access to:

There have been 23,000 clicks to this version of the book. I don’t have referral information, but the vast majority have to be from HN (and the various HN mirrors).

On HTB, I can see readers progressing through the book. A few people finish every day (maybe they buy, I don’t know), and several more are finding it and starting to read each day. They can highlight and give feedback, which they are doing. I used this feature a lot while developing the book (at a much smaller scale) to help make sure the book was engaging readers.

There is a huge drop-off at the first chapter. Perhaps this is due to the HTB UX (it was somewhat criticized in the HN comments). It is also undoubtedly because of the content itself (and is normal, IMO).

On the Amazon KDP site, I can see that in the first day, there were over 100 books sold, and as of now, the total since that day is almost 300, with the daily sales being more like 10-20.

My personal site statistics had a bump compared to the four weeks prior. So far, that has been sustained (but I am also sending more email).

My mailing list subscribers increased too (the tall bar is 24 new subscribers). I am sending excerpts from the next part each day, which is causing some unsubscribes, but if they don’t like the e-mail, then they definitely won’t like the book. I want to make sure that they have every chance of getting the book at $0.99 if they want it.

These are modest, but they are very meaningful to me.

What Makes a Good Show HN Post

In my experience reading Show HN, the most important thing is having something worth showing. I hope that that’s the main reason this post did well. But, I can’t deny that something happened (either a glitch or moderator changes) that boosted this post’s chances.

I also think that early comments (good and bad) helped it get traction. When I first went to the post, the top comment was a very funny response about writing and tech debt. There were a few very negative comments, which I engaged with respectfully. Since I had already gotten 50 sales, I knew that the book had at least resonated with some readers. Tech debt is a topic that people have strong feelings about, and I think that drove early comments.

You can’t control any of that, but what you can do is be ready when it happens. Having something for people to do (sign up for a newsletter or buy a book) helps you make something out of the traffic beyond just hits to your blog. Although HTB was a great choice for gathering feedback from beta readers, if I were posting finished work, I might choose a simpler option where I would have more control over the experience and access to the stats.

What’s Next

I just made the final version of the EPUB for Amazon and set the release date to September 16th. My plan is to leave it at $0.99 for a few days as a kind of soft launch. I don’t want to raise the price until it has reviews.

Then, I will work on the print book. I hope it will be done in October. If you want to be notified when it is ready, the best way is to sign up for my mailing list. You will also get immediate access to some of the content from Part 3 (HTB only has Parts 1 and 2).