Category Archives: Software Development

Dev Stack 2025, Part IV: HTMX

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

I have been a fan of server-authoritative UI since the ’90s and have worked to make it more interactive. The general idea is that there is no application code running on the client and that the server handles all events and renders updates.

Regular HTML webpages with no JavaScript are an example of this style. So are 1960s-style mainframes with dumb terminals. There are several systemic advantages to this architecture, but one big disadvantage is that granular interactivity is hard. In the past four years, I went the complete opposite way by using React and essentially building a fat client in the browser. But when I saw HTMX last year, I thought I could go back at some point.

That point is now.

Everything is on the table, and since I will not use NPM, React becomes much harder to use. My drive to simplicity just won’t accept the dependency footprint anymore. HTMX is dependency-free. Exactly what I want.

HTMX is HTML with some extensions that make it possible for the server to update a page without reloading it, either over an HTTP request or a WebSocket. The wire protocol is HTML partials that replace elements in your DOM.
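As a sketch of what that looks like (hypothetical URLs and view names, with Django as the server since that’s where this stack is headed), here’s the basic pattern: an element asks the server for HTML, and HTMX swaps the response into the DOM.

```python
# A minimal sketch of the HTMX pattern, assuming Django on the server.
# The URLs, view names, and static path for htmx are all made up.
from django.http import HttpResponse

def page(request):
    # The page declares what to fetch (hx-get), where to put it
    # (hx-target), and how to swap it in (hx-swap). No app code
    # runs on the client; htmx itself is the only script.
    return HttpResponse("""
        <script src="/static/htmx.min.js"></script>
        <button hx-get="/clicked" hx-target="#count" hx-swap="innerHTML">
            Click me
        </button>
        Clicks: <span id="count">0</span>
    """)

def clicked(request):
    # The "wire protocol" is just an HTML partial that replaces #count.
    count = request.session.get("count", 0) + 1
    request.session["count"] = count
    return HttpResponse(str(count))
```

The important part is that the only thing crossing the wire is HTML; there is no JSON API or client-side state to manage.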

I started an application in it three weeks ago that I’ll talk about after this series. Tomorrow, I want to talk about why I am going back to VSCode/Copilot after switching to Cursor earlier this year.

Intrinsically Safe

Twenty-five years ago, I was at a startup making mobile apps for a chemical company. Their CTO explained the concept of Intrinsically Safe to me. The apps we made would run on devices that were custom built so that they could never cause an accident. This meant that if they were dropped, they wouldn’t spark and cause a fire. Only intrinsically safe objects could be brought inside the factory.

We (at the startup) loved this, so we adopted the phrase “Intrinsically Safe” to describe our product (an SDK for making web/mobile applications) because it fit.

In our system, the programmer never wrote code that went to the client side, so it was always safe to run an app made with it. This was more than just a sandbox; it was intrinsically safe because application code only ran on the server. We need to apply this idea (separating system and application code) to vibe coding.

We need new applications and frameworks that are opinionated on the technical details and let non-coders specify the application logic only. When I look at vibed code, those ideas are conflated—you ask for some simple application logic, and the AI might accidentally open a security hole because that code is in the same file.

What would an intrinsically safe system look like? Something like:

For non-coders

1. More emphasis on visual manipulation. Learn from Excel, WebFlow, Notion, AirTable, etc., about how to make things that can be further developed with point-and-click. Let them express themselves in no-code ways (which are intrinsically safe).

2. Full deployment support (like Replit)

3. Let them start with Figma-like tools? (See Kombai)

On the inside:

1. A programming language where you can’t express dangerous constructs. I would like some combo of the correctness spirit of Rust with the dynamism/immutability and system growth spirit of Clojure.

2. In my experience, AI seems to be a little better at code with types. So, maybe Clojure/Spec and partial types.

3. Or maybe something like Eve where your application is driven by (intrinsically safe) data constructs

4. A very opinionated auth, roles/responsibilities, multi-tenant user system that can be configured without code (see the sketch after this list).

5. An API layer that implements everything we know about rate-limiting, security, etc.
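Purely hypothetical, but for items 4 and 5, I imagine the configuration looking something like declarative data that the platform interprets, so the app author (or the AI) never writes auth or API code at all:

```python
# Purely hypothetical: a no-code, declarative description of tenancy,
# roles, and API policy. The platform interprets this data; none of it
# is application code, which is what makes it intrinsically safe.
APP_POLICY = {
    "tenancy": "isolated-per-customer",   # cross-tenant queries impossible
    "roles": {
        "admin":  {"can": ["read", "write", "invite-users"]},
        "member": {"can": ["read", "write"]},
        "viewer": {"can": ["read"]},
    },
    "api": {
        "rate_limit": "100/minute/user",  # enforced by the platform
        "auth": "session",                # only pre-built options allowed
    },
}
```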

If done right, anything the AI made would be ok to deploy because it’s not building the system. For sure, there will be problems, but whole classes of issues would go away.

Dev Stack 2025: Part III, Django

I learned Django in 2006 and used it as my main way to make web applications for side projects until 2021, when I decided to move to node/express for my backend. It’s time to go back.

As I mentioned yesterday, my stack changes are driven by the prevalence of supply chain attacks and my interest in Agentic AI software development. NPM/Node seems especially vulnerable to these attacks, which is why I am leaving that ecosystem. I considered Rails and Django. In the end, even though I think Rails may be doing more things right, I already know Python, use it for other projects, and Django is close enough.

To me, the main reason to pick Rails or Django in 2025 is that it provides good defaults that act as constraints on the AI. When I look at vibe-coded projects, the AI they use prefers node/express, which lets it do anything, including things that it shouldn’t do. It doesn’t seem to impose or learn any patterns. These constraints also help me not mess things up, and help me notice when the AI is making mistakes.

In my Django app, authentication and an admin panel are built-in. I don’t need to rely on the AI to build it for me. This also means that we (the AI and I) can’t mess it up.
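As a concrete sketch of what “built-in” means here (standard Django; a real project would add its own routes):

```python
# urls.py -- a sketch of how much Django gives you out of the box.
# The admin panel and the auth views (login, logout, password reset)
# are framework code, so I never ask the AI to write them.
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path("admin/", admin.site.urls),                          # built-in admin panel
    path("accounts/", include("django.contrib.auth.urls")),  # built-in auth views
]
```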

I have also decided to move away from React (which I will really miss), but again, its dependency story is too scary for me. I am going with HTMX and server-based UI (something I have been trying to return to). I’ll tell you why tomorrow.

Dev Stack 2025: Part II – Linux

I have been developing on a Mac full-time since 2013, but I’m rethinking how I do everything. Switching to Linux was the easiest choice.

To me, software development is becoming too dangerous to do on my main machine. Supply chain attacks and agentic AI are both hacking and data-destruction vectors and need to be constrained. Given that, I decided to build a machine that was truly like factory equipment: I would only do development on it and give it very limited access.

I wanted it to be a desktop to maximize the power per dollar, and since I don’t do any iOS development anymore, there was no reason not to pick Linux.

Mac laptops are perfect for my usage as a consumer computer user. The battery life and trackpad are unmatched. My MacBook Air is easy to transport and powerful enough. But as a desktop development machine, Macs are good, but not worth the money for me. I decided to try a Framework instead, which I might be able to upgrade as it ages.

When I got it, I tried an Arch variant first, but it was too alien to me, so I reformatted the drive and installed Ubuntu. I spend almost all of my time in an IDE, browser, and terminals, and Ubuntu is familiar enough.

Having not used a Linux desktop before, here’s what struck me:

  1. Installing applications from a .deb or through the App Center is more fraught than you’d think. It seems easy to install something malicious through typos or untrusted developers in App Center. Say what you want about the App Store, but apps you install there must be signed.
  2. Speaking of App Center: its UI flickers quite a lot. Hard to believe they shipped this.
  3. Generally, even though I had never used Ubuntu Desktop before, it was intuitive. The default .bashrc was decent.
  4. I like the way the UI looks, and I’m confident that if I didn’t, I could change it. I need that now that my taste and Apple’s are starting to diverge.
  5. I still use a Mac a lot, so getting used to CTRL (on Ubuntu) vs. CMD (on Mac) is a pain.
  6. I was surprised that I need to have a monitor attached to the desktop in order to Remote Desktop to it (by default).

In any case, I set up Tailscale, so using this new desktop remotely from my Mac is easy when I want to work outside of my office.

My next big change was to go back to Django for web application development (away from node/express). I’ll discuss why tomorrow.

Changing my Dev Stack (2025), Part I: Simplify, Simplify

My life used to be easy. I was an iOS developer from 2014-2021. To do that, I just needed to know Objective-C and then Swift. Apple provided a default way to make UI and it was fine.

But, in 2021, when I went independent, I decided to abandon iOS and move to a React/Node stack for web applications (which I wanted to be SPAs). I chose React and React Native. It was fine, but I have to move on.

The main reason is the complexity. For my application, which is simple, there is an insane number of dependencies (which are immediate tech debt, IMO). Hello World in TypeScript/Node/React will put 36,000 files (at last count) in node_modules. That sprawl has become a prime target for hackers, who use it as a vector for supply chain attacks. It’s clear that the node community is not prepared for this, so I have to go.

This is a major shift for me, so I am rethinking everything. These were my criteria:

  1. Minimal dependencies
  2. No build for JS or CSS
  3. Security protection against dependencies
  4. Bonus if I already know how to do it

The first, and easiest, decision was to move all development from my Mac to Linux. I’ll talk about this tomorrow in Part II.

Code Coverage Talk at STARWEST

Last September, I spoke about enhancing code coverage at STARWEST. My talk was based on ideas that I introduced in Metrics that Resist Gaming and some related posts.

The key points are that metrics should be:

  • able to drive decisions.
  • combined to make them multi-dimensional.
  • based on leading indicators that will align to lagging indicators.

And then I applied that to code coverage. I combined it with code complexity, the location of recent code changes, and analytics, and then I stress-tested the tests behind that coverage using mutation testing. The idea is that you should care more about coverage when the code is hard to understand, was just changed, or users depend on it more. And since coverage is only half of what you need to do to test (i.e. you also need to assert), mutation testing will find where you have meaningless coverage.
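Here is a rough sketch of combining two of those signals, assuming coverage.py’s JSON report and the radon library; the weighting heuristic is just illustrative:

```python
# A sketch of combining coverage with complexity to prioritize testing.
# Assumes `coverage json` has produced coverage.json and radon is
# installed; the scoring heuristic is made up for illustration.
import json
from radon.complexity import cc_visit

cov_files = json.load(open("coverage.json"))["files"]

scores = []
for path, data in cov_files.items():
    uncovered = 100.0 - data["summary"]["percent_covered"]
    source = open(path).read()
    # Total cyclomatic complexity of all functions/classes in the file.
    complexity = sum(block.complexity for block in cc_visit(source))
    # Heuristic: hard-to-understand code with low coverage comes first.
    scores.append((uncovered * complexity, path))

for score, path in sorted(scores, reverse=True)[:10]:
    print(f"{score:10.1f}  {path}")
```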

As a bonus fifth enhancement, I talked about making sure you were getting the business results of better testing. For that, I spoke about DORA and specifically the metrics that track failed deployments and the mean time to recovery from that failure.
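For reference, those two DORA metrics are simple arithmetic over your deployment log; here’s a toy sketch with made-up records:

```python
# Hypothetical deployment records: (deploy_id, failed?, hours_to_recover).
deploys = [
    ("d1", False, 0.0),
    ("d2", True, 3.5),   # failed, recovered in 3.5 hours
    ("d3", False, 0.0),
    ("d4", True, 1.5),   # failed, recovered in 1.5 hours
]

failed = [d for d in deploys if d[1]]
change_failure_rate = len(failed) / len(deploys)  # share of deploys that fail
mttr = sum(d[2] for d in failed) / len(failed)    # mean time to recovery

print(f"Change failure rate: {change_failure_rate:.0%}")  # 50%
print(f"MTTR: {mttr:.1f} hours")                          # 2.5 hours
```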

Algorithmic Code Needs A Lot of Comments

I recently read an online debate between Bob Martin and John Ousterhout about the best way to write (or re-write) the prime number algorithm in The Art of Computer Programming by Donald Knuth. Having read the three implementations, I think Knuth’s original is the best version for this kind of code and the two rewrites lose a lot in translation.

My personal coding style is closer to Ousterhout’s, but that’s for application code. Algorithmic code, like a prime number generator, is very different. For most application code, the runtime performance will be fine for anything reasonable, and the most important thing to ensure is that the code is easy to change, because it will change a lot. Algorithmic code rarely changes, and the most likely thing you would do is a total rewrite to a better algorithm.

I have had to maintain a giant codebase of algorithmic code. In the mid-to-late 2000s, I worked at Atalasoft, which was a provider of .NET SDKs for photo and document imaging. We had a lot of image processing algorithms written in C/C++.

In the six or so years I was there, this code rarely changed. It was extensively tested against a large database of images to make sure it didn’t change behavior when we updated dependencies or the compiler. The two main reasons we would change this code were to (a) fix an edge case or (b) improve performance.

The most helpful thing this code could have was a lot of documentation. It was very unlikely that whoever was changing it would know what it was doing. It had probably been years since its last change, and unless it was our CTO making the change, there was no way anyone could understand it quickly just from reading the code. We needed this code to run as fast as possible, so it used C performance optimization tricks that obfuscated it.
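To show the style I mean, here is a toy sketch (not Atalasoft’s actual code) of an odds-only prime sieve, where the comments carry the intent that the index tricks obscure:

```python
def primes_up_to(n):
    """Return all primes <= n using an odds-only Sieve of Eratosthenes.

    A toy example of comment-heavy algorithmic code. We only store
    flags for odd numbers: index i represents the number 2*i + 1,
    which halves the memory and roughly halves the work.
    """
    if n < 2:
        return []
    size = (n + 1) // 2          # flags for 1, 3, 5, ...; index i <-> 2*i + 1
    is_prime = [True] * size
    is_prime[0] = False          # 1 is not prime
    i = 1                        # index 1 is the number 3
    while (2 * i + 1) ** 2 <= n: # composites < p*p were struck by smaller primes
        if is_prime[i]:
            p = 2 * i + 1
            # First multiple to strike is p*p, at index (p*p - 1) // 2.
            # Stepping by p indices advances the number by 2*p, which
            # skips the even multiples of p that we never stored.
            for j in range((p * p - 1) // 2, size, p):
                is_prime[j] = False
        i += 1
    return [2] + [2 * i + 1 for i in range(size) if is_prime[i]]
```

Even in this tiny example, the comments are about as long as the code, and that is the right ratio for code you will not touch again for years.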

Both Ousterhout and Martin rewrote the code in ways that would probably make it slower at the extremes, which is not what you want to do with algorithmic code. Martin’s penchant for decomposition is especially not useful here.

Worse than that, they both made the code much harder to understand by removing most of the documentation. I think they both admitted that they didn’t totally understand the algorithm, so I’m not sure why they thought reducing documentation would be a good idea.

To be fair, this code is just not a good candidate to apply either Ousterhout’s or Martin’s techniques, which are more about API design. Knuth described the goal of his programming style this way:

If my claims for the advantages of literate programming have any merit, you should be able to understand the following description more easily than you could have understood the same program when presented in a more conventional way.

In general, I would not like to have to maintain code with Knuth’s style if the code needed to be changed a lot. But, for algorithmic code, like in his books or in an image processing library, it’s perfect.

Make a Programmer, Not a Program

From what I have seen, pure vibe coding isn’t good enough to produce production software that is deployed to the public web. This is hard enough for humans. Even though nearly every major security incident or outage was caused by people, it’s clear that that’s just because we haven’t been deploying purely vibe-coded programs at scale.

But it’s undeniable that vibe coding is useful, and it would be great if we could take it all the way to launch. Until then, it’s up to the non-programming vibe coder to level up and close the gap. Luckily, the same tools they use to make programs can also be used to make them into programmers.

Here’s what I suggest: Try asking for very small updates and then reading just that difference. In Replit, you would go to the git tab and click the last commit to see what changed. Then, read what the agent actually said about what it did. See if you can make a very related change yourself. For example, getting spacing exactly right or experimenting with different colors by updating the code yourself.

Do this to get comfortable reading the diffs and to eventually be able to read the code. The next step is being able to notice when code is wrong, which is most of what I do these days.

How to Get Changes Through QA Faster

In PR Authors Have a lot of Control on PR Idle Time, I made the argument that there is work the author can do before they open a PR that would get the review started faster. I followed up in A Good Pull Request Convinces You That it is Correct to show how to make the review faster once it has started. The upshot is that you do a code review on your own code first and fix the problems you find. That work doesn’t take long (an hour?), but shaves hours or days off the code review.

The same technique works for QA: Do your own testing and update the issue/bug/story/card in your work database to make it clear what the change was and how you have already tested it (with proof).

The worst-case scenario for a code review and for QA is the same: your code has a defect that you should have found. You can do work up-front to make sure this doesn’t happen, and that work is short compared to the wasted time that skipping it will cause.

I assume that you will test your code before you submit it. Hopefully you do that through automated unit tests, which should include edge cases. You should go beyond that and anticipate what QA will check.
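For example, here’s the level of edge-case coverage I mean, against a hypothetical function (pytest):

```python
import pytest

def discounted_price(price, percent):
    """Hypothetical function under test: apply a percent discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_case():
    assert discounted_price(80.00, 25) == 60.00

def test_edge_cases():
    assert discounted_price(80.00, 0) == 80.00    # no discount
    assert discounted_price(80.00, 100) == 0.00   # full discount
    assert discounted_price(0.00, 50) == 0.00     # free item

def test_invalid_discount_rejected():
    # QA will definitely try an out-of-range value; beat them to it.
    with pytest.raises(ValueError):
        discounted_price(80.00, 150)
```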

Like with code reviews, this extra work takes a couple of hours and potentially saves days of back-and-forth between you and your testers. If you don’t have any ideas about what to test, check with AI chatbots; they are pretty good at this and can even generate the tests.

If you can’t automate the test, then you still need to manually test it when you write it, so it’s a good idea to make some record of this work. For example, for UI code, which is hard to unit test, create a document with before and after screenshots (or make a video showing what you changed).

These ideas also help with another source of QA feedback: that they don’t even understand what the issue/story/bug is. The way I head that off is by attaching a “Test Plan” document with a description of how to see the change in the application and what specifically was changed. A video works here too.

When QA finds a problem that I could not have found, I am relieved. But when they kick something back because it wasn’t explained well, or because I made a stupid mistake I could have easily caught, I feel guilty that I wasted their time (and mine). I’ve never regretted taking a little time at the end of a task to help it go smoothly through the rest of the process.

The Infinity-X Programmer

Forget about the 10-X programmer. I think we’re in a time where AI coding assistants can make you much better than that.

Even if you think I’m crazy, I don’t think it’s a stretch that some programmers, particularly less experienced ones, will get a big relative boost compared to themselves without AI. Meaning, they could become 10x better using Cursor than they would be if they didn’t use AI at all.

The norm is less for experienced devs. I think I’m getting about a 2x or 3x improvement on my most AI-amenable tasks. But when I have to work on projects where I don’t know the language ecosystem as well, it’s much more. So, it’s less about overall skill and more about familiarity. As long as you know enough to write good prompts, the less you know, the bigger the multiple you get. For example, on my main project, I might save an hour on a 4-hour task, but a junior dev might save days on that same task. Even if I finish it faster this time, they will keep improving on that kind of task until we’re about the same.

But I also think it’s possible to get very high absolute multipliers over any unassisted programmer on projects that are not even worth attempting without AI assistance.

I’ve started calling this Infinity-X programming. I’m talking about projects that would take a programmer weeks to complete, where no one is sure they’re worth the time or cost. Using tools like Cursor and Replit, I’ve seen a person with some programming ability (but not enough to program unassisted) build one on the side, working on it for fun just because they want to. They get somewhere fast, and now we might approve more work because we can see the value and it feels tractable. I’ve seen this happen a few times in my network lately.

It’s not just “non-programmers”. I’m also seeing this among my very experienced programmer colleagues. They are trying very ambitious side projects that would be way too hard to do alone. They wouldn’t have even tried. But now, with AI, they can make a lot of progress right away, and that progress spurs them on to do even more.

Without AI, these bigger projects would be too much of a slog, with too many yak-shaving expeditions, and lots of boring boilerplate and bookkeeping tasks. But, with AI, you get to stay in the zone and have fun, making steady progress the whole way. It makes very big things feel like small things. This is what it feels like to approach infinity.