Moving from React to HTMX

Until very recently, I was building web applications in React, but now I’ve moved to HTMX. Here are the main differences so far. For reference, my React style was client-side only web applications.

  1. Client State: In React, I used Redux, which held almost all of the client state. Some state that I needed to be sticky was put into cookies, and there was some transient state in useState variables in components. Redux was essentially a cache of the server database. In HTMX, the client state is mostly in the HTML itself, in a style called Hypermedia as the Engine of Application State (HATEOAS). Right now, I do cheat a little, sending a small JavaScript object in a script tag for my d3 visualizations to use.
  2. Wire Protocol: To feed the React applications, I had been using a GraphQL API. In HTMX, we expect HTML from REST responses and even over the websocket.
  3. DOM Updates: In React, I used a classic one-way cycle. An event would trigger an API call and an optimistic update to Redux. The Redux change would trigger React component re-renders. If the API call failed, the app would roll back the Redux state, re-render, and show any error notifications. In HTMX, the HTML partials sent back from the REST request or websocket use the element’s id and HTMX custom attributes to swap out parts of the DOM.
  4. Markup Reuse: In React, the way to reuse snippets of markup is by building components. In HTMX, you do this with whatever features your server language and web-framework provide. I am using Django, so I use tag templates or simple template inclusions. Aesthetically, I prefer JSX over the {%%} syntax in templates, but it’s not a big deal. There are other affordances for reuse in Django/Python, but those are the two I lean on the most.
  5. Debugging: In React, I mostly relied on browser developer tools, but they required me to mentally map markup to my source. This was mostly caused by my reliance on component frameworks, like React Native Paper and Material. In HTMX, the source and the browser page are a very close match because I am using simpler markup and Bulma to style it. It’s trivial to debug my JavaScript now because it’s all just code I wrote.
  6. UI Testing: In React, I used React Testing Library to test the generated DOM. In Django, I am using the built-in testing framework to test Django views and the HTML generated by templates. Neither does snapshot testing to make sure it “looks right”. These tests are more resilient than that: they make sure the content and local state are correct. I could use Playwright for both (to test browser rendering) and it would be very similar.
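To make item 3 concrete, here is a toy, framework-free sketch of the swap idea: the server returns a partial whose id matches an element already on the page, and that element gets replaced. (The helper and element names are made up; real HTMX parses the DOM rather than regexing strings.)

```python
import re

def swap_by_id(page: str, partial: str) -> str:
    """Toy stand-in for HTMX's id-based outerHTML swap on a flat HTML string."""
    id_match = re.search(r'id="([^"]+)"', partial)
    if not id_match:
        raise ValueError("partial has no id attribute")
    elem_id = id_match.group(1)
    tag = re.match(r"<(\w+)", partial).group(1)
    # Replace the whole element carrying that id (assumes no nested
    # element with the same tag inside it -- fine for a sketch).
    pattern = rf'<{tag}[^>]*id="{re.escape(elem_id)}"[^>]*>.*?</{tag}>'
    return re.sub(pattern, partial, page, count=1, flags=re.DOTALL)

page = '<div id="score">old</div><div id="other">keep</div>'
partial = '<div id="score">new</div>'
print(swap_by_id(page, partial))
```

Only the element whose id matches the partial is touched; the rest of the page is left alone, which is the whole trick.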

I should also mention that the client-server architecture of my projects is quite different. In React, I was building a fat-client, mobile-web app designed so that it could work offline. The GraphQL API was purely a data-access and update layer. All application logic was on the client side. All UI updates were done optimistically (i.e. applied on the client side, assuming the API update would succeed), so in an offline version, I could queue up the server calls for later.

My Django/HTMX app could never be offline. There is essentially no logic in the client side. The JavaScript I write is for custom visualizations that I am building in d3. They are generic and are fed with data from the server. Their interactions are not application specific (e.g. tooltips or filters).
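That hand-off is simple in practice: the server serializes a small data payload into a script tag, and the d3 code reads it. Here is a hedged stdlib sketch (the element id and fields are made up; Django’s json_script template filter does this properly, including escaping):

```python
import json

def data_script_tag(element_id: str, data: dict) -> str:
    """Embed server data in the page for client-side scripts (d3 here)
    to read -- data crosses the wire, application logic does not."""
    # NOTE: a real implementation must also escape "</script>" inside
    # the payload; Django's json_script filter handles that.
    return (f'<script id="{element_id}" type="application/json">'
            f'{json.dumps(data)}</script>')

tag = data_script_tag("viz-data", {"points": [3, 1, 4]})
print(tag)
```

On the client, the d3 code just does `JSON.parse(document.getElementById("viz-data").textContent)` and stays generic.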

This difference has more to do with what I am building, but if I anticipated a future where offline support was needed, I would not choose HTMX (or server-rendered React).

Early Thoughts on HTMX

I found out about HTMX about a year ago from a local software development friend whose judgement I trust. His use case was that he inherited a large PHP web application with very standard request/response page-based UI, and he had a need to add some Single Page Application (SPA) style interactions to it. He was also limited (by the user base) to not change the application drastically.

HTMX is perfect for this. It builds on what a <form> element already does by default. It gathers up inputs, creates a POST request, and then expects an HTML response, which it renders.

The difference is that, in HTMX, any element can initiate any HTTP verb (GET, POST, DELETE, etc.) and the response can replace just that element on the page (or be swapped out of band, replacing a different element). This behavior extends to websockets, which can send partial HTML to be swapped in.

To use HTMX on a page, you add a script tag. There is a JavaScript API, but mostly you add custom “hx-*” attributes to elements. It’s meant to feel like HTML. I would say more, but I can’t improve on the HTMX home page, which is succinct and compelling.
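To give a flavor of those attributes, the snippet below uses real HTMX attributes (hx-get, hx-target, hx-swap) on a made-up endpoint, with a bit of stdlib Python only to show that the “API” is nothing but markup:

```python
from html.parser import HTMLParser

# A hypothetical button: clicking it GETs /score/refresh and swaps the
# response in place of the element matching #scoreboard.
SNIPPET = """
<button hx-get="/score/refresh" hx-target="#scoreboard" hx-swap="outerHTML">
  Refresh
</button>
"""

class HxAttrCollector(HTMLParser):
    """Collect the hx-* attributes from a fragment of markup."""
    def __init__(self):
        super().__init__()
        self.hx = {}

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name.startswith("hx-"):
                self.hx[name] = value

collector = HxAttrCollector()
collector.feed(SNIPPET)
print(collector.hx)
```

Everything HTMX needs to know is declared on the element itself; there is no imperative wiring code to write.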

My app is meant to allow users to collaboratively score and plan technical debt projects. My intention is to improve on a Google Sheet that I built for my book. So, to start, it needs to have the same collaborative ability. Every user on a team needs to see what the others are doing in real-time. HTMX’s web socket support (backed by Django channels) makes this easy.

Since the wire protocol of an HTMX websocket is just HTML partials, I can use the same template tags from the page templates to build the websocket messages. Each HTML partial has an id attribute that HTMX will use to swap it in. I can send over just the elements that have changed.
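A minimal sketch of that reuse, using a stdlib template in place of a Django template tag (the model fields and ids are invented):

```python
from string import Template

# One partial template serves both the initial page render and the
# websocket message -- the id is what HTMX keys its swap on.
ROW = Template('<tr id="project-$pk"><td>$name</td><td>$score</td></tr>')

def render_row(pk: int, name: str, score: int) -> str:
    return ROW.substitute(pk=pk, name=name, score=score)

# Full page render:
page_row = render_row(7, "Upgrade auth", 42)
# Later, pushed over the websocket after another user edits the score;
# HTMX swaps it in over the element with the same id:
ws_message = render_row(7, "Upgrade auth", 55)
print(ws_message)
```

Because the page and the websocket speak the same format, there is no second serialization layer to keep in sync.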

Tomorrow, I’ll compare this to React.

Dev Stack 2025, Part X: networking

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

This last part about my new dev stack environment will be about how my machines are set up to work together. As I mentioned in the introduction to this series, my primary concern was to get development off of my main machine, which is now all on a Framework desktop running Ubuntu.

Before this, my setup was simple. I had a monitor plugged into my laptop over Thunderbolt; my keyboard was attached via the USB hub the monitor provided, and my mouse via Bluetooth. When I introduced the Framework, I moved to a USB mouse in the hub, and then I could switch my whole setup from Mac to Framework by unplugging/plugging in one USB-C cable.

But I had a few development use cases that this didn’t support well:

  1. I sometimes need to code with someone over Zoom. My webcam, mic, and headphones stay connected to the Mac.
  2. I regularly program outside of my office in co-working environments.
  3. I need to support programming while traveling.
  4. I want to be able to go back and forth between the machines while working at my desk.

To start with, I tried using remote desktop. There’s an official client for Mac made by Microsoft, and it’s built into Ubuntu. As I mentioned in my Linux post, I was surprised at how hard this was to troubleshoot. The issue is that you can’t RDP to a Linux box unless it is actively connected to a monitor. So, at first, I just left it plugged in while taking the laptop outside. But this was not ideal.

There are a few solutions for this, but the easiest for me was just buying a virtual HDMI plug. They are cheap and fool the machine into thinking it has a monitor.

To get RDP to work at all, though, I needed some way for the two machines to see each other. Even in my home office, I put them on different networks on my router. But I would also need to solve this for when I’m using my laptop outside of my network. This is what Tailscale was made for.

Tailscale is a VPN, but what sets it apart is its UX. You install it on the two machines, log them in to Tailscale, and now they are on a virtual private subnet. I can RDP at my desk or from a café. I can share the Mac “space” that is running RDP over Zoom. The setup was trivial.

So far this has been fine. I don’t even notice the VPN when I am coding at home. When I am outside, it’s a little sluggish, but fine. AI coding makes it more acceptable, since I don’t have to type and navigate code as much.

Dev Stack 2025, Part IX: tooling

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

This part will be a catch-all for VSCode extensions and other tools. Some are new to me, some came over from other projects.

  1. coverage.py – I use this on Page-o-Mat, which is also in Python.
  2. Coverage Gutters – this shows coverage right in VSCode. It works with anything that produces standard coverage files, so it works well with coverage.py. I wrote about how I use that here and in my book.
  3. mutmut – This is something I have been playing around with in Page-o-Mat because of my interest in mutation testing. I contributed a feature a few months ago, which I’ll cover later this month. I’ll be using it more seriously now.
  4. flake8 and black – for linting and auto-formatting. This is more necessary as I use AI since its style adherence isn’t perfect.

I still haven’t figured out what I will do for JS testing. I used Jest before, but it doesn’t meet my criteria of low dependencies. I might have to start with this gist for TestMan.

I also need to replace my code complexity extension (it was for JS/TS). I might see how long I can last without it because the main replacements don’t have enough usage to consider installing (VSCode extensions are another hacking vector, like supply chain attacks).

Dev Stack 2025, Part VIII: uv

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

This one is easy. Before my latest project, I used pyenv, virtualenv, and then pip with requirements.txt for Python projects. But since I am friends with Becky Sweger and read this post about uv, I knew better (though I hadn’t yet overcome my inertia). Starting fresh meant that I could finally get on modern tools.

I could write more about why, but I am not going to do better than Becky, so go to her blog where she has a uv category with all of her thoughts on it.

Dev Stack 2025, Part VII: Sqlite

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

Since Django uses an ORM, switching between databases is relatively easy. I usually pick MySQL, but I’m going to see how far I can get with Sqlite.

The project I am working on is to make a better version of the tech debt spreadsheet that I share in my book (sign up for the email list to get a link and a guide for using it). The app is very likely to be open-source and to start out as something you host yourself. So, I think Sqlite will be fine, but if it ever gets to the point where it won’t work, then switching to MySQL or Postgres shouldn’t be that hard. My DB needs are simple and well within the Django ORM’s capabilities.

Even if I decide to host a version, I might decide on a DB per tenant model, which might be ok for Sqlite. Another possibility is that it would be something in the Jira Marketplace, and in that case, I’d have to rewrite the backend to use Jira for storage, but that wouldn’t be that bad because (given the Jira data-model) I only need to add some custom fields to an issue. Most of the app at that point would be the visualizations and an expert system.

One nice thing about Sqlite is that it’s trivial to host. It’s just a few files (with WAL mode). It’s also trivial to run unit-tests against during development. You can do it in-memory, which is what Django testing does by default. I can also run those test suites against more powerful databases to make sure everything works with them too.
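Both halves of that are visible from Python’s built-in sqlite3 module; a small sketch (the table and file names are invented):

```python
import os
import sqlite3
import tempfile

# In-memory database: what Django's test runner uses by default for Sqlite.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT)")
mem.execute("INSERT INTO projects (name) VALUES ('pay down auth debt')")
count = mem.execute("SELECT COUNT(*) FROM projects").fetchone()[0]

# File-backed database with WAL mode: the "few files" are the .db file
# plus the -wal and -shm sidecars that WAL mode creates.
path = os.path.join(tempfile.mkdtemp(), "app.db")
disk = sqlite3.connect(path)
mode = disk.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(count, mode)
```

The same schema and test suite can later be pointed at MySQL or Postgres; nothing here is Sqlite-specific except the PRAGMA.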

One portability issue is that if I get used to running against Sqlite, I will probably not notice performance issues. Since Sqlite is just some local files, it’s incredibly fast. You can feel free to do lots of little queries to service a request and not notice any latency issues. The same style over a network, potentially to a different datacenter, won’t work as well.
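Here is the shape of the problem in miniature: the N+1 style below is imperceptible against local Sqlite but costs one round-trip per query against a remote database, while the batched version costs one round-trip total (table and data invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scores (team TEXT, value INTEGER)")
db.executemany("INSERT INTO scores VALUES (?, ?)",
               [("a", 1), ("b", 2), ("c", 3)])

teams = ["a", "b", "c"]

# N+1 style: one query per team. Free locally, a round-trip each remotely.
totals_n1 = {t: db.execute("SELECT value FROM scores WHERE team = ?",
                           (t,)).fetchone()[0] for t in teams}

# Batched: one query, however many teams there are.
placeholders = ",".join("?" * len(teams))
rows = db.execute(
    f"SELECT team, value FROM scores WHERE team IN ({placeholders})",
    teams).fetchall()
totals_batched = dict(rows)
print(totals_n1 == totals_batched)  # True
```

Both produce the same answer, which is exactly why the lazy style never gets noticed until the database moves across a network.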

But I have seen enough evidence of production SaaS products using Sqlite, that I think I can get to hundreds of teams without worrying too much. I would love to have a performance problem at that point.

In my book, I talk about how technical debt is the result of making correct decisions and then having wild success (which invalidates those choices). I don’t like calling these decisions “shortcuts” because that word is used as a pejorative in this context. Instead, I argue that planning for the future might have prevented the success. If this project is successful, it’s likely that Sqlite won’t be part of it any more, but right now it’s enabling me to get to a first version, and that’s good enough.

Dev Stack 2025, Part VI: Bulma

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

In my drive for simplicity I have decided to have no build for scripts and CSS. This means I can’t use Tailwind, which I would otherwise choose.

In my research, I found a few options, and I have tentatively chosen Bulma. Aside from having no build, its other strength is that Copilot knows it well enough to help me use it.

I also considered Pico and Bootstrap. I preferred Bulma’s default look to Pico’s, and I have already used Bootstrap in the past, so I basically know what to expect there. I chose Bulma to see how it compares. If it falls short, I’ll move to Bootstrap. I’m pretty sure that Copilot will know it.

It’s worth saying here that if I had chosen Ruby on Rails instead of Python/Django, Hotwire would have been a sane default choice and would have played the role that HTMX is playing for me.

Dev Stack 2025, Part V: VSCode and Copilot

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

I switched to Cursor in February because its prompting capabilities were way beyond Copilot. But, I’ve increasingly become frustrated with everything else about it.

Even though I use AI to generate code, I still need to fix that code myself. I also tend to do refactoring myself. So, the IDE’s regular coding features are still important to my work. Since Cursor is a fork of VSCode, moving to it was simple, but their fork is starting to age, and it doesn’t look like they can keep up with VSCode.

The first thing I noticed was that they could no longer load the latest versions of extensions I use. When I researched why, it turned out it was because they were not merging in VSCode changes any more. When 2.0 came out last week and the extensions were still stuck, I knew they didn’t share my security priorities. Not being able to update something is a huge red flag.

So, just to check, I tried out VSCode again. They could load the latest versions of extensions (of course), but I also noticed lots of little improvements to the UI. The most striking was the speed. But, also, the exact timing of auto-complete suggestions was less intrusive than Cursor. They could both use some improvement, but by default, Copilot was a little less anxious to complete, which suits me better.

But this switch would not have been possible if the prompt-based coding was worse than Cursor. So far, in the week I have been using it, I haven’t noticed a difference. They are both not perfect, but that’s fine with me.

Ten months ago, Copilot wasn’t worth using. Now, it feels the same as Cursor. I don’t know if that might also be because my prompting has improved, but it doesn’t matter. My goal is to add a CLI based agent to my stack, so I think I would close any gap that way.

In my drive to simplify and reduce dependencies, it’s also good to be able to remove a vendor. I have to rely on Microsoft already, and I trust them, so moving to just VSCode/Copilot is a plus. I was pretty sure this was going to happen.

In April, after two months on Cursor, I wrote:

The problem for Cursor in competing with Microsoft is that Microsoft has no disincentive to follow them. [… And] because Cursor is easy to switch back from, there is actually no advantage to Cursor’s land grab. I went from VSCode with Copilot to Cursor in 20 minutes, and I could go back faster. I can run them in parallel.

Here are Microsoft’s other incumbent advantages:

  1. Infinite money
  2. Azure (gives them at-cost compute)
  3. Experience with AI engineering (built up from years of working with OpenAI)
  4. The relationship with OpenAI which gives them low-cost models
  5. 50 years of proprietary code (could this augment models?)
  6. Developer Tools expertise (and no one is close — maybe JetBrains)
  7. GitHub
  8. Control of Typescript and C#
  9. Control of VSCode (which they are flexing)

In the end, #6 might not be possible for anyone else to overcome, and it’s why I’m back.

Dev Stack 2025, Part IV: HTMX

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

I have been a fan of server-authoritative UI since the ’90s and have worked to make it more interactive. The general idea is that there is no application code running on the client and that the server handles all events and renders updates.

Regular HTML webpages with no JavaScript are an example of this style. So are ’60s-style mainframes with dumb terminals. There are several systemic advantages to this architecture, but one big disadvantage is the lack of granular interactivity. In the past four years, I went the complete opposite way by using React and essentially building a fat-client in the browser. But when I saw HTMX last year, I thought I could go back at some point.

That point is now.

Everything is on the table, and since I will not use NPM, React is much harder to use. My drive to simplicity just won’t accept the dependency footprint any more. HTMX is dependency-free. Exactly what I want.

HTMX is HTML with some extensions that make it possible for the server to update a page without reloading it, either in a REST request or over a websocket. The wire protocol is HTML partials that replace elements in your DOM.

I started an application in it three weeks ago that I’ll talk about after this series. Tomorrow, I want to talk about why I am going back to VSCode/Copilot after switching to Cursor earlier this year.

Intrinsically Safe

Twenty-five years ago, I was at a startup making mobile apps for a chemical company. Their CTO explained the concept of Intrinsically Safe to me. The apps we made would run on devices that were custom built so that they could never cause an accident. This meant that if they were dropped, they wouldn’t spark and cause a fire. Only intrinsically safe objects could be brought inside the factory.

We (at the startup) loved this, so we adopted the phrase “Intrinsically Safe” to describe our product (an SDK for making web/mobile applications) because it fit.

In our system, the programmer never wrote code that went to the client side, so it was always safe to run an app made with it. This was more than just a sandbox: it was intrinsically safe because app code only ran on the server. We need to apply this idea (separating system and application code) to vibe coding.

We need new applications and frameworks that are opinionated on the technical details and let non-coders specify the application logic only. When I look at vibed code, those ideas are conflated—you ask for some simple application logic, and the AI might accidentally open a security hole because that code is in the same file.

What would an intrinsically safe system look like? Something like:

For non-coders

1. More emphasis on visual manipulation. Learn from Excel, WebFlow, Notion, AirTable, etc about how to make things that can further be developed with point and click. Let them express themselves in no-code ways (which are intrinsically safe)

2. Full deployment support (like Replit)

3. Let them start with Figma-like tools? (See Kombai)

On the inside:

1. A programming language where you can’t express dangerous constructs. I would like some combo of the correctness spirit of Rust with the dynamism/immutability and system growth spirit of Clojure.

2. In my experience, AI seems to be a little better at code with Types. So, maybe Clojure/Spec and partial types

3. Or maybe something like Eve where your application is driven by (intrinsically safe) data constructs

4. A very opinionated auth, roles/responsibilities, multi-tenant user system that can be configured without code.

5. An API layer that implements everything we know about rate-limiting, security, etc.
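As one example of what that API layer would bake in, here is a hedged sketch of a token-bucket rate limiter (the class, names, and numbers are invented; a real system would enforce this in front of every endpoint without the app author writing a line of it):

```python
class TokenBucket:
    """A minimal token-bucket rate limiter -- the kind of policy an
    intrinsically safe API layer enforces so neither the app author
    nor the AI ever has to write (or break) it."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, capped at capacity; the clock is
        # passed in so behavior is deterministic and testable.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2.0)
results = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)]
print(results)  # burst of 2 allowed, third denied, refilled by t=1.5
```

The point is not this particular algorithm but that the policy lives in the system layer, where generated application code cannot weaken it.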

If done right, anything the AI made would be ok to deploy because it’s not building the system. For sure, there will be problems, but whole classes of issues would go away.