Category Archives: Software Development

Dev Stack 2025, Part VI: Bulma

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

In my drive for simplicity, I have decided to have no build for scripts and CSS. This means I can’t use Tailwind, which I would otherwise choose.

In my research, I found a few options, and I have tentatively chosen Bulma. Aside from having no build, its other strength is that Copilot knows it well enough to help me use it.
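To make the no-build point concrete, here is a minimal sketch (my own illustration, not from any real project): Bulma arrives via a single CDN link tag, so there is no Tailwind/PostCSS-style pipeline. The version is pinned arbitrarily here; check bulma.io for the current CDN URL.

```python
# A minimal sketch of "no build": one <link> tag instead of a CSS pipeline.
# The CDN URL/version is an assumption; check bulma.io for the current one.
from django.http import HttpResponse

PAGE = """<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet"
        href="https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css">
</head>
<body>
  <section class="section">
    <h1 class="title">Hello, Bulma</h1>
    <button class="button is-primary">Save</button>
  </section>
</body>
</html>"""

def home(request):
    # Everything renders on the server; nothing to compile or bundle.
    return HttpResponse(PAGE)
```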

I also considered Pico and Bootstrap. I preferred Bulma’s default look to Pico’s, and I have already used Bootstrap in the past, so I basically know what to expect. I chose Bulma to see how it compares. If it falls short, I’ll move to Bootstrap. I’m pretty sure that Copilot will know it.

It’s worth saying here that if I had chosen Ruby on Rails instead of Python/Django, Hotwire would have been a sane default choice and would have played the role that HTMX and Bulma are playing for me.

Dev Stack 2025, Part V: VSCode and Copilot

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

I switched to Cursor in February because its prompting capabilities were way beyond Copilot. But, I’ve increasingly become frustrated with everything else about it.

Even though I use AI to generate code, I still need to fix that code myself. I also tend to do refactoring myself. So, the IDE’s regular coding features are still important to my work. Since Cursor is a fork of VSCode, moving to it was simple, but their fork is starting to age, and it doesn’t look like they can keep up with VSCode.

The first thing I noticed was that it could no longer load the latest versions of extensions I use. When I researched why, it turned out to be because they were not merging in VSCode changes any more. When Cursor 2.0 came out last week and the extensions were still stuck, I knew they didn’t share my security priorities. Not being able to update something is a huge red flag.

So, just to check, I tried out VSCode again. It could load the latest versions of extensions (of course), but I also noticed lots of little improvements to the UI. The most striking was the speed. But also, the exact timing of auto-complete suggestions was less intrusive than Cursor’s. They could both use some improvement, but by default, Copilot was a little less anxious to complete, which suits me better.

But this switch would not have been possible if the prompt-based coding were worse than Cursor’s. So far, in the week I have been using it, I haven’t noticed a difference. Neither is perfect, but that’s fine with me.

Ten months ago, Copilot wasn’t worth using. Now, it feels the same as Cursor. That might also be because my prompting has improved, but it doesn’t matter. My goal is to add a CLI-based agent to my stack, so I think I would close any gap that way.

In my drive to simplify and reduce dependencies, it’s also good to be able to remove a vendor. I have to rely on Microsoft already, and I trust them, so moving to just VSCode/Copilot is a plus. I was pretty sure this was going to happen.

In April, after two months on Cursor, I wrote:

The problem for Cursor in competing with Microsoft is that Microsoft has no disincentive to follow them. [… And] because Cursor is easy to switch back from, there is actually no advantage to Cursor’s land grab. I went from VSCode with Copilot to Cursor in 20 minutes, and I could go back faster. I can run them in parallel.

Here are Microsoft’s other incumbent advantages:

  1. Infinite money
  2. Azure (gives them at-cost compute)
  3. Experience with AI engineering (built up from years of working with OpenAI)
  4. The relationship with OpenAI which gives them low-cost models
  5. 50 years of proprietary code (could this augment models?)
  6. Developer Tools expertise (and no one is close — maybe JetBrains)
  7. GitHub
  8. Control of TypeScript and C#
  9. Control of VSCode (which they are flexing)

In the end, #6 might not be possible for anyone else to overcome, and it’s why I’m back.

Dev Stack 2025, Part IV: HTMX

This is part of a series describing how I am changing my entire stack for developing web applications. My choices are driven by security and simplicity.

I have been a fan of server-authoritative UI since the ’90s and have worked to make it more interactive. The general idea is that there is no application code running on the client and that the server handles all events and renders updates.

Regular HTML webpages with no JavaScript are an example of this style. So are ’60s-style mainframes with dumb terminals. There are several systemic advantages to this architecture, but one big disadvantage is the lack of granular interactivity. In the past four years, I went the complete opposite way by using React and essentially building a fat client in the browser. But when I saw HTMX last year, I thought I could go back at some point.

That point is now.

Everything is on the table, and since I will not use NPM, React is much harder to use. My drive for simplicity just won’t accept the dependency footprint any more. HTMX is dependency-free. Exactly what I want.

HTMX is HTML with some extensions that make it possible for the server to update a page without reloading it, either via a REST request or over a WebSocket. The wire protocol is HTML partials that replace elements in your DOM.
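As a rough sketch of that request/partial cycle (my own illustration; the /like/ endpoint and counter are hypothetical, and the page is assumed to already load htmx’s script tag), here is a Django view that returns an HTML fragment, which HTMX swaps in place of the button:

```python
# A sketch of the HTMX request/partial cycle in Django. Clicking the button
# POSTs to the server, which responds with an HTML fragment that replaces
# the button itself -- no page reload, no client-side application code.
from django.http import HttpResponse

PARTIAL = """
<button id="likes" hx-post="/like/" hx-target="#likes" hx-swap="outerHTML">
  Likes: {count}
</button>
"""

like_count = 0  # stand-in for real persistence

def like(request):
    global like_count
    like_count += 1
    # The wire protocol is just HTML: no JSON, no client-side rendering.
    return HttpResponse(PARTIAL.format(count=like_count))
```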

I started an application in it three weeks ago that I’ll talk about after this series. Tomorrow, I want to talk about why I am going back to VSCode/Copilot after switching to Cursor earlier this year.

Intrinsically Safe

Twenty-five years ago, I was at a startup making mobile apps for a chemical company. Their CTO explained the concept of Intrinsically Safe to me. The apps we made would run on devices that were custom-built so that they could never cause an accident. This meant that if they were dropped, they wouldn’t spark and cause a fire. Only intrinsically safe objects could be brought inside the factory.

We (at the startup) loved this, so we adopted the phrase “Intrinsically Safe” to describe our product (an SDK for making web/mobile applications) because it fit.

In our system, the programmer never wrote code that ran on the client, so it was always safe to run an app made with it. This was more than just a sandbox—it was intrinsically safe because app code only ran on the server. We need to apply this idea (separating system and application code) to vibe coding.

We need new applications and frameworks that are opinionated on the technical details and let non-coders specify the application logic only. When I look at vibed code, those ideas are conflated—you ask for some simple application logic, and the AI might accidentally open a security hole because that code is in the same file.

What would an intrinsically safe system look like? Something like:

For non-coders:

1. More emphasis on visual manipulation. Learn from Excel, WebFlow, Notion, AirTable, etc., about how to make things that can be further developed with point-and-click. Let them express themselves in no-code ways (which are intrinsically safe).

2. Full deployment support (like Replit)

3. Let them start with Figma-like tools? (See Kombai)

On the inside:

1. A programming language where you can’t express dangerous constructs. I would like some combo of the correctness spirit of Rust with the dynamism/immutability and system growth spirit of Clojure.

2. In my experience, AI seems to be a little better at code with types. So, maybe Clojure/Spec and partial types.

3. Or maybe something like Eve, where your application is driven by (intrinsically safe) data constructs.

4. A very opinionated auth, roles/responsibilities, multi-tenant user system that can be configured without code.

5. An API layer that implements everything we know about rate-limiting, security, etc.

If done right, anything the AI made would be OK to deploy because it’s not building the system. For sure, there will be problems, but whole classes of issues would go away.

Dev Stack 2025, Part III: Django

I learned Django in 2006 and used it as my main way to make web applications for side projects until 2021, when I decided to move to node/express for my backend. It’s time to go back.

As I mentioned yesterday, my stack changes are driven by the prevalence of supply chain attacks and my interest in Agentic AI software development. NPM/Node seems especially vulnerable to these attacks, which is why I am leaving that ecosystem. I considered Rails and Django. In the end, even though I think Rails may be doing more things right, I already know Python, use it for other projects, and Django is close enough.

To me, the main reason to pick Rails or Django in 2025 is that it provides good defaults that can serve as constraints when using AI. When I see vibe-coded projects, the AI they use prefers node/express, which lets it do anything, including things that it shouldn’t do. It doesn’t seem to impose or learn any patterns. These constraints also help me not mess things up, and they help me notice when the AI is making mistakes.

In my Django app, authentication and an admin panel are built-in. I don’t need to rely on the AI to build it for me. This also means that we (the AI and I) can’t mess it up.
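As a sketch of what “built-in” buys you (the Invoice model is hypothetical; the admin API itself is stock Django), registering a model is all it takes to get full CRUD screens behind Django’s own login and permissions:

```python
# Registering a model with Django's built-in admin (model is hypothetical).
# This yields list, search, create, edit, and delete screens, all gated by
# Django's stock auth system -- nothing for the AI (or me) to build or break.
from django.contrib import admin
from .models import Invoice  # hypothetical model in your app

@admin.register(Invoice)
class InvoiceAdmin(admin.ModelAdmin):
    list_display = ("number", "customer", "created_at")
    search_fields = ("number", "customer__name")
```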

I have also decided to move away from React (which I will really miss), but again, its dependency story is too scary for me. I am going with HTMX and server-based UI (something I have been trying to return to). I’ll tell you why tomorrow.

Dev Stack 2025, Part II: Linux

I have been developing on a Mac full-time since 2013, but I’m rethinking how I do everything. Switching to Linux was the easiest choice.

To me, software development is becoming too dangerous to do on my main machine. Supply chain attacks and agentic AI are both hacking and data-destruction vectors and need to be constrained. Given that, I decided to build a machine that was truly like factory equipment: I would only do development on it and give it very limited access.

I wanted it to be a desktop to maximize the power per dollar, and since I don’t do any iOS development anymore, there was no reason not to pick Linux.

Mac laptops are perfect for my use as a consumer. The battery life and trackpad are unmatched. My MacBook Air is easy to transport and powerful enough. As a desktop development machine, though, Macs are good but not worth the money for me. I decided to try a Framework instead, which I might be able to upgrade as it ages.

When I got it, I tried an Arch variant first, but it was too alien to me, so I reformatted the drive and installed Ubuntu. I spend almost all of my time in an IDE, browser, and terminals, and Ubuntu is familiar enough.

Having not used a Linux desktop before, here’s what struck me:

  1. Installing applications from a .deb or through App Center is more fraught than you’d think. It seems easy to install something malicious through typos or untrusted developers in App Center. Say what you want about the App Store, but apps you install from it must be signed.
  2. Speaking of App Center: its UI flickers quite a lot. Hard to believe they shipped this.
  3. Generally, even though I had never used Ubuntu Desktop before, it was intuitive. The default .bashrc was decent.
  4. I like the way the UI looks, and I’m confident that if I didn’t, I could change it. I need that now that my taste and Apple’s are starting to diverge.
  5. I still use a Mac a lot, so getting used to CTRL (on Ubuntu) vs. CMD (on Mac) is a pain.
  6. I was surprised that, by default, I need a monitor attached to the desktop in order to use Remote Desktop with it.

In any case, I set up Tailscale, so using this new desktop remotely from my Mac is easy when I want to work outside of my office.

My next big change was to go back to Django for web application development (away from node/express). I’ll discuss why tomorrow.

Changing my Dev Stack (2025), Part I: Simplify, Simplify

My life used to be easy. I was an iOS developer from 2014 to 2021. To do that, I just needed to know Objective-C and then Swift. Apple provided a default way to make UI, and it was fine.

But in 2021, when I went independent, I decided to abandon iOS and move to a React/Node stack for web applications (which I wanted to be SPAs). I chose React/React Native. It was fine, but I have to move on.

The main reason is the complexity. For my application, which is simple, there is an insane number of dependencies (which are immediate tech debt, IMO). Hello World in TypeScript/Node/React will put 36,000 files (at last count) in node_modules. This reality has become a prime target for hackers, who are using it as a vector for supply chain attacks. It’s clear that the node community is not prepared for this, so I have to go.

This is a major shift for me, so I am rethinking everything. These were my criteria:

  1. Minimal dependencies
  2. No build for JS or CSS
  3. Security protection against dependencies
  4. Bonus if I already know how to do it

The first, and easiest, decision was to move all development from my Mac to Linux. I’ll talk about this tomorrow in Part II.

Code Coverage Talk at STARWEST

Last September, I spoke about enhancing code coverage at STARWEST. My talk was based on ideas that I introduced in Metrics that Resist Gaming and some related posts.

The key points are that metrics should be:

  • able to drive decisions.
  • combined to make them multi-dimensional.
  • based on leading indicators that will align to lagging indicators.

And then I applied that to code coverage. I combined it with code complexity, the location of recent code changes, and analytics, and then I stress-tested the tests themselves using mutation testing. The idea is that you should care more about coverage when the code is hard to understand, was just changed, or users depend on it more. And since coverage is only half of what you need to do to test (i.e., you also need to assert), mutation testing will find where you have meaningless coverage.
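Here is a toy sketch of that combination idea (mine, not the tooling from the talk; the scoring function and numbers are made up): low coverage only matters when it coincides with complexity and recent change.

```python
# A toy sketch: rank files so low coverage only matters when it coincides
# with high complexity and recent churn. The formula is illustrative only.
def review_priority(coverage: float, complexity: int, recent_changes: int) -> float:
    """coverage in [0, 1]; complexity, e.g., cyclomatic; changes in the last month."""
    risk = complexity * (1 + recent_changes)
    return (1.0 - coverage) * risk

files = {
    "billing.py": (0.45, 18, 6),   # complex and just changed: high priority
    "constants.py": (0.10, 1, 0),  # low coverage, but trivial and stable
}
for name, args in sorted(files.items(), key=lambda kv: -review_priority(*kv[1])):
    print(f"{name}: priority={review_priority(*args):.1f}")
```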

As a bonus fifth enhancement, I talked about making sure you were getting the business results of better testing. For that, I spoke about DORA and specifically the metrics that track failed deployments and the mean time to recovery from that failure.
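The arithmetic behind those two DORA metrics is simple; here is a toy example with made-up numbers:

```python
# Toy arithmetic for the two DORA metrics mentioned (numbers are made up):
# change failure rate and mean time to recovery (MTTR).
deploys = 40
recovery_hours = [3.5, 1.0, 6.5]  # one entry per failed deployment

change_failure_rate = len(recovery_hours) / deploys
mttr = sum(recovery_hours) / len(recovery_hours)
print(f"CFR: {change_failure_rate:.1%}, MTTR: {mttr:.1f} hours")  # 7.5%, 3.7 hours
```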

Algorithmic Code Needs A Lot of Comments

I recently read an online debate between Bob Martin and John Ousterhout about the best way to write (or rewrite) the prime number algorithm from The Art of Computer Programming by Donald Knuth. Having read the three implementations, I think Knuth’s original is the best version for this kind of code, and the two rewrites lose a lot in translation.

My personal coding style is closer to Ousterhout’s, but that’s for application code. Algorithmic code, like a prime number generator, is very different. For most application code, the runtime performance will be fine for anything reasonable, and the most important thing to ensure is that the code is easy to change, because it will change a lot. Algorithmic code rarely changes, and the most likely thing you would do is a total rewrite to a better algorithm.

I have had to maintain a giant codebase of algorithmic code. In the mid-to-late 2000s, I worked at Atalasoft, which was a provider of .NET SDKs for Photo and Document Imaging. We had a lot of image processing algorithms written in C/C++.

In the six or so years I was there, this code rarely changed. It was extensively tested against a large database of images to make sure its behavior didn’t change when we updated dependencies or the compiler. The two main reasons we would change this code were to (a) fix an edge case or (b) improve performance.

The most helpful thing this code could have was a lot of documentation. It was very unlikely that the programmer making a change would know what the code was doing. It would probably have been years since its last change, and unless it was our CTO making the change, there was no way anyone could understand it quickly just from reading the code. We needed this code to run as fast as possible, so it probably used C performance optimization tricks that obfuscated it.
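To illustrate (my own toy example, in Python rather than C, and nothing like Knuth’s program or Atalasoft’s code): even a small, well-known algorithm benefits from comments that record the index tricks and invariants, because the code itself gives no hint.

```python
# A toy example of heavily commented algorithmic code.
def primes_up_to(n: int) -> list[int]:
    """Return all primes <= n using an odds-only Sieve of Eratosthenes."""
    if n < 2:
        return []
    # sieve[i] tracks the odd number 2*i + 3; skipping evens halves the
    # memory at the cost of the index arithmetic below.
    size = (n - 1) // 2
    sieve = bytearray([1]) * size
    i = 0
    # Stop once p*p > n: every remaining unmarked odd must be prime,
    # because any composite <= n has a factor <= sqrt(n).
    while (2 * i + 3) ** 2 <= n:
        if sieve[i]:
            p = 2 * i + 3
            # Start crossing off at p*p; smaller multiples of p were already
            # marked by smaller primes. (p*p - 3) // 2 maps the odd number
            # p*p to its sieve index; the index step p corresponds to a
            # value step of 2*p, which skips the even multiples.
            for j in range((p * p - 3) // 2, size, p):
                sieve[j] = 0
        i += 1
    return [2] + [2 * i + 3 for i in range(size) if sieve[i]]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```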

Both Ousterhout and Martin rewrote the code in ways that would probably make it slower at the extremes, which is not what you want with algorithmic code. Martin’s penchant for decomposition is especially unhelpful here.

Worse than that, they both made the code much harder to understand by removing most of the documentation. I think they both admitted that they didn’t totally understand the algorithm, so I’m not sure why they think reducing documentation would be a good idea.

To be fair, this code is just not a good candidate to apply either Ousterhout’s or Martin’s techniques, which are more about API design. Knuth described the goal of his programming style this way:

If my claims for the advantages of literate programming have any merit, you should be able to understand the following description more easily than you could have understood the same program when presented in a more conventional way.

In general, I would not like to have to maintain code in Knuth’s style if the code needed to be changed a lot. But for algorithmic code, like in his books or in an image processing library, it’s perfect.

Make a Programmer, Not a Program

From what I have seen, pure vibe coding isn’t good enough to produce production software that is deployed to the public web. This is hard enough for humans. Even though nearly every major security incident or outage was caused by people, it’s clear that that’s just because we haven’t been deploying purely vibe-coded programs at scale.

But it’s undeniable that vibe coding is useful, and it would be great if we could take it all the way to launch. Until then, it’s up to the non-programming vibe coder to level up and close the gap. Luckily, the same tools they use to make programs can also be used to make them into programmers.

Here’s what I suggest: try asking for very small updates and then reading just that difference. In Replit, you would go to the git tab and click the last commit to see what changed. Then, read what the agent said about what it did. See if you can make a closely related change yourself; for example, get spacing exactly right or experiment with different colors by editing the code directly.

Do this to get comfortable reading the diffs and, eventually, to be able to read the code. The next step is being able to notice when code is wrong, which is most of what I do these days.