Write About What You Are Doing

On January 6th this year, I finished reading The Practice by Seth Godin, which is a series of arguments trying to get you to ship every day. When I was done, I was convinced. I have now gone over 3 months without missing a day.

In the beginning, I worked through a pent-up list of ideas that had just been sitting in my brain. I brainstormed a bunch of these into a topic list and I am making my way through it.

This has been a good source of blog posts, but honestly, how many of these can I write?

As I look through my recent and upcoming topics, I am starting to see a trend: I am increasingly writing about things I’m actively working on. Each Monday, I publish a podcast episode, so that’s one post per week that I don’t have to “think up”. About every two weeks, I give an update on App-o-Mat articles. And a couple of days ago, I talked about a new project, Bicycle (an open-source Swift library for modeling interdependent variables).

In these three cases, the thing I am doing is a lot more work than a blog post, but at least the blog post is easy to write. And, in the case of the podcast, I also documented my self-hosting setup in what will probably be three posts, and there was a post about podcast accessibility.

Even the things I am doing are byproducts of something else. My App-o-Mat articles are lessons I learned from making Sprint-o-Mat. My podcast is about what I have learned by writing this blog—woah, full circle.

Since some of the great works of software were created to make the great works of software writing, I see that making begets tool-making, which begets more making.

And even this post, which would have been impossible to write three months ago, is now easy.

Write While True Episode 7: Find Your Voice

Lately, I’m thinking a lot about what this podcast sounds like. I’m new to podcasting and I’m very aware that I have a lot to do to sound more natural, but that’s not exactly what I’m talking about.

Transcript

Use S3 to Serve Podcast Episodes

I started a podcast about a month ago, and for various reasons I decided to self-host it rather than use a podcast service. I am doing this mainly because I want the episodes to be available indefinitely, even if I stop making new ones, and I don’t want to pay for just hosting. I also don’t care about analytics, and I have the skills and desire to learn how to self-host.

I think this is the wrong choice for almost everyone who podcasts.

But, if you got this far, I will say that it’s probably right not to just put your mp3 files on your web host. I haven’t really done the math, but these are large files, and if you get any kind of traffic, it will probably be expensive and possibly send you over your bandwidth caps.
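
To put rough numbers on it: a 30 MB episode downloaded 1,000 times is about 30 GB of transfer, and that’s just one episode.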

I’ve decided that the minimum I need to do is to use S3. I think it’s probably technically correct to also use a CDN, but I’ll cross that bridge if I get more traffic.

(If you have no idea what S3 or a CDN is, I really recommend you do not go down this route.)

There are a lot of good guides out there for the specifics. I used these two:

In addition to setting up a bucket for your .mp3 files and artwork, I suggest you set up a separate bucket for logs and then send server access logs to that bucket. The official AWS docs are a good place to see how to do this.

By having the logs stored, you have enough to get some simple analytics. There are services that can read the logs and graph the data in them.

I will post soon about how I scripted a simple way to get episode download counts.

Tech Debt Happens to You

In the original ANSI C as described by K&R, there are a bunch of library functions that keep internal state in static variables. An example is strtok, which you call with a string to tokenize, and when you want the next token, you call it with NULL. strtok uses internal static variables to keep track of the string and an iterator into it.
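
Here’s a minimal sketch of that call pattern (just standard library usage, not code from any particular project):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char line[] = "alpha,beta,gamma";

    /* The first call passes the string; strtok remembers its position
       in hidden static state inside the C library. */
    char *token = strtok(line, ",");
    while (token != NULL) {
        printf("%s\n", token);
        /* Later calls pass NULL, meaning "keep going from wherever you
           left off in the string you remembered". */
        token = strtok(NULL, ",");
    }
    return 0;
}
```

That hidden position is the whole problem: anything else that calls strtok in the middle of this loop silently resets it.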

In early C usage, this was fine. You just had to hope that any third-party library call you made while iterating tokens wasn’t also using strtok.

But then, when threads were introduced to UNIX and C, this broke down fast. Now your algorithms couldn’t live in background threads if they used strtok. This specific problem was solved with thread-local variables, but the pervasive use of global state inside C functions was a constant source of issues in a multi-threaded, multi-processor world.

And the world was changing from delivering desktop apps to web apps, so now a lot of your code lived in a multi-threaded back-end that serviced simultaneous requests. This was a problem because, in early web development, we took C libraries out of our desktop apps and made them work in CGI executables or NSAPI/ISAPI web-server extensions (similar to Apache mod_ extensions).

To make this work, we had to use third-party memory allocation libraries, because the standard malloc/free/new/delete implementations slowed down as you added more processors (from constant lock contention). Standard reference-counting implementations used normal ++ and --, which aren’t thread-safe, so we needed to buy a source code implementation of the STL that we could alter to use InterlockedIncrement/InterlockedDecrement (which are atomic, lock-free, and thread-safe).
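
To illustrate the kind of change that meant, here’s a simplified sketch using the Win32 API (not the actual STL source we modified):

```c
#include <windows.h>

/* A toy shared reference count. */
typedef struct {
    volatile LONG refcount;
} object_t;

/* Not thread-safe: ++ is a separate read, add, and write, so two threads
   incrementing at the same time can lose an update. */
void retain_unsafe(object_t *obj) {
    obj->refcount++;
}

/* Thread-safe: InterlockedIncrement does the read-modify-write as a single
   atomic operation, with no lock to contend on. */
void retain_atomic(object_t *obj) {
    InterlockedIncrement(&obj->refcount);
}
```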

As the world changed around us, we could keep moving forward with these tech-debt payments.

Also, this was a slow-paced problem: strtok, malloc, and the rest were written in the 70s and limped through the 90s. That’s actually not that bad.

But, the world doesn’t stop. Pretty soon, it was just too weird to implement back-ends as ISAPI extensions. So, you pick Java/SOAP because CORBA is just nuts, and well, that’s wrong because REST deprecates that, and then GraphQL deprecates that, and you picked Java, but were you supposed to wait for node/npm? Never mind what’s going on on the front-end as JS and CSS frameworks replace each other every 6 months. Even if you are happy with your choice, are you keeping your dependencies up to date, even through the major revisions that don’t follow Substitutable Versioning?

And I think that this is the main source of tech debt: not the intentional debt that you take on, or the debt you accumulate from cutting corners due to time constraints, but the debt that comes with dependency and environment changes.

Being able to bring code into your project or build on a framework is probably the only thing that makes modern programming possible, but like a mortgage, each dependency comes with constant interest payments and a looming balloon payment at some point.

There are some dependencies you can’t avoid, like the OS, the language, and probably the database, but as you go down the dependency list, remember to factor in the debt each one inevitably brings with it.

Timing Your Tech Debt Payments

It’s impossible to ignore that developers have a visceral reaction against tech debt, even if they agree that it’s worth it. That’s because they are the ones who need to service the debt.

Tech debt is a cost similar to real-life debt like a mortgage. If you can use tech debt to bring forward revenue and growth, you can pay off the debt later.

But, until then, the interest must be paid.

So, when you are calculating the cost of taking on some debt, a factor in that calculation is how much future work is going to happen on that code. The more work you do, the more interest you pay. If you fix bugs or add features to debt-laden code, you are servicing the debt by making an interest payment. If you refactor, you are paying off principal, and future interest payments are lowered, but that only matters if there are going to be future interest payments.

If you have a system that works and doesn’t need any changes, the fact that it has tech debt doesn’t matter.

To carry the analogy forward, some mortgages have penalties for early payment. Paying off tech debt also has a penalty, usually in QA and system stability.

This is why my favorite time to pay off tech debt is just before a major feature is added to indebted code. You are heading off the looming interest payments (which would balloon), and the prepayment penalty is already being incurred, because you need to QA that whole area again anyway.

AirTags Could Be Used for Precise Indoor Location

I don’t think there’s going to be an SDK for AirTags, and they seem to be designed to be found by a single person, but the same technology could be used to locate yourself precisely indoors (if AirTags were installed to create a mesh).

This is supposedly what iBeacons do, but I’ve heard from people trying to deploy them that the technology doesn’t work very well. I don’t really know anything about this at all, but here’s a contrary view from someone who knows beacon tech better:

We don’t believe that these tags will replace the current generation of BLE beacons for a few reasons:

- These UWB Tags will require a new (circa 2020) Apple or Samsung phone
- They will not be compatible with most of the existing gateways
- These tags will most likely initially only work with the proprietary applications on Apple or Samsung Phones
- Apple and Samsung UWB seem to be geared towards finding lost items, not providing all of the other sensor data that current BLE beacons do
- BLE Beacons will be much, much cheaper than these UWB Tags will be

And, this is probably true today with AirTags as they are. But, this article also says that Google is dropping support for BLE beacons, so there is some problem here.

What is Bicycle?

Bicycle is an open-source framework that I’ve been working on with a couple of friends. One way to think about it is to compare it to a spreadsheet.

In a spreadsheet, we are building a directed, acyclic graph of cells, where cells are nodes and each cell’s formula defines the edges and their direction.

If A1=B1+C1, then both B1 and C1 point to A1. The graph cannot contain cycles, so, in a spreadsheet, you can’t then say that B1=A1-C1 even though that is true, because it would cause a cycle in the graph.

Bicycle defines a data structure and algorithm that give meaning to a graph of formulas that does contain cycles, where dependencies between nodes can be bidirectional (hence, Bicycle).

In Bicycle, you can define both of the formulas above and also complete the network with C1=A1-B1. In more complex networks, an individual field may have several different formulas that can set its value, each using different dependencies.

Once you define a network, you can seed it with values. These are kept outside of the formula data-structure in a kind of priority queue. The highest priority values are seeded first, and each value is only accepted into the network if the network can remain consistent.

Meaning, I could define A1 as 2 and B1 as 3, but if I then say C1 is 7, I have created an inconsistency (A1=B1+C1 would require C1 to be -1). When you attach this to a UI, you would want the oldest value to be discarded.

We are also providing some help building SwiftUI-based UIs with it. The network is meant to be hosted in an ObservableObject, and we provide a TextField initializer that will bind to fields in it. Here’s a demo of a network that can convert between yards, feet, and inches.

Try to imagine replicating that in Excel. You’d have to pick one of the fields to be user-provided and put formulas in the other two. In Bicycle, you can provide as many formulas as you want (as long as they are consistent with each other) and seed-values are used as long as they are consistent.

It’s in very early stages and the API will probably change a lot, but if you want to take a look, see SwiftBicycle on GitHub.

Write While True Episode 6: Editing First Drafts

I was first exposed to this idea at The Business of Software conference in 2017. Joanna Wiebe gave a talk about copywriting for SaaS businesses. She’s an advertising copywriter, and the talk is mostly about that. It’s worth watching the whole thing, but near the end, she said something that astonished me.

Transcript

In Defense of Tech Debt

I’m a fan of tech debt if used properly. Just like real debt, if you can pull some value forward and then invest that value so that it outgrows the debt considerably, it’s good.

Mortgages and tuition-debt can possibly do this. Credit card consumption debt does not. If your tech debt looks like the former, do it.

For example, if you can close a big enterprise deal by taking on some tech debt, and the alternative is another round of VC to “do it right”, I think it’s obvious that you should hack away. When you close that deal, your valuation goes up. Maybe you don’t even need to raise.

The decision depends on the specifics. Tech debt isn’t “bad”, it’s a cost. Calculate the cost.

It can be worth it.