The Day I Learned About Personal Finance

When I was in my early twenties, I got a call from a financial advisor asking if we could have a meeting to discuss my finances. I would say at that point (compared to now) I knew very little, but I did save a lot, maxed out my 401k, and didn’t have any debt. But, aside from the 401k, my money was just in a bank.

I agreed to have coffee with him.

Over the phone, he took a bunch of information from me: my age, my salary, my savings, and so on. Then, at the meeting, he brought a small spiral-bound book with a personalized plan.

I can tell you right now that that plan was probably not good. He almost certainly was not a fiduciary, and the mutual funds he wanted to put me in probably had high fees.

One page of that book, though, changed my life.

It was a line graph. The x-axis went from 1995 to 2055, or age 25 to 85. The y-axis was my predicted net worth. This was the result of using my expected salary growth, my savings rate, expense growth, my current net worth, inflation guesses, expected returns, and so on. You can find many such calculators on the web or build one yourself in Excel.

It was, as you would imagine, the exponential growth curve that results from compound interest, as long as returns and savings grow with respect to expenses and inflation.

The part that surprised me was that at 2035, when I would turn 65 and presumably retire, the curve had a noticeable notch but still basically grew, just on a different exponential curve.

I asked how it could still go up after I retire, and he explained that at that point my investments would make more each year than I needed to spend, so they would keep growing.

I recreated the shape here.

In my mind, the red line was what I was going for, and it’s not a bad outcome. He showed me that the blue line was not only possible, but was actually where I was already headed if I kept up my savings rate. Most of the assumptions were fairly conservative. The only difference between those lines is expense growth (or, in other words, savings growth).

I honestly didn’t hear anything else he said that day. Look at what a difference it makes.

Assuming that you are not a stock-picking genius (spoiler alert: you’re not), and you are getting market returns from index funds, the only variable you control is savings rate. Of course, there are several components of savings rate, which I’ll talk about soon.
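If you want to play with this yourself, here is a minimal sketch of that kind of net-worth projection in Python. All of the numbers are hypothetical placeholders, not the advisor’s actual assumptions:

```python
# Minimal net-worth projection sketch. Every number here is a made-up
# placeholder; plug in your own salary, savings, and return assumptions.

def project_net_worth(years=60, start_net_worth=10_000.0,
                      annual_savings=15_000.0, annual_return=0.06,
                      retire_after=40, retirement_spending=40_000.0):
    """Project net worth year by year with compound returns.

    Before retirement we add savings each year; after, we withdraw
    living expenses. When the portfolio earns more than the withdrawal,
    it keeps growing, which is the "notch but still rising" shape
    from the advisor's chart.
    """
    net_worth = start_net_worth
    curve = []
    for year in range(years):
        net_worth *= 1 + annual_return       # market returns compound
        if year < retire_after:
            net_worth += annual_savings      # still working: add savings
        else:
            net_worth -= retirement_spending # retired: draw down expenses
        curve.append(net_worth)
    return curve

curve = project_net_worth()
print(f"At retirement: ${curve[39]:,.0f}; 20 years later: ${curve[59]:,.0f}")
```

In this sketch, the gap between a red-line and a blue-line outcome comes from changing annual_savings and retirement_spending together, which is exactly the savings-rate effect described above.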

Waiting to Promote Until You Are Sure is Wage-Theft

If you wait to promote people until they are “already doing the job”, I think you’ll find that you don’t retain your best people. And if you are hiring people into senior positions instead of promoting from within, you don’t deserve to retain them.

Even if you do, it’s a kind of wage-theft not to pay people for the work they are doing. A few months is okay; a year absolutely is not if you have the positions. If you’ve already done this, consider a bonus at promotion time.

If you have the positions, but people don’t seem ready, that’s on you. You should have supported and developed them before they were needed. If the opportunity is there, you should take the chance and responsibility for their success.

Meaning, if it doesn’t work out, you were the failure. You failed to develop them in time, and you failed to support them so that they were successful.

Applying The ONE Thing

I recently read The ONE Thing by Keller and Papasan. I recommend it, but if you want something quicker to digest, the current VP of The ONE Thing was a guest on an episode of Afford Anything, and that interview is a very good introduction to the concept.

The crux is that the authors ask you to focus on one question:

What’s the ONE Thing you can do such that by doing it everything else will be easier or unnecessary?

In a “five-why” fashion, you could start with your big goals and work backwards until you have the very first thing you need to do. They recommend dedicating a huge chunk of your time to that one thing, accomplishing it, and then moving on to the next ONE thing … ONE thing at a time.

For example, you might start with a goal to “Lose 10 pounds”, but end up with a ONE Thing like — learn how to cook five healthy, low calorie density, one-pot, bulk meals.

Don’t stop there, though. They want you to keep going—what is the one thing that would make that easier? Maybe you can find a local chef to come to your house and make them with you for the first three weeks. Perhaps you can get four friends to each learn one and teach the others (or swap freezer bags each week).

When I finished the book, I thought that my current ONE thing was to build up the number of tutorials on App-o-Mat, so I have been doing that. The eventual goal with App-o-Mat is to sell some kind of product there, but before that I need an audience, and you can’t get an audience unless you have useful content.

But, I am supposed to keep digging.

What is the ONE Thing I could do to make writing tutorials on App-o-Mat easier? One thing that has helped is having Sprint-o-Mat, because I can just take code out of it as the basis for a tutorial. If I didn’t have that, making the app might be the important thing.

But, if I break it down to something like this: I want 100 more articles on App-o-Mat by the end of July, then I see that I can only write about 25 of them at my current pace. What would make that easier?

Obviously, more writers. But 75 articles, at roughly $200 each, would cost about $15,000. So, is the real next ONE Thing to get $15,000 of revenue? Or at least $5,000 in May?

I’m still working through this. If App-o-Mat were a serious business, investing $15,000 would be a no-brainer, but honestly, it’s not. Right now, I want to have the experience of writing, and I don’t want to manage and edit a bunch of writers.

But, I do think that a trickle of articles by others would be ok, so I will explore that. I also think that if I doubled down on the time I spend writing articles, I could do a lot more, which means dropping some other things or just getting more disciplined.

Keller would say to break down “getting more disciplined” first because he’s kind of annoying that way.

So, to do that, I am going to make my ONE Thing to come up with article series ideas that could each contain many articles. Then, I’d at least have a checklist I could work from and could put the articles on a schedule. I think I’d also see the possibilities for outsourcing more clearly.

Write While True Episode 8: Lower the Bar

This episode is about lowering the bar, and I’m tempted to just say to go do it and sign off.

I won’t do that, but I am going to keep the bar on this episode pretty low. Now, you’re probably thinking — Lou, I thought the bar for Write While True episodes was pretty low already.

Transcript

Gerry Sussman on Biological Systems

Yesterday, I lamented that our computer systems are so short-lived, as opposed to biological systems (like humans), which routinely live twice as long as the longest-lived computer system.

I want to be clear that I am talking about the uptime/runtime of a mostly static system, not something like UNIX that is constantly maintained (unless there’s a 3B2 in Bell Labs somewhere running UNIX from the 70’s processing payroll or something).

I was reminded about this talk from Gerry Sussman titled We Really Don’t Know How to Compute.

My main takeaway has to do with types and correctness. Basically, they are a dead end. They are very useful (I use them!), but correctness isn’t an interesting goal for a long-lived system.

Sussman brings up biology—and one point he stresses is adaptability.

If adaptability is a key to long-livedness, then type systems and correctness appear to be in opposition to it. So do runtime assertions. Imagine if humans “crashed” when they got unexpected input. Or what if humans simply refused to “boot” if they had a minor gene “incorrectness” (admittedly, they do refuse to boot with major gene defects).

Here’s an example of adaptability in humans: we take food as input. We were designed by evolution to use whole plants and maybe some meat as optimal fuel.

However, a modern human can live on processed food, much more meat and dairy, oils, refined sugar, and many things that did not exist when we were designed. We don’t crash immediately on that input, or even reject it. We get fat; we develop heart disease, diabetes, and so on.

In other words, we get feedback that the input is bad—eventually the system will end earlier than it would have with better input, but there are many examples of long-lived humans that have never had perfect input.

Is the human system “correct”? How would you use Domain Driven Design and types to describe the input to this system?

The reality is that the input is essentially infinite and unknown, and what matters more is the adaptability and feedback.

Gerry’s talk has something to say about what kind of programming language you need for this (Spoiler Alert: Scheme), but generally more dynamic, more data-driven languages will work better.

Long-lived Computational Systems

Last month, I turned 51, which isn’t that old for a human. I’ll hopefully live a lot longer, but even now, my uptime is better than that of any computational system.

The closest I can come to finding something as old as me is Voyager. If you take away Voyager, I don’t even know if 2nd place is more than ten years old. It’s probably a TV.

Unlike anything we make today, Voyager was designed from the beginning to have a long life. The design brief said: “Don’t make engineering choices that could limit the lifetime of the spacecraft”. This led the engineers to make extensive use of redundancy and reconfigurability.

For more details, check out this presentation of the design of Voyager by Aaron Cummings.

In the 40+ (and still counting) years that Voyager has been running, having redundancies and being reconfigurable has extended its life and capabilities.

And it hasn’t hurt one bit that the software is not “type-safe”, “late-binding”, or “functional”. It doesn’t make use of a dependency framework, design patterns, or an IO monad. It is not declarative—it’s probably not even structural. None of these things contributed to its long life.

This is why I find many of the arguments over software development paradigms so boring. None of it has resulted in anything like Voyager, and even that hasn’t been replicated to even a minimal extent. In all of our big systems, the adaptability comes from the people still working on it. If we stopped maintaining these systems, they’d stop working.

The only upside is that our AI overlords will probably only run for a month or two before crashing on unexpected input.

April Blog Roundup

This month I realized that this blog is easier to keep up with if I just document my projects. I released episodes 4, 5, 6, and 7 of my podcast about writing. I also wrote articles about how I am self-hosting it, using S3 for the media, and how I get simple stats.

I also wrote an article about Bicycle, an open-source library I am working on with a couple of friends, and I’ve been writing articles about WatchKit on App-o-Mat. And this post itself is an example of just documenting my projects.

Most of the articles I write for this space are about software development and processes.

  • In Defense of Tech Debt encourages you to just think of tech debt as a cost which might be acceptable.
  • But, I think Tech Debt Happens to You most of the time because of dependencies.
  • And then in Timing Your Tech Debt Payments, I compare payments to servicing interest and paying down principal and offer a best time to do the latter.
  • In It’s Bad, Now What?, I talk about pre-planning actions for when monitoring shows problems.
  • In Assume the Failure, I recommend framing all risks as your own failures, not external ones, so that you can personally mitigate them.
  • In Mitigating the Mitigations, I show how you can’t wait until a risk materializes to do your mitigation plan. It might be something you need to do somewhat in parallel.

And, I wrote some articles on gaining expertise.

I have been thinking about the great works in software and software writing as I think about where I want to spend my time. I think there’s an interesting cycle of making -> tool making -> making with the tool, where the result is content in a new medium.

Getting Podcast Stats from S3 Web Access Logs

I self-host the Write While True podcast using the Blubrry PowerPress plugin for WordPress and storing the .mp3 files in S3.

One downside of self-hosting is that you don’t have an easy way to get stats. Luckily, podcast stats aren’t great anyway, so whatever I cobble together is honestly not that different from what a host can give you. The only way to do better is to do something privacy-impairing with the release notes (tracking pixels) or to build a popular podcast player, neither of which I’m going to do.

So, I set S3 up to store standard web-access logs, which means I have a log line for each time an .mp3 file was downloaded. The main thing you need to do is get the log files onto your local machine; then you can count up the downloads by filtering with grep and counting with wc.

To download the logs, I use the AWS CLI (command-line) tool. Once you install it, you need to authenticate it with your account (see docs). Then you can use:

aws s3 sync <BUCKET URL> <local folder>

to bring down the latest logs.

The first thing you might notice is that there are a lot of log files. Amazon seems to create a new file rather than ever appending to an existing one. Each file has only a few log lines, in a typical web-access log format. I store them all in a folder called logs.

I name the .mp3 file of every episode of Write While True in a particular way, so those lines are easy to find with:

grep 'GET /writewhiletrue.loufranco.com/mp3s/Ep-' logs/*

This is every download of the .mp3 files. There’s no way to know if the user actually listened to it, but this is the best you can do.

It does overcount my downloads, though, so I use grep -v to filter out some of the lines:

  1. Lines containing the IP address inside my house
  2. Lines that contain requests from the Podcast Feed Validator I use
  3. Lines that contain requests from the Blubrry plugin

The basic command is:

grep 'GET ...' logs/* | grep -v 'my ip address' | grep -v 'other filters' | wc -l

This will give you a count for all episodes, but if you want to do it by episode, you just need to grep for each episode number before counting.
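As a sketch of what that per-episode tally could look like in Python (the excluded IP address and user-agent strings below are placeholders; substitute the real filters from the grep -v step above):

```python
import glob
import re
from collections import Counter

# Hypothetical filter strings: substitute your own home IP address and
# whatever identifies the feed validator and Blubrry requests in your logs.
EXCLUDE = ['203.0.113.7', 'FeedValidator', 'Blubrry']

# Matches download lines like "GET /writewhiletrue.loufranco.com/mp3s/Ep-8.mp3"
EPISODE = re.compile(r'GET /writewhiletrue\.loufranco\.com/mp3s/Ep-(\d+)')

def count_episodes(lines):
    """Tally downloads per episode number, skipping excluded requesters."""
    counts = Counter()
    for line in lines:
        if any(term in line for term in EXCLUDE):
            continue  # plays the same role as the grep -v filters above
        match = EPISODE.search(line)
        if match:
            counts[int(match.group(1))] += 1
    return counts

# Read every log file in the logs folder and print per-episode counts.
lines = (line for path in glob.glob('logs/*')
         for line in open(path, errors='replace'))
for episode, n in sorted(count_episodes(lines).items()):
    print(f'Episode {episode}: {n} downloads')
```

This is just the counting half; it keeps the same overcounting caveats as the grep pipeline.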

I’ll probably script up something to create a graph with python and matplotlib at some point. If I do, I’ll post the code and blog about it.

Single Case Enums in Swift?

I watched Domain Modeling Made Functional by Scott Wlaschin yesterday where he shows how to do Domain Driven Design in F#. He has the slides and a link to his DDD book up on his site as well.

One thing that stood out to me was the pervasive use of single-case choice types, which are a better choice than a typealias for modeling.

To see the issue with typealias, consider modeling some contact info. Here is a Swift version of the F# code he showed.

typealias Email = String

The problem comes when you want to model phone numbers. You might do:

typealias PhoneNumber = String

The Swift type checker doesn’t distinguish between Email and PhoneNumber (or even Email and String). So, I could make functions that take an Email and pass a PhoneNumber.

I think I would have naturally just done this:

struct Email { let email: String }
struct PhoneNumber { let phoneNumber: String }

But, Scott says that F# programmers frequently use single-case choice types, like this:

enum Email { case email(String) }
enum PhoneNumber { case phoneNumber(String) }

And looking at it now, I can’t see much difference. In Swift, it is less convenient to deal with the enum: even though there is only one case, I still need to switch on it or write code that looks like it can fail (if I use if case let). I could solve this by adding a computed property to the enum, though:

var email: String {
  switch self {
    case .email(let email): return email
  }
}

And now they are both equivalently convenient to use, but that computed property feels very silly.

What it might come down to is what kind of changes to the type you might be anticipating. In the struct case, it’s easier to add more state. In the enum case, it’s easier to add alternative representations.

For example, in Sprint-o-Mat, I modeled distance as an enum with:

public enum Distance {
    case mile(distance: Double)
}

I knew I was always going to add kilometers (which I eventually did), and this made sure that I would know every place I needed to change.

If I had done

public struct Distance {
    let distance: Double
    let units: DistanceUnits
}

I could not be 100% sure that I always checked units, since it would not be enforced.

So, in that case, knowing that the future change would be to add alternatives made choosing an enum more natural. This is also reflected in the case name being mile rather than distance; it implies what the future change might be.

Even so, I don’t think single case enums will replace single field structs for me generally, but they are worth considering.