Moore’s Law of Baseball

For almost my entire life (and before that, all the way back to the dawn of baseball), the stats on the back of a baseball card were unchanged. If you had the box scores for your favorite player, you could calculate their stats yourself with a pencil. That’s not necessarily a good thing. Those stats were simple, and often misleading.

For example, it was clear in the ’90s that on-base percentage was more important than batting average. This got expanded on in the Moneyball era. Computers were brought in to analyze players, and so analyzing players was now subject to Moore’s Law, which can be simplified to say that we double computer power every 18 months. We’ve had about a dozen doublings since then (roughly a 4,000× increase).

What’s the Moore’s Law of baseball? The number of stats doubles every 18 months, all enabled by modern compute power.

There’s a stat called WAR, or Wins Above Replacement, which tries to tell you how many wins a player adds to their team relative to a replacement-level player at their position (who has a WAR of 0). To calculate WAR for a single player, you need every outcome from every player. It’s so complex that we can’t agree on the right way to do it, so we have a dozen variants of it.

Stats like Exit Velocity, Launch Angle, Spin Rate, Pitch Tunneling, and Framing are only knowable because of high-speed cameras and advanced vision processing enabled by Moore’s Law. We’re not limited to describing what has already happened; some broadcasts put pitch-by-pitch outcome predictions on the screen.

Even with all this advancement, it sometimes feels like we’re still at the dawn of this era. As a fan, these don’t feel like the right stats either. No one will be put in the Hall of Fame because they hit the ball hard a lot of times.

Just need a few more doublings, I guess.

LUIs Give You User Intent

Language User Interfaces (LUIs) are driven by natural language prompts, which an LLM can translate into the commands that drive your application. Even if the LUI makes mistakes, the prompts are a treasure trove of user intent.

Right now, we broadly have two ways to get user data: Analytics and User Research. Analytics are easy to scale and are useful, but they cannot give you user intent. They can tell you what the user did, but not why. User research is targeted right at uncovering causal and intent data, but it’s hard to scale.

A LUI gives you the best of both worlds because it asks the user to express what they want in their own words and can easily be deployed to all users.

As an example, consider a dashboard configuration GUI for a B2B SaaS app. Almost every enterprise application has something like this—in this case, let’s consider Salesforce.

Using a GUI, a user might tap “New Dashboard”, then “Add bar chart”, and then use some filters to set it up. Then they “Add pie chart” and set that up. They put in another chart, then quickly delete it. They add, delete, reorder, and configure for an hour until they seem to be satisfied. In an analytics dataset, you’d have rows for all of these actions, but no idea what the user was trying to do.
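To make that concrete, here’s a sketch of the rows you might get (the event names and columns are invented):

    timestamp             user    event
    2023-06-12T14:02:11   u_4821  dashboard.create
    2023-06-12T14:02:40   u_4821  widget.add (bar_chart)
    2023-06-12T14:04:15   u_4821  widget.configure (filters)
    2023-06-12T14:06:02   u_4821  widget.add (pie_chart)
    2023-06-12T14:07:48   u_4821  widget.delete
    ...an hour of this...

Every row tells you what happened. No row tells you why.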

In a LUI, the user might start with “I have a 1:1 with my manager on Thursday. What are some of the things I excel at that would be good to highlight?” Then: “Ok, make a dashboard showing my demo-to-close ratio and my pipeline velocity.” And then: “Add in standard personal sales data that a sales manager would expect.”
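With a LUI in front of the same feature, the log of that session carries the why verbatim (same invented sketch):

    timestamp             user    prompt
    2023-06-12T14:02:11   u_4821  "I have a 1:1 with my manager on
                                  Thursday. What are some of the things
                                  I excel at that would be good to
                                  highlight?"
    2023-06-12T14:03:30   u_4821  "Ok, make a dashboard showing my
                                  demo-to-close ratio and my pipeline
                                  velocity"

The intent is right there, in the user’s own words, in a table you can query.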

This is something you could find out in user research, but it’s quite expensive to get that data. Some kind of LUI, even if it wasn’t great, would start to help you collect that data at scale.

You might discover a new Job to be Done (1:1 meetings with sales managers) that you could directly support.

ChatGPT Can Add a LUI to a Terminal Command-Based UI

Before GUIs took off, applications were more often command-driven. The program would respond with text answers, like a specialized chat. As I said yesterday, this is not a Language User Interface (LUI), which would use natural language, not specialized commands.

One benefit that command-driven systems have over modern GUI systems is that ChatGPT could probably drive them. Large language models seem to have no problem learning programming languages, even niche ones. You can even teach them new ones in your prompt.

To take advantage of this, we should be adding well-specified mini-languages to our applications to help our users get help from ChatGPT. Here’s a simple example based on a fictional airline flights query language I made up:
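Imagine priming ChatGPT with something like this (the FIND syntax is an invented stand-in for whatever mini-language you would actually spec):

    Prompt: Translate user requests into this flight query language.
    The only command is FIND <origin> <dest> [DATE <yyyy-mm-dd>]
    [AFTER <hh:mm>]. Respond with only the command.

    Me: What flights are leaving Chicago O'Hare for LaGuardia after
    5pm today?

    ChatGPT: FIND ORD LGA DATE 2023-06-12 AFTER 17:00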

In your application, you would offer a command window that has already primed ChatGPT with the specification of the language and many examples. I barely had to do anything to get it to learn this simple command. I went on in the chat to ask for more complicated queries, even a series of queries to find out about connecting flights, and it had no problems.

In your application, you only need to parse the terminal-like commands, which is a lot easier than implementing a natural language parser, even for a constrained topic like airline booking.
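For a mini-language like the invented FIND command above, the parser is just a few lines. Here’s a minimal sketch in Python:

    # Minimal sketch: parse the invented FIND command from the example above.
    # Grammar: FIND <origin> <dest> [DATE <yyyy-mm-dd>] [AFTER <hh:mm>]
    import re

    FIND_RE = re.compile(
        r"^FIND (?P<origin>[A-Z]{3}) (?P<dest>[A-Z]{3})"
        r"(?: DATE (?P<date>\d{4}-\d{2}-\d{2}))?"
        r"(?: AFTER (?P<after>\d{2}:\d{2}))?$"
    )

    def parse_find(command: str) -> dict:
        """Return the command's fields, or raise if it doesn't parse."""
        match = FIND_RE.match(command.strip())
        if match is None:
            raise ValueError(f"not a FIND command: {command!r}")
        # Drop the optional fields that weren't given.
        return {k: v for k, v in match.groupdict().items() if v is not None}

    # parse_find("FIND ORD LGA DATE 2023-06-12 AFTER 17:00")
    # => {'origin': 'ORD', 'dest': 'LGA', 'date': '2023-06-12', 'after': '17:00'}

Try writing the equivalent for “What’s leaving O’Hare for LaGuardia around dinner time?” and you’ll see why letting ChatGPT handle the natural language side is appealing.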

I’m sure ChatGPT could build the command parser for you too if you wanted.

LUI LUI

I go by Lou, but my entire family calls me Louie, so I smiled when I found out that there’s such a thing as a Language User Interface, which uses natural language to drive an application, and that it’s called a LUI.

In a LUI, you use natural language, so it’s not the same as a keyword search or a terminal-style UI that uses terse commands, like the SABRE airline booking system does.

In this video, it outputs responses on a printer, but the display terminal version was not that different. I worked on software that interfaced with it in 1992, and this 1960s version is very recognizable to me.

But, this is not a LUI. A LUI does not make you remember a list of accepted commands and their parameters. You give it requests in just the way you would a person, with regular language.

In SABRE, a command might look like this:

    113JUNORDLGA5P
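If I’m reading that right, it breaks down roughly as:

    1      city-pair availability
    13JUN  June 13th
    ORD    from Chicago O'Hare
    LGA    to LaGuardia
    5P     around 5:00pm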

But, in a SABRE LUI, you’d say “What flights are leaving Chicago O’Hare for LaGuardia at 5pm today?” That may be more learnable, but a trained airline representative would be a lot faster with the arcane commands.

With a more advanced version that understood “Rebook Lou Franco from his flight from here to New Orleans to NYC instead”, which takes many underlying queries and commands (and an understanding of context), the LUI would also be a lot faster.

This would have seemed far-fetched, but with ChatGPT and other LLM systems, it feels very much within reach today.

On The Vision Pro’s Price

Apple has clearly decided that a low price, or even affordability, was not important at all for the Vision Pro.

They have a history of this, and it has been a disaster before. The Lisa was almost $30k in current dollars (listed at $10k in 1983). They tried to do better with the Mac, but it launched at the equivalent of $7k the next year. The cost of making this tradeoff in 1984 was the loss of almost the entire PC market to Microsoft and Intel, which was an existential problem for Apple until they brought Steve Jobs back.

Apple could never shake the perception that they were overpriced. In 2007, Steve Jobs tried to frame Apple prices as competitive with comparable products and said that they don’t ship junk to compete with the low-end.

Our goal is to make products we are proud to sell and would recommend to our families and friends. And we want to do that at the lowest prices we can. […] What you’ll find is our products are usually not premium priced. […] The difference is we don’t offer stripped down, lousy products.

2007 was the same year that the iPhone came out at prices many times those of most cell phones. Even now, the iPhone competes well in a market with a big low end (of arguably junk). The effect on Apple was quite different from the Lisa and Mac: the iPhone built Apple into a $3T company.

So, is Vision Pro like the Lisa and way over what the market will bear for the category, or is it like an iPhone that redefines the category around a high end?

My gut is that it’s like the Apple Watch, iPad, or AirPods: a great, multi-billion-dollar business that will lead the category, but not something that drives the entire company.

Write While True Episode 21: Dedicated Journals

I’ve been using a paper journal for years. Even when I worked on big teams, I still kept a separate journal with my personal daily tasks and schedule. For years, I used a single journal for everything. I’d go through it and then start another one when I hit the last page. Any kind of paper capture that I needed to do went in that one journal.

A couple of years ago, I started splitting out separate journals based on their purpose.

Transcript

My Typing Teacher was a Genius

When I was in middle school, typing was a required subject. I don’t really know why.

In the early eighties, it was not common for people to type at work. There were still specialists for that. Even in the late eighties, when I worked in an accounting office, there were secretaries that took dictation and typed up memos. Computer spreadsheets existed, but the accountants there still used pencil and paper, and secretaries typed their work up if it needed to look more formal.

This was the world my typing teacher, Mrs. Cohen, grew up in and probably worked in before becoming a teacher. I think that, deep down, she knew that we wouldn’t find typing relevant, and honestly, the class didn’t take it that seriously.

But one day, she read us an article from the local paper that said that kids needed to learn how to type because computers were going to be a big thing and soon everyone would need to know how to type. It had a huge impact on me—I still remember it very clearly.

I had already been exposed to programming and even had a computer at home. But, coding was just for fun. I didn’t think it would be a job, or that I would be typing every day at work. Mrs. Cohen was the first person that made me think that computers would be more than a toy.

Vision Pro Accessory Ideas

Since the failure mode of the Vision Pro is blindness, which can happen if the magnetic battery cable detaches or if you run out of power, it would have been nice if the headset had some onboard battery for a grace period.

There will be 3rd-party batteries with more power. It would be good if they also supported swapping charged batteries in and out without losing power.

Here’s an idea for an accessory: a thin, disk-shaped battery that attaches to the headset magnetically (perhaps also with a strap), and that you attach the included battery to. It has enough power for a one-to-two-minute grace period and keeps itself charged from the main battery. This is meant to help you swap batteries, or to cover you in case the cable detaches.

It’s important that it be light because it will be on the headset all the time. It should also look good.

The strap alone might be a good accessory. If there’s a way to make the magnetic cable more secure, I think I would want that.

The Failure Mode of the Vision Pro is Blindness

If the Vision Pro crashes, runs out of battery, or its magnetic battery cable detaches, you will be immediately plunged into darkness.

This means that the Vision Pro is really unsuited to being worn while moving. Walking around your house or workplace will probably be ok, but walking around outside isn’t. Luckily, the headset looks too goofy for anyone to attempt that.

I had really hoped that this could work as a fitness device, but anything above a jog on a treadmill seems dangerous. A slow walk would be fine, though. My initial reaction was that I would like to wear it on a rower. That would be ok too, because if it turns off, you aren’t going to fall down.

I am really afraid that someone will attempt to drive with this on. I hope that Apple adds a way to detect this and warn against it (or have the headset disable itself with a warning). Apple should certainly not approve apps that are meant to be used while driving.

If the Vision Pro is just a really good monitor, then this is not really a problem. But it does feel like pass-through displays that block your vision when unpowered aren’t the future of AR (unless they can become transparent).

Is Vision Pro Just a Really Good Monitor?

I just read Ben Thompson’s take on the Vision Pro, which is admittedly a gushing, glowing, overly optimistic take. But … he’s actually tried one, so I am taking it seriously. One worry I had was whether the displays actually matched the demo, and it does seem that they do.

His conclusion is that the Vision Pro might be in the same product category as a Mac, and if that’s true, the $3,499 price isn’t that bad. I absolutely could see a world where you use this instead of a laptop, but probably not on day one, because it won’t have the apps I need as a developer.

Even so, compared to a laptop, the biggest downside is travel: I value how thin and light my MacBook Air is, and this is certainly not thin. I can’t easily stick it in a backpack. I also can’t see using this in a café or shared workspace.

But, that had me thinking that maybe it’s not a laptop replacement, but an external monitor replacement. I have been eyeing the Studio Display at $1,599 and also the new Dell 6K displays at $3,200. If a Vision Pro is a better display than those, I don’t need it to do much more.

It does make me think I definitely shouldn’t upgrade my monitor just yet.