Extending GraphQL Code Generation (Part II)

In the last post, I said that even though I had used code generation throughout my career, the expressiveness of modern languages had reduced my need for it. But I had also found that connecting code to external systems goes better when I generate a type-safe API layer.

GraphQL Code Generation (which I mentioned in the last post) does this for GraphQL client code. You provide your queries and the endpoint of your server, and it will validate the queries and use the server’s schema to generate type-safe Typescript wrappers for executing those queries, including React hooks.

It doesn’t go far enough for me, but since it supports plugins, it was easy to add what I wanted: simpler, non-hook, type-safe functions for making GQL requests. The hook variants are very convenient when you are just populating a screen, but there were times when I wanted to make a GQL request and didn’t want to handle the result in the React lifecycle. That was even more true for mutations.
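
For context, here is a rough sketch of the generated-hook style in a component (the hook name follows the CreateOpening mutation shown later in this post, and the import path is an assumption):

import React from "react";
// Generated by GraphQL Code Generation; the import path varies by project:
import { useCreateOpeningMutation } from "./generated/graphql";

// Fine when populating a screen: the hook ties the request to this
// component's lifecycle and re-renders on loading/error changes.
export function CreateOpeningButton() {
  const [createOpening, { loading, error }] = useCreateOpeningMutation();
  return (
    <button
      disabled={loading}
      onClick={() =>
        createOpening({
          variables: { teamId: 1, name: "Name", body: "Body", whatToShow: "all" },
        })
      }
    >
      {error ? "Retry" : "Create Opening"}
    </button>
  );
}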

So, as a first step, I made a non-type-safe function to do any mutation given an Apollo client and document (NOTE: DocumentNode is a type that represents a GQL query).

import { ApolloClient } from "@apollo/client";
import { DocumentNode } from "graphql";
import { SuccessResponse } from "./generated/graphql"; // generated type; path varies by project
export const gqlMutate = async (
  apolloClient: ApolloClient<object>,
  mutation: DocumentNode,
  responseProperty: string,
  variables: any,
  onSuccess: (response: SuccessResponse) => void,
  onError: (err: string) => void
): Promise<SuccessResponse> => {
  try {
    const response = await apolloClient.mutate({ mutation, variables });
    // The payload is keyed by the mutation's root field (e.g. "createOpening")
    const result = response.data?.[responseProperty] as SuccessResponse;
    if (result?.success) {
      onSuccess(result);
    } else {
      onError(result?.error ?? "Unknown error");
    }
    return result;
  } catch (err) {
    onError(String(err));
    throw err;
  }
};

I could just stop here, but then when I write a call to gqlMutate, I get no code completion or type-checking.

But the GQL code generator actually generated concrete types for all of my GQL queries, so I could write a wrapper, like this:

export async function requestCreateOpening(
  apolloClient: ApolloClient<object>, 
  variables: gql.CreateOpeningMutationVariables, 
  onSuccess: (response: gql.SuccessResponse) => void, 
  onError: (err: string) => void): Promise<gql.SuccessResponse> {
  return await gqlMutate(
    apolloClient, gql.CreateOpeningDocument, 
    "createOpening", 
    variables, 
    onSuccess, 
    onError);
}

This works great, but I have hundreds of mutations, and I would have to write this code over and over and keep it up to date whenever anything changes. Yet literally everything I need to know to write it is already in the GQL query document. The code generator already generated this constant and type for me:

export type CreateOpeningMutationVariables = Exact<{
  teamId: Scalars['Int'];
  name: Scalars['String'];
  body: Scalars['String'];
  whatToShow: Scalars['String'];
}>;

export const CreateOpeningDocument = gql`
    mutation CreateOpening($teamId: Int!, $name: String!, $body: String!, $whatToShow: String!) {
  createOpening(
    teamId: $teamId
    name: $name
    body: $body
    whatToShow: $whatToShow
  ) {
    success
    error
  }
}
    `;

It’s not a stretch to get it to generate the wrapper around gqlMutate as well.

Normally, the drawback of this approach is having to parse the source language, in this case GQL. But GQL code generator already does that, and it already generates a lot of useful types for you. All you need to do is loop through your GQL documents (which arrive as parsed, typed objects) and write out the code. For details, see the plugin documentation.
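
To give a flavor, here is a minimal sketch of such a plugin. It assumes the gqlMutate helper and the generated gql namespace from above; the plugin API hands you the schema and the parsed documents, and you return the generated source as a string.

import { PluginFunction } from "@graphql-codegen/plugin-helpers";

export const plugin: PluginFunction = (schema, documents) => {
  const output: string[] = [];
  for (const file of documents) {
    for (const def of file.document?.definitions ?? []) {
      // Only generate wrappers for named mutations
      if (def.kind !== "OperationDefinition" || def.operation !== "mutation") continue;
      const name = def.name?.value; // e.g. "CreateOpening"
      if (!name) continue;
      // The mutation's root field (e.g. "createOpening") is the
      // property that gqlMutate pulls out of the response
      const field = def.selectionSet.selections[0];
      const responseProperty = field.kind === "Field" ? field.name.value : "";
      output.push(`
export async function request${name}(
  apolloClient: ApolloClient<object>,
  variables: gql.${name}MutationVariables,
  onSuccess: (response: gql.SuccessResponse) => void,
  onError: (err: string) => void): Promise<gql.SuccessResponse> {
  return await gqlMutate(
    apolloClient, gql.${name}Document,
    "${responseProperty}",
    variables,
    onSuccess,
    onError);
}`);
    }
  }
  return output.join("\n");
};

Run over all of my documents, something like this writes every request wrapper for me and stays up to date as mutations change.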

There is too much in my generator to explain fully here (write me if you need more help). But, if you are calling GQL with Apollo and Typescript from your front-end, consider code generation. And, if you find yourself writing non-type-safe wrappers, know that you can extend the generator with plugins to make them type-safe as well.

Why I am Using Code Generation Again, Part I

In the nineties, I used Rational Rose and Perl to generate a lot of C++ code. I’m not sure if I knew the term at the time, but we also created DSLs (Domain Specific Languages) that probably fell into the “… poorly specified, half-implemented Lisp” trap.

Back then, I would say that I used code generation to overcome the limited expressiveness of my programming language. There was a lot of boilerplate in C++ (and later Java and C#) that I didn’t want to write—I didn’t even want it to be part of the source. I considered the DSL to be the source in these cases.

When I moved on to iOS development, I used code generation sparingly. It somewhat complicates the build process, and the Swift language designers prioritized expressiveness enough to make it less necessary.

But, recently, I’ve begun to use code generation more while using Typescript. Typescript is a lot like Swift, so it’s expressive enough, but the main difference is that I am using Typescript to access things that don’t have types in their built-in API (SQL and GraphQL), but do have an underlying type system.

I decided to use TypeORM and TypeGraphQL, which let me define types that map onto the database and are passed around in the GQL, so on my backend I have a strongly typed API to both. These systems are based on “decorators” to tackle the problem of boilerplate code. But TypeORM also generates the SQL to build the schema and each migration as I add to it, so there is code generation here as well.
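
To make that concrete, here is a minimal sketch of the style (Opening is a made-up entity): one class carries the TypeORM decorators that define the table and the TypeGraphQL decorators that define the GQL type.

import { Entity, PrimaryGeneratedColumn, Column } from "typeorm";
import { ObjectType, Field, Int } from "type-graphql";

// One class declares both the SQL table (TypeORM)
// and the GQL object type (TypeGraphQL)
@Entity()
@ObjectType()
export class Opening {
  @PrimaryGeneratedColumn()
  @Field(() => Int)
  id!: number;

  @Column()
  @Field()
  name!: string;
}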

This is great for the backend. My server ends up exposing a GQL API, which can also provide a schema. But I still had the problem of getting a typed API to GQL from my front-end. For this, I ended up using GraphQL Code Generator, which can connect to a GraphQL server, read the schema and your GQL queries, and generate a nice typed API, including React hooks for fetching your data.

Even with all of this, I didn’t think of myself as generating code. I was not writing parsers or generators—I was just using a system that happened to generate code. But having this system in my build flow already (and the fact that GQL code generator can have plugins) made it inevitable that I would eventually be generating my own code.

GQL Codegen is already doing all of the heavy lifting of getting the schema and parsing it—you can write a new plugin in a few lines of code. Once I wrote the plugin, I could delete hundreds of lines of boilerplate code around doing GQL mutations without having to resort to typeless wrappers. I’ll provide some details in Part II.

Seeing Code

There’s a saying that code is written to be read by programmers and incidentally to be run by a computer. I would argue that that doesn’t go far enough. The goal shouldn’t be readability; it should be to make it so you don’t have to read it.

When I have been working with a codebase for a while, I don’t need to read the code all of the time. I see it. I’m not perceiving it serially as lines of code—it’s more like seeing it all at once.

Some of this comes from familiarity. It’s in my head already. The code itself is a reminder. It’s something that comes from flow and extended time with the code.

But, it’s mostly a function of the code itself. Some code is easier to perceive this way.

What helps me is if the code is made up of memorable and predictable chunks. In a seeable codebase, I read one file and just know how the others would look without having to read them. When I open another one, it looks how I pictured it.

This idea is related to System Metaphor from XP:

The system metaphor is a story that everyone – customers, programmers, and managers – can tell about how the system works. It’s a naming concept for classes and methods that should make it easy for a team member to guess the functionality of a particular class/method, from its name only.

and Convention over Configuration from Ruby on Rails:

The same goes even when you understand how all the pieces go together. When there’s an obvious next step for every change, we can scoot through the many parts of an application that is the same or very similar to all the other applications that went before it. A place for everything and everything in its place. Constraints liberate even the most able minds.

and the Principle of Least Astonishment:

The principle aims to leverage the existing knowledge of users to minimize the learning curve […]. In more abstract settings like an API, the expectation that function or method names intuitively match their behavior is another example. This practice also involves the application of sensible defaults.

Astonishment is perhaps the best way to perceive code that resists being seen. It may be perfectly readable and understandable, but it should also already be predictable. If it surprises you, then you should ask yourself why.

The Unreasonable Effectiveness of Mathematics in Programming

(with apologies to Eugene Wigner)

My college didn’t have a CS major, but they let me put together one under a general engineering program. To fill out the requirements, I had to take a lot of math, which has been more useful in my programming career than I expected.

I didn’t seek out jobs that required a lot of math. I optimized my job search around small companies doing product development for B2B, and didn’t care much about the specific technology they used. But, I was comfortable with the math, so it made my life easier.

The first eight years of my career were in FinTech, and the software I wrote was a nice UI around a lot of math. The core concept was options pricing (probability and statistics) and the sensitivity of that price to its inputs (calculus and differential equations). To do risk analysis, you have to build up huge matrices (linear algebra) for various purposes. Our company employed mathematicians, so we didn’t do the research, but we had to understand it to work on those parts.

Later, I contributed to a patent related to spreadsheets, where graph theory was important. I also implemented numerical differentiation and root-finding algorithms as a way to run expressions backwards (numerical analysis and calculus). That patent expired, so I am reimplementing it in Swift and Typescript.

In 2005, I did a consulting project to implement a distributed Monte Carlo engine for a decision support system. I would not have won the bid if I had not understood the math behind the engine.

From 2006-2013, I worked at an image processing tools vendor. This job was the closest to pure CS that I have had, and there was a lot of math, specifically linear algebra, but also some numerical analysis.

Every front-end position I have had uses at least a little linear algebra (for affine transformations). It’s not like you are doing the matrix multiplication yourself, but you’ll understand the more complex compositions better if you understand the underlying matrices. For example, if you know that matrix multiplication is not commutative, you’ll get why the order of the transformations matters.
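
Here is a small made-up sketch of that point, using the 2x3 affine matrix convention that canvas and CSS use:

// Affine transform as [a, b, c, d, e, f] (the canvas/CSS convention)
type Affine = [number, number, number, number, number, number];

// Compose two transforms: apply `first`, then `second`
function compose(second: Affine, first: Affine): Affine {
  const [a1, b1, c1, d1, e1, f1] = first;
  const [a2, b2, c2, d2, e2, f2] = second;
  return [
    a2 * a1 + c2 * b1, b2 * a1 + d2 * b1,
    a2 * c1 + c2 * d1, b2 * c1 + d2 * d1,
    a2 * e1 + c2 * f1 + e2, b2 * e1 + d2 * f1 + f2,
  ];
}

const translate: Affine = [1, 0, 0, 1, 10, 0]; // move right 10
const rotate90: Affine = [0, 1, -1, 0, 0, 0];  // rotate 90° counterclockwise

// Not commutative: the two orders give different results
console.log(compose(translate, rotate90)); // [0, 1, -1, 0, 10, 0]
console.log(compose(rotate90, translate)); // [0, 1, -1, 0, 0, 10]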

Nearly every programming job now requires you to understand the analytics data that the software generates and to do statistical analysis on it. Forming a hypothesis and getting evidence to support or reject it is essential. At a bigger company, you might have a data-science team to do the heavy lifting, but it really helps if you can do it yourself—you also want to be able to read and understand their reports.
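
As a toy example of the kind of analysis I mean, here is a two-proportion z-test comparing conversion rates in an A/B experiment (the numbers are invented):

// Compare two observed proportions (e.g. conversion rates in an A/B test)
function twoProportionZ(successA: number, nA: number, successB: number, nB: number): number {
  const pA = successA / nA;
  const pB = successB / nB;
  // Pool the samples to estimate the standard error under "no difference"
  const pPool = (successA + successB) / (nA + nB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  return (pA - pB) / se;
}

// |z| > 1.96 rejects "no difference" at the 5% level
console.log(twoProportionZ(120, 1000, 150, 1000).toFixed(2)); // "-1.96", right at the boundary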

If you really want to go deep into the type theory behind type-safe languages (like Swift or Typescript), you have to understand set theory and maybe even HoTT. You don’t need it to use these languages, but if you have an interest in compiler theory or implementing a language like this, it would help. Understanding set theory also helps with understanding relational databases.

When I was trying to find a Swift implementation of numpy a few years ago, I ended up finding Surge and contributed Eigen decomposition to it. I had to relearn the math, but I would not have even tried if I hadn’t studied linear algebra in college.

Games are essentially physics simulators, which are ultimately math engines. I only write simple games as a hobby, but even for pong, I had to write vector math functions.
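
A made-up sketch of the kind of vector helpers I mean:

type Vec2 = { x: number; y: number };

const add = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x + b.x, y: a.y + b.y });
const scale = (v: Vec2, s: number): Vec2 => ({ x: v.x * s, y: v.y * s });
const dot = (a: Vec2, b: Vec2): number => a.x * b.x + a.y * b.y;

// Bounce the ball: reflect velocity v off a wall with unit normal n,
// using v' = v - 2(v·n)n
const reflect = (v: Vec2, n: Vec2): Vec2 => add(v, scale(n, -2 * dot(v, n)));

console.log(reflect({ x: 3, y: -2 }, { x: 0, y: 1 })); // { x: 3, y: 2 }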

And, although I think my career has used a somewhat typical amount of math, there are programming jobs that require a lot more. A deep neural network’s guts are a calculus and linear algebra engine (see this video from 3blue1brown). As I mentioned, data science makes heavy use of probability and statistics. I learned in Quantum Country that the “programming language” of a quantum computer is based on matrix multiplication with complex number entries. And while writing a game involves a reasonable amount of math, writing a game engine involves much more—and as more things (the blockchain, games, machine learning) have moved to the GPU for computing, the way you think about implementing solutions in those domains has become more mathematical.

To be clear, you can do a lot of programming without understanding any of the math behind it. But, I have found it more enjoyable to go a little deeper, and it makes it easier to develop intuition about the higher level concepts.

In (weak) Defense of Algorithmic Tech Interviews

Yesterday, I wrote about my preference for work simulation tech interviews. There was one place I worked where we did more algorithmic tech interviews. Here’s why:

  1. We were hiring almost exclusively people who had just graduated from a CS program. They were actually pretty good at these kinds of questions, and there wasn’t much else that we could ask them about.
  2. The company made an image processing toolkit. There was a lot of data-structure and algorithmic code. There were a lot of optimization tasks. You really did need to know big-O for your code because all images had millions of pixels.
  3. All of our code was in C#, and basically no one knew it coming in. So we hired people who could program in any language and needed questions that worked in any of them.

So, the questions I asked were not too far removed from the kinds of problems we did need to solve, but they were technically algorithm/data-structure questions. At that job, with that codebase and with the strategy to hire new CS grads, I would do the same thing.

How to do a Work Simulation Tech Interview

I have written about how tech interviews are too much like auditions.

If you want to be in an orchestra, you need to audition. That makes sense because an audition is close to what the job actually is—a performance. This also makes sense for actors, comedians, dancers, etc.

A typical tech job is not a performance. For one, there is no audience. And, unlike a performance, we make tons of small mistakes and corrections along the way. Imagine a band performing a song like we usually program—it’d be a mess and not very entertaining (or very entertaining if you think of it as avant-garde).

But, I do think there is value in seeing a candidate actually do the job you are hiring them to do, so what I have tried to do is to simulate work as much as possible.

Here is what I recommend:

  • Describe it as pair programming. Unlike performances, this is something programmers actually do. Tell them that they are driving, and that you will be a helpful pair. This means that if you correct something, that isn’t bad—it’s expected. You should correct things you don’t care about (and let them find the things that you do care about). Set up an expectation that mistakes are ok.
  • Start with working code, not a blank slate. Send them a small working project that is like your actual code. For me, that was a working iOS app with one screen (all code was in one file). Run the code and show them what it does. If you don’t require a specific language/framework, then this is harder, but perhaps you could prepare more than one. We had both Objective-C and Swift versions of the interview app when Swift was still new.
  • Tell them what to expect beforehand. Before the interview, tell them exactly what the interview is going to be. “I will send you a project that does XXX using (language, framework, etc), you need XXX to run it, and then we’re going to add small features and do other things to it.”
  • Ask a question that makes them read the code. The first question should be to do something extremely easy with the code that makes them have to read a good portion of it.
  • Set an expectation that the code they write is a first draft, not production. Tell the candidate that this is “interview code”, not real production code, and we don’t have time to do everything that they would do. But, ask them to tell you what they would do if they had more time. If you want to see them do it (i.e. it’s part of the interview), ask them to do it—otherwise, consider it done.
  • Answer any question that they would normally google. Tell them that you don’t expect them to have memorized everything, so if they would normally google something, to just ask you and you will tell them. This is just to speed things up.
  • Remember that coding is one dimension of the candidate. Being able to code is important, but it’s not the only thing that will determine success at the job. Pay close attention to collaboration and preparation (since you told them what to expect) and other aspects you find important.
  • Don’t treat the coding interview as a gate. If you give a coding interview first and immediately reject people, then you are going to miss out on a lot of good candidates. Make the goal to rate them on a few dimensions and pass them if they could do the job. Then, later in the process, compare the candidates on all dimensions of the job that you tested.

The goal of the technical interview is to find out if the person can do the coding aspects of the job. So, the closer the interview is to what the job actually requires, the better it is at answering that question.

So many choices, Part 2

Yesterday, I lamented that web development hadn’t come to any consensus on tooling, programming language, basic UI framework, or anything else.

I was talking about front-end development, but it’s true on the back-end as well.

Here, I’m not so sure if we could ever get consensus. On the front-end everyone is running in a browser and ultimately has to provide HTML, CSS, and Javascript. I was hoping that would have driven the industry towards one obvious choice.

On the back-end, everyone has been using their tool of choice for decades, and using Javascript (via node) is a good choice now, but it’s not close to universal. And if you are only working on the backend there isn’t a requirement to know Javascript at all, so there are lots of teams that don’t care about matching languages.

For me, using one language on both the front-end and back-end is such a compelling benefit that I’ve moved to node for anything new. In practice, I haven’t had much code sharing, but I am flipping back and forth so much while developing that having just one language in my brain is reason enough.

And since I use GraphQL, Apollo is again the obvious choice.

Before this, I was using Django and Python for my backends, and I like working with an ORM for SQL data, so I decided on TypeORM. There seemed to be one or two other viable options as well. TypeORM can be DB-engine agnostic (depending on what you use), so setting up an in-memory, sqlite version for unit testing is pretty easy, as sketched below.
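
Here is a minimal sketch of that test setup (the entity import is hypothetical):

import { Connection, createConnection } from "typeorm";
import { Opening } from "./entity/Opening"; // hypothetical entity module

// An in-memory sqlite DB for unit tests: fast, isolated, nothing on disk
export async function createTestConnection(): Promise<Connection> {
  return createConnection({
    type: "sqlite",
    database: ":memory:",
    entities: [Opening],
    synchronize: true, // build the schema from the decorated entities
  });
}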

With Apollo and TypeORM, TypeGraphQL is a nice addition. It lets me describe my entities in one place, in a DSL that generates my DDL and queries and also encodes them for GQL/Apollo automatically. If you don’t do this, a lot of your GQL code isn’t going to be type-checked.

My backend is essentially just a DB with GQL in front of it right now, so there isn’t much else to decide. Since I use the same language as my front-end, I can use eslint, prettier, and jest here as well.

The one thing I miss from Django is the automatic admin console and user management. In the end, I do think my admin interface needs to be more custom, but an automatic one is useful at the start until you get around to making a real one. I looked at AdminJS, but unfortunately I had already committed to some TypeORM choices that it couldn’t support. If I thought I’d be on AdminJS long-term, I might have backed out of those choices.

In any case, the choices on the back-end mostly depend on what language you choose. As an iOS developer, I never really had the option of using one language for the full stack—I usually want to make native apps, and Swift isn’t ready for the server-side in my view. It’s nice to have this option, and it would make me reconsider React Native if I need a mobile app for this project.

So many choices

Up until about March this year, I was primarily an iOS developer and had been for a while. But, this year I decided that the things I want to build would be best built on the web, and so I started to learn modern web development. I was struck by the lack of consensus about what that means.

My goal is to build a real web app, not just to learn. I already know HTML/CSS/JS going into this. If your goal is to get started or just to learn, I’d start with HTML/CSS/JS and maybe React or Vue and not worry too much about the rest yet.

Right now, I’d say, if you want to be an iOS developer, you download Xcode, write in Swift, use UIKit for the UI and the standard components and libraries, and you can build professional apps. There’s built-in support for most things that you need, and the Apple implementations are excellent. All of the apps I have worked on are not much more than this.

On the web, I’d say the one consensus choice is to use VSCode, but then it starts bifurcating quickly after that.

The default language of the web is Javascript, but you can use anything that transpiles to it. I have decided on Typescript, but I think Clojurescript is a reasonable choice. There are probably a few others.

At this point, you basically have nothing. The browser provides an HTML renderer, very basic components, and a limited library. You are going to need to add dependencies to do anything beyond the basics (you had to do this even to get Typescript).

The first choice I had to make was what UI framework to use. I chose React, but people I trust also use Vue. There are a few others that seem reasonable, but React seemed a good fit for me. By the way, if you want to get started with React, I recommend Pure React, a book that explains React in isolation.

It might be overkill, but I decided to use Redux to manage state. React has something built in now, but I like things that have been around a while.

I also needed a design system — something with ready-made components. I didn’t find anything that great, but chose Material-UI in the end as it seemed the best supported.

I wanted to like Microsoft’s Fluent, but I couldn’t get it to look right for me. I also could not get Ant to work at all (I’m a newb at this). Every big company seems to promote one of these — my old employer, Atlassian, has ADG, which looks ok. Salesforce has one too. But they all seem to have a different idea about what a reasonable amount of markup is for default usage. With Material-UI, it’s usually a simple tag with a few attributes to get something on the screen — these other systems are way more complex. There is also a vibrant commercial market for components. I will probably revisit this after my MVP is done.

I decided on GraphQL as my API basis, and so I got another easy choice: Apollo. I also added GraphQL codegen because it can turn GQL into a Typescript API (with typechecking).

It doesn’t stop there. I have eslint, prettier, and jest as my code quality tools. I think those are default choices, but there were others. In iOS, code formatting and unit testing are built in.

And this is all just to get something minimal working. I have also had to add in some other components for specific features.

I have also checked in with people in my network that do this for a living, and I haven’t found two that have made the same choices. Except for VSCode — that does seem universal.

Tomorrow, I’ll talk about the backend, but there is no comparison with iOS there, since you’d have to make these choices as well for a server-backed app.

Socially acceptable cameras in AR

At some point, there’s going to be an AR device that looks exactly like glasses. I hope that they don’t have cameras. If they do have a camera, then they also need an indicator that it’s on. Walking around NYC, for example, with people wearing the glasses equivalent of 2000s-style white iPod earbuds is probably creepy. There’s no way around that if they also have lights and cameras.

But AR needs reality to augment, and a lot of reality is perceived visually. So, without a camera, these glasses will be a lot less useful. If you look at Apple’s AR features as a guide to how it feels about cameras, you can see that they are pro-camera. At WWDC 2021, they demoed a feature where you can orient yourself in a city by pointing your camera at the surrounding buildings. This would be very useful in a heads-up display.

So, we will probably have the front-end of a camera to take in visual information. But, AR could still be very useful without ever producing a visual from the hardware.

One obvious (and currently available) representation is a depth map. Apple is testing out LIDAR on iPads and uses a depth sensor on iPhones for face detection. The representation is a mesh, not an image, so it might be acceptable, and it is useful for a lot of AR.

Another thing they could do is pre-process the video feed through the first few layers of a neural network and only expose those activations. The first few layers of an image-processing network usually do some down-sampling and feature extraction. I’m not an expert, but if these activations cannot be reversed back into the original image, they might be acceptable. With current feature detectors, you can recover some bits of the image (see this), so there’s some work to do. But even if the technology is ok, it’s a big public education project to make it socially acceptable.

But Apple has shown some willingness to talk about its privacy-protecting, “provably cryptographically safe” technologies and open them up to third parties for verification, so maybe they’d be willing to go this route to get a “camera” into their AR glasses.

Apple Fall 2021 Event Wishlist

I’ve done a bunch of WWDC wishlists (e.g. 2021, 2020, 2019), but I haven’t done one for the main hardware event, which is this Tuesday.

I’m sure that the iPhone, Apple Watch, and maybe even the iPad (or Macs) will get nice improvements, but I can’t think of anything more I’d want. I am on the iPhone upgrade program, so I’ll end up with a new phone regardless. And the trade-in value on watches usually makes updating a reasonable option.

So, the main thing I’d hope for is something in AR. I’ve written about how I think AR could make apps more like games, and I do think that there’s space for a workout AR device. I would love to extend Sprint-o-Mat to make it feel like you’re in a race against the pace-runner. It would also be a good addition to Fitness+, which could extend to outdoor activities.

It feels inevitable that there will be something in AR eventually from Apple. I think one social issue is what to do about cameras on AR devices, which I will address tomorrow.