When I first wrote Robotic Pair Programmers, I was imagining that an AI code assistant would be more of a side channel, not something inside autocomplete. To be fair, I was programming mostly in Xcode at the time and had not yet experienced VSCode, so my conception of how fast an IDE could be was antiquated.
I do think that the autocomplete integration is genius, but I have also been wanting an audio side channel to my work that would be more like a human assistant. Think of it as a ChatGPT-enabled Siri, but still fully integrated into my IDE.
Here’s a script of what that might look like. (NOTE: the first draft of this script was generated by ChatGPT, which I point out because I think it’s interesting in this case. I edited it to fit the article better.)
Programmer: “I want to do a binary search in Python. Can you help?”
AI assistant: “Sure. One way to start is to define a function called `binary_search` that takes in a sorted list and a target element. Inside the function, you can initialize variables called `left` and `right` to the beginning and end of the list, respectively.”
Programmer: “Okay, I’ve defined the function. What’s the next step?”
AI assistant: “Next, you can use a while loop to iterate as long as the left index is less than or equal to the right index. Inside the while loop, you can calculate the middle index by taking the average of the left and right indices.”
Programmer: “Got it. And if the element at the middle index is the target element, I can return the index. What do I do if the element at the middle index isn’t the target element?”
AI assistant: “If the target element is less than the element at the middle index, you can set the right index to be the middle index minus one. If the target element is greater than the element at the middle index, you can set the left index to be the middle index plus one. This way, the while loop will continue until the target element is found or the left and right indices cross each other.”
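For reference, here’s a minimal sketch of the function that dialogue builds up (the `-1` return for a missing target is my addition, not something from the script):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if it's absent."""
    left = 0
    right = len(items) - 1
    while left <= right:
        mid = (left + right) // 2          # middle index, the "average" of left and right
        if items[mid] == target:
            return mid                     # found it
        elif target < items[mid]:
            right = mid - 1                # target is in the lower half
        else:
            left = mid + 1                 # target is in the upper half
    return -1                              # left and right crossed: target isn't in the list


print(binary_search([1, 3, 5, 7, 9], 7))   # 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -1
```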
I would expect that the assistant would make incorrect assumptions or mistakes and then the programmer would clarify.
More importantly, when the programmer is programming, the AI assistant will still be making suggestions via autocomplete, but it is now much more aware of the goal, so we’d expect the suggestions to be better.
The much bigger win will be when the assistant doesn’t wait for my requests, but interrupts me to help me when I am doing something wrong. To continue the `binary_search` example, if I set `left` to the middle index (off by one), then the assistant would let me know my mistake via audio (like a human pair would).
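Concretely, the difference is one character. This is a hypothetical sketch of the slip and the correction the assistant would call out:

```python
# Inside the while loop, when the target is greater than items[mid]:
left = mid      # the off-by-one mistake: can loop forever once the range narrows to one element

# What the assistant would point out over audio:
left = mid + 1  # skip the middle element, which we already checked
```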
Just like in Assistance Oriented Programming, I think the key is to get intent into Copilot as early as possible.
Addendum
This example is simple, but I generated lots of interesting scripts in ChatGPT where the programmer and assistant collaborated on
- Testing the binary search
- Doing quicksort together, where I asked ChatGPT to have the assistant make incorrect assumptions that get corrected
- Building a burndown chart in a web based bug tracking program
They were all interesting, but I didn’t include them because that isn’t the point of the article.