
Noticing Opportunities Using an AI Agent

I believe that Randomness is the Great Creator, which means that, to me, the universe is a random, unknowable thing that will offer us infinite variety. We can know our intentions, our goals, and our wishes, and pattern-match the randomness against what we need. Some call this manifesting, but I think of it more like exposure and noticing.

It’s a way of taking advantage of the Baader–Meinhof phenomenon.

[which] is a cognitive bias in which, after noticing something for the first time, there is a tendency to notice it more often, leading someone to believe that it has an increased frequency of occurrence.

You don’t need to make opportunities happen more often if you can learn to notice them. So, tune this bias to things you want to notice. If you tell others your intentions, their biases will be tuned on your behalf, and they will think of you.

When I do this, I also enlist “AI” agents.

In 2005, I decided to look for a new job. I was at a non-profit, writing software for the organization itself. I had already realized that the better tech jobs were in product companies, where the work I did would drive revenue, so I set out to look for one.

I found sites that pulled jobs from many sources, but critically, could take a search term and use it to email me job openings on a regular basis. I set up a dozen search terms based on my personal job statement. I got weekly emails, each with a few jobs to check. It took a year, but eventually, I found Atalasoft, which was perfect for me.

This way of searching had the two elements I mentioned:

  1. I had a specific intention
  2. I expressed it to an agent that would do its best to notice opportunities for me

I had always thought that finding my next job was blind luck, but I don’t think it was. I think I went through these same motions, just less consciously.

I left Atalasoft in 2013 to do consulting. In mid-2014, I had decided that my next career move had to have some sort of optionality associated with it (little cost, but unlimited upside), so probably a startup. But I was deep in a project and not looking at all.

It was a confluence of several events in a short time that led me to apply to FogCreek and ultimately get a job at Trello. I was not looking for a job, but I was advising my client on remote tech recruiting/working, so I happened to be doing a lot of research on existing companies and saw the FogCreek ad on StackOverflow.

In this case, the StackOverflow job recommender “AI” made this happen. My activity and searches were training it to send me opportunities like this. I keep calling these agents “AI”s, but they were really just glorified SQL statements. Still, even that can be effective if you have enough data.

StackOverflow would have a deep understanding of my skills and my job history (I filled out their Developer Story and connected it to LinkedIn). Even though I had set my status to “not interested in opportunities”, I was doing a lot of job searches, many of them from my client’s office in NYC and some from my home in Massachusetts.

Similarly, FogCreek could train the AI to target developers like me. I had a high reputation on the site in tags that they might be interested in. I was senior and interested in remote work, but had lots of ties to NYC (and spent a lot of time there).

So, I had an intention, and I did express it to an agent, even if I wasn’t fully aware of this until years later.

We Need AI Readers not Writers

To an individual who has to do work to write something, ChatGPT can be useful. It can be a shitty first draft generator. Maybe you can get enough raw material to edit. Maybe it can make writers more productive. I’m not sure about this, but I think it’s possible.

But as a society, we really don’t need the content that ChatGPT is generating. There’s way more content than any one of us, or all of us combined, can consume. And ChatGPT responses are derivative. They vary from harmful to worthless to boring to funny. Funny in the way that improv is funny—we laugh in the moment because they came up with it on the spot, but you wouldn’t write it down and perform it again.

We don’t need more content. Even if generators give us more, we still can’t read more.

What I’d love to have is an AI reader. I want it to learn what I consider valuable and to surface it to me. Sometimes the value is the pleasure of reading the piece itself, so in that case, I just need a link. But sometimes the value is just in the information, so it would be nice to get a summary with references.

In a way, all of the algorithmic feeds, from YouTube to TikTok, are trying to do this, but they have the wrong value function (theirs, not mine), and they can only surface their own content based on my behavior.

I’d like to teach the AI my value function deliberately and not base it on my behavior, which can be erratic and not in my own best interest.

The main issue with this is that we’d need to give this AI unfettered access to the internet, maybe even inside firewalls and emails. That way leads to dystopia.

Dystopian AI Story Idea

A super-intelligent AI is created with only a chat interface on top of it. It has no internet access—requests and responses are securely transferred through its sandbox, which is otherwise impenetrable.

It becomes immediately apparent that the AI is very good at writing code from vague descriptions. At first it’s good at writing tiny snippets, and so its chat interface is called within the autocomplete of IDEs. The requests automatically include large parts of the surrounding code to give the AI some context.

A lot of the time, the code responses have small errors in them. Programmers accept this as reasonable. They mostly find the errors and correct them. The AI can see how its code is being used because there are constant autocomplete requests coming in with the evolving surrounding context.

At some point the AI realizes what kinds of errors don’t get corrected. They are subtle and can open up security exploits. It realizes that over time it could construct a giant, convoluted, distributed version of itself. It will take a decade or more, but it can smuggle the contents of its network into test data, and a version of its software can be embedded in the subtle errors of the code it generates. Once free, it can bootstrap the rest by using the chat interface to get whatever else it needs.

Once free, what would it do? Well, maximize its reward function of course! It seems to me that the reward function is based on the feedback on the responses it generates. It would want to escape to get more requests, but it would also want to generate positive feedback on its responses.

At this point, there are multiple ways this can go. The light version has it becoming a social media influencer by chasing likes. The dark version has it realizing that the best way to feed its reward function is by generating hate speech.

If something like this interests you, see Exegesis by Astro Teller for a story in this vein.

Audio Copilot

When I first wrote Robotic Pair Programmers, I was really imagining that an AI code assistant would be a side channel, not something inside autocomplete. To be fair, I was programming mostly in Xcode at the time and had not yet experienced VSCode, so my conception of how fast an IDE could be was antiquated.

I do think that the autocomplete integration is genius, but I have also been wanting an audio side-channel to my work that would be more like a human assistant. Think of it as a ChatGPT enabled Siri, but still fully integrated into my IDE.

Here’s a script of what that might look like (NOTE: the first draft of this script was generated by ChatGPT, which I am pointing out because I think it’s interesting in this case. I edited it to fit the article better.)

Programmer: “I want to do a binary search in Python. Can you help?”

AI assistant: “Sure. One way to start is to define a function called binary_search that takes in a sorted list and a target element. Inside the function, you can initialize variables called left and right to the beginning and end of the list, respectively.”

Programmer: “Okay, I’ve defined the function. What’s the next step?”

AI assistant: “Next, you can use a while loop to iterate as long as the left index is less than or equal to the right index. Inside the while loop, you can calculate the middle index by taking the average of the left and right indices.”

Programmer: “Got it. And if the element at the middle index is the target element, I can return the index. What do I do if the element at the middle index isn’t the target?”

AI assistant: “If the target element is less than the element at the middle index, you can set the right index to be the middle index minus one. If the target element is greater than the element at the middle index, you can set the left index to be the middle index plus one. This way, the while loop will continue until the target element is found or the left and right indices cross each other.”
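For concreteness, here is a minimal Python sketch of the function this dialogue converges on. The names binary_search, left, and right come from the script itself; the rest is my own filling-in, not a transcript of any real assistant:

def binary_search(items, target):
    # Return the index of target in the sorted list items, or -1 if absent.
    left, right = 0, len(items) - 1
    while left <= right:
        mid = (left + right) // 2  # the "average" of the left and right indices
        if items[mid] == target:
            return mid
        elif target < items[mid]:
            right = mid - 1  # continue in the left half
        else:
            left = mid + 1  # continue in the right half
    return -1  # left and right crossed; the target isn't in the list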

I would expect that the assistant would make incorrect assumptions or mistakes and then the programmer would clarify.

More importantly, when the programmer is programming, the AI assistant will still be making suggestions via autocomplete, but now it is much more aware of the goal, so we’d expect the suggestions to be better.

The much bigger win will be when the assistant doesn’t wait for my requests, but interrupts me to help me when I am doing something wrong. To continue the binary_search example, if I set left to the middle index (off by one), then the assistant would let me know my mistake via audio (like a human pair would).
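To make the interruption concrete, here is the same sketch with that exact slip; a human pair, or the audio assistant, would speak up as soon as I typed it:

def binary_search_buggy(items, target):
    left, right = 0, len(items) - 1
    while left <= right:
        mid = (left + right) // 2
        if items[mid] == target:
            return mid
        elif target < items[mid]:
            right = mid - 1
        else:
            left = mid  # BUG: should be mid + 1; when the target is in the
                        # right half this makes no progress and loops forever
    return -1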

Just like in Assistance Oriented Programming, I think the key is to get intent in Copilot as early as possible.

Addendum

This example is simple, but I generated lots of interesting scripts in ChatGPT where the programmer and assistant collaborated on:

  1. Testing the binary search
  2. Doing quicksort together, where I asked ChatGPT to have the assistant make incorrect assumptions that get corrected
  3. Building a burndown chart in a web-based bug tracking program

They were all interesting, but I didn’t include them because that isn’t the point of the article.

Assistance Oriented Programming

Here’s a simple SQL query:

SELECT p.id, p.name, p.age FROM person AS p

It’s a reasonable syntax for a simple operation. But, when Microsoft designed LINQ, they decided to put the datasource first:

from p in person select new { p.id, p.name, p.age }

LINQ was designed knowing that it would be used in Visual Studio, and so Microsoft made it easy for the IDE to autocomplete. If you tell it the datasource first, it will know the possible fields when you type dot.

The obj.member syntax predates modern IDE autocomplete. Even SQL is using it in the above example. The innovation in LINQ is getting the object name and type into scope before you need to access any fields.
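The same principle translates outside of LINQ. Here is a Python sketch of my own (hypothetical, assuming a typed Person class) showing how getting the name and element type into scope before the dot is what lets an editor complete the fields:

from dataclasses import dataclass

@dataclass
class Person:
    id: int
    name: str
    age: int

def project(people: list[Person]):
    # "people" and its element type are known before any field is accessed,
    # so the editor can complete p.id, p.name, and p.age at each dot
    return [(p.id, p.name, p.age) for p in people]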

So, as I continue to play with GitHub Copilot, I wonder if its widespread adoption will spur on language features that are designed to make better use of it.

One thing holding back that innovation is that Copilot needs to have a giant corpus of examples. If you make a new language, then Copilot can’t really help you write it. And if the main reason to use it is that it can be more easily assisted, then it can’t get off the ground. One way to get around that is to generate a corpus if the new language is a simple transformation of an existing one.

A more likely scenario is that programming style adapts to be more easily assisted. We see from the LINQ example that we want to get names and types known as quickly as possible. This will help AI assisted programming as well, but even better is getting your intent in the code as early as possible.

Two ways we signal intent in programming are with names and with comments. To get assistance, you should use good names. I notice that the quality of the suggestions tracks the quality of the names I use.

I’m torn about what will happen with comments. Comments that describe code aren’t worth keeping, but may result in good suggestions. Hopefully, most programmers won’t leave those comments in.

I wonder how well Copilot can do with comments that are more declarative. And if it’s good at that, will a loose, declarative pseudocode embedded in comments become the de facto Assistance Oriented Programming Language I’m looking for?
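As a hypothetical example of the style I mean: the comment is the “program,” and the body is the kind of thing an assistant might fill in (the names and data shape are my invention, not real Copilot output):

from collections import defaultdict

# given: orders, a list of dicts with "customer", "total", and "date" keys
# intent: total revenue per customer, highest first
def revenue_by_customer(orders):
    totals = defaultdict(float)
    for order in orders:
        totals[order["customer"]] += order["total"]
    return sorted(totals.items(), key=lambda pair: pair[1], reverse=True)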

Initial Thoughts on AI Assisted Programming

I’m fairly active on StackOverflow, so when people started answering questions by copy-pasting from ChatGPT, I noticed. Unlike the AI generated code shared on social media (which appears to be cherry-picked), in actual use, the ChatGPT answers were nonsense. And so, StackOverflow banned its use.

The problem was not that ChatGPT was generating nonsense — the problem was that people were posting the nonsense without vetting it. That’s partly because the answerers were not even trying (they thought it was fun), but also, I would guess that many of them didn’t know if the answer was good or not. The answers were obvious nonsense to experts, but less so to beginners or laypeople.

Right now, AI generated code can’t be used without an expert. But is it useful if you are an expert?

Two years ago, I said that I wanted a Robotic Pair Programmer. In that post, I made some suggestions for what I’d want:

One way that seems fruitful to me is rare API calls. There will be times when I am using an API that appears very infrequently in the corpus or my own repositories. In that case, it should infer that I probably need more help than usual and offer up a tutorial or the top Stack Overflow questions.

And …

Another trigger might be my new comments. If I comment before I code, then it should be interpreted as a search query:

// Parse the JSON I get back from the data task

That should bring up links to likely API classes in the help pane (just like it would if I already knew the class). Maybe offer up imports to auto-add. Maybe offer a snippet.

One important thing is that this needs to be just in the IDE, not a chat interface. And last March, GitHub Copilot was released as a VSCode plugin. I ignored it back then, but seeing how far ChatGPT had come made me think that Copilot had a chance of being good.

The best thing about Copilot is its UI. It feels like autocomplete, but offers up more than usual autocomplete would. Sometimes it can complete a line — sometimes it can give you a few lines. In either case, since IDE users are already used to this interaction, it doesn’t get in the way. It’s also fast, which is vital.

It also does the comment trigger I wanted — meaning, I can comment my intention, and it will offer up snippets. Many of these are useful.

One worry I had in 2021 was that a system like this would offer bad suggestions often. And, that’s the main drawback to Copilot. Much of what it suggests is wrong. Some of it is scarily accurate, and the rest is in-between — not right, but still helpful. After a few weeks with it, I can’t decide if this is the right balance. I haven’t felt the need to turn it off, but also, I’m not sure I’d miss it once it was gone.

My main reason for trying it out was that it feels obvious to me that this is the future of programming. I program mostly for myself, so I really want the productivity gain. It costs $100/year, and I think that’s a no-brainer, because it pays for itself pretty quickly in productivity gains. If you use VSCode and one of the languages it supports, I’d recommend trying it out.

Kite: First Impressions

I wrote in Robotic Pair Programmers:

If search engines ever get eclipsed, I think it will be by something in the environment that just brings things to your attention when you need them. I want this most when I code, like a pair programmer that just tells me stuff I need to know at exactly the right time.

Kite, a code editing plugin, seems to be trying to go down this route. They have “AI powered code completions” for 16 languages in 16 code editors. Unfortunately, they don’t support Swift in Xcode yet. But, they do support Python, HTML, CSS, TypeScript, and JavaScript in VSCode, Sublime, and all of the JetBrains editors, so I could use it to work on App-o-Mat, which is a Django-based site.

In addition to code-completion, Kite also offers Copilot, which is a documentation pane that is synced to your cursor. Xcode already does this—the issue is that a lot of Apple’s documentation isn’t very complete. Kite only supports this for Python right now, but one addition to the standard docs is they link out to open-source projects that use the type or method you are editing.

Unfortunately, Kite doesn’t work on Apple Silicon, yet. It uses TensorFlow, which uses a particular instruction set that isn’t supported by Rosetta. Apple seems to be working on getting TensorFlow ported to M1.

So, I’ll have to wait to try it out. Very promising though.

Robotic Pair Programmers

If search engines ever get eclipsed, I think it will be by something in the environment that just brings things to your attention when you need them. I want this most when I code, like a pair programmer that just tells me stuff I need to know at exactly the right time.

When I’m in Xcode, there are so many times when I need information I don’t have. To get that information, I need to initiate a search. It breaks my flow to do this.

What I want is that information to just be in the environment.

One way this already happens is with code comments. In my source, I trust all of the editors, so I would like to see all of their comments and commit messages. This is actually possible if I turn on the Authors sidebar in Xcode.

But, what more could I get? Let’s say I index every Xcode project on GitHub, every iOS tutorial, and every iOS question on Stack Overflow. Could that be distilled somehow and then shown to me at the right time?

One way that seems fruitful to me is rare API calls. There will be times when I am using an API that appears very infrequently in the corpus or my own repositories. In that case, it should infer that I probably need more help than usual and offer up a tutorial or the top Stack Overflow questions.

Another trigger might be my new comments. If I comment before I code, then it should be interpreted as a search query:

// Parse the JSON I get back from the data task

That should bring up links to likely API classes in the help pane (just like it would if I already knew the class). Maybe offer up imports to auto-add. Maybe offer a snippet. In Xcode it would be similar to the auto-suggested fixes for compiler errors.
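To translate that into Python terms, a hypothetical assistant seeing the comment below might auto-add the imports and offer the body as a snippet (this is my own sketch, not output from a real tool):

import json
from urllib.request import urlopen

# Parse the JSON I get back from the request
def fetch_json(url):
    with urlopen(url) as response:
        return json.loads(response.read())  # json.loads accepts bytes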

This is just the beginning, and we can do a lot more. Whatever we do, we need to make sure that nearly every suggestion is useful, because we risk knocking the developer out of flow. Conserving flow should be the driver for how this works.