Simple vs. Easy vs. AI

Sometimes simpler ways of working lead to better outcomes, but they have to be learned, and learning isn’t always easy–for humans. AI doesn’t care about easy, and that makes things interesting.

Insert Mode

Back in the late 1990s, when I was first starting my software career as an intern, I worked with a senior engineer named Stan. I admired the hell out of Stan. Stan just knew stuff, and he was passionate about leveling up all the new, junior people flooding into software engineering during the first dotcom boom.

Two specific pieces of his advice ended up having a lasting impact on my career:

  1. Stop using Perl and take a look at Python. (While not the subject of this post, wow did this make the next decade a lot more fun.)
  2. Quit clicking your mouse around your editor and learn vi.

I wanted to be like Stan, so I ground out the learning curve and got vi fundamentals under my belt. And suddenly, my coding productivity went through the roof. I kept my hands on home row, and even my RSI cleared up.

Soon I was a full-throated vim advocate. But when I’d try to coax fellow programmers to learn vim, we’d often collide into the “weird” wall together:

Them: “So, how do you delete a line?”

Me: “dd”

Them: “…wtf?”

I mean, it’s hard to argue that “dd” is more intuitive than dragging your mouse over a line and hitting the delete key. Usually, at some point in those first few hours of struggling with “dd”, “cw”, “f(” and the like, the experience would get abandoned as a hopeless exercise in gobbledygook. I’d retreat to my desk and they’d retreat to their mousey editor.

My rationalization was that people sometimes think there’s only one curve to consider with a new tool or idea: easy to use. But my suspicion was that there are two distinct curves at play: easy to use, and easy to learn. Some things are harder to learn, but once you have them down, they’re very, very usable day to day. I just couldn’t put my finger on a succinct way to make this argument.

So when a certain Rich Hickey talk from Strange Loop made the rounds one day in 2011, it was like a lightning bolt to the brain.

Simple Made Easy


Rich’s talk Simple Made Easy rendered a very clear distinction between two ideas:

  • Simple things are about a lack of interleaving. A simple thing is a “true primitive” that can be composed with other systems without that composition changing its behavior. Simple things are objectively simple. You can evaluate them as simple without relying on personal context.
  • Easy things are nearby. The easy way is the fastest way to accomplish something. And therefore, easiness is relative. It’s an expression of nearness based on where you are, what you already know.

As an example, if I’m a vim user and you’re an Emacs user, deleting a line in vim is easy for me, but perhaps not for you. However, deleting a line in both editors is simple.

Revisiting my practical but less rigorous framing:

  • Easy to use for engineers usually means simpler. This is because professional engineers are (perhaps definitionally) composing things together. So simple primitives give them the most flexibility to build all kinds of things without emergent complexity or constraint.
  • Easy to learn is easy! The distance from what I already know to fluency with a new tool or idea is small.

From Rich’s talk:

[slide: “learn vs. use”]

Why composition is the key to defining simplicity

The “working set” of ideas humans can consider at the same time together has remained relatively constant in recent history. Our brains are not significantly more capable in this regard than they were 100 years ago. And yet we’re engineering more and more sophisticated systems and products over time. How is this possible?

Abstraction makes it possible. Eventually we figure out the right simple, quintessential API to represent a bundle of primitives that everyone used to need to compose manually.

If that quintessential API can provide the utility 99%+ of the time without needing to deconstruct how it works, it becomes a complete abstraction, and that abstraction is itself a new primitive.

But if that API has subtle behaviors that differ depending on what other APIs it’s being composed with, the API consumer still has to understand a lot about how it was made: what are the complications, and in what specific circumstances do they occur? This need to understand the interior of the abstraction increases the “working set” of things that have to be understood by a fixed-sized brain!

So the “leaky abstraction” is not simple, and it will eventually fail as too complex.
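To make the distinction concrete, here’s a small, hypothetical Python sketch (the function names are invented for illustration). The first function is a true primitive: its behavior depends only on its inputs. The second leaks hidden shared state into every composition.

```python
# A "simple" primitive: output depends only on input, so composing
# it with other code never changes its behavior.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# A "leaky" abstraction: output also depends on hidden module state,
# so a consumer must understand its interior to compose it safely.
_separator = "-"

def set_separator(sep: str) -> None:
    global _separator
    _separator = sep

def slugify_leaky(title: str) -> str:
    return _separator.join(title.lower().split())

assert slugify("Simple Made Easy") == "simple-made-easy"

# Some unrelated code, anywhere in the program, flips the hidden state...
set_separator("_")

# ...and now every existing caller of slugify_leaky has silently changed.
assert slugify_leaky("Simple Made Easy") == "simple_made_easy"
```

Anyone composing with the leaky version now has to track who might call `set_separator` and when, which is exactly the growth of the “working set” described above.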

And ultimately, this is a significant part of why vi is so successful–it’s excellent at composition. Once you learn a few operations like “d” and movements like “w”, you realize you can just chain them together to do basically anything. You’re not really memorizing jumbled clusters of characters; you’re merely memorizing a handful of primitives.
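That grammar of operators and motions multiplies rather than adds. A few representative compositions:

```
dw    delete to the start of the next word
d$    delete to the end of the line
cw    change (replace) a word
3dd   delete three lines
f(    jump forward to the next "("
df(   delete forward through the next "("
```

Learn one new operator or one new motion, and every existing combination with it comes along for free.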

Another great slide from Rich’s talk:

[slide: “fixed size of human working set”]

What does this have to do with AI?

So far, AI isn’t significantly better at dealing with complexity than humans are. In fact, AI models our way of “thinking” so well that it benefits from simplicity as much as or more than humans do.

Humans are quite attracted to easy. After all, it takes time and energy to cross the gap to understanding a new abstraction. In order to put in the hard work to get to a new plateau of simple, it’s reasonable for a human to ask:

  • Will this actually be better?
  • Will the abstraction end up leaky in practice?
  • Will others be willing to use this too so there is an ecosystem, employment opportunities, and the culture necessary to work together on this abstraction?
  • Essentially, is it worth it to learn this vs. my easier alternatives?

However: AI doesn’t care about easy. AI benefits from simple just as humans do, but if you tell it to use something new, it will. It has no need for proof or trust, and it doesn’t value the opportunity cost of its learning time.

Interesting new opportunities…

At work, we’ve built something ambitious and much simpler than the status quo ways to build application backends.

The primary complexity we’ve eliminated is:

  • Reasoning about concurrency
  • Reasoning about consistency
  • Reasoning about caching

It turns out those are three of the biggest concepts that trip up both humans and AI when it comes to getting backend code correct in practice.
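As a generic illustration of the first one (this is not Convex code, just the classic read-modify-write race in Python):

```python
import threading

counter = 0  # shared mutable state with no coordination

def increment_many(n: int) -> None:
    global counter
    for _ in range(n):
        # Not atomic: another thread can run between the read and
        # the write, and that thread's update gets overwritten.
        current = counter
        counter = current + 1

threads = [threading.Thread(target=increment_many, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With 4 threads of 100,000 increments each, the "right" answer is
# 400,000 -- but lost updates often leave the total short.
print(counter)
```

Reasoning correctly about every possible interleaving like this, plus consistency and caching on top, is precisely the complexity that both people and LLMs routinely get wrong.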

But in order to provide that simplicity, we needed to replace that status quo with something new. New APIs for new abstractions, and a new way to work with them that requires a bit of learning. And as developers consider Convex, they pause and ask the usual questions: “Will it actually be better? Will it be worth it to learn?”

AIs do not hesitate! So LLM-powered codegen works very well on Convex.

This opens up an interesting consideration for our product, and I’d imagine for technology products in general:

  • Will software be able to more rapidly move to higher-level abstractions now because AIs value “learning time” at zero?
  • Can these new abstractions be validated ahead of human adoption by seeing how successful AI is with their designs?

This is triggering a reexamination of the way we think about Convex and how we design future versions of our API. It’s early, but very interesting.