
I don't want to push pixels anymore

Kenwood's first interface was ready and I didn't want to use it. Not because it was bad. Because after months of operating software through conversation with AI agents, clicking through icons and menus felt like overhead. The shift isn't conversation replacing pixels; it's focusing on getting the thing done instead of learning the machine.

Sam Sabey

The first cut of Kenwood's interface was ready. Dashboard, lists, calendar, kanban board. All the views I'd spec'd, all the layouts I'd designed, all the interactions wired up and working. I looked at it, and it occurred to me that there's a better way to do this, and that I know how to build it.

Not because the design was bad. Because I didn't want to use it.

What I was tired of

Like many people, I've been using software for a long time now. Every product requires you to learn the way it thinks. Map my understanding to some new vocabulary. Find the button hidden behind a menu that was moved and renamed in the last update. Parse a graphical, icon-heavy flood of pixels to figure out which coloured square means the thing I'm looking for.

I've got a lot to do. I don't want to learn this new product, or map its chosen words onto the industry terms everyone else uses, or hunt, experiment, and get lucky. That's not the setting you are looking for.

I want to get the thing done.

What I built instead

Across the three products I'm building right now (Kenwood, May Belle, and Beans), the pattern is similar. At the core is a CLI that the agent operates. The CLI does the work: creates entities, transitions states, reads and writes markdown, manages data. The browser refreshes to show me the result. My interface is the conversation.

I tell the agent what needs to happen. The agent figures out which commands to run, which files to read, which data to change. The browser is a display surface: a window into the state of things, not the place where I operate them.

The agent reads the help docs, understands the commands, and handles the edge cases. I stopped navigating. I started talking.
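The pattern is simple enough to sketch. Here's a minimal, hypothetical version in Python — not Kenwood's actual CLI; the command names, the `tasks.json` file, and the data shape are all illustrative. The point is the shape: the CLI mutates a plain state file, and any view (a browser, an agent, a human) just re-reads it.

```python
import argparse
import json
import sys

# All state lives in a plain file the CLI owns; a browser view would
# simply re-read this file after each command. (Illustrative name.)
STATE_FILE = "tasks.json"

def load():
    """Read current state, or start empty on first run."""
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def save(tasks):
    """Write state back so any display surface can pick it up."""
    with open(STATE_FILE, "w") as f:
        json.dump(tasks, f, indent=2)

def main(argv):
    parser = argparse.ArgumentParser(prog="tasks")
    sub = parser.add_subparsers(dest="cmd", required=True)

    create = sub.add_parser("create")   # create an entity
    create.add_argument("title")

    move = sub.add_parser("move")       # transition its state
    move.add_argument("id", type=int)
    move.add_argument("--to", required=True)

    sub.add_parser("list")              # read-only view

    args = parser.parse_args(argv)
    tasks = load()

    if args.cmd == "create":
        tasks.append({"id": len(tasks) + 1, "title": args.title,
                      "status": "todo"})
        save(tasks)
    elif args.cmd == "move":
        for t in tasks:
            if t["id"] == args.id:
                t["status"] = args.to
        save(tasks)
    elif args.cmd == "list":
        for t in tasks:
            print(f'{t["id"]} [{t["status"]}] {t["title"]}')

if __name__ == "__main__":
    main(sys.argv[1:])
```

An agent can discover everything it needs from `--help`, run `create` and `move` on my behalf, and the "interface" reduces to me saying what should happen.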

What shifted

It took a while to realise this isn't about conversation replacing pixels. It's about focus.

Operating software through an interface means learning the machine. Where the buttons are, what the icons mean, how this product organises its concepts. That's overhead. Time spent on the machine instead of the work.

Operating software through conversation means stating what needs doing. The machine figures out the rest. The focus stays on the outcome, not the operation.

That's the shift. Not a better interface. A different relationship with the machine entirely.

What this means for everything else

Every piece of software I've used in the last decade was built around the assumption that a human would click through it. The menus, the icons, the drag targets, and the dropdowns: all designed for a person who would learn the interface and operate it by hand.

That assumption is starting to look fragile.

I'm not saying every pixel-based interface is dead. Spatial reasoning still beats verbal reasoning for some tasks: a calendar view, a kanban board, a data visualisation. Those earn their place. But the interactive input layer (the forms, the buttons, and the navigation) is the part that's eroding.

I don't know how fast this moves. I know it's already moved for me. And I'm not sure what it means for the billions of dollars of software built on the other side of that assumption.


I wrote most of this at my desk. The AI generated the images. I reviewed them, uploaded the selfies, and did the publishing out on the trails on my bike.