Tools created with AI hurt my brain, I decided, because they’re often devoid of three fundamental qualities of good tools: reach, sociality, and finish.
In a way, thanks to AI, we’re making tools of ourselves (pun intended): much as we do when we create agentic skills out of interesting sessions, we condense our thoughts and intent into code. The resulting tool carries our cognitive fingerprint: to use it effectively, you must be attuned to the way its creator thought, which is neither ideal nor fair. A tool with reach loosens that grip. It works for people who don’t think the way its maker does, or it grows until it can.
What makes tools social is a subtle combination of affordances, community participation, and the dialectical exchange between users and developers.
The lack of finish also signals that a tool was not thoughtfully designed, and that makes its evolution improbable. A good tool can be refined, expanded, modularized, perhaps even merged with another. A thoughtful tool invites its own evolution; a careless one resists it. When you can’t tell where a tool’s boundaries are, it can’t grow.
Some of the best tools in history started as scratch-your-own-itch projects. They became universal because someone cared enough to finish and socialize them. For the stuff we create using LLMs to become tools, a README file and a catchy GitHub project name are not enough: we ought to ask ourselves whether what we’ve made can survive contact with someone else’s problem, and whether we’d trust it enough to maintain it.