Jack Dorsey published a manifesto this month called From Hierarchy to Intelligence. His thesis: the hierarchical org chart only exists because, for 2,000 years, we had no better way to flow information through large groups of humans. Now we do. Every company is an intelligence — a collection of artifacts (Slack, pull requests, meeting transcripts, docs, customer interactions) that can be modeled, queried, and steered as a single legible system. His word for this: mini-AGI. Every company can now be one.
He's right. And I'd add: individual engineers can too.
What I've been watching
I work as a full-stack engineer at a VC firm. For the last year and change I've been watching the tools I use reorganize themselves under me. In January 2025 the agentic harnesses got good enough to take on legacy codebases, not just greenfield. Claude Code came out of beta. Goose landed a month before it. Every week for a few months I'd push one of them to do something I didn't think it could do, and it would, and I'd update my mental model.
By this spring, my day-to-day job had shifted shape. Less typing, more directing. Less "am I allowed to use an agent for this?" and more "which agent fits this subproblem best?" I stopped calling Claude a copilot. It isn't. A copilot helps you do what you were already doing. This is something else.
Jack's frame
Jack names the pattern cleanly. The artifacts of work — every Slack thread, every PR review, every design doc — used to travel through a management chain that acted as a lossy compression layer for information. Put an intelligence on top of those artifacts and anyone in the company can ask the company questions and get real answers. The hierarchy stops being load-bearing. What you keep is three roles: the IC who builds, the DRI who owns the outcome, the player-coach who lifts the craft of everyone around them. The durable human skills are judgment, ownership, and the capacity to teach.
At a 6,000-person company, that's a transformation. You lay off 40% of your staff because if you were starting from scratch today you wouldn't build this shape. You start to measure depth from the CEO to any individual as a core metric. You pull layers out of the org the way you'd pull dead code.
At my scale — one engineer, one side project, one weekend — it looks different. But the structure is the same.
My weekend
Over the last sixteen hours I rewrote my personal site. The full story is in the post next to this one, but the relevant shape here is: five agents, plus me. Claude wrote every line of code. ChatGPT Atlas in agent mode drove the Cloudflare admin panel so I didn't have to. OpenAI Codex ran an adversarial security review before I cut over prod DNS. I held the secrets, the taste, the red pen, and the thesis.
I designed a seven-role team. Not seven parallel agents — a framework Claude used internally to pressure-test every decision: Principal Engineer, Cloudflare Platform Engineer, SRE, DevEx, Security, Frontend, Lead. I set weights: security and correctness first, minimalism second. When Codex's adversarial review came back with nine findings, those weights did the triage. Six shipped, three were deferred. I never had to write a single triage rule. The "organization" I'd designed did it.
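The mechanics of that triage are simple enough to sketch. A hypothetical TypeScript version — the areas, numeric weights, severity scale, and threshold below are all invented for illustration, not the actual rules the session applied:

```typescript
// Hypothetical sketch of weight-driven triage. The severity scale (1-3)
// and the ship/defer threshold are assumptions made for illustration.
type Area = "security" | "correctness" | "minimalism";

interface Finding {
  id: string;
  area: Area;
  severity: number; // 1 = low, 3 = high
}

// "Security and correctness first, minimalism second" as numeric weights.
const WEIGHTS: Record<Area, number> = {
  security: 3,
  correctness: 3,
  minimalism: 1,
};

// A finding ships now if its weighted score clears the threshold;
// otherwise it is deferred.
function triage(
  findings: Finding[],
  threshold = 4,
): { ship: Finding[]; defer: Finding[] } {
  const ship: Finding[] = [];
  const defer: Finding[] = [];
  for (const f of findings) {
    (WEIGHTS[f.area] * f.severity >= threshold ? ship : defer).push(f);
  }
  return { ship, defer };
}
```

The point of the sketch: once the weights exist, triage stops being a per-finding debate and becomes a lookup. That's what "the organization did it" means at this scale.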
This is the pattern at my scale: information flow is the organization. Seven hundred and fifteen thousand tokens of working context are my artifact layer. One million total. The full architectural plan, every phase review, every bug and its root cause, the adversarial security report, the team weightings, every commit message — all of it in one continuous working memory. Jack built intelligences on top of his company's artifacts. I built one on top of my own working session.
The three roles, collapsed
In a one-person workflow, the three durable human skills Jack names — judgment, ownership, coaching — don't disappear. They collapse into the one human at the desk. Me.
Judgment showed up when Claude proposed architectural directions and I chose among them. When I rejected a complex per-branch R2 namespacing scheme in favor of the simple version. When I deferred webhook debounce because the complexity didn't justify the v1 cost. Judgment is a form of no.
Ownership showed up when the eight-bug chase in the middle of the session stopped being Claude's problem and became ours. I didn't hand off the failure. I sat with it. I added diagnostic logs. I watched the logs. I read them out loud. Ownership is not delegable even when the work is.
Coaching, the weird one — I didn't expect to apply it inside a pair with an agent. But it turned up. When Claude proposed a fix, I'd ask: does this match the pattern we agreed to in the team plan? When a security concern surfaced, I'd ask: what would the Security Engineer on our team say about this? I was coaching the model back onto its own self-designed framework. The model noticed. Its next proposal landed cleaner.
The roadmap is the limiting factor
Jack's sharpest line in the podcast: our limiting factor as a company is our roadmap. Customers will ask for features that don't exist, and the mark of the next generation of companies is being able to compose those features in real time against a capability set — or, if they can't, treating the gap as the signal for what to build next.
The image bug that triggered my whole rewrite was exactly this pattern, from the other direction. Prior AI agents had been asked to fix the bug. They'd iterated on symptoms. None of them said: the bug is telling you something about your architecture. I showed up with a thesis that they should have been able to generate themselves, and the gap between their output and my thesis was the roadmap.
Signal from customers becomes your roadmap. Signal from bugs becomes your architecture. It's the same structural claim. You listen to the real world through artifacts and you let the intelligence steer.
What Jack is understating
Jack is sharp about agency — his anecdote about OpenClaw users wanting to contain a whole AGI in a Mac Mini and make it tangibly theirs is one of the best things in the podcast. But I think he's still underselling how fast this scales down.
The industry will read his thesis as "every company can be a mini-AGI." I read it as "every person can be one." The same structural shift works at individual scale if you have durable working memory (context window), multiple specialized agents to delegate across, and the willingness to hold the secrets + taste + thesis boundary yourself. That's available to anyone with a laptop tonight. It's not a company-scale transformation. It's a how-you-work transformation, for one human.
My site now rebuilds itself when I edit a Notion page. The webhook fires, a Pages Function verifies it, GitHub Actions runs the build, Cloudflare deploys the result. Ninety seconds, start to finish. I trigger it by typing. That's not a company. That's one person's infrastructure behaving like a legible intelligence. And I'd never have thought to build it this way a year ago, because a year ago it wouldn't have worked.
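The verification step in the middle of that pipeline is the part worth sketching. A hypothetical version, assuming the webhook sender signs the raw request body with HMAC-SHA256 and a shared secret — the function names and secret here are assumptions, not my actual Pages Function (which would use the Web Crypto API; `node:crypto` is used so the sketch runs standalone):

```typescript
// Hypothetical sketch of webhook signature verification, assuming an
// HMAC-SHA256 scheme with a shared secret over the raw request body.
import { createHmac, timingSafeEqual } from "node:crypto";

// Compute the expected signature for a raw payload.
function sign(secret: string, rawBody: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

// Constant-time comparison so the check doesn't leak timing information.
function verifySignature(
  secret: string,
  rawBody: string,
  signature: string,
): boolean {
  const expected = Buffer.from(sign(secret, rawBody));
  const received = Buffer.from(signature);
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

Only if the signature checks out does the function kick off the downstream build; a forged payload never reaches GitHub Actions.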
Where it goes
I don't know how to end this yet. I'm writing it inside the same context window where I designed my agent team twelve hours ago, and it's the first time I've been able to hold a session of this length coherently. The next time I do this, it'll be bigger. The tooling will be better. The context windows will be longer. The agents I delegate to will have deeper specializations and sharper edges.
Jack calls the final layer of a company-as-intelligence the world model: a deep enough understanding of itself and its customers that it can compose new capabilities in real time. I think for individuals, the world model is your working memory. And the durable human skill, above all the others, is having a point of view strong enough to steer it.
Taste. Intent. Thesis.
That's what the human brings. That's what's not delegable. That's the whole job now.

