I've been running a personal AI research system for the past few months — a vault that processes everything I read, watch, and listen to about AI and automatically surfaces the non-obvious conclusions. Every piece of content goes through a "ghost lens" that asks: what wasn't said? What's the idea hiding in the gap?
Here's what it's actually telling me about AI in 2026. Seven trends, one synthesis — and a breakthrough idea at the end that I think is genuinely undervalued.
Scope note: This isn't a list of tools. It's a structural read on where AI is going at the infrastructure, org design, and enterprise software layers. The stuff that compounds over years, not quarters.
The 7 trends
1. The physical layer is where the value is
The dominant narrative is that AI is a software story. It's not. The largest value creation is happening in silicon, energy, infrastructure, and robotics — the physical layer underneath the models. One of the most telling signals from YC's Winter 2026 batch: a startup using AI to identify uranium deposits in North America to power AI data centers. That's a pick-and-shovel play three layers below the app layer. The thesis: infrastructure > narrative, every time. Models commoditize. Compute stays constrained. Energy is the real choke point.
2. Compute demand compounds 100x per inflection
Jensen Huang's framework at GTC 2026 is the clearest map I've seen of where we are. Three inflections: ChatGPT (AI enters public consciousness) → Reasoning models (grounded answers, first real revenue) → Agentic computing (AI that acts, not just responds). Each step requires roughly 100x more compute than the previous — two steps, so a 10,000x compute increase in two years. The implication: whatever you're building on today's agentic infrastructure will look primitive within 18 months.
3. Agents are labor, not assistance
The distinction that matters: a chatbot answers questions. An agent performs tasks. Agents represent labor, not just assistance. When intelligence becomes cheap and scalable, every industry built on knowledge scarcity and friction changes. One implication most people are sleeping on: SaaS switching costs may collapse. Agents can automate data migrations that previously trapped customers. The moat shifts from lock-in to product quality and network effects. AI-native companies will operate with small human teams and large agent pools — this is already happening.
4. The harness is the strategy, not the model
Teams obsess over which model to use. The actual strategic question is how you've built the operating layer — the harness — around it. The harness handles state, tools, permissions, coordination, and memory. Model quality is converging across providers. Harness design is diverging. Your real lock-in risk isn't vendor pricing — it's process lock-in. The teams building durable AI competitive advantage are building harnesses, not just prompts. This is the Nate Jones thesis that I keep coming back to: AI harnesses are the real strategy, not the models.
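To make "harness" concrete, here's a minimal sketch of what that operating layer looks like in code. Everything is illustrative — the class, tool names, and tickets are invented, and the model call itself is stubbed out — but it shows the key property: state, permissions, and memory live in the harness, not in the model or the prompt.

```python
# Minimal sketch of an AI "harness": the operating layer around a model.
# All names here are hypothetical; the model itself is deliberately absent.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    """Owns tools, permissions, and memory -- the parts that don't commoditize."""
    tools: dict[str, Callable] = field(default_factory=dict)
    permissions: set[str] = field(default_factory=set)   # tool names the agent may call
    memory: list[str] = field(default_factory=list)      # durable record across turns

    def register(self, name: str, fn: Callable, allowed: bool = True) -> None:
        self.tools[name] = fn
        if allowed:
            self.permissions.add(name)

    def call_tool(self, name: str, *args):
        # The permission check lives in the harness, not in the prompt.
        if name not in self.permissions:
            raise PermissionError(f"tool '{name}' not permitted")
        result = self.tools[name](*args)
        self.memory.append(f"{name}{args} -> {result}")   # persist for later turns
        return result

harness = Harness()
harness.register("lookup_sla", lambda ticket: "4h", allowed=True)
harness.register("delete_ticket", lambda ticket: None, allowed=False)

print(harness.call_tool("lookup_sla", "INC-1042"))   # prints 4h
```

Swap the model and this layer survives — which is exactly why the harness, not the model choice, is where the lock-in (and the advantage) accumulates.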
5. The management hierarchy loses its reason to exist
This is the Jack Dorsey thesis and it's the one I find most consequential. Hierarchy exists for one reason: to relay information up and down a chain at human scale. Every artifact a company produces — messages, emails, code, documents, recorded meetings — can now be fed into an AI model. Anyone in the company can query that model and get a real-time answer about what's happening. The relay function is gone. Block cut 40% of its workforce and is compressing from 5 management layers to 2–3, with an eventual goal of everyone reporting directly to the CEO. Three roles survive: ICs who do the work (augmented by agents), DRIs who own outcomes, and Player Coaches who build human capability. That's it.
6. Policy is the missing deployment layer
Every organization has human-readable policies: compliance rules, brand guidelines, HR frameworks, security requirements. When they deploy AI, those policies don't automatically transfer. The AI doesn't know your rules. A startup called Moonbounce raised $12M in early 2026 to solve exactly this — converting existing org policies into consistent, predictable AI behavior. They called it "content moderation for the AI era" but the real product is a policy compiler for AI systems. The market they're underselling: every regulated industry (healthcare, finance, legal, government) where policy violations carry legal consequences. This is a $100M+ category in the making.
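The "policy compiler" idea is easier to see in miniature. This is not Moonbounce's actual approach — just a toy sketch, with invented rule names and patterns, of what it means to turn written policy into a machine check that runs on AI output before it ships:

```python
# Toy sketch of a "policy compiler": written org rules become executable checks
# applied to AI-generated text. Rule names and regexes are invented examples.
import re

POLICIES = {
    "no_pii": r"\b\d{3}-\d{2}-\d{4}\b",           # e.g. a US SSN-shaped string
    "no_guarantees": r"\bguaranteed returns\b",    # banned compliance wording
}

def check_output(text: str) -> list[str]:
    """Return the name of every policy the AI-generated text violates."""
    return [name for name, pattern in POLICIES.items()
            if re.search(pattern, text, re.IGNORECASE)]

draft = "Your SSN 123-45-6789 qualifies you for guaranteed returns."
print(check_output(draft))  # ['no_pii', 'no_guarantees']
```

The hard part of the real product is upstream of this — translating ambiguous prose policy into rules at all — but the shape of the output is the same: policy as code, enforced at the boundary, not hoped for in the prompt.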
7. AI amplifies whatever structure is already there
MIT's Andy McAfee put the sharpest framework on this at the HBR Strategy Summit in 2026. "Geek organizations" — defined by agile iteration, data-driven decisions, and distributed power — are pulling away from "waterfall organizations" that still run on process hierarchies. The mechanism: AI amplifies what's already there. Geek orgs use AI to go faster. Waterfall orgs use AI to generate more output in the same broken process. McAfee has a term for the latter: workslop — process-busy AI output that looks productive but isn't. Power users in geek orgs generate 4–10x the output of median employees. The playbook: find them, study what they're doing, and systematically spread it.
Your ITSM platform is already your company's intelligence layer. Nobody's built the interface.
Here's the synthesis that keeps surfacing when I run these trends together.
Jack Dorsey's thesis says organizations need an intelligence layer — a queryable model of the company fed by all its digital artifacts. Every message, document, decision, and action. The management hierarchy that existed to relay information is replaced by an AI layer that anyone can query directly.
Now look at enterprise IT service management (ITSM). Every organization — regardless of size, industry, or geography — digitizes its entire operations through an ITSM platform. Every incident. Every change request. Every approval workflow. Every SLA breach. Every escalation. Every policy. The ITSM platform holds the complete digital record of how an organization actually operates at the infrastructure and process layer.
That's Dorsey's intelligence layer. It already exists. It's called your ITSM platform.
The gap: nobody has built the intelligence interface on top of it. Today, ITSM platforms are sophisticated ticketing systems with increasingly capable AI assistants bolted on. That's not the same thing as a queryable operational-intelligence layer that a CIO can ask: "What is our current change risk given our upcoming system migrations and our historical incident rate on Thursdays?" and get a real, data-grounded answer.
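The raw ingredients for that kind of answer already sit in the ticket data. A toy sketch of the simplest sub-question — the historical incident rate on Thursdays — with invented records and field names:

```python
# Toy sketch: one data-grounded sub-query over ITSM ticket records.
# The incident log and its field names are invented for illustration.
from datetime import date
from collections import Counter

incidents = [
    {"id": "INC-1", "opened": date(2026, 1, 1)},   # a Thursday
    {"id": "INC-2", "opened": date(2026, 1, 8)},   # a Thursday
    {"id": "INC-3", "opened": date(2026, 1, 5)},   # a Monday
]

# Count incidents by weekday, then ask: what share landed on Thursdays?
by_weekday = Counter(i["opened"].strftime("%A") for i in incidents)
thursday_share = by_weekday["Thursday"] / len(incidents)
print(f"{thursday_share:.0%} of incidents opened on a Thursday")
```

The point isn't the arithmetic — it's that the intelligence layer would compose hundreds of queries like this, over live data, in response to a natural-language question, instead of leaving them buried in dashboard exports.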
The opportunity: whoever builds the genuine intelligence layer on top of ITSM operational data — not a chatbot, not a dashboard, but a real-time queryable model of operational reality — owns the operational brain of every enterprise customer. The incumbent vendors are moving too slowly. The window is open.
What I'm watching next
The trends that feel early but are compounding fast:
- Agentic security — YC W26 had a startup detecting website spoofs created by AI agents. As agents interact with the web on our behalf, a new attack surface opens. This is a real category emerging.
- The operator-to-investor pipeline — successful enterprise operators want to become angels and GPs but have no structured path to get there. The infrastructure doesn't exist. Leah Solivan (TaskRabbit) is making this argument publicly. The market is there.
- Multi-agent orchestration as a discipline — running one AI agent is a tool. Running ten AI agents in a coordinated workflow is a system design problem. The teams building sustainable AI advantage are already working on this. Most enterprise SaaS vendors haven't started.
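Why coordinated agents are a system-design problem and not just "more agents" shows up even in a minimal sketch. Here the agents are stubbed functions, but the structural point holds: the workflow is a dependency graph, and ordering and hand-offs belong to the orchestrator, not to any single agent. All names are hypothetical.

```python
# Minimal sketch of multi-agent orchestration: agents as nodes in a dependency
# graph, with the orchestrator owning ordering and state hand-offs.
# Each "agent" is a stub standing in for a model-backed worker.
def research(state): state["facts"] = ["fact A", "fact B"]; return state
def draft(state):    state["draft"] = f"Report on {len(state['facts'])} facts"; return state
def review(state):   state["approved"] = "Report" in state["draft"]; return state

# The workflow itself is data: each step declares the steps it depends on.
PIPELINE = [
    ("research", research, []),
    ("draft",    draft,    ["research"]),
    ("review",   review,   ["draft"]),
]

def run(pipeline):
    state, done = {}, set()
    for name, agent, deps in pipeline:
        # The orchestrator, not the agents, enforces execution order.
        assert all(d in done for d in deps), f"{name} ran before its inputs"
        state = agent(state)
        done.add(name)
    return state

print(run(PIPELINE)["approved"])  # True
```

Once you add retries, partial failures, parallel branches, and budget limits, this graph stops being a toy and starts being the product — which is exactly the discipline most enterprise SaaS vendors haven't started building.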
"You're not going to lose your job to AI. You're going to lose your job to somebody using AI." — Jensen Huang, GTC 2026
The through-line across all seven trends: AI doesn't replace the underlying business logic. It amplifies whatever structure is already there. Good orgs get better faster. Bad orgs generate more workslop faster. The strategic question isn't "are we using AI?" — it's "what are we amplifying?"
If you're in enterprise SaaS: The harness, the policy layer, and the intelligence interface are the three strategic bets that will separate market leaders from followers in 2026–2028. Pick one and go deep.
I write when I find something worth writing about.
AI tools, enterprise GTM, and the occasional non-obvious synthesis. No filler.