Everything is Context, Everything is CLI
Adrian Gan (@AdrianGanJY)
The Problem With Smart
There's a quote by Dan Koe that stopped me in my tracks:
"The measure of intelligence is one's ability to get things they want out of life."
LLMs are smart. Impressively smart. But smart might not be useful. A genius locked in a room with no knowledge of the outside world is still just a genius in a room. They can reason, but they can't act with judgment, because they have no context.
This is the fundamental gap in how most people are using AI today. They give it a prompt. They get back an answer. But the answer floats — disconnected from the messy, complex reality of their actual lives and businesses.
I've been building an Agentic Hub — a system where AI agents don't just respond to prompts, but operate with persistent memory, structured knowledge, and real business context. And through that process, I've arrived at what I think is a deceptively simple insight:
Everything is context.
The Context Pyramid
Most people building with AI right now have two things:
- Raw data on the ground — files, databases, documents, spreadsheets. Mass, but no structure.
- A floating vision in the cloud — high-level goals, vague strategies, aspirational outcomes.
Nothing connects them. There's mass on earth and an imaginary island floating in the sky. So the agent is "smart" but ungrounded. It can reason abstractly, but it can't reason about your specific situation with depth and judgment.
What I'm building is a solid pyramid — where every layer supports the one above it.
┌───────────┐
│ PINNACLE  │ ← Principles, decisions, judgment
├───────────┤
│ RULES     │ ← Patterns, SOPs, constraints
├───────────┤
│ KNOWLEDGE │ ← Processed, organized, linked
├───────────┤
│ SOURCES   │ ← Raw data, APIs, documents
└───────────┘
Sources feed into knowledge, which feeds into rules, which feed into principles, which feed into decisions. Remove a layer and the pyramid wobbles. But when every layer is solid and connected — the pinnacle isn't a vision, it's a reality.
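One way to make "remove a layer and the pyramid wobbles" concrete is a small grounding check over the four layers. This is a minimal sketch assuming a plain dict-of-lists shape; `is_grounded` and the example layer contents are illustrative, not a prescribed schema:

```python
# Layer names follow the pyramid, bottom to top.
LAYERS = ["sources", "knowledge", "rules", "pinnacle"]

def is_grounded(pyramid: dict) -> bool:
    """A populated layer is grounded only if every layer beneath it is populated."""
    filled = [bool(pyramid.get(layer)) for layer in LAYERS]
    return all(all(filled[:i]) for i, f in enumerate(filled) if f)

pyramid = {
    "sources": ["notion_export.json", "sheets/*.csv"],
    "knowledge": ["linked product catalog"],
    "rules": [],  # missing layer
    "pinnacle": ["principle: margin over volume"],
}
print(is_grounded(pyramid))  # a pinnacle above an empty rules layer wobbles
```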
And here's the part that matters: the pyramid is the competitive advantage. Not the AI model. Everyone has access to GPT, Claude, Gemini. The differentiator is never the model — it's the context pyramid. A context pyramid is built over time, through lived experience, through real business operations. It can't be copy-pasted. It can't be bought. It can only be built.
The Context Management Problem
But context has a dark side. Context windows are limited. Too much irrelevant context is noise — it's actively harmful. The goal isn't "all context." The goal is relevant context, at the right time, at the right zoom level.
This means you need two systems working in parallel:
A context store — quantity. Connect to every source. Import, organize, link. Notion pages, Google Sheets, Airtable records, HR databases, financial systems. Leave no stone unturned.
A context refinery — quality. Rules. Principles. Memory. Processed knowledge that distills thousands of data points into the handful that matters right now.
The power doesn't lie in either alone. It lies in scaling both simultaneously. That little .rules folder in your agent config? Those few kilobytes of curated principles? They shape every interaction. But they're only powerful when grounded on a mountain of real, processed, connected data underneath.
High-level principles without grounding are philosophy. Raw data without principles is noise. The pyramid needs both, and everything in between.
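The store/refinery split can be sketched in a few lines: a large store, and a refine step that keeps only the items relevant to the current task, within a fixed budget. The keyword-overlap scoring below is a deliberately naive stand-in for whatever retrieval you actually use (embeddings, links, recency); all names and data are illustrative:

```python
def refine(store: list[str], query: str, budget: int = 3) -> list[str]:
    """Distill a large context store down to the few items relevant right now."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(item.lower().split())), item) for item in store]
    relevant = [item for score, item in sorted(scored, reverse=True) if score > 0]
    return relevant[:budget]  # enforce the context budget

store = [
    "pricing: POS terminal bundle RM 4,200",
    "HR policy: annual leave accrual",
    "customer template: proposal for retail POS",
    "warranty lookup for serial numbers",
]
print(refine(store, "draft a POS pricing proposal"))
```

The quantity side (the store) can grow without bound; the quality side (the refine step) is what keeps the context window filled with signal instead of noise.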
Everything is CLI
So how do you actually fill the pyramid? How do you connect an AI agent to the messy reality of a business?
You need APIs. Every system your business runs on — HR, accounting, inventory, CRM — has data locked behind its own interface. To bring that data into the pyramid, you need to connect to it programmatically.
And here's the second insight I've landed on: CLI is the universal adapter.
API → CLI wrapper → Agent tool → Agent capability
It's almost embarrassingly simple. A CLI is just stdin/stdout. The most basic, most composable interface pattern in computing. But that simplicity is exactly what makes it powerful for agents.
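That stdin/stdout simplicity fits in a dozen lines. A toy sketch with an invented `count` verb, just to show the shape of the interface — read stdin, write stdout, nothing else:

```python
import json
import sys

def main(argv: list[str], stdin: str) -> str:
    """The whole interface: arguments in, text in via stdin, text out via stdout."""
    verb = argv[1] if len(argv) > 1 else "help"
    if verb == "count":
        return str(len(stdin.split()))
    return json.dumps({"usage": "cli.py count < file.txt"})

if __name__ == "__main__":
    sys.stdout.write(main(sys.argv, sys.stdin.read()))
```

Because the contract is just text streams, the same tool composes with pipes, with other CLIs, and with an agent that shells out to it.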
When I built a Notion CLI to import our company's Notion workspace, the entire migration happened in a single agent session. When I connected Google Workspace through a CLI wrapper, our Google Sheets and Drive became instantly searchable and readable by agents. Each time, the pattern was the same: find the API, wrap it in a CLI, hand it to the agent.
Each CLI you build expands what the agent can do in the real world. And here's what makes this a platform play, not just a convenience pattern: every system with an API can be wrapped in a CLI. Every CLI becomes an agent tool. Therefore, every system becomes agent-accessible. The more CLIs you build, the more capable your agents become. It compounds.
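The API → CLI wrapper step can be sketched with `argparse`. Here `fetch_page` is a stand-in for a real API client (the Notion API, say), and `notion-cli`/`get-page` are assumed names for illustration, not an actual tool:

```python
import argparse
import json
import sys

def fetch_page(page_id: str) -> dict:
    # Stand-in for a real HTTP call; replace with your actual API client.
    return {"id": page_id, "title": "Q3 pricing", "blocks": 14}

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="notion-cli")
    sub = parser.add_subparsers(dest="command", required=True)
    get = sub.add_parser("get-page", help="fetch one page as JSON")
    get.add_argument("page_id")
    return parser

def run(argv: list[str]) -> str:
    args = build_parser().parse_args(argv)
    if args.command == "get-page":
        return json.dumps(fetch_page(args.page_id))
    return ""

if __name__ == "__main__":
    print(run(sys.argv[1:]))
```

Once the wrapper emits plain JSON on stdout, handing it to an agent as a tool is trivial: the agent runs the command and reads the output.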
The Model
This brings us to what I think is the simplified model of a next-generation knowledge work product:
Agent = IDE + Context Pack + CLI Library
Where:
- IDE — the agentic environment where the agent lives and operates. Under the hood, it's always an agentic CLI.
- Context Pack — the curated, structured knowledge that the agent operates with. Not a dump, but a pack — a carefully assembled bundle of sources, knowledge, rules, and principles.
- CLI Library — the collection of domain-specific CLIs that give the agent hands to touch the real world.
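The formula can also be written down as a data shape. The field names and contents below are illustrative assumptions, not an actual product schema:

```python
from dataclasses import dataclass

@dataclass
class ContextPack:
    sources: list[str]     # raw data the pack is grounded on
    rules: list[str]       # patterns, SOPs, constraints
    principles: list[str]  # judgment at the pinnacle

@dataclass
class Agent:
    ide: str               # the agentic CLI environment it runs in
    context: ContextPack   # the curated knowledge bundle
    cli_library: list[str] # names of the CLIs it can invoke

sales_agent = Agent(
    ide="agentic-cli",
    context=ContextPack(
        sources=["product catalog", "pricing sheet"],
        rules=["read-only schema access"],
        principles=["quote list price unless approved"],
    ),
    cli_library=["schema-cli", "crm-cli"],
)
```

Swapping the context pack and CLI library while keeping the same IDE is exactly how the same formula yields different agents for different roles.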
I call this a model, not an axiom — it's a useful simplification of reality, not a self-evident truth. Models are valuable precisely because they're incomplete. They tell you what to focus on (context, CLIs) and what to deprioritize (everything not in the formula).
The Inverse: What's NOT in the Formula
Here's where the model gets interesting. If we accept that an agent is IDE + Context Pack + CLI Library, we can look at the inverse — and ask what's not in the formula.
GUI isn't in the formula.
I've spent a lot of time planning graphical interfaces for internal tools. Dashboard here, portal there, admin panel somewhere else. But when I look at the model, GUI is conspicuously absent. The agent doesn't need a GUI. The agent needs CLIs to act and context to reason.
And here's the uncomfortable truth: GUI is 3-4x more expensive to build than backend + API. It creates 3-4x more feedback loops (every user has preferences). And if the trend is toward people working inside the agentic IDE — asking agents to do things via CLI rather than clicking through a webapp — then GUI might be the biggest time sink in my roadmap.
This doesn't mean GUI is worthless. It means GUI might not be first. Build the API, wrap it in a CLI, validate the workflow with agents, then build the GUI only where it's genuinely needed. CLI-first, GUI-maybe.
What This Enables
The model scales beyond one user. It scales to teams.
Want to build an AI-powered senior employee for your POS sales team? Define the composition:
- User: Kit Yang, POS BD Executive
- Context needed: Product catalog, pricing, customer templates
- CLIs needed: Schema CLI (read-only), CRM lookup
- Result: an agent that can draft customer proposals, look up pricing, generate case studies
Want to build a virtual assistant for your operations team?
- Same IDE, different context pack, different CLIs
- Result: an agent that checks serial numbers, looks up warranties, drafts replies
The base model is commoditized — everyone has access to the same LLMs. The moat is the context pyramid and the CLI library. And those are built through lived experience — operating the business, serving customers, solving real problems. They compound over time. They're impossible to replicate without doing the actual work.
The Philosophical Punchline
I've derived four conclusions from this model:
AI without context is a toy. It can chat. It can generate. But it can't think about your business with judgment.
Context without tools is a library. Beautiful, organized, but passive. The agent can read, but it can't act.
Context + tools makes an agent. And an agent with a complete context pyramid doesn't just follow instructions. It makes decisions. Good ones.
The context pyramid is the moat. Not the model. Not the prompt engineering. Not the framework. The pyramid. Built over time. Impossible to replicate. Compounds forever.
Every source I connect, every rule I write, every principle I codify, every CLI I build — it all compounds. The pyramid grows. The agent gets sharper. And the gap between what's possible and what others think is possible widens.
Making Impossible Possible. That's always been the name. Now I have the formula.
Adrian Gan is the CEO of Mipos Sdn Bhd and builder of the Agentic Hub — a personal command center for AI-augmented knowledge work. He's been building with AI agents since 2025, focusing on context engineering, automation, and the pursuit of business freedom by 40.