The Backstory
I’m not a “script kiddie” or weekend hobbyist. I’m a UC Berkeley-trained Computer Scientist with over two decades of professional experience in Silicon Valley. I joined Iterable and EasyPost after their Series A rounds — both are now unicorns. At EasyPost, I managed 4 teams totaling ~20 engineers and delivered 8 figures of revenue.
I know what production systems look like at scale.
A few months ago, I read Peter Steinberger’s seminal post about shipping at inference speed. steipete is possibly one of the greatest programmers of this generation, and ClawdBot (now OpenClaw) was immediately on my radar. I was already racing to build my own AI Agent Swarm orchestrator — but I thought, “He’s good, but can I trust him?”
Then, three weeks ago, OpenClaw went viral. I went all-in. I’ve been hacking until 4am, 5am every night building out what I call the OpenClaw Command Center.
This year alone: switching to Claude Code gave me a ~20x productivity boost. Adding OpenClaw on top gave me another ~50x.
The math: 20x × 50x = a ~1000x productivity multiplier. That’s not hyperbole. That’s my lived experience.
What I’m Running Right Now
- 5 OpenClaw master instances — one for each domain of my life
- 10 satellite agents — specialized workers
- 1 “Godfather” orchestrator — coordinates everything
- 20+ scheduled tasks per instance — running 24/7
- Hardware: Mac Studio M2 Ultra + Mac Minis + MacBook Pro + VirtualBox VMs on an old Windows host
Each OpenClaw instance is a “GM” (General Manager) that oversees one aspect of my personal or professional life. They advance my goals and keep me locked in — even when I’m sleeping.
I’m literally coding at the gym on my phone… via Slack… in between bench pressing 315 lbs.
The possibilities are endless. AGI is here.
See It In Action
The Vision: Bring the Work to Where Humans Are
I’ve seen the mockups and prototypes online — “the future of work” dashboards, agent orchestration UIs, yet-another-SaaS-tool. That’s the wrong direction.
Here’s the thing: humans are already in Slack.
I’ve worked at companies with dozens, hundreds, even thousands of Slack channels. That’s where work happens. That’s where context lives. That’s where people communicate.
So instead of building another tool that forces context-switching, I asked: what if I brought the visibility to where I already am?
The agents live in Slack threads — that’s their native habitat. Command Center doesn’t replace Slack; it gives you the bird’s-eye view you’re missing. It’s the air traffic control tower for your AI workforce.
Think of it like a Starcraft command center (yes, I’m dating myself):
- High APM (actions per minute)
- Lots of AI workers running in parallel
- Every agent kept unblocked
- No idle workers sitting around
You need to see everything at once to coordinate effectively.
What I Built
Real-Time Visibility
The dashboard shows everything that matters:
- Session monitoring — Every active AI session, with model, tokens, cost, and context
- LLM Fuel Gauges — Never get surprised by quota limits (we’ve all been there)
- System Vitals — CPU, memory, disk — is your machine the bottleneck?
- Cost Intelligence — Know exactly what your AI workforce costs
Topic Tracking (Cerebro)
One of the most powerful features is automatic conversation organization. I call it Cerebro — inspired by the machine that augments Professor X’s innate telepathic abilities.
My setup: multiple Slack channels, one per project. Within each channel, one thread per feature. Cerebro auto-detects topics from threads and organizes them.
Each thread becomes a trackable unit of work:
- All topics across your workspace
- Thread counts per topic
- Jump directly into any conversation
This is possible because OpenClaw integrates deeply with Slack threading. Every message goes into the right thread, every thread has a topic, every topic is visible in the dashboard.
I worked really hard to keep OpenClaw focused on one topic per thread. That discipline pays dividends.
Scheduled Tasks (Cron Jobs)
AI agents shouldn’t just react — they should proactively check on things, generate reports, clean up stale work. The cron dashboard shows:
- All scheduled tasks
- Run history
- Manual triggers
- Configuration at a glance
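To make the cron dashboard concrete, here is a minimal sketch of what a scheduled-task record and its run history could look like. The field names and the `recordRun` helper are illustrative, not OpenClaw’s actual schema:

```javascript
// Hypothetical shape of a scheduled task record -- field names are
// illustrative, not OpenClaw's real config format.
const staleBranchSweep = {
  name: "stale-branch-sweep",
  schedule: "0 3 * * *", // 3am daily, standard cron syntax
  prompt: "List branches with no commits in 30 days and propose cleanup.",
  history: [], // run history shown in the dashboard
};

// Every run -- scheduled or manually triggered -- appends to the history
// the dashboard renders; returns the total run count.
function recordRun(task, status, now = Date.now()) {
  task.history.push({ at: now, status });
  return task.history.length;
}
```

Manual triggers from the UI would go through the same `recordRun` path, so the history stays complete.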
Privacy Controls
When demoing or taking screenshots, you can hide sensitive topics with one click. Learned this the hard way — you don’t want to accidentally share internal project names in a public post.
The Technical Details
Zero Dependencies, Instant Startup
Command Center is deliberately minimal:
- ~200KB total — dashboard + server
- No build step — runs immediately
- No React/Vue/Angular — vanilla JS, ES modules
- Single unified API endpoint — one call gets all dashboard data
Why this approach:
- AI agents can understand and modify it easily
- No waiting for webpack/vite compilation
- Works in any environment with Node.js
Security-First
Since this gives visibility into your AI operations, security was non-negotiable:
- Localhost by default — not exposed to network
- No external calls — zero telemetry, no CDNs
- Multiple auth modes — token, Tailscale, Cloudflare Access
- No secrets in UI — API keys never displayed
Real-Time Updates
The dashboard uses Server-Sent Events (SSE) for live updates. No polling, no websocket complexity. State refreshes every 2 seconds, cached on the backend to stay responsive under load.
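An SSE push loop of this kind is small enough to sketch. The handler below is illustrative (not the actual Command Center code): the backend refreshes a cached state object, and each connected client gets it pushed every 2 seconds:

```javascript
// Sketch of the SSE pattern described above. The state shape and refresh
// logic are placeholders; the real dashboard gathers sessions, quota, etc.
let cachedState = { sessions: 0, updatedAt: 0 };

function refreshCache() {
  cachedState = { sessions: 5, updatedAt: Date.now() };
}

// Attach via http.createServer(sseHandler). Clients just open an
// EventSource -- no polling loop, no websocket handshake.
function sseHandler(req, res) {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  const timer = setInterval(() => {
    // SSE frame format: "data: <payload>\n\n"
    res.write(`data: ${JSON.stringify(cachedState)}\n\n`);
  }, 2000);
  req.on("close", () => clearInterval(timer));
}
```

Because every client reads from the same cache, one expensive state computation serves any number of open dashboards.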
The Philosophy: Use AI to Use AI
Here’s the key insight that changed everything:
Recursion is the most powerful idea in computer science.
Not loops. Not conditionals. Recursion — the ability for something to operate on itself. And the same principle applies to AI:
Use AI to use AI.
Think about it: Why are you manually configuring your AI agents? Why are you manually scheduling their work? Why are you manually routing tasks to the right model?
The agents should be doing that. The meta-work of managing AI should itself be done by AI.
This is how I gain an edge — not just over people still coding manually, but over vanilla OpenClaw users. I built the infrastructure for AI to optimize its own operations.
Advanced Job Scheduling (What’s Already Working)
After years of production experience with Spark, Airflow, Dagster, Celery, and Beanstalk — each with their own strengths and painful limitations — I had strong opinions about what an AI-native scheduler should look like.
I pulled concepts straight from CS162 (Operating Systems): multi-threading primitives, semaphores, mutex locks, process scheduling algorithms. These aren’t academic exercises — they’re exactly what you need when orchestrating dozens of AI agents competing for limited resources.
The scheduling primitives I’ve built:
- run-if-idle — Execute only when system has spare capacity (no resource contention)
- run-if-not-run-since — Guarantee freshness: “hasn’t run in 4 hours? run now”
- run-at-least-X-times-per-period — SLA enforcement: “must run 3x per day minimum”
- skip-if-last-run-within — Debouncing: “don’t spam if we just ran 10 min ago”
- conflict-avoidance — Greedy algorithm prevents overlapping heavy jobs
- priority-queue — Critical tasks preempt background work
This isn’t theoretical. It’s running in production right now across my 5 master instances and 10 satellite agents.
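The primitives above reduce to simple predicates plus a queue policy. This is a hedged sketch under assumed names and job shapes, not the scheduler’s actual code:

```javascript
// Illustrative versions of the scheduling primitives listed above.
function runIfIdle(cpuPct, threshold = 50) {
  return cpuPct < threshold; // only run when there's spare capacity
}

function runIfNotRunSince(lastRunMs, maxAgeMs, now = Date.now()) {
  return now - lastRunMs >= maxAgeMs; // freshness guarantee
}

function skipIfLastRunWithin(lastRunMs, windowMs, now = Date.now()) {
  return now - lastRunMs < windowMs; // debounce: true means "skip"
}

// Priority queue + conflict avoidance: greedily pick the highest-priority
// job whose resource tag doesn't clash with anything already running.
function pickNextJob(queue, runningTags) {
  return (
    [...queue]
      .sort((a, b) => b.priority - a.priority)
      .find((job) => !runningTags.has(job.tag)) || null
  );
}
```

Composing these per job (idle check AND freshness check AND no conflict) gives the scheduler its behavior without any heavyweight framework.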
Intelligent Quota Management
I’m on the $200/month Claude Code Max plan. Without optimization, I’d blow through my weekly quota by Wednesday and be paying overage for the rest of the week.
Instead, I’ve never paid a cent of Extra Usage. Conservatively, this system saves me at least $10,000/month in what would otherwise be API costs and overage charges.
How? The scheduling system is quota-aware:
- It knows when my weekly quota resets (Saturday night 10pm)
- It tracks current usage percentage via the API
- It batches low-priority work for off-peak hours
Real example: It’s Thursday night. I’ve used 75% of my weekly quota. The scheduler sees this and thinks: “We have 25% left, 2.5 days until reset, user is asleep. Time to burn cycles on background work.”
So it wakes up my agents and has them iterate on unit tests — grinding my monorepo toward 100% code coverage while I sleep. Work that needs to get done, but doesn’t need me present.
By the time quota resets Saturday, I’ve maximized value from every token. Then Sunday morning I have a full fresh quota for the real creative work.
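The Thursday-night decision above boils down to a small gate. The numbers and the baseline burn rate here are assumptions chosen to mirror the example, not the scheduler’s real tuning:

```javascript
// Illustrative quota-aware gate: should we burn spare quota on
// background work right now?
function shouldRunBackgroundWork({ usedPct, hoursToReset, userAsleep }) {
  const remainingPct = 100 - usedPct;
  // Spread the remaining quota across the hours left until reset.
  const pctPerHour = remainingPct / hoursToReset;
  // Assumed baseline: background work needs ~0.3% of weekly quota per hour.
  const baselinePctPerHour = 0.3;
  return userAsleep && pctPerHour > baselinePctPerHour;
}

// Thursday night: 75% used, ~60 hours until the Saturday-night reset,
// user asleep -> plenty of budget per hour, so run the background grind.
shouldRunBackgroundWork({ usedPct: 75, hoursToReset: 60, userAsleep: true });
```

The same gate says no when quota is nearly exhausted or the user is active, which is exactly when tokens should be reserved for interactive work.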
LLM Routing: Right Model for the Job
Not every task needs Claude Opus 4.6.
I built a routing layer that matches tasks to models:
| Task Type | Model | Why |
|---|---|---|
| Code review, complex reasoning | Claude Opus 4.6 | Worth the tokens |
| Boilerplate, formatting, tests | Local models (Qwen, Llama) | Fast, free, good enough |
| RAG retrieval, embeddings | Local | Zero API cost |
| Documentation | Claude Sonnet | Sweet spot |
The router examines the task, estimates complexity, and picks the appropriate model. Heavy thinking goes to the heavy model. Routine work stays local.
This is “Use AI to Use AI” in action — I didn’t manually tag every task. The routing agent figures it out.
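A routing layer like this can start as a simple heuristic before the routing agent takes over. The scoring rule and task fields below are invented for illustration; only the model tiers mirror the table above:

```javascript
// Sketch of the routing idea: look at the task, estimate complexity,
// pick a tier. The thresholds and task kinds are assumptions.
function routeTask(task) {
  if (task.kind === "embedding" || task.kind === "retrieval") {
    return "local"; // zero API cost for RAG work
  }
  if (task.kind === "docs") {
    return "claude-sonnet"; // the sweet spot for documentation
  }
  // Crude complexity estimate: reasoning-heavy kinds or long prompts
  // justify the heavy model; everything else stays local.
  const complex = task.kind === "review" || task.promptTokens > 4000;
  return complex ? "claude-opus" : "local";
}
```

Swapping this hand-written heuristic for an LLM-backed classifier is the “Use AI to Use AI” step: same interface, smarter router.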
What’s Next
Multi-Agent Orchestration
The real power unlocks when agents work together:
- Swarm coordination patterns
- Structured handoff protocols
- Specialized agent routing (SQL tasks → SQL agent)
- Cross-session context sharing
Voice Harness
Next, I’m working on STT/TTS integration so I can orchestrate my agents with just my voice — while I’m out walking my dogs, playing basketball, lifting weights. The keyboard becomes optional.
Try It Yourself
Command Center is open source and free:
```shell
# Via ClawHub
clawhub install jontsai/command-center

# Or git clone
git clone https://github.com/jontsai/openclaw-command-center
cd openclaw-command-center
node lib/server.js
```
Critical setup: Enable Slack threading in your OpenClaw config:
```yaml
slack:
  capabilities:
    threading: all
```
This is what enables proper topic tracking.
The Bigger Picture
We’re at an inflection point. AI agents aren’t just tools anymore — they’re becoming teammates. And like any team, you need visibility, coordination, and management.
Command Center is my answer to: “How do I actually manage an AI-native life?”
It’s not the final answer. It’s the foundation I’m building on. And I’m excited to share it with the community.
OpenClaw Command Center is MIT licensed. Star it on GitHub, try it out, and let me know what you think.
Links:
