Building Mission Control for My AI Workforce: Introducing OpenClaw Command Center
The Backstory
I’m not a “script kiddie” or weekend hobbyist. I’m a UC Berkeley-trained Computer Scientist with over two decades of professional experience in Silicon Valley. I joined Iterable and EasyPost after their Series A rounds — both are now unicorns. At EasyPost, I managed 4 teams totaling ~20 engineers and delivered 8 figures of revenue.
I know what production systems look like at scale.
A few months ago, I read Peter Steinberger’s seminal post about shipping at inference speed. steipete is possibly one of the greatest programmers of this generation, and ClawdBot (now OpenClaw) was immediately on my radar. I was already racing to build my own AI Agent Swarm orchestrator — but I thought, “He’s good, but can I trust him?”
Then, three weeks ago, OpenClaw went viral. I went all-in. I’ve been hacking until 4am, 5am every night building out what I call the OpenClaw Command Center.
This year alone: After switching to Claude Code, I got a ~20x productivity boost. After adding OpenClaw, I got another 50x on top of that.
The math: 1000x productivity multiplier. That’s not hyperbole. That’s my lived experience.
What I’m Running Right Now
- 5 OpenClaw master instances — one for each domain of my life
- 10 satellite agents — specialized workers
- 1 “Godfather” orchestrator — coordinates everything
- 20+ scheduled tasks per instance — running 24/7
- Hardware: Mac Studio M2 Ultra + Mac Minis + MacBook Pro + VirtualBox VMs on top of an old Windows host
Each OpenClaw instance is a “GM” (General Manager) that oversees one aspect of my personal or professional life. They advance my goals and keep me locked in — even when I’m sleeping.
I’m literally coding at the gym on my phone… via Slack… in between bench pressing 315 lbs.
The possibilities are endless. AGI is here.
See It In Action
The Vision: Bring the Work to Where Humans Are
I’ve seen the mockups and prototypes online — “the future of work” dashboards, agent orchestration UIs, yet-another-SaaS-tool. That’s the wrong direction.
Here’s the thing: humans are already in Slack.
I’ve worked at companies with dozens, hundreds, even thousands of Slack channels. That’s where work happens. That’s where context lives. That’s where people communicate.
So instead of building another tool that forces context-switching, I asked: what if I brought the visibility to where I already am?
The agents live in Slack threads — that’s their native habitat. Command Center doesn’t replace Slack; it gives you the bird’s-eye view you’re missing. It’s the air traffic control tower for your AI workforce.
Think of it like a Starcraft command center (yes, I’m dating myself):
- High APM (actions per minute)
- Lots of AI workers running in parallel
- Ensure all agents are unblocked
- No idle workers sitting around
You need to see everything at once to coordinate effectively.
What I Built
Real-Time Visibility
The dashboard shows everything that matters:
- Session monitoring — Every active AI session, with model, tokens, cost, and context
- LLM Fuel Gauges — Never get surprised by quota limits (we’ve all been there)
- System Vitals — CPU, memory, disk — is your machine the bottleneck?
- Cost Intelligence — Know exactly what your AI workforce costs
Topic Tracking (Cerebro)
One of the most powerful features is automatic conversation organization. I call it Cerebro — inspired by the machine that augments Professor X’s innate telepathic abilities.
My setup: multiple Slack channels, one per project. Within each channel, one thread per feature. Cerebro auto-detects topics from threads and organizes them.
Each thread becomes a trackable unit of work:
- All topics across your workspace
- Thread counts per topic
- Jump directly into any conversation
This is possible because OpenClaw integrates deeply with Slack threading. Every message goes into the right thread, every thread has a topic, every topic is visible in the dashboard.
I worked really hard to allow OpenClaw to “stay focused” on topic. That discipline pays dividends.
Scheduled Tasks (Cron Jobs)
AI agents shouldn’t just react — they should proactively check on things, generate reports, clean up stale work. The cron dashboard shows:
- All scheduled tasks
- Run history
- Manual triggers
- Configuration at a glance
Privacy Controls
When demoing or taking screenshots, you can hide sensitive topics with one click. Learned this the hard way — you don’t want to accidentally share internal project names in a public post.
The Technical Details
Zero Dependencies, Instant Startup
Command Center is deliberately minimal:
- ~200KB total — dashboard + server
- No build step — runs immediately
- No React/Vue/Angular — vanilla JS, ES modules
- Single unified API endpoint — one call gets all dashboard data
Why this approach:
- AI agents can understand and modify it easily
- No waiting for webpack/vite compilation
- Works in any environment with Node.js
Security-First
Since this gives visibility into your AI operations, security was non-negotiable:
- Localhost by default — not exposed to network
- No external calls — zero telemetry, no CDNs
- Multiple auth modes — token, Tailscale, Cloudflare Access
- No secrets in UI — API keys never displayed
Real-Time Updates
The dashboard uses Server-Sent Events (SSE) for live updates. No polling, no websocket complexity. State refreshes every 2 seconds, cached on the backend to stay responsive under load.
The Philosophy: Use AI to Use AI
Here’s the key insight that changed everything:
Recursion is the most powerful idea in computer science.
Not loops. Not conditionals. Recursion — the ability for something to operate on itself. And the same principle applies to AI:
Use AI to use AI.
Think about it: Why are you manually configuring your AI agents? Why are you manually scheduling their work? Why are you manually routing tasks to the right model?
The agents should be doing that. The meta-work of managing AI should itself be done by AI.
This is how I gain an edge — not just over people still coding manually, but over vanilla OpenClaw users. I built the infrastructure for AI to optimize its own operations.
Advanced Job Scheduling (What’s Already Working)
After years of production experience with Spark, Airflow, Dagster, Celery, and Beanstalk — each with their own strengths and painful limitations — I had strong opinions about what an AI-native scheduler should look like.
I pulled concepts straight from CS162 (Operating Systems): multi-threading primitives, semaphores, mutex locks, process scheduling algorithms. These aren’t academic exercises — they’re exactly what you need when orchestrating dozens of AI agents competing for limited resources.
The scheduling primitives I’ve built:
- run-if-idle — Execute only when system has spare capacity (no resource contention)
- run-if-not-run-since — Guarantee freshness: “hasn’t run in 4 hours? run now”
- run-at-least-X-times-per-period — SLA enforcement: “must run 3x per day minimum”
- skip-if-last-run-within — Debouncing: “don’t spam if we just ran 10 min ago”
- conflict-avoidance — Greedy algorithm prevents overlapping heavy jobs
- priority-queue — Critical tasks preempt background work
This isn’t theoretical. It’s running in production right now across my 5 master instances and 10 satellite agents.
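To make the semantics concrete, here is a minimal Python sketch of two of these primitives. The function names, signatures, and thresholds are my illustration of the behavior described above, not OpenClaw's actual API:

```python
import time

def run_if_not_run_since(last_run_ts, max_age_s, now=None):
    """Freshness guarantee: the job is due if it hasn't run within max_age_s."""
    now = time.time() if now is None else now
    return (now - last_run_ts) >= max_age_s

def skip_if_last_run_within(last_run_ts, debounce_s, now=None):
    """Debouncing: skip if the job already ran within the last debounce_s."""
    now = time.time() if now is None else now
    return (now - last_run_ts) < debounce_s

# "Hasn't run in 4 hours? Run now" -- a job last run 5 hours ago is due.
print(run_if_not_run_since(last_run_ts=0, max_age_s=4 * 3600, now=5 * 3600))  # → True
```

The other primitives compose the same way: each is a pure predicate over (job state, system state, clock), which is what makes them easy to test and combine.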
Intelligent Quota Management
I’m on the $200/month Claude Code Max plan. Without optimization, I’d blow through my weekly quota by Wednesday and be paying overage the rest of the week.
Instead, I’ve never paid a cent of Extra Usage. Conservatively, this system saves me at least $10,000/month in what would otherwise be API costs and overage charges.
How? The scheduling system is quota-aware:
- It knows when my weekly quota resets (Saturday night 10pm)
- It tracks current usage percentage via the API
- It batches low-priority work for off-peak hours
Real example: It’s Thursday night. I’ve used 75% of my weekly quota. The scheduler sees this and thinks: “We have 25% left, 2.5 days until reset, user is asleep. Time to burn cycles on background work.”
So it wakes up my agents and has them iterate on unit tests — grinding my monorepo toward 100% code coverage while I sleep. Work that needs to get done, but doesn’t need me present.
By the time quota resets Saturday, I’ve maximized value from every token. Then Sunday morning I have a full fresh quota for the real creative work.
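The decision rule fits in a few lines of Python. `should_run_background_work` and its thresholds are my paraphrase of the behavior described above, not the scheduler's real interface:

```python
def should_run_background_work(used_pct, hours_to_reset, user_asleep,
                               reserve_pct=10.0):
    """Burn spare quota on low-priority work only when the user is away,
    a reserve remains for interactive sessions, and the reset isn't imminent."""
    remaining_pct = 100.0 - used_pct
    return user_asleep and remaining_pct > reserve_pct and hours_to_reset > 1.0

# Thursday night: 75% used, ~2.5 days until the Saturday 10pm reset, user asleep.
print(should_run_background_work(75.0, 60.0, user_asleep=True))  # → True
```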
LLM Routing: Right Model for the Job
Not every task needs Claude Opus 4.6.
I built a routing layer that matches tasks to models:
| Task Type | Model | Why |
|---|---|---|
| Code review, complex reasoning | Claude Opus 4.6 | Worth the tokens |
| Boilerplate, formatting, tests | Local models (Qwen, Llama) | Fast, free, good enough |
| RAG retrieval, embeddings | Local | Zero API cost |
| Documentation | Claude Sonnet | Sweet spot |
The router examines the task, estimates complexity, and picks the appropriate model. Heavy thinking goes to the heavy model. Routine work stays local.
This is “Use AI to Use AI” in action — I didn’t manually tag every task. The routing agent figures it out.
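A toy version of the routing table above can be sketched as a plain lookup with a cheap default. The task labels and model identifiers here are illustrative placeholders, not the router's real configuration:

```python
# Map task categories to model tiers; unknown work defaults to a cheap
# local model, so escalating to the heavy model is always explicit.
ROUTES = {
    "code_review": "claude-opus",
    "complex_reasoning": "claude-opus",
    "boilerplate": "local-qwen",
    "formatting": "local-qwen",
    "tests": "local-qwen",
    "embeddings": "local-embeddings",
    "documentation": "claude-sonnet",
}

def route_task(task_type):
    return ROUTES.get(task_type, "local-qwen")

print(route_task("code_review"))  # → claude-opus
```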
What’s Next
Multi-Agent Orchestration
The real power unlocks when agents work together:
- Swarm coordination patterns
- Structured handoff protocols
- Specialized agent routing (SQL tasks → SQL agent)
- Cross-session context sharing
Voice Harness
Next, I’m working on STT/TTS integration so I can orchestrate my agents with just my voice — while I’m out walking my dogs, playing basketball, lifting weights. The keyboard becomes optional.
Try It Yourself
Command Center is open source and free:
# Via ClawHub
clawhub install jontsai/command-center
# Or git clone
git clone https://github.com/jontsai/openclaw-command-center
cd openclaw-command-center
node lib/server.js
Critical setup: Enable Slack threading in your OpenClaw config:
slack:
  capabilities:
    threading: all
This is what enables proper topic tracking.
The Bigger Picture
We’re at an inflection point. AI agents aren’t just tools anymore — they’re becoming teammates. And like any team, you need visibility, coordination, and management.
Command Center is my answer to: “How do I actually manage an AI-native life?”
It’s not the final answer. It’s the foundation I’m building on. And I’m excited to share it with the community.
OpenClaw Command Center is MIT licensed. Star it on GitHub, try it out, and let me know what you think.
SSH Commit Signing Part 2: Automation and Multi-Machine Setup
In my previous post, I covered the basics of signing Git commits with SSH keys instead of GPG. This post covers the automation and multi-machine setup I built to make SSH signing seamless across 12+ machines.
The Challenge
Managing SSH commit signing across multiple machines introduces several challenges:
- Multiple keys - Each machine has its own SSH key
- Multiple emails - Different projects use different commit emails
- Verification - Git needs to verify signatures from any of your keys
- GitHub - All keys need to be registered as signing keys
- Consistency - Configuration should be identical across machines
The Solution: Centralized Configuration
I created three interconnected repositories:
- dotfiles (public) - Git configuration and aliases
- bash-ftw (public) - Bash utilities and installation helpers
- pubkeys (private) - SSH public keys and automation scripts
Key Components
1. Dynamic Key Selection
Instead of hardcoding a specific key per machine, use ssh-agent to automatically select the first available key:
# In ~/.gitconfig or ~/code/dotfiles/.gitconfig
[gpg]
format = ssh
[gpg "ssh"]
allowedSignersFile = ~/.ssh/allowed_signers
defaultKeyCommand = ssh-add -L | head -n1
[commit]
gpgsign = true
Benefits:
- No per-machine configuration needed
- Works with any key loaded in ssh-agent
- Portable across all your machines
2. The allowed_signers File
Git’s allowed_signers file verifies commit signatures. The format is:
email key-type key-data comment
The key insight: Create a cross-product of all your emails × all your keys:
hello@jontsai.com ssh-ed25519 AAAAC3... laptop-key
hello@jontsai.com ssh-rsa AAAAB3... desktop-key
hello@jontsai.com ssh-ed25519 AAAAC3... server-key
user@example.com ssh-ed25519 AAAAC3... laptop-key
user@example.com ssh-rsa AAAAB3... desktop-key
user@example.com ssh-ed25519 AAAAC3... server-key
This allows Git to verify commits signed by any of your keys with any of your email addresses.
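The cross-product itself is just `itertools.product`. A sketch with hypothetical sample data, to show the shape of the file (the bash script in the next section is what actually generates it):

```python
from itertools import product

emails = ["hello@jontsai.com", "user@example.com"]
keys = [
    "ssh-ed25519 AAAAC3... laptop-key",
    "ssh-rsa AAAAB3... desktop-key",
]

# One allowed_signers line per (email, key) pair: emails x keys.
lines = [f"{email} {key}" for email, key in product(emails, keys)]
print(len(lines))  # → 4
```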
3. Automated Generation Script
Create scripts/generate_allowed_signers.sh:
#!/bin/bash
# Generate allowed_signers file for Git SSH commit signing
set -e
EMAILS_FILE="${EMAILS_FILE:-emails.txt}"
OUTPUT="${OUTPUT:-allowed_signers}"
# Read emails (filter out comments and empty lines)
emails=$(grep -v '^#' "$EMAILS_FILE" | grep -v '^[[:space:]]*$' || true)
# Clear output file
> "$OUTPUT"
# Enable nullglob for Mac compatibility
shopt -s nullglob
# Process all .pub files
for pubkey in *.pub delegates/*.pub; do
if [ -f "$pubkey" ]; then
key_content=$(cat "$pubkey")
# For each key, add entry for each email
echo "$emails" | while IFS= read -r email; do
echo "$email $key_content" >> "$OUTPUT"
done
fi
done
shopt -u nullglob
echo "Generated $OUTPUT with $(wc -l < "$OUTPUT") entries"
Create an emails.txt file:
# Email addresses used for git commits
hello@jontsai.com
jontsai@users.noreply.github.com
4. Makefile for Easy Management
Create a Makefile to orchestrate everything:
.PHONY: help install install-authorized_keys install-allowed_signers github-signing-keys
## help - Display available targets
help:
@cat Makefile | grep '^## ' --color=never | cut -c4- | \
sed -e "`printf 's/ - /\t- /;'`" | column -s "`printf '\t'`" -t
## authorized_keys - Generate authorized_keys file
authorized_keys: $(wildcard *.pub) $(wildcard delegates/*.pub)
cat *.pub delegates/*.pub > authorized_keys
chmod 600 authorized_keys
## allowed_signers - Generate allowed_signers file
allowed_signers: emails.txt scripts/generate_allowed_signers.sh $(wildcard *.pub)
scripts/generate_allowed_signers.sh
chmod 600 allowed_signers
## install - Install authorized_keys and allowed_signers to ~/.ssh
install: authorized_keys allowed_signers
cp -v authorized_keys ~/.ssh/authorized_keys
cp -v allowed_signers ~/.ssh/allowed_signers
chmod 600 ~/.ssh/authorized_keys ~/.ssh/allowed_signers
## github-signing-keys - Add all keys to GitHub as signing keys
github-signing-keys:
scripts/add_github_signing_keys.sh
5. Automated GitHub Key Upload
Create scripts/add_github_signing_keys.sh:
#!/bin/bash
# Add all public keys to GitHub as signing keys using gh CLI
set -e
# Check if gh is installed
if ! command -v gh &> /dev/null; then
echo "ERROR: gh CLI is not installed"
echo "Install from: https://cli.github.com/"
exit 1
fi
# Check authentication
if ! gh auth status &> /dev/null; then
echo "ERROR: Not authenticated with GitHub"
echo "Run: gh auth login"
exit 1
fi
# Check for required permissions
echo "Checking for required permissions..."
if ! gh ssh-key list &> /dev/null; then
echo "ERROR: Missing required permission scope"
echo ""
echo "To grant this permission, run:"
echo " gh auth refresh -h github.com -s admin:ssh_signing_key"
exit 1
fi
echo "Adding all public keys to GitHub as signing keys..."
success_count=0
skip_count=0
error_count=0
for pubkey in *.pub delegates/*.pub; do
if [ -f "$pubkey" ]; then
title=$(basename "$pubkey" .pub)
echo -n "Adding $title... "
        # Capture output without tripping `set -e` when the command fails
        output=$(gh ssh-key add --type signing "$pubkey" --title "$title" 2>&1) && exit_code=0 || exit_code=$?
if [ $exit_code -eq 0 ]; then
echo "done"
success_count=$((success_count + 1))
elif echo "$output" | grep -q "already exists"; then
echo "already exists (skipped)"
skip_count=$((skip_count + 1))
else
echo "FAILED"
echo " Error: $output"
error_count=$((error_count + 1))
fi
fi
done
echo ""
echo "Summary: $success_count added, $skip_count skipped, $error_count errors"
6. Git Aliases for Viewing Signatures
Add to your ~/.gitconfig:
[alias]
# Compact log with signature status
slog = log --pretty=format:\"%C(auto)%h %G? %C(blue)%an%C(reset) %s %C(dim)(%ar)%C(reset)\"
# Full signature details
logs = log --show-signature
Signature status codes:
- G = Good signature
- B = Bad signature
- U = Good signature, unknown validity
- N = No signature
7. Bash Installation Helper
Add to your ~/.bashrc or bash-ftw:
# GitHub CLI installation with OS detection
function install-gh {
KERNEL=$(uname -s)
if [[ $KERNEL == 'Darwin' ]]; then
echo "Installing GitHub CLI via Homebrew..."
brew install gh
elif [[ $KERNEL == 'Linux' ]]; then
echo "Installing GitHub CLI via apt..."
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | \
sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg && \
sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg && \
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | \
sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null && \
sudo apt update && sudo apt install gh -y
else
echo "Visit https://cli.github.com for installation instructions"
return 1
fi
echo "GitHub CLI installed! Run 'gh auth login' to authenticate"
}
Complete Setup Workflow
Initial Setup (One Time)
- Clone your dotfiles:

cd ~/code
git clone https://github.com/yourusername/dotfiles

- Create your pubkeys repository structure:

mkdir -p ~/code/pubkeys/{scripts,delegates,obsolete}
cd ~/code/pubkeys
# Copy all your .pub files here
cp ~/.ssh/*.pub .
# Create emails.txt
cat > emails.txt <<EOF
# Your git commit emails
you@example.com
you@users.noreply.github.com
EOF

- Copy the scripts (from the examples above) into scripts/
- Install GitHub CLI:

install-gh  # or manually from https://cli.github.com
gh auth login
gh auth refresh -h github.com -s admin:ssh_signing_key

- Install configuration:

cd ~/code/pubkeys
make install
cd ~/code/dotfiles
cp .gitconfig ~/.gitconfig

- Upload keys to GitHub:

cd ~/code/pubkeys
make github-signing-keys
Per-Machine Setup
On each new machine:
# 1. Clone repos
cd ~/code
git clone https://github.com/yourusername/dotfiles
git clone your-pubkeys-repo # if you made it a git repo
# 2. Install
cd ~/code/pubkeys && make install
cd ~/code/dotfiles && cp .gitconfig ~/.gitconfig
# 3. Configure ssh-agent (if needed)
ssh-add ~/.ssh/id_ed25519
# 4. Test it
cd some-repo
git commit -m "test signed commit"
git log --show-signature -1
Benefits
- Zero per-machine configuration - Same setup everywhere
- Automatic key selection - Works with any key in ssh-agent
- Multi-email support - All your commit emails are verified
- One-command GitHub sync - make github-signing-keys
- Easy verification - git slog shows signature status inline
- Makefile dependencies - Auto-regenerates when keys/emails change
Lessons Learned
1. Mac Compatibility
macOS ships the ancient bash 3.2, and the bigger portability trap is that the <<< here-string syntax isn't POSIX, so it breaks whenever a script ends up running under plain sh. Use a pipe instead:
# Don't do this (not portable)
while read line; do ...; done <<< "$var"
# Do this (works everywhere)
echo "$var" | while IFS= read -r line; do ...; done
2. Makefile Dependencies
Use $(wildcard *.pub) to track file dependencies:
allowed_signers: emails.txt scripts/generate.sh $(wildcard *.pub)
3. Error Handling in Scripts
Always check exit codes and provide remediation:
output=$(command 2>&1)
exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "ERROR: $output"
echo "To fix: <remedy steps>"
exit 1
fi
4. GitHub CLI Permissions
The admin:ssh_signing_key scope is required for managing signing keys:
gh auth refresh -h github.com -s admin:ssh_signing_key
5. Verification is Separate from Signing
- user.signingkey or gpg.ssh.defaultKeyCommand - which key to sign with
- gpg.ssh.allowedSignersFile - which keys are trusted for verification
Verification
Check if commits are signed:
# Quick check
git slog -10
# Full details
git log --show-signature -1
# Specific commit
git verify-commit abc123
Public Resources
- My dotfiles - Git configuration and aliases
- bash-ftw - Bash utilities and helpers
Feel free to adapt these scripts and configurations for your own setup!
Conclusion
SSH-based commit signing is simpler than GPG, but managing it across multiple machines requires automation. With centralized configuration, automated scripts, and proper dependency tracking, you can maintain a seamless signing setup across all your machines.
The key principles:
- Automate everything - Scripts eliminate manual steps and errors
- Centralize configuration - Dotfiles repos ensure consistency
- Use cross-products - All emails × all keys for maximum flexibility
- Make it idempotent - Safe to run commands multiple times
- Provide clear errors - Always show how to fix issues
Now all my commits are automatically signed, verified, and visible on GitHub with that coveted “Verified” badge. 🎉
Signing Git commits using SSH instead of GPG
TIL you can sign Git commits using SSH instead of GPG
This tip is 🏆, learned from my really smart colleague.
tl;dr
To configure Git to use your key:
- Configure Git to use SSH for commit signing:

git config --global gpg.format ssh

- Specify which public SSH key to use as the signing key, changing the filename (~/.ssh/examplekey.pub) to the location of your key. The filename might differ, depending on how you generated your key:

git config --global user.signingkey ~/.ssh/examplekey.pub
To sign a commit:
- Use the -S flag when signing your commits:

git commit -S -m "My commit msg"

- Optional. If you don't want to type the -S flag every time you commit, tell Git to sign your commits automatically:

git config --global commit.gpgsign true
Source: https://docs.gitlab.com/ee/user/project/repository/signed_commits/ssh.html
Embrace the power of Regex
Too often, while reviewing code, I’ll see examples like:
def extract_id_and_env(key: str) -> dict:
"""Extracts the object ID from `key`
`key` is a string like 'namespace_prefix_12345'
In some cases, `key` could also look like `namespace_prefix_12345_environment`
Returns a dict with the object ID, an integer
"""
parts = key.split('_')
parsed = {
'id': int(parts[2]),
'environment': parts[3] if len(parts) == 4 else None
}
return parsed
When I see this, I ask, “Why?”
Instead, my preferred way of handling this is to use a regex with named capture groups:
import re
KEY_PATTERN = re.compile(r'^(?P<namespace>[a-z]+)_(?P<prefix>[a-z]+)_(?P<object_id>\d+)(?:_(?P<environment>[a-z]+))?$')

def extract_key_components(key: str):
    m = KEY_PATTERN.match(key)
    parts = ['namespace', 'prefix', 'object_id', 'environment']
    values = [m.group(part) for part in parts]
    return values
In another example (contrived, but modified from a real-world application), from a Django app that serves both students and educators and displays two different landing pages depending on the intent:
def login_view(request):
url = request.GET.get('next')
last_word = url.split("/")[-1]
is_student = True if last_word == 'scholarship' else False
template = 'login/student.html' if is_student else 'login/educator.html'
    response = render(request, template)
return response
The problem with this code is not immediately apparent. It works. However, this code lacks robustness.
An arguably better approach:
import re
STUDENT_LOGIN_INTENT_PATTERNS = [
re.compile(r'^/path/to/(?P<some_id>\d+)/scholarship$'),
]
def is_login_intent_student(request):
    is_student = False
    next_url = request.GET.get('next', '')
    for pattern in STUDENT_LOGIN_INTENT_PATTERNS:
        if pattern.match(next_url):
            is_student = True
            break
    return is_student
def login_view(request):
is_student = is_login_intent_student(request)
template = 'login/student.html' if is_student else 'login/educator.html'
    response = render(request, template)
return response
In addition to the readability and maintainability of the regex approach, it is more robust overall, allowing the programmer to extract multiple components from the string all at once. This reduces the need to update the function later if other parts of the string are needed (and quite often, they are!).
My preference for Regex over Split stems from:
- Somewhat related to the principle of https://www.joelonsoftware.com/2005/05/11/making-wrong-code-look-wrong/
- If code is wrong, it should fail catastrophically and loudly, not subtly or obscurely
- It's hard to make a regex that looks "maybe right": either a regex is right, or it's obviously wrong. (It could also be that I have lots of experience using regexes, and can write them without looking up references.)
- OTOH, while split is conceptually easier to learn, for me it's hard or nearly impossible to tell at a glance whether split-based code is right or wrong. For example, if you look at a block of code using split and various indexes, how would you instantly detect a possible OB1 (aka off-by-one error; https://en.wikipedia.org/wiki/Off-by-one_error)? Not possible. OB1 bugs are prevalent in software because the learning curve, and therefore the barrier to entry, is low, so bugs are more likely to be introduced.
- Regexes, OTOH, have a slightly higher learning curve and a slightly higher barrier to entry, so those who use them tend not to make trivial mistakes.
- If the code never has to be updated again, then great! split is sufficient. But if the next engineer does have to update it, they would not necessarily benefit from the existing code, and would have to re-evaluate all of it in their head to make sure the indexes are right.
- Maintaining a list of patterns, or regexes, encourages a "Solve for N" mentality, whereas split encourages a "solve it quick and dirty" mindset.
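To illustrate the "fail loudly" point, here is a minimal comparison; the pattern and keys are toy examples:

```python
import re

KEY_RE = re.compile(r'^(?P<namespace>[a-z]+)_(?P<prefix>[a-z]+)_(?P<object_id>\d+)$')

# Well-formed key: every component is extracted by name.
m = KEY_RE.match("app_user_123")
print(m.group("object_id"))  # → 123

# Malformed key: match() returns None, so any attempt to read a group
# raises immediately instead of silently producing a wrong value.
print(KEY_RE.match("app_user"))  # → None

# split(), by contrast, "succeeds" on the wrong shape and quietly hands
# back the wrong field: "app_user_extra_123".split("_")[2] == "extra".
```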
Use Fully Qualified datetime in Python
Whenever using the datetime module in Python, a highly recommended
practice is to just import datetime at the top of the file, and use
the fully-qualified module name in the code, as much as possible:
- datetime.datetime
- datetime.timedelta
- datetime.date
If one does from datetime import datetime, it’s hard to figure out
at-a-glance what datetime is referring to in the middle of a
several-hundred-lines-of-code file.
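For example, with the plain import datetime style, every usage is unambiguous at the call site:

```python
import datetime

# Each name is fully qualified, so there is no guessing whether "datetime"
# refers to the module or the class anywhere in the file.
now = datetime.datetime.now()
deadline = now + datetime.timedelta(hours=4)
today = datetime.date.today()

print(isinstance(deadline, datetime.datetime))  # → True
```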
For similar reasons, another common best practice in Python when using
the typing module (https://docs.python.org/3/library/typing.html) is
to import it as import typing as T or import typing as t
(e.g. https://github.com/pallets/flask/blob/cc66213e579d6b35d9951c21b685d0078f373c44/src/flask/app.py#L7; https://github.com/pallets/werkzeug/blob/3115aa6a6276939f5fd6efa46282e0256ff21f1a/src/werkzeug/wrappers/request.py#L4)
