Connect OpenClaw
Save 40-60% on every LLM call through OpenClaw
OpenClaw is a self-hosted gateway that connects chat apps (WhatsApp, Telegram, Discord) to AI coding agents. AgentReady works as a custom model provider — every message your agent processes gets compressed automatically, saving 40-60% on tokens.
How It Works
Chat normally
Message your agent on WhatsApp, Telegram, Discord — nothing changes
Auto-compressed
AgentReady compresses every prompt before it hits the LLM. Same meaning, fewer tokens
Save 40-60%
Your LLM bill drops by 40-60%. ~5ms overhead, 0.4% accuracy delta
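Conceptually, the gateway sits between your chat message and the LLM: shrink the prompt, then forward it. The sketch below illustrates that flow with a naive whitespace-squeezing `compress()` stand-in; AgentReady's actual compression is semantic and is not shown here.

```python
import re

def compress(prompt: str) -> str:
    # Naive stand-in: collapse runs of whitespace. AgentReady's real
    # compression rewrites the prompt into fewer tokens while preserving
    # meaning; this only illustrates the "shrink before forwarding" step.
    return re.sub(r"\s+", " ", prompt).strip()

def forward_to_llm(prompt: str) -> str:
    # Placeholder for the upstream OpenAI call (the one that carries
    # your X-Upstream-API-Key header).
    return f"[LLM response to {len(prompt)}-char prompt]"

raw = "Summarize   the\n\n  following    meeting notes:  ..."
compact = compress(raw)
assert len(compact) < len(raw)  # fewer characters in, fewer tokens billed
print(forward_to_llm(compact))
```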
Prerequisites
- OpenClaw installed and running (openclaw gateway)
- An AgentReady API key — get one free in 30 seconds
- An OpenAI API key (you probably already have this)
Add AgentReady as a Custom Provider
Add this to your ~/.openclaw/openclaw.json. OpenClaw's hot-reload will pick it up — no restart needed.
{
models: {
providers: {
agentready: {
baseUrl: "https://agentready.cloud/v1",
apiKey: "ak_your_agentready_key",
api: "openai-completions",
headers: { "X-Upstream-API-Key": "${OPENAI_API_KEY}" },
models: [
{
id: "gpt-4o",
name: "GPT-4o via AgentReady (−60% tokens)",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 128000,
maxTokens: 16384,
},
],
},
},
},
agents: {
defaults: {
model: { primary: "agentready/gpt-4o" },
},
},
}

That's It — Test It
Send a message to your agent through any channel. Check the OpenClaw Control UI — you'll see requests going through agentready/gpt-4o. Your prompts are now automatically compressed.
💡 Tip: Use /model agentready/gpt-4o in any chat to switch to AgentReady on the fly, or set it as default in the config above.
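You can also verify the provider outside OpenClaw. This sketch builds the same request the config above produces (your AgentReady key in Authorization, your OpenAI key in X-Upstream-API-Key); the send itself is commented out so the sketch runs without network access, and both key values are placeholders.

```python
import json
import urllib.request

req = urllib.request.Request(
    "https://agentready.cloud/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}],
    }).encode(),
    headers={
        "Authorization": "Bearer ak_your_agentready_key",  # AgentReady key
        "X-Upstream-API-Key": "sk-your_openai_key",        # forwarded upstream to OpenAI
        "Content-Type": "application/json",
    },
)
# Uncomment to actually send the request:
# print(urllib.request.urlopen(req).read().decode())
print(req.get_full_url())
```

This is the same wire format OpenClaw's `openai-completions` provider emits, so if this request succeeds, the config will too.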
Advanced: Multiple Models + Aliases
Route multiple OpenAI models through AgentReady with short aliases:
{
models: {
providers: {
agentready: {
baseUrl: "https://agentready.cloud/v1",
apiKey: "ak_your_agentready_key",
api: "openai-completions",
headers: { "X-Upstream-API-Key": "${OPENAI_API_KEY}" },
models: [
{
id: "gpt-4o",
name: "GPT-4o via AgentReady",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 128000,
maxTokens: 16384,
},
{
id: "gpt-4o-mini",
name: "GPT-4o Mini via AgentReady",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 128000,
maxTokens: 16384,
},
{
id: "gpt-4.1",
name: "GPT-4.1 via AgentReady",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 128000,
maxTokens: 32768,
},
],
},
},
},
agents: {
defaults: {
model: {
primary: "agentready/gpt-4o",
fallbacks: ["agentready/gpt-4o-mini"],
},
models: {
"agentready/gpt-4o": { alias: "ar" },
"agentready/gpt-4o-mini": { alias: "ar-mini" },
},
},
},
}

Now you can type /model ar or /model ar-mini in chat to switch.
Keep Keys Safe with Env Vars
Don't hardcode keys. Put them in ~/.openclaw/.env and reference them with ${VAR} substitution:

# Add to ~/.openclaw/.env
AGENTREADY_API_KEY=ak_your_agentready_key
OPENAI_API_KEY=sk-your_openai_key

{
env: {
AGENTREADY_API_KEY: "ak_your_agentready_key",
},
models: {
providers: {
agentready: {
baseUrl: "https://agentready.cloud/v1",
apiKey: "${AGENTREADY_API_KEY}",
api: "openai-completions",
headers: { "X-Upstream-API-Key": "${OPENAI_API_KEY}" },
models: [
{
id: "gpt-4o",
name: "GPT-4o via AgentReady (−60% tokens)",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 128000,
maxTokens: 16384,
},
],
},
},
},
agents: {
defaults: {
model: { primary: "agentready/gpt-4o" },
},
},
}

40-60% saved · ~5ms overhead · 0.4% accuracy Δ · Hot reload
Ready to start?
Get your free API key and paste the config — takes 30 seconds.
FAQ
Does it work with Anthropic models too?
AgentReady currently proxies OpenAI-compatible endpoints only. Route your OpenAI calls through AgentReady for compression, and keep your Anthropic models configured as a direct provider.
Will it slow down my agent?
No. AgentReady adds ~5ms per request. On a typical GPT-4o call that takes 2-10 seconds, this is imperceptible.
Is it free?
Yes — AgentReady is completely free during the open beta. No credit card required, no usage limits.
Do I need to restart OpenClaw?
No. OpenClaw hot-reloads model and provider config changes. Just save the file and it picks up the new provider automatically.