Connect OpenClaw
Save 40-60% on every LLM call through OpenClaw
OpenClaw is a self-hosted gateway that connects chat apps (WhatsApp, Telegram, Discord) to AI coding agents. AgentReady's compress API reduces your token costs by 40-60%. Compress messages before sending them to any LLM — same meaning, fewer tokens.
How It Works
Call compress API
Send messages to the AgentReady compress endpoint before calling your LLM
Compressed output
Get back compressed messages with same meaning, 40-60% fewer tokens
Save 40-60%
Your Anthropic/OpenAI bill drops by 40-60%, with ~5ms added latency and a 0.4% accuracy delta
Prerequisites
- OpenClaw installed and running (openclaw gateway)
- An AgentReady API key — get one free in 30 seconds
- An OpenAI API key (you probably already have this)
Compress Messages with the Python SDK
Install the SDK with pip install agentready, then compress messages before passing them to your LLM.
# Python — two-step compress pattern
import os

import agentready
from openai import OpenAI

# Step 1: Compress with AgentReady
result = agentready.compress(
    api_key="ak_your_agentready_key",
    messages=[{"role": "user", "content": "your prompt here..."}]
)

# Step 2: Call OpenAI directly with compressed messages
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
response = client.chat.completions.create(
    model="gpt-4o",
    messages=result["messages"]
)

That's It — Test It
Run a test request. Compare the token count before and after compression — you'll see 40-60% savings. Your prompts keep the same meaning with fewer tokens.
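To sanity-check the savings yourself, you can compare approximate token counts before and after compression. The helper below is a hypothetical sketch, not part of the AgentReady SDK; it uses the common ~4-characters-per-token rule of thumb, so for exact numbers use your model's real tokenizer (e.g. tiktoken for OpenAI models).

```python
# Rough before/after comparison. Hypothetical helper, not part of the SDK.
# Uses the common ~4 chars/token approximation for a quick sanity check;
# exact counts require the model's tokenizer (e.g. tiktoken for OpenAI).

def approx_tokens(messages):
    """Approximate token count: roughly 4 characters per token."""
    chars = sum(len(m["content"]) for m in messages)
    return max(1, chars // 4)

def savings_pct(original, compressed):
    """Percent of tokens saved by compression."""
    before = approx_tokens(original)
    after = approx_tokens(compressed)
    return round(100 * (before - after) / before, 1)

# Example with a prompt trimmed to half its length:
original = [{"role": "user", "content": "please " * 40}]
compressed = [{"role": "user", "content": "please " * 20}]
print(savings_pct(original, compressed))  # → 50.0
```

Run this against your real prompts (original messages vs. the compressed messages the API returns) to see where in the 40-60% range your workload lands.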
💡 Tip: Install SDKs with pip install agentready (Python) or npm install agentready (Node.js).
Node.js / TypeScript SDK
Install with npm install agentready and use the same two-step pattern:
// Node.js / TypeScript — two-step compress pattern
import { compress } from 'agentready';
import OpenAI from 'openai';

// Step 1: Compress with AgentReady
const { messages } = await compress({
  apiKey: 'ak_your_agentready_key',
  messages: [{ role: 'user', content: 'your prompt here...' }]
});

// Step 2: Call OpenAI directly with compressed messages
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const res = await client.chat.completions.create({
  model: 'gpt-4o',
  messages
});

Works with any model — just change the model string in the create() call.
Keep Keys Safe with Env Vars
Don't hardcode keys. Use environment variables and the compress API directly:
# Add to your .env file
AGENTREADY_API_KEY=ak_your_agentready_key
OPENAI_API_KEY=sk-your_openai_key

# cURL — compress API
curl -X POST https://agentready.cloud/api/v1/compress \
-H "Authorization: Bearer $AGENTREADY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "user", "content": "your prompt here..."}
]
}'
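In Python, the same pattern keeps both keys out of your source. A minimal sketch, assuming the variable names from the .env file (the require_env helper here is illustrative, not part of any SDK):

```python
import os

def require_env(name):
    """Fetch a required key from the environment, failing fast if missing.

    A clear startup error beats a confusing 401 halfway through a request.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value

# Usage: raises immediately if either key is missing.
# agentready_key = require_env("AGENTREADY_API_KEY")
# openai_key = require_env("OPENAI_API_KEY")
```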
# Response includes compressed messages — pass them to any LLM
Ready to start?
Get your free API key and paste the config — takes 30 seconds.
FAQ
Does it work with Anthropic models too?
Yes! The compress API works with any LLM. Compress your messages with AgentReady first, then send them to OpenAI, Anthropic, or any provider.
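One wrinkle when switching providers: Anthropic's Messages API takes the system prompt as a separate parameter rather than a "system" role message. A small reshape (sketched below; to_anthropic is an illustrative helper, not part of the SDK) lets the compressed messages drop straight in:

```python
def to_anthropic(messages):
    """Split OpenAI-style messages into Anthropic's (system, messages) shape.

    Anthropic's Messages API takes the system prompt as a separate
    parameter; only user/assistant turns go in the messages list.
    """
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    return "\n".join(system_parts), turns

# Pass the compressed messages straight through:
# system, turns = to_anthropic(result["messages"])
# anthropic.Anthropic().messages.create(
#     model="<claude-model-id>", max_tokens=1024,
#     system=system, messages=turns,
# )
```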
Will it slow down my agent?
No. AgentReady adds ~5ms per request. On a typical GPT-4o call that takes 2-10 seconds, this is imperceptible.
Is it free?
Yes — AgentReady is completely free during the open beta. No credit card required, no usage limits.
Does this work with OpenClaw directly?
Native OpenClaw integration is coming soon. For now, use the AgentReady SDK to compress messages in your custom OpenClaw handlers or scripts.
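Until native integration lands, a defensive pattern works well in custom handler code: try compression, and fall back to the original messages on any failure so a gateway outage never blocks the chat. The sketch below stubs the compressor (compress_fn stands in for agentready.compress; OpenClaw's handler API is not shown here, so the wiring is hypothetical):

```python
def compress_or_passthrough(messages, compress_fn):
    """Try to compress; on any failure, return the original messages.

    compress_fn stands in for agentready.compress, so an outage only
    costs you the savings on that call, never the conversation.
    """
    try:
        result = compress_fn(messages=messages)
        return result.get("messages") or messages
    except Exception:
        return messages

# Example with a stub compressor that drops a filler word:
def stub_compress(messages):
    return {"messages": [
        {**m, "content": m["content"].replace("basically ", "")}
        for m in messages
    ]}

msgs = [{"role": "user", "content": "basically summarize this"}]
print(compress_or_passthrough(msgs, stub_compress))
```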