Node.js SDK
A TypeScript-first compression SDK for OpenAI calls, with streaming support.
Installation
```bash
npm install agentready-sdk openai
```

Method 1: Compress + Call (Recommended)
Compress your messages first, then call OpenAI directly:
```typescript
import { compress } from 'agentready-sdk';
import OpenAI from 'openai';

// Step 1 — compress
const { messages, stats } = await compress({
  apiKey: process.env.AGENTREADY_API_KEY,
  messages: [{ role: 'user', content: longPrompt }],
});

// Step 2 — call OpenAI directly
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages,
});
```

Compress with Options
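The option shape below is inferred from the examples on this page; the exact field names and defaults are assumptions, not a complete API reference.

```typescript
// Assumed shape of the compress() options, inferred from the docs examples.
type CompressionLevel = 'light' | 'medium' | 'aggressive';

interface CompressOptions {
  apiKey: string | undefined;                    // e.g. process.env.AGENTREADY_API_KEY
  messages: { role: string; content: string }[]; // OpenAI-style chat messages
  level?: CompressionLevel;                      // default level is not documented here
  preserveCode?: boolean;                        // keep code blocks intact
}

// Example value satisfying the assumed shape:
const opts: CompressOptions = {
  apiKey: 'sk-example',
  messages: [{ role: 'user', content: 'Hello!' }],
  level: 'medium',
  preserveCode: true,
};
```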
```typescript
import { compress } from 'agentready-sdk';
import OpenAI from 'openai';

const { messages, stats } = await compress({
  apiKey: process.env.AGENTREADY_API_KEY,
  messages: [{ role: 'user', content: 'Hello!' }],
  level: 'medium', // light, medium, aggressive
  preserveCode: true,
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages,
});
```

Method 2: Batch Compression
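The example below fans out with `Promise.all`, which rejects as a whole if any single compression fails. When partial results are acceptable, `Promise.allSettled` is an alternative; a sketch with a stand-in compress function (the SDK's failure behavior is an assumption here):

```typescript
// Sketch: batch compression that tolerates individual failures.
// `compressFn` stands in for the SDK's compress(); its signature is assumed.
type Msg = { role: string; content: string };
type CompressResult = { messages: Msg[]; stats: { tokensSaved: number } };

async function compressAll(
  conversations: Msg[][],
  compressFn: (messages: Msg[]) => Promise<CompressResult>,
): Promise<{ ok: CompressResult[]; failed: number }> {
  const settled = await Promise.allSettled(conversations.map((m) => compressFn(m)));
  const ok = settled
    .filter((s): s is PromiseFulfilledResult<CompressResult> => s.status === 'fulfilled')
    .map((s) => s.value);
  return { ok, failed: settled.length - ok.length };
}

// Demo with a stub that rejects empty conversations:
const stub = async (messages: Msg[]): Promise<CompressResult> => {
  if (messages.length === 0) throw new Error('empty conversation');
  return { messages, stats: { tokensSaved: 10 } };
};

compressAll([[{ role: 'user', content: 'hi' }], []], stub).then(({ ok, failed }) => {
  console.log(ok.length, failed); // 1 1
});
```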
```typescript
import { compress } from 'agentready-sdk';

// Compress multiple conversations in parallel
const results = await Promise.all([
  compress({ apiKey: process.env.AGENTREADY_API_KEY, messages: conversation1 }),
  compress({ apiKey: process.env.AGENTREADY_API_KEY, messages: conversation2 }),
]);

// Each result contains { messages, stats }
```

Method 3: Compression Stats
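The stats object reports per-call savings, with the field names shown in the example below. A small sketch that aggregates savings across several results; treating the fields as plain numbers is an assumption:

```typescript
// Sketch: summing savings across multiple compression results.
// Field names come from the docs example; numeric types are assumed.
interface CompressionStats {
  tokensSaved: number;
  reductionPercent: number;
  savingsUsd: number;
}

function totalSavings(stats: CompressionStats[]): { tokens: number; usd: number } {
  return stats.reduce(
    (acc, s) => ({ tokens: acc.tokens + s.tokensSaved, usd: acc.usd + s.savingsUsd }),
    { tokens: 0, usd: 0 },
  );
}

const demo = totalSavings([
  { tokensSaved: 1247, reductionPercent: 52.3, savingsUsd: 0.0374 },
  { tokensSaved: 800, reductionPercent: 40.0, savingsUsd: 0.024 },
]);
console.log(demo.tokens); // 2047
```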
```typescript
import { compress } from 'agentready-sdk';

const { messages, stats } = await compress({
  apiKey: process.env.AGENTREADY_API_KEY,
  messages: [{ role: 'user', content: longPrompt }],
});

console.log(stats.tokensSaved);      // 1,247
console.log(stats.reductionPercent); // 52.3
console.log(stats.savingsUsd);       // 0.0374
```

Streaming
Compress first, then stream with OpenAI as usual:
```typescript
import { compress } from 'agentready-sdk';
import OpenAI from 'openai';

const { messages } = await compress({
  apiKey: process.env.AGENTREADY_API_KEY,
  messages: [{ role: 'user', content: longPrompt }],
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages,
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

Vercel AI SDK
```typescript
import { compress } from 'agentready-sdk';

// Compress before passing to Vercel AI SDK
const { messages } = await compress({
  apiKey: process.env.AGENTREADY_API_KEY,
  messages: conversationMessages,
});

// Use compressed messages with your AI provider
```

Manual Compression
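The manual API works on a raw string rather than a messages array. One pattern worth sketching is skipping compression for short prompts, where there is little to save; the threshold and the stand-in compressor below are assumptions, not SDK behavior:

```typescript
// Sketch: only compress prompts above a length threshold.
// The cutoff and the stand-in compressor are assumptions for illustration.
type ManualResult = { text: string; tokensSaved: number };

async function maybeCompress(
  prompt: string,
  compressFn: (text: string) => Promise<ManualResult>,
  minChars = 500, // assumed cutoff; tune for your workload
): Promise<ManualResult> {
  if (prompt.length < minChars) {
    return { text: prompt, tokensSaved: 0 }; // pass through unchanged
  }
  return compressFn(prompt);
}

// Demo with a stub compressor:
const stubCompress = async (text: string): Promise<ManualResult> => ({
  text: text.slice(0, 100),
  tokensSaved: 42,
});

maybeCompress('short prompt', stubCompress).then((r) => {
  console.log(r.tokensSaved); // 0
});
```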
```typescript
import { AgentReady } from 'agentready-sdk';

const ar = new AgentReady(process.env.AGENTREADY_API_KEY);
const result = await ar.compress('Your very long prompt...');

console.log(result.text);             // compressed
console.log(result.tokensSaved);      // 1,247
console.log(result.reductionPercent); // 52.3
```

Pricing
During the beta, usage is free and unlimited. After the beta, pricing is pay-per-token at roughly 60% less than direct API costs.
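To make the "roughly 60% less" figure concrete, a back-of-envelope sketch (the direct per-token price below is an assumed placeholder, not a published rate):

```typescript
// Back-of-envelope pricing illustration. The direct price is an assumption.
const directUsdPerMillionTokens = 2.5; // assumed direct API rate, USD per 1M tokens
const discount = 0.6;                  // "~60% less" from the pricing note above
const effectiveUsdPerMillionTokens = directUsdPerMillionTokens * (1 - discount);
console.log(effectiveUsdPerMillionTokens.toFixed(2)); // 1.00
```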