
Node.js SDK

A drop-in proxy for the OpenAI API. TypeScript-first, with full streaming support.

Installation

npm install agentready-sdk openai

Method 1: Drop-in Proxy (Recommended)

Swap your baseURL, add one header, and everything else stays the same:

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://agentready.cloud/v1',  // ← only change
  apiKey: 'ak_...',                         // your AgentReady key
  defaultHeaders: {
    'X-Upstream-API-Key': 'sk-...',         // your OpenAI key
  },
});

// Everything works exactly like before, but 40-60% cheaper
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: longPrompt }],
});

Helper Function

Or use createClient, which sets the baseURL and headers for you:

import { createClient } from 'agentready-sdk';

const client = createClient({
  apiKey: 'ak_...',
  upstreamKey: 'sk-...',
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Method 2: Patch Existing Client

Already have a configured OpenAI client? Patch it in place:

import OpenAI from 'openai';
import { patchOpenAI } from 'agentready-sdk';

const client = new OpenAI({ apiKey: 'sk-...' });
patchOpenAI(client, { apiKey: 'ak_...' });
// client now routes through AgentReady automatically

Method 3: Wrap with Proxy

Wrap an existing client instead of patching it:

import OpenAI from 'openai';
import { AgentReady } from 'agentready-sdk';

const ar = new AgentReady('ak_...');
// new OpenAI() reads OPENAI_API_KEY from the environment
const client = ar.wrapOpenAI(new OpenAI());
// All messages are compressed before being sent

Streaming

Full streaming support; the stream behaves exactly like OpenAI's:

const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: longPrompt }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
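If you also need the assembled response, the per-chunk deltas can be accumulated as they are printed. A minimal sketch, assuming chunks follow OpenAI's chat-completion streaming shape (StreamChunk and collect are illustrative names, not part of the SDK):

```typescript
// Minimal chunk shape for OpenAI-style chat-completion streaming.
type StreamChunk = { choices: Array<{ delta?: { content?: string } }> };

// Echo each delta as it arrives and return the assembled completion.
async function collect(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    const text = chunk.choices[0]?.delta?.content ?? '';
    process.stdout.write(text);
    full += text;
  }
  return full;
}
```

Pass the stream from client.chat.completions.create({ ..., stream: true }) straight into collect.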

Vercel AI SDK

agentReadyMiddleware produces a baseURL and headers you can hand to any provider in the Vercel AI SDK:

import { agentReadyMiddleware } from 'agentready-sdk';

const config = agentReadyMiddleware({
  apiKey: 'ak_...',
  upstreamKey: 'sk-...',
});
// Use config.baseURL and config.headers with your AI provider
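One way that wiring can look with the Vercel AI SDK's @ai-sdk/openai provider. createOpenAI and generateText are real Vercel AI SDK APIs, but this pairing with agentReadyMiddleware is a sketch, not an official integration:

```typescript
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { agentReadyMiddleware } from 'agentready-sdk';

const config = agentReadyMiddleware({
  apiKey: 'ak_...',
  upstreamKey: 'sk-...',
});

// Route the provider through AgentReady instead of api.openai.com.
const openai = createOpenAI({
  baseURL: config.baseURL,
  headers: config.headers,
  apiKey: 'ak_...',
});

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Summarize this long document...',
});
```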

Manual Compression

Compress text directly, without routing a chat request:

import { AgentReady } from 'agentready-sdk';

const ar = new AgentReady('ak_...');
const result = await ar.compress('Your very long prompt...');
console.log(result.text);             // compressed
console.log(result.tokensSaved);      // 1,247
console.log(result.reductionPercent); // 52.3
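The tokensSaved field translates directly into dollars once you know your model's input price. A hypothetical helper (estimateSavings is not part of the SDK, and the default of $2.50 per 1M input tokens is an assumed example rate; substitute your model's real price):

```typescript
// Shape of the result returned by ar.compress(), per the fields above.
interface CompressResult {
  text: string;
  tokensSaved: number;
  reductionPercent: number;
}

// Estimated dollar saving = tokens saved × price per token.
function estimateSavings(
  result: CompressResult,
  pricePerMillionTokens = 2.5,
): number {
  return (result.tokensSaved / 1_000_000) * pricePerMillionTokens;
}
```

For the example result above, 1,247 tokens saved works out to about a third of a cent per request, which adds up quickly at agent-scale call volumes.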

Pricing

Beta: free, unlimited usage. After the beta, pricing is pay-per-token at roughly 60% below direct API costs.