AgentReady API Docs

7 tools to make any website AI-ready


Introduction

AgentReady provides 7 API tools to optimize any website for AI consumption. Whether you're building an AI agent, an SEO tool, or a SaaS that interacts with LLMs, these endpoints help you reduce token costs, extract clean content, audit AI readiness, and more.

- TokenCut
- MD Converter
- Sitemap Generator
- LLMO Auditor
- Structured Data
- Robots.txt Analyzer
- Image Proxy

Authentication

All API requests require an API key passed as a Bearer token in the Authorization header.

```bash
curl -X POST https://agentready.cloud/api/v1/tools/tokencut \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello world", "level": "standard"}'
```

Create API keys in your Dashboard → API Keys. Keys start with ar_.

Base URL

```text
https://agentready.cloud
```

All endpoints are prefixed with /api/v1/

Quick Start

Get started in 3 steps:

1. Sign up & create an API key. Free account includes 100 credits. Sign up →
2. Make your first API call. Try TokenCut — compress text before sending to any LLM.
3. Integrate into your pipeline. Use the Python/JS examples below, or give one of the AI prompts to Cursor/Copilot.

Python Example

```python
import requests

API_KEY = "ar_your_api_key_here"
BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# Compress text before sending to GPT-4
r = requests.post(f"{BASE}/tools/tokencut", headers=HEADERS, json={
    "text": "Your long text here...",
    "level": "standard"
})
data = r.json()
compressed = data["data"]["compressed_text"]
print(f"Saved {data['data']['stats']['reduction_percent']}% tokens")

JavaScript / Node.js Example

```javascript
const API_KEY = "ar_your_api_key_here";
const BASE = "https://agentready.cloud/api/v1";

async function tokencut(text, level = "standard") {
  const res = await fetch(`${BASE}/tools/tokencut`, {
    method: "POST",
    headers: { "Authorization": `Bearer ${API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ text, level }),
  });
  const data = await res.json();
  return data.data.compressed_text;
}

// Usage: compress before sending to any LLM
const compressed = await tokencut("Your long text here...");
```

Tools API — 7 Endpoints

TokenCut
POST /api/v1/tools/tokencut

Compress text before sending to GPT-4, Claude, Gemini, or any other LLM. Removes filler words, simplifies verbose constructions, and normalizes whitespace while preserving semantic meaning, code blocks, URLs, and numbers. The flagship tool.

Compression levels:

- light: whitespace normalization only
- standard: adds filler-word removal (recommended)
- aggressive: adds stop-word removal and deep pruning (max savings, may lose nuance)
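Since every call reports its own stats, one cheap way to pick a level is to run a sample through all three and compare the reported reduction (three calls, so 3 credits; the API key and sample text below are placeholders):

```python
import requests

API_KEY = "ar_your_api_key_here"
URL = "https://agentready.cloud/api/v1/tools/tokencut"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

sample = "Your long text here..."

# Compare reduction across all three levels (1 credit per call)
for level in ("light", "standard", "aggressive"):
    r = requests.post(URL, headers=HEADERS, json={"text": sample, "level": level})
    stats = r.json()["data"]["stats"]
    print(f"{level:>10}: {stats['reduction_percent']}% fewer tokens, ${stats['savings_usd']} saved")
```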

Parameters

| Name | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | The text to compress (1–500,000 chars) |
| level | string | No | light \| standard \| aggressive (default: "standard") |
| preserve_code | boolean | No | Keep code blocks intact (default: true) |
| preserve_urls | boolean | No | Keep URLs intact (default: true) |
| preserve_numbers | boolean | No | Keep numerical values intact (default: true) |
| target_model | string | No | Target LLM for cost estimation (default: "gpt-4") |

Request Body

```json
{
  "text": "In order to effectively and efficiently optimize the overall performance of your application, it is absolutely essential to carefully consider the various different factors that might potentially influence the speed and responsiveness of the system.",
  "level": "standard",
  "target_model": "gpt-4"
}
```

Response

```json
{
  "success": true,
  "data": {
    "compressed_text": "To optimize application performance, consider factors influencing system speed and responsiveness.",
    "stats": {
      "original_tokens": 42,
      "compressed_tokens": 16,
      "reduction_percent": 61.9,
      "original_cost_usd": 0.00126,
      "compressed_cost_usd": 0.00048,
      "savings_usd": 0.00078,
      "target_model": "gpt-4",
      "processing_time_ms": 12
    }
  },
  "credits_consumed": 1,
  "credits_remaining": 99
}
```

MD Converter
POST /api/v1/tools/md-converter

Convert any webpage to clean, LLM-ready Markdown. Strips navigation, ads, and clutter. Extracts metadata, generates a table of contents, and reports token stats. Perfect for building RAG pipelines and knowledge bases.

Batch endpoint available: POST /api/v1/tools/md-converter/batch — accepts an array of URLs (max 20). Cost: 1 credit per URL.
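A sketch of a batch call. The docs above confirm the endpoint and the 20-URL limit, but the exact request field names are an assumption here (flagged in the comments):

```python
import requests

API_KEY = "ar_your_api_key_here"
BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# Assumed body shape: an array of URLs under "urls" (max 20 per the docs),
# with per-URL options mirroring the single-URL endpoint.
r = requests.post(f"{BASE}/tools/md-converter/batch", headers=HEADERS, json={
    "urls": [
        "https://example.com/blog/post-1",
        "https://example.com/blog/post-2",
    ],
    "remove_navigation": True,
    "remove_ads": True,
})
print(r.json())  # 1 credit per URL
```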

Parameters

| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | URL to convert to Markdown |
| remove_navigation | boolean | No | Remove nav bars and menus (default: true) |
| remove_ads | boolean | No | Remove advertisements (default: true) |
| remove_comments | boolean | No | Remove user comment sections (default: false) |
| preserve_images | boolean | No | Keep image references (default: false) |
| image_mode | string | No | url \| base64 \| description (default: "url") |
| extract_metadata | boolean | No | Extract page metadata (default: true) |
| include_toc | boolean | No | Generate table of contents (default: false) |
| output_format | string | No | markdown \| plain_text \| json (default: "markdown") |
| max_tokens | integer | No | Auto-truncate output if it exceeds this limit |

Request Body

```json
{
  "url": "https://example.com/blog/post-1",
  "remove_navigation": true,
  "remove_ads": true,
  "extract_metadata": true,
  "include_toc": true
}
```

Response

```json
{
  "success": true,
  "data": {
    "markdown": "# Blog Post Title\n\n## Table of Contents\n- Introduction\n- Main Section\n\n## Introduction\nClean content here...",
    "metadata": {
      "title": "Blog Post Title",
      "author": "John Doe",
      "description": "A great article about...",
      "language": "en"
    },
    "stats": {
      "original_tokens": 4523,
      "optimized_tokens": 1210,
      "reduction_percent": 73.2,
      "original_cost_usd": 0.1357,
      "optimized_cost_usd": 0.0363,
      "processing_time_ms": 1840
    }
  },
  "credits_consumed": 1,
  "credits_remaining": 98
}
```

Sitemap Generator
POST /api/v1/tools/sitemap-generator

Crawl a domain and generate an AI-friendly sitemap with knowledge-density scoring. Each page is scored by readability, word count, and structure. Ideal for building AI knowledge bases from entire websites.

Credit cost scales with max_pages: ≤50 = 5 credits, ≤200 = 20, ≤1000 = 50, >1000 = 100 credits.
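That tiering maps directly to a small helper if you want to estimate cost before crawling; a sketch:

```python
def sitemap_credit_cost(max_pages: int) -> int:
    """Credit cost tiers for the sitemap generator, as documented above."""
    if max_pages <= 50:
        return 5
    if max_pages <= 200:
        return 20
    if max_pages <= 1000:
        return 50
    return 100

assert sitemap_credit_cost(50) == 5
assert sitemap_credit_cost(500) == 50
assert sitemap_credit_cost(2000) == 100
```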

Parameters

| Name | Type | Required | Description |
|---|---|---|---|
| domain | string | Yes | Domain to crawl (e.g. "example.com") |
| max_pages | integer | No | Max pages to crawl, 1–5000 (default: 100) |
| max_depth | integer | No | Max crawl depth, 1–10 (default: 3) |
| exclude_patterns | string[] | No | URL patterns to exclude (default: []) |
| score_algorithm | string | No | tfidf \| readability \| structured_data (default: "readability") |
| include_external_links | boolean | No | Include external links in output (default: false) |
| generate_graph | boolean | No | Generate link graph/tree (default: true) |

Request Body

```json
{
  "domain": "example.com",
  "max_pages": 50,
  "max_depth": 3,
  "score_algorithm": "readability",
  "exclude_patterns": ["/admin", "/login"]
}
```

Response

```json
{
  "success": true,
  "data": {
    "domain": "example.com",
    "pages": [
      { "url": "/", "title": "Home", "score": 0.96, "depth": 0, "word_count": 1250 },
      { "url": "/about", "title": "About Us", "score": 0.91, "depth": 1, "word_count": 890 },
      { "url": "/blog/post-1", "title": "First Post", "score": 0.87, "depth": 2, "word_count": 2100 }
    ],
    "tree": { "url": "/", "title": "Home", "score": 0.96, "children": [...] },
    "total_pages": 47,
    "avg_score": 0.82
  },
  "credits_consumed": 5,
  "credits_remaining": 93
}
```

LLMO Auditor
POST /api/v1/tools/llmo-auditor

Run a comprehensive audit on any webpage to measure how well it's optimized for LLMs and AI crawlers. Scores across 5 categories with detailed recommendations. Think of it as Lighthouse, but for AI.

Scoring: ≥80 = pass (green), ≥60 = warning (yellow), <60 = fail (red). The 5 categories are: structured_data, semantic_html, content_density, token_efficiency, ai_readability.
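If you want to reproduce the status banding client-side, the thresholds above reduce to a one-liner:

```python
def audit_status(score: int) -> str:
    """Map a 0-100 audit score to the documented status bands."""
    return "pass" if score >= 80 else "warning" if score >= 60 else "fail"

# Values taken from the example response below
assert audit_status(82) == "pass"
assert audit_status(72) == "warning"
assert audit_status(53) == "fail"
```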

Parameters

| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | URL to audit |
| categories | string[] | No | Categories to audit (default: all 5 categories) |

Request Body

```json
{
  "url": "https://example.com",
  "categories": [
    "structured_data",
    "semantic_html",
    "content_density",
    "token_efficiency",
    "ai_readability"
  ]
}
```

Response

```json
{
  "success": true,
  "data": {
    "url": "https://example.com",
    "overall_score": 72,
    "status": "warning",
    "categories": [
      {
        "name": "structured_data",
        "score": 64,
        "status": "warning",
        "details": ["Found 1 JSON-LD schema", "Missing FAQPage schema"],
        "recommendations": [
          { "action": "Add JSON-LD", "description": "Add FAQPage schema for better AI extraction" }
        ]
      },
      { "name": "semantic_html", "score": 82, "status": "pass", "details": [...] },
      { "name": "content_density", "score": 91, "status": "pass", "details": [...] },
      { "name": "token_efficiency", "score": 53, "status": "fail", "details": [...] },
      { "name": "ai_readability", "score": 80, "status": "pass", "details": [...] }
    ],
    "recommendations": ["Add JSON-LD structured data", "Improve heading hierarchy", "Reduce boilerplate HTML"],
    "processing_time_ms": 2340
  },
  "credits_consumed": 2,
  "credits_remaining": 96
}
```

Structured Data
POST /api/v1/tools/structured-data

Analyze a page's existing structured data (JSON-LD, Open Graph) and generate Schema.org recommendations. Helps improve discoverability by AI crawlers and LLMs that rely on structured markup.

Parameters

| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | URL to analyze |
| schema_types | string[] | No | Schema.org types to generate (default: ["Article", "Organization", "FAQPage"]) |

Request Body

```json
{
  "url": "https://example.com/blog/my-post",
  "schema_types": ["Article", "FAQPage", "Organization"]
}
```

Response

```json
{
  "success": true,
  "data": {
    "url": "https://example.com/blog/my-post",
    "existing_json_ld": [
      { "@context": "https://schema.org", "@type": "Article", "headline": "..." }
    ],
    "existing_og_tags": {
      "og:title": "My Post",
      "og:description": "A great article..."
    },
    "has_structured_data": true,
    "recommendations": [
      {
        "type": "FAQPage",
        "json_ld": { "@context": "https://schema.org", "@type": "FAQPage", "mainEntity": [...] },
        "priority": "high"
      }
    ]
  },
  "credits_consumed": 1,
  "credits_remaining": 98
}
```

Robots.txt Analyzer
POST /api/v1/tools/robots-txt

Analyze a domain's robots.txt to check AI bot compatibility. See which AI crawlers (GPTBot, ClaudeBot, Google-Extended, etc.) are allowed or blocked, and get recommendations to improve AI accessibility.

Parameters

| Name | Type | Required | Description |
|---|---|---|---|
| domain | string | Yes | Domain to analyze (e.g. "example.com") |
| ai_bots | string[] | No | AI bots to check (default: ["GPTBot", "Google-Extended", "ClaudeBot", "Bingbot"]) |

Request Body

```json
{
  "domain": "example.com",
  "ai_bots": ["GPTBot", "ClaudeBot", "Google-Extended", "Bingbot"]
}
```

Response

```json
{
  "success": true,
  "data": {
    "domain": "example.com",
    "robots_url": "https://example.com/robots.txt",
    "has_robots_txt": true,
    "content": "User-agent: *\nAllow: /\n\nUser-agent: GPTBot\nDisallow: /",
    "bot_analysis": [
      { "bot": "GPTBot", "status": "blocked", "allowed": false },
      { "bot": "ClaudeBot", "status": "allowed", "allowed": true },
      { "bot": "Google-Extended", "status": "allowed", "allowed": true },
      { "bot": "Bingbot", "status": "allowed", "allowed": true }
    ],
    "recommendations": [
      {
        "action": "Allow GPTBot",
        "description": "GPTBot is blocked — this prevents OpenAI from indexing your content",
        "example": "User-agent: GPTBot\nAllow: /",
        "priority": "high"
      }
    ],
    "ai_friendly_score": 75
  },
  "credits_consumed": 1,
  "credits_remaining": 97
}
```

Image Proxy
POST /api/v1/tools/image-proxy

Process images for AI pipelines: resize for LLM vision models, generate descriptions, or analyze image content. Returns optimized URLs and metadata. Supports WebP, JPEG, and PNG.

Credit costs by mode: resize = 1 credit, analyze = 1 credit, describe = 3 credits (uses Vision AI).
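A describe-mode sketch using the documented parameters. Only the resize-mode response is shown below, so the shape of describe-mode output is not guaranteed here; this just prints the data object:

```python
import requests

API_KEY = "ar_your_api_key_here"
BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# describe mode costs 3 credits (Vision AI); resize/analyze cost 1
r = requests.post(f"{BASE}/tools/image-proxy", headers=HEADERS, json={
    "image_url": "https://example.com/photo.jpg",
    "mode": "describe",
    "detail_level": "high",
    "extract_text": True,  # run OCR as well
})
print(r.json()["data"])  # describe-mode fields are not documented above
```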

Parameters

| Name | Type | Required | Description |
|---|---|---|---|
| image_url | string | Yes | URL of the image to process |
| mode | string | No | resize \| describe \| analyze (default: "resize") |
| max_width | integer | No | Max width in pixels, 1–4096 (default: 512) |
| max_height | integer | No | Max height in pixels, 1–4096 (default: 512) |
| format | string | No | webp \| jpeg \| png (default: "webp") |
| quality | integer | No | Output quality, 1–100 (default: 80) |
| detail_level | string | No | low \| medium \| high, for describe mode (default: "medium") |
| extract_text | boolean | No | Run OCR on the image (default: false) |
| detect_objects | boolean | No | Detect objects in the image (default: false) |

Request Body

```json
{
  "image_url": "https://example.com/photo.jpg",
  "mode": "resize",
  "max_width": 512,
  "max_height": 512,
  "format": "webp",
  "quality": 80
}
```

Response

```json
{
  "success": true,
  "data": {
    "original_url": "https://example.com/photo.jpg",
    "original_size_kb": 245.6,
    "original_format": "jpeg",
    "target_width": 512,
    "target_height": 384,
    "target_format": "webp",
    "target_quality": 80,
    "estimated_savings_percent": 68,
    "proxy_url": "https://agentready.cloud/proxy/img/abc123.webp",
    "processing_time_ms": 340
  },
  "credits_consumed": 1,
  "credits_remaining": 97
}
```

Integrate into Your SaaS

Here's how to use each AgentReady tool in your application. Copy the Python or JavaScript wrapper functions below.

Python — Full Integration

```python
import requests

class AgentReady:
    """AgentReady API client for Python."""

    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base = "https://agentready.cloud/api/v1"
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }

    def _post(self, endpoint: str, data: dict) -> dict:
        r = requests.post(f"{self.base}{endpoint}", headers=self.headers, json=data)
        r.raise_for_status()
        return r.json()

    # ── Tool #1: TokenCut ──
    def tokencut(self, text: str, level: str = "standard") -> str:
        """Compress text to save 40-60% tokens. Returns compressed text."""
        data = self._post("/tools/tokencut", {"text": text, "level": level})
        return data["data"]["compressed_text"]

    # ── Tool #2: MD Converter ──
    def md_convert(self, url: str, **kwargs) -> dict:
        """Convert URL to clean Markdown. Returns {markdown, metadata, stats}."""
        return self._post("/tools/md-converter", {"url": url, **kwargs})["data"]

    # ── Tool #3: Sitemap Generator ──
    def sitemap(self, domain: str, max_pages: int = 100) -> dict:
        """Crawl domain and generate sitemap with scores."""
        return self._post("/tools/sitemap-generator", {
            "domain": domain, "max_pages": max_pages
        })["data"]

    # ── Tool #4: LLMO Auditor ──
    def llmo_audit(self, url: str) -> dict:
        """Audit a page for AI/LLM readiness. Returns scores + recommendations."""
        return self._post("/tools/llmo-auditor", {"url": url})["data"]

    # ── Tool #5: Structured Data ──
    def structured_data(self, url: str) -> dict:
        """Analyze structured data and get Schema.org recommendations."""
        return self._post("/tools/structured-data", {"url": url})["data"]

    # ── Tool #6: Robots.txt ──
    def robots_txt(self, domain: str) -> dict:
        """Analyze robots.txt for AI bot compatibility."""
        return self._post("/tools/robots-txt", {"domain": domain})["data"]

    # ── Tool #7: Image Proxy ──
    def image_proxy(self, image_url: str, mode: str = "resize", **kwargs) -> dict:
        """Process image: resize, describe, or analyze."""
        return self._post("/tools/image-proxy", {
            "image_url": image_url, "mode": mode, **kwargs
        })["data"]


# ─── Usage Examples ───
ar = AgentReady("ar_your_api_key_here")

# Compress text before sending to GPT-4 → save 40-60%
compressed = ar.tokencut("Your long article text here...")

# Convert a webpage to Markdown for RAG
page = ar.md_convert("https://example.com/blog/post")
print(page["markdown"])

# Crawl an entire domain for a knowledge base
sitemap = ar.sitemap("example.com", max_pages=50)
for p in sitemap["pages"]:
    print(f"{p['url']} — score: {p['score']}")

# Audit your site for AI readiness
audit = ar.llmo_audit("https://yoursite.com")
print(f"Overall score: {audit['overall_score']}/100")

# Check if AI bots can crawl your site
robots = ar.robots_txt("yoursite.com")
for bot in robots["bot_analysis"]:
    print(f"{bot['bot']}: {'✅' if bot['allowed'] else '❌'}")

# Analyze structured data
sd = ar.structured_data("https://yoursite.com")
print(f"Has structured data: {sd['has_structured_data']}")

# Resize image for GPT-4 Vision
img = ar.image_proxy("https://example.com/photo.jpg", mode="resize", max_width=512)
print(f"Optimized: {img['proxy_url']}")

JavaScript / TypeScript — Full Integration

```javascript
class AgentReady {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.base = "https://agentready.cloud/api/v1";
  }

  async _post(endpoint, data) {
    const res = await fetch(`${this.base}${endpoint}`, {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${this.apiKey}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify(data),
    });
    if (!res.ok) throw new Error(`API error: ${res.status}`);
    return res.json();
  }

  // Tool #1: TokenCut — compress text to save 40-60% tokens
  async tokencut(text, level = "standard") {
    const r = await this._post("/tools/tokencut", { text, level });
    return r.data.compressed_text;
  }

  // Tool #2: MD Converter — URL to clean Markdown
  async mdConvert(url, opts = {}) {
    const r = await this._post("/tools/md-converter", { url, ...opts });
    return r.data;
  }

  // Tool #3: Sitemap Generator — crawl domain
  async sitemap(domain, maxPages = 100) {
    const r = await this._post("/tools/sitemap-generator", {
      domain, max_pages: maxPages
    });
    return r.data;
  }

  // Tool #4: LLMO Auditor — AI readiness audit
  async llmoAudit(url) {
    const r = await this._post("/tools/llmo-auditor", { url });
    return r.data;
  }

  // Tool #5: Structured Data — analyze Schema.org
  async structuredData(url) {
    const r = await this._post("/tools/structured-data", { url });
    return r.data;
  }

  // Tool #6: Robots.txt — AI bot compatibility
  async robotsTxt(domain) {
    const r = await this._post("/tools/robots-txt", { domain });
    return r.data;
  }

  // Tool #7: Image Proxy — resize/describe/analyze images
  async imageProxy(imageUrl, mode = "resize", opts = {}) {
    const r = await this._post("/tools/image-proxy", {
      image_url: imageUrl, mode, ...opts
    });
    return r.data;
  }
}

// ─── Usage ───
const ar = new AgentReady("ar_your_api_key_here");

// Compress before sending to LLM
const compressed = await ar.tokencut("Long article text...");

// Convert page to Markdown
const page = await ar.mdConvert("https://example.com/blog/post");
console.log(page.markdown);

// Full site audit
const audit = await ar.llmoAudit("https://yoursite.com");
console.log(`Score: ${audit.overall_score}/100`);
```

Common SaaS Use Cases

RAG Pipeline (TokenCut + MD Converter + Sitemap)

Use MD Converter + Sitemap to crawl sites → clean Markdown → feed into a vector DB → query with an LLM. Pipe content through TokenCut first to save 40–60%; see the sketch below.
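A minimal sketch using the Python AgentReady client defined above. The sitemap response lists relative paths, so they're joined to the domain here; chunking and embedding are left out since they depend on your vector DB:

```python
ar = AgentReady("ar_your_api_key_here")

def build_knowledge_base(domain: str, max_pages: int = 50) -> list[dict]:
    """Crawl a domain, convert each page to Markdown, and compress it."""
    docs = []
    site = ar.sitemap(domain, max_pages=max_pages)
    for page in site["pages"]:
        page_url = f"https://{domain}{page['url']}"  # sitemap paths are relative
        md = ar.md_convert(page_url, remove_navigation=True, remove_ads=True)
        text = ar.tokencut(md["markdown"], level="standard")  # 40-60% fewer tokens
        docs.append({"url": page_url, "title": page["title"], "text": text})
    return docs

# Chunk and embed `docs` into your vector DB of choice, then query with an LLM.
docs = build_knowledge_base("example.com", max_pages=50)
```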

SEO / LLMO SaaS (LLMO Auditor + Structured Data + Robots.txt)

Build an SEO tool that audits any URL for AI readiness. Use LLMO Auditor + Structured Data + Robots.txt for a full report; a combined-report sketch follows.
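A sketch of such a report using the Python client above. The weighting is an illustrative choice, not something the API prescribes:

```python
from urllib.parse import urlparse

ar = AgentReady("ar_your_api_key_here")

def ai_readiness_report(url: str) -> dict:
    """Combine the three audit tools into one report."""
    domain = urlparse(url).netloc
    audit = ar.llmo_audit(url)
    schema = ar.structured_data(url)
    robots = ar.robots_txt(domain)
    return {
        "llmo_score": audit["overall_score"],
        "has_structured_data": schema["has_structured_data"],
        "ai_friendly_score": robots["ai_friendly_score"],
        # Illustrative weighting; tune to taste.
        "combined": round(0.5 * audit["overall_score"]
                          + 0.2 * (100 if schema["has_structured_data"] else 0)
                          + 0.3 * robots["ai_friendly_score"], 1),
    }

print(ai_readiness_report("https://yoursite.com"))
```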

Content API (MD Converter + TokenCut)

Offer a "convert any URL to Markdown" feature in your app. Use MD Converter with TokenCut for optimal output.

AI Agent Toolkit (all 7 tools)

Give your AI agent the ability to read any webpage, analyze images, and check site accessibility. All 7 tools as agent functions.

AI Prompts

Copy these prompts and paste them into Cursor, GitHub Copilot, ChatGPT, or Claude to instantly integrate AgentReady into your codebase.

Prompt 1: Full AgentReady Integration

```text
Integrate AgentReady API into my project. Here are all 7 endpoints:

Base URL: https://agentready.cloud/api/v1
Auth: Bearer token in Authorization header (API key starts with ar_)

1. POST /tools/tokencut — Compress text before sending to LLMs (saves 40-60% tokens)
   Body: {"text": "...", "level": "standard"} → Response: data.compressed_text, data.stats

2. POST /tools/md-converter — Convert any URL to clean Markdown
   Body: {"url": "https://...", "remove_navigation": true, "remove_ads": true}
   → Response: data.markdown, data.metadata, data.stats

3. POST /tools/sitemap-generator — Crawl domain and generate scored sitemap
   Body: {"domain": "example.com", "max_pages": 100} → Response: data.pages[], data.tree

4. POST /tools/llmo-auditor — Audit any page for AI readiness (score 0-100)
   Body: {"url": "https://..."} → Response: data.overall_score, data.categories[], data.recommendations[]

5. POST /tools/structured-data — Analyze Schema.org markup and get recommendations
   Body: {"url": "https://..."} → Response: data.existing_json_ld, data.recommendations[]

6. POST /tools/robots-txt — Check AI bot access (GPTBot, ClaudeBot, etc.)
   Body: {"domain": "example.com"} → Response: data.bot_analysis[], data.ai_friendly_score

7. POST /tools/image-proxy — Resize/analyze images for AI pipelines
   Body: {"image_url": "https://...", "mode": "resize", "max_width": 512} → Response: data.proxy_url

Create a wrapper class/module with a method for each tool. Add error handling and retry logic.
All responses have the shape: {success, data, credits_consumed, credits_remaining}.
```

Prompt 2: RAG Pipeline with TokenCut

```text
Build a RAG pipeline using AgentReady API that:

1. Crawls a website using POST https://agentready.cloud/api/v1/tools/sitemap-generator
   Body: {"domain": "target-site.com", "max_pages": 50}
   This returns a list of scored pages.

2. For each page, converts to Markdown using POST /api/v1/tools/md-converter
   Body: {"url": page_url, "remove_navigation": true, "remove_ads": true}
   This returns clean Markdown + token stats.

3. Compresses each Markdown with POST /api/v1/tools/tokencut
   Body: {"text": markdown_content, "level": "standard"}
   This saves 40-60% tokens before embedding.

4. Chunks the compressed text and stores in a vector database.

5. At query time, retrieves relevant chunks and sends to GPT-4/Claude.

Auth: Bearer token header. API key starts with ar_.
Create the full pipeline with async processing and error handling.
```

Prompt 3: AI-SEO Audit Tool

```text
Build an AI-SEO audit feature using AgentReady API that analyzes any URL across 3 dimensions:

1. LLMO Audit: POST https://agentready.cloud/api/v1/tools/llmo-auditor
   Body: {"url": "https://target.com"}
   → Returns overall_score (0-100), 5 category scores, recommendations

2. Structured Data: POST /api/v1/tools/structured-data
   Body: {"url": "https://target.com"}
   → Returns existing_json_ld, og_tags, recommendations with priority

3. Robots.txt: POST /api/v1/tools/robots-txt
   Body: {"domain": "target.com"}
   → Returns bot_analysis (which AI bots can/can't crawl), ai_friendly_score

Auth: Bearer token header.

Combine all 3 results into a comprehensive AI readiness report with:
- Overall score (weighted average)
- Category breakdown with pass/warning/fail
- Prioritized action items
- Clean dashboard UI with scores and charts
```

Prompt 4: AI Agent Function Calling

```text
Create function definitions for OpenAI/Claude function calling using AgentReady's 7 tools.
Base URL: https://agentready.cloud/api/v1, Auth: Bearer API key.

Define these tools for the AI agent:
1. tokencut(text, level) → POST /tools/tokencut — compress text to save LLM tokens
2. read_webpage(url) → POST /tools/md-converter — convert URL to Markdown
3. crawl_site(domain, max_pages) → POST /tools/sitemap-generator — get all pages
4. audit_page(url) → POST /tools/llmo-auditor — AI readiness score
5. check_schema(url) → POST /tools/structured-data — analyze structured data
6. check_robots(domain) → POST /tools/robots-txt — check AI bot access
7. process_image(image_url, mode) → POST /tools/image-proxy — resize/analyze

Create OpenAI-compatible tool definitions (JSON schema) and a tool execution handler.
The agent should call any combination of these tools to answer queries about websites.
```

Account API

Credits & Billing

GET /api/v1/credits/balance

Get your current credit balance.

GET /api/v1/credits/usage?days=30

Usage stats — calls_by_tool, credits_by_tool, daily_usage.

GET /api/v1/credits/token-stats

Token statistics — tokens_passed, tokens_saved, cost_saved, savings_percent.

GET /api/v1/credits/tool-stats

Per-tool breakdown — calls, tokens_in, tokens_out, tokens_saved per tool.

GET /api/v1/credits/transactions?limit=50&offset=0

Credit transaction history.
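For example, checking your balance and 30-day usage. The response bodies for these endpoints aren't shown in these docs, so this sketch just prints the raw JSON:

```python
import requests

BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": "Bearer ar_your_api_key_here"}

# Current balance, then usage broken down over the last 30 days
print(requests.get(f"{BASE}/credits/balance", headers=HEADERS).json())
print(requests.get(f"{BASE}/credits/usage", headers=HEADERS, params={"days": 30}).json())
```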

API Keys

GET /api/v1/api-keys/

List all your API keys.

POST /api/v1/api-keys/

Create a new API key. Body: {"name": "My Key"}. ⚠️ Full key shown only once!

DELETE /api/v1/api-keys/:key_id

Delete (deactivate) an API key.

POST /api/v1/api-keys/:key_id/regenerate

Regenerate an API key. The old key is immediately invalidated.
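A sketch of key management in Python. The request bodies match the docs above; the response shapes are not documented here, so they're printed raw:

```python
import requests

BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": "Bearer ar_your_api_key_here"}

# Create a key; the full key value appears only in this one response.
created = requests.post(f"{BASE}/api-keys/", headers=HEADERS,
                        json={"name": "My Key"}).json()
print(created)  # store the full key now; it is not retrievable later

# List existing keys
print(requests.get(f"{BASE}/api-keys/", headers=HEADERS).json())
```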

Reference

Error Handling

All errors return a JSON object with a detail field.

| Status | Meaning |
|---|---|
| 400 | Bad Request — invalid parameters |
| 401 | Unauthorized — missing or invalid API key |
| 402 | Payment Required — insufficient credits |
| 422 | Validation Error — request body failed validation |
| 429 | Rate Limit — too many requests, slow down |
| 500 | Internal Server Error — contact support |
```json
// Error response format
{
  "detail": "Insufficient credits. Please purchase more credits to continue."
}
```
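A defensive wrapper around these statuses. The retry policy (exponential backoff on 429 and 5xx, immediate failure on other 4xx) is an illustrative choice, not an API requirement:

```python
import time
import requests

BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": "Bearer ar_your_api_key_here",
           "Content-Type": "application/json"}

def call_tool(endpoint: str, body: dict, retries: int = 3) -> dict:
    for attempt in range(retries):
        r = requests.post(f"{BASE}{endpoint}", headers=HEADERS, json=body)
        if r.status_code == 429 or r.status_code >= 500:
            time.sleep(2 ** attempt)  # back off, then retry
            continue
        if not r.ok:
            # 400/401/402/422: surface the documented `detail` field
            raise RuntimeError(f"{r.status_code}: {r.json().get('detail')}")
        return r.json()
    raise RuntimeError("retries exhausted")
```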

Credit Costs

ToolCredits
TokenCut1 credit per compression
MD Converter1 credit per URL
MD Converter (Batch)1 credit per URL (max 20)
Sitemap Generator5 / 20 / 50 / 100 (based on max_pages)
LLMO Auditor2 credits per audit
Structured Data1 credit per analysis
Robots.txt Analyzer1 credit per analysis
Image Proxy (resize)1 credit
Image Proxy (analyze)1 credit
Image Proxy (describe)3 credits

Rate Limits

During beta, all accounts get generous rate limits. These may change when pricing is introduced.

| Tier | Requests/min | Daily limit |
|---|---|---|
| Beta (current) | 60 | Unlimited |
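If you batch-process many pages (e.g. the RAG pipeline above), a simple client-side throttle keeps you under the 60 requests/min beta limit; a naive sketch:

```python
import time

class Throttle:
    """Naive client-side limiter: at most `rpm` calls per rolling minute."""
    def __init__(self, rpm: int = 60):
        self.rpm = rpm
        self.calls: list[float] = []

    def wait(self) -> None:
        now = time.monotonic()
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) >= self.rpm:
            time.sleep(60 - (now - self.calls[0]))  # wait for the oldest call to age out
        self.calls.append(time.monotonic())

throttle = Throttle(rpm=60)
# Call throttle.wait() before each API request in a batch loop.
```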

Ready to start?

Create a free account and get 100 credits to try all 7 tools.

© 2026 AgentReady. All rights reserved.