AgentReady API Docs
7 tools to make any website AI-ready
Introduction
AgentReady provides 7 API tools to optimize any website for AI consumption. Whether you're building an AI agent, an SEO tool, or a SaaS that interacts with LLMs, these endpoints help you reduce token costs, extract clean content, audit AI readiness, and more.
Authentication
All API requests require an API key passed as a Bearer token in the Authorization header.
curl -X POST https://agentready.cloud/api/v1/tools/tokencut \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"text": "Hello world", "level": "standard"}'Create API keys in your Dashboard → API Keys. Keys start with ar_.
Base URL
https://agentready.cloud
All endpoints are prefixed with /api/v1/
Quick Start
Get started in 3 steps:
1. Sign up and create an API key. A free account includes 100 credits.
2. Make your first API call. Try TokenCut — compress text before sending it to any LLM.
3. Integrate into your pipeline. Use the Python/JS examples below, or paste the AI prompt into Cursor or Copilot.
Python Example
import requests
API_KEY = "ar_your_api_key_here"
BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
# Compress text before sending to GPT-4
r = requests.post(f"{BASE}/tools/tokencut", headers=HEADERS, json={
"text": "Your long text here...",
"level": "standard"
})
data = r.json()
compressed = data["data"]["compressed_text"]
print(f"Saved {data['data']['stats']['reduction_percent']}% tokens")JavaScript / Node.js Example
const API_KEY = "ar_your_api_key_here";
const BASE = "https://agentready.cloud/api/v1";
async function tokencut(text, level = "standard") {
const res = await fetch(`${BASE}/tools/tokencut`, {
method: "POST",
headers: { "Authorization": `Bearer ${API_KEY}`, "Content-Type": "application/json" },
body: JSON.stringify({ text, level }),
});
const data = await res.json();
return data.data.compressed_text;
}
// Usage: compress before sending to any LLM
const compressed = await tokencut("Your long text here...");
Tools API — 7 Endpoints
Tool #1: TokenCut (POST /api/v1/tools/tokencut)
Compress text before sending to GPT-4, Claude, Gemini or any LLM. Removes filler words, simplifies verbose constructions, and normalizes whitespace while preserving semantic meaning, code blocks, URLs, and numbers. The flagship tool.
Compression levels: light applies whitespace normalization only. standard adds filler-word removal (recommended). aggressive additionally removes stop words and applies deep pruning (maximum savings, may lose nuance).
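To see what a different level does to the same text, compare the stats fields shown in the Response example below. A minimal sketch using only documented parameters:
import requests

API_KEY = "ar_your_api_key_here"
BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# Aggressive compression: maximum savings, so review the output for lost nuance.
r = requests.post(f"{BASE}/tools/tokencut", headers=HEADERS, json={
    "text": "Your long text here...",
    "level": "aggressive",
    "preserve_code": True,    # keep code blocks verbatim
    "preserve_urls": True,    # keep URLs verbatim
    "target_model": "gpt-4",  # used for the cost estimate in stats
})
stats = r.json()["data"]["stats"]
print(f"{stats['original_tokens']} -> {stats['compressed_tokens']} tokens, "
      f"{stats['reduction_percent']}% saved (${stats['savings_usd']} on {stats['target_model']})")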
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | The text to compress (1–500,000 chars) |
| level | string | No | One of light, standard, aggressive (default: "standard") |
| preserve_code | boolean | No | Keep code blocks intact (default: true) |
| preserve_urls | boolean | No | Keep URLs intact (default: true) |
| preserve_numbers | boolean | No | Keep numerical values intact (default: true) |
| target_model | string | No | Target LLM for cost estimation (default: "gpt-4") |
Request Body
{
"text": "In order to effectively and efficiently optimize the overall performance of your application, it is absolutely essential to carefully consider the various different factors that might potentially influence the speed and responsiveness of the system.",
"level": "standard",
"target_model": "gpt-4"
}
Response
{
"success": true,
"data": {
"compressed_text": "To optimize application performance, consider factors influencing system speed and responsiveness.",
"stats": {
"original_tokens": 42,
"compressed_tokens": 16,
"reduction_percent": 61.9,
"original_cost_usd": 0.00126,
"compressed_cost_usd": 0.00048,
"savings_usd": 0.00078,
"target_model": "gpt-4",
"processing_time_ms": 12
}
},
"credits_consumed": 1,
"credits_remaining": 99
}
Tool #2: MD Converter (POST /api/v1/tools/md-converter)
Convert any webpage to clean, LLM-ready Markdown. Strips navigation, ads, and clutter. Extracts metadata, generates table of contents, and reports token stats. Perfect for building RAG pipelines and knowledge bases.
Batch endpoint available: POST /api/v1/tools/md-converter/batch — accepts an array of URLs (max 20). Cost: 1 credit per URL.
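A minimal batch sketch. The body key for the URL array and the batch response shape are not documented on this page, so treat "urls" and the printed payload as assumptions to verify:
import requests

API_KEY = "ar_your_api_key_here"
BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

r = requests.post(f"{BASE}/tools/md-converter/batch", headers=HEADERS, json={
    "urls": [  # assumed field name; max 20 URLs, 1 credit each
        "https://example.com/blog/post-1",
        "https://example.com/blog/post-2",
    ],
    "remove_navigation": True,
    "remove_ads": True,
})
r.raise_for_status()
print(r.json())  # inspect the per-URL result shape before wiring it into your pipeline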
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | URL to convert to Markdown |
| remove_navigation | boolean | No | Remove nav bars and menus (default: true) |
| remove_ads | boolean | No | Remove advertisements (default: true) |
| remove_comments | boolean | No | Remove user comment sections (default: false) |
| preserve_images | boolean | No | Keep image references (default: false) |
| image_mode | string | No | One of url, base64, description (default: "url") |
| extract_metadata | boolean | No | Extract page metadata (default: true) |
| include_toc | boolean | No | Generate table of contents (default: false) |
| output_format | string | No | One of markdown, plain_text, json (default: "markdown") |
| max_tokens | integer | No | Auto-truncate output if exceeds limit |
Request Body
{
"url": "https://example.com/blog/post-1",
"remove_navigation": true,
"remove_ads": true,
"extract_metadata": true,
"include_toc": true
}
Response
{
"success": true,
"data": {
"markdown": "# Blog Post Title\n\n## Table of Contents\n- Introduction\n- Main Section\n\n## Introduction\nClean content here...",
"metadata": {
"title": "Blog Post Title",
"author": "John Doe",
"description": "A great article about...",
"language": "en"
},
"stats": {
"original_tokens": 4523,
"optimized_tokens": 1210,
"reduction_percent": 73.2,
"original_cost_usd": 0.1357,
"optimized_cost_usd": 0.0363,
"processing_time_ms": 1840
}
},
"credits_consumed": 1,
"credits_remaining": 98
}
Tool #3: Sitemap Generator (POST /api/v1/tools/sitemap-generator)
Crawl a domain and generate an AI-friendly sitemap with knowledge density scoring. Each page is scored by readability, word count, and structure. Ideal for building AI knowledge bases from entire websites.
Credit cost scales with max_pages: ≤50 = 5 credits, ≤200 = 20, ≤1000 = 50, >1000 = 100 credits.
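If you want to estimate spend before kicking off a crawl, the tiering above maps directly to a small helper:
def sitemap_credit_cost(max_pages: int) -> int:
    """Estimated credit cost of a sitemap-generator call, per the tiers above."""
    if max_pages <= 50:
        return 5
    if max_pages <= 200:
        return 20
    if max_pages <= 1000:
        return 50
    return 100

# e.g. a 50-page crawl costs 5 credits, a 500-page crawl costs 50
assert sitemap_credit_cost(50) == 5 and sitemap_credit_cost(500) == 50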
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| domain | string | Yes | Domain to crawl (e.g. "example.com") |
| max_pages | integer | No | Max pages to crawl (1–5000) (default: 100) |
| max_depth | integer | No | Max crawl depth (1–10) (default: 3) |
| exclude_patterns | string[] | No | URL patterns to exclude (default: []) |
| score_algorithm | string | No | One of tfidf, readability, structured_data (default: "readability") |
| include_external_links | boolean | No | Include external links in output (default: false) |
| generate_graph | boolean | No | Generate link graph/tree (default: true) |
Request Body
{
"domain": "example.com",
"max_pages": 50,
"max_depth": 3,
"score_algorithm": "readability",
"exclude_patterns": ["/admin", "/login"]
}
Response
{
"success": true,
"data": {
"domain": "example.com",
"pages": [
{ "url": "/", "title": "Home", "score": 0.96, "depth": 0, "word_count": 1250 },
{ "url": "/about", "title": "About Us", "score": 0.91, "depth": 1, "word_count": 890 },
{ "url": "/blog/post-1", "title": "First Post", "score": 0.87, "depth": 2, "word_count": 2100 }
],
"tree": { "url": "/", "title": "Home", "score": 0.96, "children": [...] },
"total_pages": 47,
"avg_score": 0.82
},
"credits_consumed": 5,
"credits_remaining": 93
}
Tool #4: LLMO Auditor (POST /api/v1/tools/llmo-auditor)
Run a comprehensive audit on any webpage to measure how well it's optimized for LLMs and AI crawlers. Scores across 5 categories with detailed recommendations. Think of it as Lighthouse, but for AI.
Scoring: ≥80 = pass (green), ≥60 = warning (yellow), <60 = fail (red). The 5 categories are: structured_data, semantic_html, content_density, token_efficiency, ai_readability.
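If you surface these scores in your own UI, the thresholds map directly to the pass/warning/fail statuses used in the Response example below:
def audit_status(score: float) -> str:
    """Map an audit score (0-100) to its status, per the thresholds above."""
    if score >= 80:
        return "pass"     # green
    if score >= 60:
        return "warning"  # yellow
    return "fail"         # red

assert audit_status(72) == "warning"  # matches the overall_score in the example response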
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | URL to audit |
| categories | string[] | No | Categories to audit (default: all 5 categories) |
Request Body
{
"url": "https://example.com",
"categories": [
"structured_data",
"semantic_html",
"content_density",
"token_efficiency",
"ai_readability"
]
}
Response
{
"success": true,
"data": {
"url": "https://example.com",
"overall_score": 72,
"status": "warning",
"categories": [
{
"name": "structured_data",
"score": 64,
"status": "warning",
"details": ["Found 1 JSON-LD schema", "Missing FAQPage schema"],
"recommendations": [
{ "action": "Add JSON-LD", "description": "Add FAQPage schema for better AI extraction" }
]
},
{ "name": "semantic_html", "score": 82, "status": "pass", "details": [...] },
{ "name": "content_density", "score": 91, "status": "pass", "details": [...] },
{ "name": "token_efficiency", "score": 53, "status": "fail", "details": [...] },
{ "name": "ai_readability", "score": 80, "status": "pass", "details": [...] }
],
"recommendations": ["Add JSON-LD structured data", "Improve heading hierarchy", "Reduce boilerplate HTML"],
"processing_time_ms": 2340
},
"credits_consumed": 2,
"credits_remaining": 96
}
Tool #5: Structured Data (POST /api/v1/tools/structured-data)
Analyze a page's existing structured data (JSON-LD, Open Graph) and generate Schema.org recommendations. Helps improve discoverability by AI crawlers and LLMs that rely on structured markup.
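Once you have a recommendation from the Response below, publishing it is a matter of serializing its json_ld object into a script tag in your page's head. A minimal sketch (the json_ld_script_tag helper is illustrative, not part of the API):
import json

def json_ld_script_tag(recommendation: dict) -> str:
    """Render one item from data["recommendations"] as an embeddable HTML tag."""
    payload = json.dumps(recommendation["json_ld"], ensure_ascii=False, indent=2)
    return '<script type="application/ld+json">\n' + payload + "\n</script>"

# Paste the returned tag into the page <head> so AI crawlers can pick up the schema.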
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | URL to analyze |
| schema_types | string[] | No | Schema.org types to generate (default: ["Article", "Organization", "FAQPage"]) |
Request Body
{
"url": "https://example.com/blog/my-post",
"schema_types": ["Article", "FAQPage", "Organization"]
}
Response
{
"success": true,
"data": {
"url": "https://example.com/blog/my-post",
"existing_json_ld": [
{ "@context": "https://schema.org", "@type": "Article", "headline": "..." }
],
"existing_og_tags": {
"og:title": "My Post",
"og:description": "A great article..."
},
"has_structured_data": true,
"recommendations": [
{
"type": "FAQPage",
"json_ld": { "@context": "https://schema.org", "@type": "FAQPage", "mainEntity": [...] },
"priority": "high"
}
]
},
"credits_consumed": 1,
"credits_remaining": 98
}
Tool #6: Robots.txt Analyzer (POST /api/v1/tools/robots-txt)
Analyze a domain's robots.txt to check AI bot compatibility. See which AI crawlers (GPTBot, ClaudeBot, Google-Extended, etc.) are allowed or blocked, and get recommendations to improve AI accessibility.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| domain | string | Yes | Domain to analyze (e.g. "example.com") |
| ai_bots | string[] | No | AI bots to check (default: ["GPTBot", "Google-Extended", "ClaudeBot", "Bingbot"]) |
Request Body
{
"domain": "example.com",
"ai_bots": ["GPTBot", "ClaudeBot", "Google-Extended", "Bingbot"]
}
Response
{
"success": true,
"data": {
"domain": "example.com",
"robots_url": "https://example.com/robots.txt",
"has_robots_txt": true,
"content": "User-agent: *\nAllow: /\n\nUser-agent: GPTBot\nDisallow: /",
"bot_analysis": [
{ "bot": "GPTBot", "status": "blocked", "allowed": false },
{ "bot": "ClaudeBot", "status": "allowed", "allowed": true },
{ "bot": "Google-Extended", "status": "allowed", "allowed": true },
{ "bot": "Bingbot", "status": "allowed", "allowed": true }
],
"recommendations": [
{
"action": "Allow GPTBot",
"description": "GPTBot is blocked — this prevents OpenAI from indexing your content",
"example": "User-agent: GPTBot\nAllow: /",
"priority": "high"
}
],
"ai_friendly_score": 75
},
"credits_consumed": 1,
"credits_remaining": 97
}
Tool #7: Image Proxy (POST /api/v1/tools/image-proxy)
Process images for AI pipelines: resize for LLM vision models, generate descriptions, or analyze image content. Returns optimized URLs and metadata. Supports WebP, JPEG, and PNG.
Credit costs by mode: resize = 1 credit, analyze = 1 credit, describe = 3 credits (uses Vision AI).
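A describe-mode request uses the same endpoint with mode switched and the vision-specific options from the table below. The describe response fields are not documented on this page, so this sketch just prints the raw payload:
import requests

API_KEY = "ar_your_api_key_here"
BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

r = requests.post(f"{BASE}/tools/image-proxy", headers=HEADERS, json={
    "image_url": "https://example.com/photo.jpg",
    "mode": "describe",      # 3 credits (uses Vision AI)
    "detail_level": "high",
    "extract_text": True,    # also run OCR
    "detect_objects": True,  # also detect objects
})
r.raise_for_status()
print(r.json()["data"])  # inspect the describe-mode fields before relying on them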
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| image_url | string | Yes | URL of the image to process |
| mode | string | No | One of resize, describe, analyze (default: "resize") |
| max_width | integer | No | Max width in pixels (1–4096) (default: 512) |
| max_height | integer | No | Max height in pixels (1–4096) (default: 512) |
| format | string | No | One of webp, jpeg, png (default: "webp") |
| quality | integer | No | Output quality 1–100 (default: 80) |
| detail_level | string | No | One of low, medium, high (for describe mode) (default: "medium") |
| extract_text | boolean | No | Run OCR on the image (default: false) |
| detect_objects | boolean | No | Detect objects in the image (default: false) |
Request Body
{
"image_url": "https://example.com/photo.jpg",
"mode": "resize",
"max_width": 512,
"max_height": 512,
"format": "webp",
"quality": 80
}
Response
{
"success": true,
"data": {
"original_url": "https://example.com/photo.jpg",
"original_size_kb": 245.6,
"original_format": "jpeg",
"target_width": 512,
"target_height": 384,
"target_format": "webp",
"target_quality": 80,
"estimated_savings_percent": 68,
"proxy_url": "https://agentready.cloud/proxy/img/abc123.webp",
"processing_time_ms": 340
},
"credits_consumed": 1,
"credits_remaining": 97
}
Integrate into Your SaaS
Here's how to use each AgentReady tool in your application. Copy the Python or JavaScript wrapper functions below.
Python — Full Integration
import requests
from typing import Optional
class AgentReady:
"""AgentReady API client for Python."""
def __init__(self, api_key: str):
self.api_key = api_key
self.base = "https://agentready.cloud/api/v1"
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
def _post(self, endpoint: str, data: dict) -> dict:
r = requests.post(f"{self.base}{endpoint}", headers=self.headers, json=data)
r.raise_for_status()
return r.json()
# ── Tool #1: TokenCut ──
def tokencut(self, text: str, level: str = "standard") -> str:
"""Compress text to save 40-60% tokens. Returns compressed text."""
data = self._post("/tools/tokencut", {"text": text, "level": level})
return data["data"]["compressed_text"]
# ── Tool #2: MD Converter ──
def md_convert(self, url: str, **kwargs) -> dict:
"""Convert URL to clean Markdown. Returns {markdown, metadata, stats}."""
return self._post("/tools/md-converter", {"url": url, **kwargs})["data"]
# ── Tool #3: Sitemap Generator ──
def sitemap(self, domain: str, max_pages: int = 100) -> dict:
"""Crawl domain and generate sitemap with scores."""
return self._post("/tools/sitemap-generator", {
"domain": domain, "max_pages": max_pages
})["data"]
# ── Tool #4: LLMO Auditor ──
def llmo_audit(self, url: str) -> dict:
"""Audit a page for AI/LLM readiness. Returns scores + recommendations."""
return self._post("/tools/llmo-auditor", {"url": url})["data"]
# ── Tool #5: Structured Data ──
def structured_data(self, url: str) -> dict:
"""Analyze structured data and get Schema.org recommendations."""
return self._post("/tools/structured-data", {"url": url})["data"]
# ── Tool #6: Robots.txt ──
def robots_txt(self, domain: str) -> dict:
"""Analyze robots.txt for AI bot compatibility."""
return self._post("/tools/robots-txt", {"domain": domain})["data"]
# ── Tool #7: Image Proxy ──
def image_proxy(self, image_url: str, mode: str = "resize", **kwargs) -> dict:
"""Process image: resize, describe, or analyze."""
return self._post("/tools/image-proxy", {
"image_url": image_url, "mode": mode, **kwargs
})["data"]
# ─── Usage Examples ───
ar = AgentReady("ar_your_api_key_here")
# Compress text before sending to GPT-4 → save 40-60%
compressed = ar.tokencut("Your long article text here...")
# Convert a webpage to Markdown for RAG
page = ar.md_convert("https://example.com/blog/post")
print(page["markdown"])
# Crawl an entire domain for a knowledge base
sitemap = ar.sitemap("example.com", max_pages=50)
for p in sitemap["pages"]:
print(f"{p['url']} — score: {p['score']}")
# Audit your site for AI readiness
audit = ar.llmo_audit("https://yoursite.com")
print(f"Overall score: {audit['overall_score']}/100")
# Check if AI bots can crawl your site
robots = ar.robots_txt("yoursite.com")
for bot in robots["bot_analysis"]:
print(f"{bot['bot']}: {'✅' if bot['allowed'] else '❌'}")
# Analyze structured data
sd = ar.structured_data("https://yoursite.com")
print(f"Has structured data: {sd['has_structured_data']}")
# Resize image for GPT-4 Vision
img = ar.image_proxy("https://example.com/photo.jpg", mode="resize", max_width=512)
print(f"Optimized: {img['proxy_url']}")JavaScript / TypeScript — Full Integration
class AgentReady {
constructor(apiKey) {
this.apiKey = apiKey;
this.base = "https://agentready.cloud/api/v1";
}
async _post(endpoint, data) {
const res = await fetch(`${this.base}${endpoint}`, {
method: "POST",
headers: {
"Authorization": `Bearer ${this.apiKey}`,
"Content-Type": "application/json"
},
body: JSON.stringify(data),
});
if (!res.ok) throw new Error(`API error: ${res.status}`);
return res.json();
}
// Tool #1: TokenCut — compress text to save 40-60% tokens
async tokencut(text, level = "standard") {
const r = await this._post("/tools/tokencut", { text, level });
return r.data.compressed_text;
}
// Tool #2: MD Converter — URL to clean Markdown
async mdConvert(url, opts = {}) {
const r = await this._post("/tools/md-converter", { url, ...opts });
return r.data;
}
// Tool #3: Sitemap Generator — crawl domain
async sitemap(domain, maxPages = 100) {
const r = await this._post("/tools/sitemap-generator", {
domain, max_pages: maxPages
});
return r.data;
}
// Tool #4: LLMO Auditor — AI readiness audit
async llmoAudit(url) {
const r = await this._post("/tools/llmo-auditor", { url });
return r.data;
}
// Tool #5: Structured Data — analyze Schema.org
async structuredData(url) {
const r = await this._post("/tools/structured-data", { url });
return r.data;
}
// Tool #6: Robots.txt — AI bot compatibility
async robotsTxt(domain) {
const r = await this._post("/tools/robots-txt", { domain });
return r.data;
}
// Tool #7: Image Proxy — resize/describe/analyze images
async imageProxy(imageUrl, mode = "resize", opts = {}) {
const r = await this._post("/tools/image-proxy", {
image_url: imageUrl, mode, ...opts
});
return r.data;
}
}
// ─── Usage ───
const ar = new AgentReady("ar_your_api_key_here");
// Compress before sending to LLM
const compressed = await ar.tokencut("Long article text...");
// Convert page to Markdown
const page = await ar.mdConvert("https://example.com/blog/post");
console.log(page.markdown);
// Full site audit
const audit = await ar.llmoAudit("https://yoursite.com");
console.log(`Score: ${audit.overall_score}/100`);
Common SaaS Use Cases
RAG Pipeline
Use MD Converter + Sitemap to crawl sites → clean Markdown → feed into vector DB → query with LLM. Pipe through TokenCut first to save 40-60% (see the sketch below).
TokenCut + MD Converter + Sitemap
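A compact sketch of that flow using the AgentReady Python client defined above. The absolute-URL join and the chunk/embed step are placeholders for your own stack:
ar = AgentReady("ar_your_api_key_here")

def build_corpus(domain: str, max_pages: int = 50) -> list[dict]:
    """Crawl a domain, convert each page to Markdown, and compress it for embedding."""
    corpus = []
    for page in ar.sitemap(domain, max_pages=max_pages)["pages"]:
        page_url = f"https://{domain}{page['url']}"  # pages[].url is relative in the example response
        markdown = ar.md_convert(page_url)["markdown"]
        corpus.append({
            "url": page_url,
            "score": page["score"],
            "text": ar.tokencut(markdown, level="standard"),  # 40-60% fewer tokens to embed
        })
    return corpus

docs = build_corpus("example.com")
# Next: chunk each docs[i]["text"], embed the chunks, and upsert them into your vector DB.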
SEO / LLMO SaaS
Build an SEO tool that audits any URL for AI readiness. Use LLMO Auditor + Structured Data + Robots.txt for a full report.
LLMO Auditor + Structured Data + Robots.txt
Content API
Offer a "convert any URL to Markdown" feature in your app. Use MD Converter with TokenCut for optimal output.
MD Converter + TokenCut
AI Agent Toolkit
Give your AI agent the ability to read any webpage, analyze images, and check site accessibility. All 7 tools as agent functions.
All 7 tools
AI Prompts
Copy these prompts and paste them into Cursor, GitHub Copilot, ChatGPT, or Claude to instantly integrate AgentReady into your codebase.
Integrate AgentReady API into my project. Here are all 7 endpoints:
Base URL: https://agentready.cloud/api/v1
Auth: Bearer token in Authorization header (API key starts with ar_)
1. POST /tools/tokencut — Compress text before sending to LLMs (saves 40-60% tokens)
Body: {"text": "...", "level": "standard"} → Response: data.compressed_text, data.stats
2. POST /tools/md-converter — Convert any URL to clean Markdown
Body: {"url": "https://...", "remove_navigation": true, "remove_ads": true}
→ Response: data.markdown, data.metadata, data.stats
3. POST /tools/sitemap-generator — Crawl domain and generate scored sitemap
Body: {"domain": "example.com", "max_pages": 100} → Response: data.pages[], data.tree
4. POST /tools/llmo-auditor — Audit any page for AI readiness (score 0-100)
Body: {"url": "https://..."} → Response: data.overall_score, data.categories[], data.recommendations[]
5. POST /tools/structured-data — Analyze Schema.org markup and get recommendations
Body: {"url": "https://..."} → Response: data.existing_json_ld, data.recommendations[]
6. POST /tools/robots-txt — Check AI bot access (GPTBot, ClaudeBot, etc.)
Body: {"domain": "example.com"} → Response: data.bot_analysis[], data.ai_friendly_score
7. POST /tools/image-proxy — Resize/analyze images for AI pipelines
Body: {"image_url": "https://...", "mode": "resize", "max_width": 512} → Response: data.proxy_url
Create a wrapper class/module with a method for each tool. Add error handling and retry logic.
All responses have the shape: {success, data, credits_consumed, credits_remaining}.
Build a RAG pipeline using AgentReady API that:
1. Crawls a website using POST https://agentready.cloud/api/v1/tools/sitemap-generator
Body: {"domain": "target-site.com", "max_pages": 50}
This returns a list of scored pages.
2. For each page, converts to Markdown using POST /api/v1/tools/md-converter
Body: {"url": page_url, "remove_navigation": true, "remove_ads": true}
This returns clean Markdown + token stats.
3. Compresses each Markdown with POST /api/v1/tools/tokencut
Body: {"text": markdown_content, "level": "standard"}
This saves 40-60% tokens before embedding.
4. Chunks the compressed text and stores in a vector database.
5. At query time, retrieves relevant chunks and sends to GPT-4/Claude.
Auth: Bearer token header. API key starts with ar_.
Create the full pipeline with async processing and error handling.
Build an AI-SEO audit feature using AgentReady API that analyzes any URL across 3 dimensions:
1. LLMO Audit: POST https://agentready.cloud/api/v1/tools/llmo-auditor
Body: {"url": "https://target.com"}
→ Returns overall_score (0-100), 5 category scores, recommendations
2. Structured Data: POST /api/v1/tools/structured-data
Body: {"url": "https://target.com"}
→ Returns existing_json_ld, og_tags, recommendations with priority
3. Robots.txt: POST /api/v1/tools/robots-txt
Body: {"domain": "target.com"}
→ Returns bot_analysis (which AI bots can/can't crawl), ai_friendly_score
Auth: Bearer token header.
Combine all 3 results into a comprehensive AI readiness report with:
- Overall score (weighted average)
- Category breakdown with pass/warning/fail
- Prioritized action items
- Clean dashboard UI with scores and charts
Create function definitions for OpenAI/Claude function calling using AgentReady's 7 tools.
Base URL: https://agentready.cloud/api/v1, Auth: Bearer API key.
Define these tools for the AI agent:
1. tokencut(text, level) → POST /tools/tokencut — compress text to save LLM tokens
2. read_webpage(url) → POST /tools/md-converter — convert URL to Markdown
3. crawl_site(domain, max_pages) → POST /tools/sitemap-generator — get all pages
4. audit_page(url) → POST /tools/llmo-auditor — AI readiness score
5. check_schema(url) → POST /tools/structured-data — analyze structured data
6. check_robots(domain) → POST /tools/robots-txt — check AI bot access
7. process_image(image_url, mode) → POST /tools/image-proxy — resize/analyze
Create OpenAI-compatible tool definitions (JSON schema) and a tool execution handler. The agent should call any combination of these tools to answer queries about websites.
Account API
Credits & Billing
| Endpoint | Description |
|---|---|
| /api/v1/credits/balance | Get your current credit balance. |
| /api/v1/credits/usage?days=30 | Usage stats — calls_by_tool, credits_by_tool, daily_usage. |
| /api/v1/credits/token-stats | Token statistics — tokens_passed, tokens_saved, cost_saved, savings_percent. |
| /api/v1/credits/tool-stats | Per-tool breakdown — calls, tokens_in, tokens_out, tokens_saved per tool. |
| /api/v1/credits/transactions?limit=50&offset=0 | Credit transaction history. |
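The HTTP methods for these account endpoints are not shown on this page; assuming they accept plain GETs with the same Bearer auth, a balance and usage check looks like this (verify the method in your dashboard before shipping):
import requests

API_KEY = "ar_your_api_key_here"
BASE = "https://agentready.cloud/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Assumption: credits endpoints are GETs authenticated with the same API key.
balance = requests.get(f"{BASE}/credits/balance", headers=HEADERS)
balance.raise_for_status()
print(balance.json())

usage = requests.get(f"{BASE}/credits/usage", params={"days": 30}, headers=HEADERS)
print(usage.json())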
API Keys
| Endpoint | Description |
|---|---|
| /api/v1/api-keys/ | List all your API keys. |
| /api/v1/api-keys/ | Create a new API key. Body: {"name": "My Key"}. ⚠️ Full key shown only once! |
| /api/v1/api-keys/:key_id | Delete (deactivate) an API key. |
| /api/v1/api-keys/:key_id/regenerate | Regenerate an API key. Old key is immediately invalidated. |
Reference
Error Handling
All errors return a JSON object with a detail field.
| Status | Meaning |
|---|---|
| 400 | Bad Request — invalid parameters |
| 401 | Unauthorized — missing or invalid API key |
| 402 | Payment Required — insufficient credits |
| 422 | Validation Error — request body failed validation |
| 429 | Rate Limit — too many requests, slow down |
| 500 | Internal Server Error — contact support |
// Error response format
{
"detail": "Insufficient credits. Please purchase more credits to continue."
}
Credit Costs
| Tool | Credits |
|---|---|
| TokenCut | 1 credit per compression |
| MD Converter | 1 credit per URL |
| MD Converter (Batch) | 1 credit per URL (max 20) |
| Sitemap Generator | 5 / 20 / 50 / 100 (based on max_pages) |
| LLMO Auditor | 2 credits per audit |
| Structured Data | 1 credit per analysis |
| Robots.txt Analyzer | 1 credit per analysis |
| Image Proxy (resize) | 1 credit |
| Image Proxy (analyze) | 1 credit |
| Image Proxy (describe) | 3 credits |
Rate Limits
During beta, all accounts get generous rate limits. These may change when pricing is introduced.
| Tier | Requests/min | Daily limit |
|---|---|---|
| Beta (current) | 60 | Unlimited |
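At 60 requests/minute, bursty workloads can still hit 429s. A minimal retry-with-backoff wrapper; the API is not documented as sending a Retry-After header, so this simply backs off exponentially and never retries 402 (insufficient credits):
import time
import requests

def post_with_retry(url: str, headers: dict, payload: dict, retries: int = 4) -> dict:
    """POST to an AgentReady endpoint, backing off on 429 and transient 5xx errors."""
    for attempt in range(retries):
        r = requests.post(url, headers=headers, json=payload, timeout=60)
        if r.status_code in (429, 500, 502, 503) and attempt < retries - 1:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
            continue
        r.raise_for_status()  # raises on 4xx/5xx, including 402 insufficient credits
        return r.json()
    raise RuntimeError("retries exhausted")  # not reached: the last attempt returns or raises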
Ready to start?
Create a free account and get 100 credits to try all 7 tools.
© 2026 AgentReady. All rights reserved.