AI Chat · Pro and Business

An AI that knows your whole API — endpoint by endpoint.

Stop answering the same questions. A floating Ask AI button on every public docs site opens a chat trained on your spec. Visitors get straight answers with real code examples — and every endpoint citation is a clickable pill that jumps the docs to that exact endpoint.

Grounded in your spec — no hallucinations, no off-topic chatter.
Demo: the Ask AI drawer on acme.docs.outworx.io, open beside GET /v1/users.

Visitor: How do I paginate /v1/users?

Ask AI: Use the page and per_page query params on listUsers. Max 100 per page.

curl -X GET 'https://api.acme.io/v1/users?page=2&per_page=50'

The response includes a meta.pagination object with total_pages.

Drawer footer: Business · 12,480 / 200,000 tokens today

  • < 800ms first-token latency
  • 50+ endpoints in inline context
  • 200k tokens / day on Business
  • 100% grounded — citations only

Grounded retrieval

Three stages, then an answer with citations.

Up to ~50 endpoints, the entire spec is passed inline. Beyond that, a pgvector retriever pulls only the operations the question needs. Either way, the model only sees your spec — and is told to admit when something isn't documented rather than guess.
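The size cutoff can be sketched as a single scoping function. Everything below is illustrative (the field names, the in-memory cosine ranking, the k of 8); the real retriever runs top-k against pgvector embeddings in Postgres.

```typescript
interface Operation {
  operationId: string;
  embedding: number[]; // precomputed embedding of the spec operation
}

const INLINE_LIMIT = 50; // specs at or under this size go inline, whole

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Small specs: pass every operation. Large specs: keep only the k
// operations nearest to the question's embedding.
function scopeContext(ops: Operation[], question: number[], k = 8): Operation[] {
  if (ops.length <= INLINE_LIMIT) return ops;
  return [...ops]
    .sort((x, y) => cosine(y.embedding, question) - cosine(x.embedding, question))
    .slice(0, k);
}
```

Either branch hands the model nothing but spec content, which is what makes the "admit when it isn't documented" instruction enforceable.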

  1. Visitor asks

     “How do I paginate /v1/users?” The drawer streams the question into the chat panel and starts a conversation in the visitor's localStorage session.

  2. Retriever scopes

     For specs ≤50 endpoints the whole document goes inline. Larger specs go through a pgvector top-k retriever scoped to the project's spec embeddings.

  3. Answer + citations

     The model emits structured citation tokens for every endpoint it references. The frontend renders them as clickable pills that scroll the docs page directly to that endpoint.
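The citation step can be sketched as a small parser that splits an answer into plain text and pill segments. The `[[cite:operationId]]` marker syntax is an assumption made for illustration; the actual token format is internal to the product.

```typescript
interface Segment {
  type: "text" | "pill";
  value: string; // text content, or the cited operationId for a pill
}

// Split a model answer into renderable segments: plain text runs and
// citation pills carrying the referenced operationId.
function parseCitations(answer: string): Segment[] {
  const segments: Segment[] = [];
  const re = /\[\[cite:([A-Za-z0-9_]+)\]\]/g;
  let last = 0;
  let m: RegExpExecArray | null;
  while ((m = re.exec(answer)) !== null) {
    if (m.index > last) segments.push({ type: "text", value: answer.slice(last, m.index) });
    segments.push({ type: "pill", value: m[1] });
    last = re.lastIndex;
  }
  if (last < answer.length) segments.push({ type: "text", value: answer.slice(last) });
  return segments;
}
```

The frontend would then render each `pill` segment as a clickable chip and each `text` segment verbatim.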

Deep-link citations

Every reference is a clickable pill.

The model emits structured citation tokens for every endpoint it references. The frontend renders them as small rounded pills using the project's accent colour, and clicking one scrolls the docs body to that operation — visitors never leave context.

  • Visitor never leaves the docs. The pill scrolls in-page, not to a new tab.
  • Citations bind to operationId. The model gets fresh identifiers on every page load, so pills stay accurate when the spec is re-rendered.
  • Code blocks copy with one click. cURL / Fetch / Python all on the same chat-message bar.
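A minimal sketch of the in-page jump, assuming each rendered operation carries a DOM id derived from its operationId (the `op-` prefix is an illustrative convention, not the documented one):

```typescript
// Map a cited operationId to the DOM id of its rendered operation.
// operationIds are refreshed on every page load, so a pill always
// resolves against the current render.
function anchorFor(operationId: string): string {
  return `op-${operationId}`;
}

// Browser side (not runnable outside a page): scroll in place, no new tab.
// document.getElementById(anchorFor("listUsers"))
//   ?.scrollIntoView({ behavior: "smooth", block: "start" });
```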

Example answer: “Use the listUsers endpoint. Pass page and per_page.”

↓ click pill → scrolls docs to the endpoint

GET /v1/users · List users

Returns a paginated list. page and per_page are query parameters; max 100 per page.

Conversation analytics

The questions visitors actually ask.

Every conversation lands in the project's Chat Activity tab — the highest-signal feedback your docs will ever get. It tells you which endpoint descriptions are too vague, which auth flow confuses people, and which migration path still hasn't been documented.

Ticket deflection, measured

See the queries support staff would otherwise be answering. Drop a top-asked question into your docs and the next visitor finds it themselves.

Streaming + persistent

Streamed completions, per-visitor session in localStorage, copy-to-clipboard on every code block. Token-budget readout in the drawer footer.
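A minimal sketch of that per-visitor persistence, assuming the conversation is stored as a JSON-serialized message array under a single key (the key name and message shape are assumptions). A Storage-like interface stands in for window.localStorage so the sketch is testable outside a browser.

```typescript
interface KV {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
}

interface Message {
  role: "user" | "assistant";
  content: string;
}

const SESSION_KEY = "outworx:chat-session"; // illustrative key name

// Load the visitor's conversation, or start fresh if none exists.
function loadSession(store: KV): Message[] {
  const raw = store.getItem(SESSION_KEY);
  return raw ? (JSON.parse(raw) as Message[]) : [];
}

// Append a message and persist, so the thread survives page reloads.
function appendMessage(store: KV, msg: Message): Message[] {
  const session = loadSession(store);
  session.push(msg);
  store.setItem(SESSION_KEY, JSON.stringify(session));
  return session;
}
```

In the browser, `window.localStorage` satisfies the `KV` interface directly.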

Per-turn token accounting

Daily caps with an 8-message sliding window. Free 0, Pro 25k, Business 200k. Cap resets at midnight UTC; hitting it shows a polite throttle, not a 500.
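The caps and window above can be sketched in a few lines. The tier numbers come straight from this page; the helper names and the pre-turn check are illustrative.

```typescript
// Daily token caps per plan, as documented on this page.
const DAILY_CAP = { free: 0, pro: 25_000, business: 200_000 } as const;

const WINDOW = 8; // only the last 8 messages are re-sent as model context

interface Turn {
  role: "user" | "assistant";
  tokens: number;
}

// The sliding window: trim history to the most recent 8 messages.
function contextWindow(history: Turn[]): Turn[] {
  return history.slice(-WINDOW);
}

// Pre-turn budget check: over the cap means a polite throttle, not a 500.
function canSpend(plan: keyof typeof DAILY_CAP, usedToday: number, turnCost: number): boolean {
  return usedToday + turnCost <= DAILY_CAP[plan];
}
```

A reset job at midnight UTC would zero `usedToday` for every project.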

Plan tiers

Daily-token budgets you can predict.

Free

0 tokens / day

  • AI search (5 queries / month)
  • No floating chat drawer
  • Upgrade to enable
Most popular

Pro

25,000 / day

  • Floating chat on every docs site
  • Streaming completions
  • Citations + deep-links
  • Conversation analytics
  • 8-message sliding window

Business

200,000 / day

  • Everything in Pro
  • 8× the daily token cap
  • Custom domain on the docs site
  • Priority support

The chat that knows your spec.

The Pro plan turns on the floating Ask AI drawer for every public docs site: 25k tokens/day at $9/month, 200k/day on Business.