
It is so Qu*cking Easy

Track, analyze, and evaluate your app's AI usage. From tokens to models, we count it all — so you don't have to.

Why TokenDuck?

Analytics Dashboard

Track every request and token with real-time visibility into usage trends, costs, and bottlenecks across models, endpoints, and projects. View historical data and drill into specific queries or tokens for fine-grained insight.

Visibility from token to top-line.
Prompt Evaluation

Understand what makes a prompt succeed—or fail. Analyze completions, model behavior, and response metadata to continuously improve your LLM-driven apps. Evaluation data helps guide fine-tuning and reduce hallucinations.

Smarter prompts, stronger outcomes.
Plug & Play

Instantly connect to OpenAI, Anthropic, or your own hosted models. Just drop in your keys or route through our proxy. Built-in support for multiple providers means less wiring, more building.

Fast to set up, flexible to grow.
Secure by Design

Protect access with per-project API keys, enforce quotas, and apply granular rate limiting automatically. All data is isolated by design, and audit logs are always available for traceability and compliance.

Ship confidently with built-in security.

Easy to integrate

import os

import openai

# Point the client at your TokenDuck proxy; requests are counted automatically.
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://your-tokenduck-project-id.proxy.tokenduck.xyz/openai/v1",
    default_headers={"X-Api-Key": os.getenv("TOKENDUCK_API_KEY")},
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what is 2 + 2?"}],
)
print(response.choices[0].message.content)