Welcome to AnswerrAI

The AnswerrAI API provides unified access to the world’s leading AI models through a single, elegant interface. Instead of managing multiple API keys, learning different request formats, and juggling various providers, you get one endpoint that connects you to OpenAI, Anthropic Claude, Google Gemini, and more. Whether you’re building a chatbot, analyzing documents, or creating the next breakthrough AI application, AnswerrAI eliminates the complexity of multi-provider integration while unlocking advanced features like intelligent model selection and side-by-side comparisons.

Quick Start

1. Get Your API Key

Sign up at connect.answerr.ai and grab your API key from the dashboard. You’ll need this for all requests.
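Keep the key out of your source code. One common approach is to store it in an environment variable and read it at runtime; the variable name below is just an example:

// Read the key from an environment variable (any name works; ANSWERR_API_KEY is an example)
const YOUR_API_KEY = process.env.ANSWERR_API_KEY;
if (!YOUR_API_KEY) {
  throw new Error('Set the ANSWERR_API_KEY environment variable first');
}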

2. Make Your First Request

curl -X POST https://connectapi.answerr.ai/functions/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Hello, world!"
      }
    ]
  }'
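The same request in JavaScript, if you prefer fetch over curl (a minimal sketch; YOUR_API_KEY is the key from step 1):

const response = await fetch('https://connectapi.answerr.ai/functions/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${YOUR_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello, world!' }]
  })
});

const data = await response.json();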

3. Get Your Response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm here to help you with any questions or tasks you have. What would you like to know or discuss today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 28,
    "total_tokens": 37
  }
}
That’s it! You just made your first AI request. The same endpoint works with any model in our ecosystem.
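Continuing the fetch sketch above, the reply text and token usage come straight out of the parsed response:

// data is the parsed JSON response shown above
const reply = data.choices[0].message.content;
console.log(reply);                    // "Hello! I'm here to help you with any questions..."
console.log(data.usage.total_tokens);  // 37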

Base URL

All API requests use this base URL:
https://connectapi.answerr.ai/functions/v1/chat

Choosing Your AI Model

We’ve simplified model selection while giving you maximum flexibility. Access the latest and legacy models from OpenAI, Anthropic, and Google:
Name                    Model Name
GPT-4.1                 gpt-4.1
GPT-4.1 mini            gpt-4.1-mini
GPT-4.1 nano            gpt-4.1-nano
OpenAI o3               o3
OpenAI o4-mini          o4-mini
GPT-4o                  gpt-4o
GPT-4o mini             gpt-4o-mini
Claude Sonnet 3.7       claude-3-7-sonnet-latest
Claude Opus 4           claude-opus-4-20250514
Claude Sonnet 4         claude-sonnet-4-20250514
Claude Haiku 3.5        claude-3-5-haiku-latest
Gemini 2.5 Pro          gemini-2.5-pro
Gemini 2.5 Flash        gemini-2.5-flash
Gemini 2.0 Flash        gemini-2.0-flash
Gemini 2.0 Flash-Lite   gemini-2.0-flash-lite
Simply specify your model of choice in the request. No need to maintain multiple API keys or learn different request formats.
// OpenAI's GPT-4o
{ "model": "gpt-4o", "messages": [...] }

// Anthropic's Claude Sonnet 4
{ "model": "claude-sonnet-4-20250514", "messages": [...] }

// Google's Gemini 2.5 Pro
{ "model": "gemini-2.5-pro", "messages": [...] }

// Let AI choose the best model
{ "model": "auto", "messages": [...] }
Each model has unique strengths: OpenAI excels at coding and structured outputs, Claude shines with nuanced understanding of complex instructions, and Gemini offers exceptional reasoning capabilities.
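Because the request shape never changes, a small wrapper is all it takes to switch models per call. A sketch, reusing the fetch-based request shown earlier:

async function chat(model, messages) {
  const response = await fetch('https://connectapi.answerr.ai/functions/v1/chat', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${YOUR_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ model, messages })
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Same messages, different providers
const messages = [{ role: 'user', content: 'Explain recursion in one sentence.' }];
console.log(await chat('gpt-4o', messages));
console.log(await chat('claude-sonnet-4-20250514', messages));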

Streaming Responses

Get responses as they’re generated for better user experience:
const response = await fetch('https://connectapi.answerr.ai/functions/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${YOUR_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'claude-sonnet-4-20250514',
    messages: [{ role: 'user', content: 'Write a short story about AI' }],
    stream: true  // Enable streaming
  })
});

// Process the stream
const decoder = new TextDecoder();
const reader = response.body.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Handle each chunk of the response as it arrives
  console.log(decoder.decode(value, { stream: true }));
}
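If you need the complete text once the stream finishes, accumulate the chunks instead of logging them. This variant of the loop above assumes each chunk is a plain text delta; if the stream is framed as server-sent events, parse each data: line before appending.

let fullText = '';
const decoder = new TextDecoder();
const reader = response.body.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  fullText += decoder.decode(value, { stream: true });  // append each chunk as it arrives
}
console.log(fullText);  // the complete generated story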

Advanced Features

Auto-Mode: Let AI Choose

Don’t know which model is best for your task? Set model: "auto" and our intelligent routing will select the optimal model based on your query characteristics:
{
  "model": "auto",
  "messages": [
    {
      "role": "user", 
      "content": "Debug this Python function and explain what's wrong"
    }
  ]
}
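To see which model the router actually picked, check the response. This assumes, as in the example response earlier, that the model field reflects the model that produced the completion:

const response = await fetch('https://connectapi.answerr.ai/functions/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${YOUR_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'auto',
    messages: [{ role: 'user', content: "Debug this Python function and explain what's wrong" }]
  })
});

const data = await response.json();
// Assumption: data.model names the model auto-mode routed to
console.log(data.model);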

Compare Mode: See the Difference

Want to compare how different models handle the same request? Add a compareModel parameter:
{
  "model": "gpt-4o",
  "compareModel": "claude-3-5-sonnet",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms"
    }
  ]
}

Enhanced Responses

Add context and follow-up suggestions to your responses:
{
  "model": "gpt-4o",
  "messages": [...],
  "includeSources": true,      // Get web sources that support the response
  "includeSuggestions": true   // Get suggested follow-up questions
}
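Reading these extras back in JavaScript might look like the sketch below. The field names sources and suggestions are assumptions for illustration; see Sources & Suggestions for the actual response shape.

const data = await response.json();
console.log(data.choices[0].message.content);
// Hypothetical field names, shown for illustration only
console.log(data.sources);      // web sources that support the response
console.log(data.suggestions);  // suggested follow-up questions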

What’s Next?

Now that you’ve made your first request, explore the full capabilities:
  • Chat Completions - Deep dive into the core API
  • Auto-Mode - Learn about intelligent model selection
  • Compare Mode - Compare models side-by-side
  • File Analysis - Analyze documents, images, and code
  • Sources & Suggestions - Enhanced responses with context