Perplexity’s online models have built-in internet access and return grounded answers with citations. Unlike standard LLMs, they can answer questions about current events.
Documentation Index
Fetch the complete documentation index at: https://docs.profclaw.ai/llms.txt
Use this file to discover all available pages before exploring further.
Supported Models
| Model | ID | Context | Web Search | Notes |
|---|---|---|---|---|
| Sonar Huge | llama-3.1-sonar-huge-128k-online | 128K | Yes | Best quality |
| Sonar Large | llama-3.1-sonar-large-128k-online | 128K | Yes | Balanced |
| Sonar Small | llama-3.1-sonar-small-128k-online | 128K | Yes | Fastest/cheapest |
Setup
Environment Variables
Your Perplexity API key. Format: `pplx-...`
Configuration Example
- .env
- settings.yml
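A minimal sketch of the `.env` variant, assuming the key is read from a variable named `PERPLEXITY_API_KEY` (the exact variable name is an assumption; check your profclaw configuration reference):

```
# .env — hypothetical variable name; replace with the one profclaw expects
PERPLEXITY_API_KEY=pplx-...
```

The `settings.yml` variant would carry the same key under the provider's config section.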
Model Aliases
| Alias | Model |
|---|---|
| perplexity | llama-3.1-sonar-huge-128k-online |
| pplx-fast | llama-3.1-sonar-small-128k-online |
Usage Examples
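Since the endpoint is OpenAI-compatible, a request is a standard Chat Completions payload sent to `https://api.perplexity.ai`. A minimal sketch, assuming the `/chat/completions` path and a `PERPLEXITY_API_KEY` environment variable (the helper name `build_sonar_request` is illustrative, not part of any library):

```python
import json

# OpenAI-compatible Perplexity endpoint (path assumed from the compatible API shape).
API_URL = "https://api.perplexity.ai/chat/completions"

def build_sonar_request(prompt: str,
                        model: str = "llama-3.1-sonar-large-128k-online") -> dict:
    """Build the JSON body for a Sonar Online chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_sonar_request("What happened in the news today?")
print(json.dumps(payload, indent=2))

# To actually send it (requires a pplx-... key):
#   import os, requests
#   resp = requests.post(
#       API_URL, json=payload,
#       headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"})
```

Because the wire format matches OpenAI's, existing OpenAI client libraries can typically be pointed at this base URL instead of hand-rolling requests.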
Notes
- Status: Beta - Perplexity models have unique behavior (web search, citations) that differs from standard chat models.
- API endpoint: `https://api.perplexity.ai` (OpenAI-compatible)
- All Sonar Online models have real-time internet access built in.
- Responses include source citations automatically.
- Not recommended for tasks requiring deterministic, non-web outputs.
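Since responses carry citations alongside the OpenAI-style `choices`, callers usually want to surface both. A hedged sketch, assuming citations arrive as a top-level `citations` list of URLs (the sample response below is fabricated for illustration):

```python
# Sample response mirroring the assumed shape: OpenAI-style "choices"
# plus a top-level "citations" list of source URLs.
sample_response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "The launch succeeded on Tuesday [1][2]."}}
    ],
    "citations": [
        "https://example.com/launch-report",
        "https://example.com/agency-statement",
    ],
}

def extract_answer(response: dict) -> tuple[str, list[str]]:
    """Return (answer text, cited source URLs) from a chat-completion response."""
    text = response["choices"][0]["message"]["content"]
    sources = response.get("citations", [])
    return text, sources

answer, sources = extract_answer(sample_response)
print(answer)
for i, url in enumerate(sources, 1):
    print(f"[{i}] {url}")
```

The bracketed markers like `[1]` in the answer text index into the citations list, so rendering them side by side keeps the grounding visible to users.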
Related
- AI Providers Overview - Compare all 37 supported providers
- Web Search Tool - Explicit web search for any provider via Brave, Serper, or Tavily
- xAI / Grok - Another provider with real-time data access
- profclaw provider - Add and test providers from the CLI