How it works
- User asks a question
- Your app searches the web via the Andi API
- Search results become context for the LLM prompt
- The LLM generates an answer grounded in those results
Complete example
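The example code did not survive in this copy, so here is a minimal sketch of the four-step loop above. The endpoint URL and the response fields (`results`, `title`, `url`, `desc`) are assumptions modeled on the parameters this guide mentions, not a confirmed API surface:

```python
# Minimal RAG loop sketch: search -> format context -> prompt the LLM.
# Endpoint URL and response field names are assumptions.
import json
import urllib.parse
import urllib.request

ANDI_SEARCH_URL = "https://api.andisearch.com/v1/search"  # assumed endpoint


def search_web(query: str, limit: int = 5) -> list:
    """Step 2: fetch web results for the user's question."""
    qs = urllib.parse.urlencode({"query": query, "limit": limit})
    with urllib.request.urlopen(f"{ANDI_SEARCH_URL}?{qs}") as resp:
        return json.load(resp)["results"]


def build_prompt(question: str, results: list) -> str:
    """Step 3: turn search results into numbered, citable context."""
    sources = "\n\n".join(
        f"[{i}] {r.get('title', '')} ({r.get('url', '')})\n{r.get('desc', '')}"
        for i, r in enumerate(results, 1)
    )
    return (
        "Answer using only the numbered sources below, and cite them by number.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    results = search_web("what is retrieval-augmented generation")
    prompt = build_prompt("What is RAG?", results)
    # Step 4: send `prompt` to your LLM client of choice.
    print(prompt)
```

Numbering the sources in the prompt makes it easy to ask the LLM for inline citations like `[2]`.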
Using format=context
For simpler RAG setups, use `format=context` to get results pre-formatted as markdown, skipping the manual formatting step.
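A sketch of this path, assuming the response carries a ready-made markdown string (the `context` field name is a guess; the endpoint URL is the same assumption as above):

```python
# format=context sketch: the API returns markdown, so prompt assembly
# is a single f-string. Field name "context" is an assumption.
import json
import urllib.parse
import urllib.request


def get_context(query: str) -> str:
    """Fetch results already formatted as markdown via format=context."""
    qs = urllib.parse.urlencode({"query": query, "format": "context"})
    url = f"https://api.andisearch.com/v1/search?{qs}"  # assumed endpoint
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["context"]


def build_rag_prompt(question: str, context: str) -> str:
    """Drop the pre-formatted markdown straight into the prompt."""
    return f"Use the context below to answer.\n\n{context}\n\nQuestion: {question}"


if __name__ == "__main__":
    print(build_rag_prompt("What is RAG?", get_context("what is RAG")))
```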
Using deep search for RAG
For research-heavy queries, deep search provides broader source coverage and spell correction. Deep search takes ~2-3 seconds vs ~1 second for fast search, so use it when answer quality matters more than latency.
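One way to act on that trade-off is a small routing heuristic. This is our own illustration, not part of the API: it sends research-style queries to deep search and everything else to fast search.

```python
# Routing heuristic (illustrative only): research-style queries go to
# deep search; everything else stays on the faster path.

RESEARCH_MARKERS = ("compare", "survey", "history of", "pros and cons", "research")


def choose_search_type(query: str) -> str:
    """Return "deep" for research-heavy queries, else "fast"."""
    q = query.lower()
    return "deep" if any(marker in q for marker in RESEARCH_MARKERS) else "fast"


# choose_search_type("compare rust and go for web servers")  -> "deep"
# choose_search_type("current time in Tokyo")                -> "fast"
```

In practice you might also let the user toggle this, since keyword heuristics misclassify some queries.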
Tips for better RAG results
- Use `extracts=true` to get longer text passages beyond the short `desc` field
- Set `limit` between 5 and 10 — more results give the LLM more context to draw from, but too many can dilute relevance
- Include source URLs in the prompt so the LLM can cite them
- Use `includeDomains` to restrict results to authoritative sources for domain-specific questions
- Tell the LLM to say “I don’t know” when the search results don’t contain the answer
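The tips above can be collected in one place. The parameter names (`extracts`, `limit`, `includeDomains`) come from this guide; the flat dict shape and the comma-joined domain list are assumptions about how you would pass them to the search call:

```python
# Search parameters and grounding instruction built from the tips above.
# Dict shape and comma-joined domains are assumptions, not a confirmed API.

GROUNDING_RULE = (
    'If the sources do not contain the answer, say "I don\'t know" '
    "instead of guessing."
)


def build_search_params(query: str, domains=None) -> dict:
    """Assemble query parameters for a RAG-oriented search call."""
    params = {"query": query, "extracts": "true", "limit": 10}
    if domains:  # restrict to authoritative sources when given
        params["includeDomains"] = ",".join(domains)
    return params
```

`GROUNDING_RULE` belongs in the system or prompt text, alongside the numbered sources, so refusals stay grounded in what the search actually returned.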
Next steps
AI agent tool
Define Andi search as a tool for an AI agent.
Research assistant
Multi-query search with result aggregation.
Response format
Response structure and result types.
Deep search
When to use deep vs fast search.

