AI agents use tool definitions to decide when and how to call external services. This example shows how to define Andi search as a tool that your agent can invoke to retrieve current web information.

Tool definition

Define the search tool with a description, parameters, and expected output:
search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for current information. Use this when the user asks about recent events, facts you're unsure about, or anything that requires up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query",
                },
                "limit": {
                    "type": "integer",
                    "description": "Number of results (1-100, default 10)",
                },
            },
            "required": ["query"],
        },
    },
}
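Models occasionally emit arguments that drift from the schema (a string `limit`, an out-of-range value), so it can be worth normalizing arguments before executing the tool. A minimal sketch; `validate_search_args` is a hypothetical helper, not part of any API:

```python
def validate_search_args(args: dict) -> dict:
    """Normalize tool-call arguments before execution (hypothetical helper).

    Ensures "query" is present and clamps "limit" to the 1-100 range
    described in the tool schema.
    """
    if not args.get("query"):
        raise ValueError("web_search requires a non-empty 'query'")
    limit = args.get("limit", 10)
    try:
        limit = int(limit)  # models sometimes send numbers as strings
    except (TypeError, ValueError):
        limit = 10
    return {"query": args["query"], "limit": max(1, min(limit, 100))}
```

Calling `validate_search_args(args)` on the parsed `tool_call.function.arguments` before invoking the executor turns malformed arguments into a clear error the agent loop can surface, instead of a failed API request.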

Tool execution

When the agent calls the tool, execute the search and return results:
import os
import json
import requests

api_key = os.environ["ANDI_API_KEY"]


def execute_web_search(query: str, limit: int = 10) -> str:
    """Execute a web search and return formatted results."""
    response = requests.get(
        "https://search-api.andisearch.com/api/v1/search",
        params={"q": query, "limit": limit, "extracts": "true"},
        headers={"x-api-key": api_key},
        timeout=30,
    )

    if response.status_code != 200:
        try:
            error = response.json().get("error", "Search failed")
        except ValueError:  # error body may not be JSON
            error = f"Search failed with status {response.status_code}"
        return json.dumps({"error": error})

    data = response.json()
    results = []
    for r in data["results"]:
        result = {
            "title": r["title"],
            "url": r["link"],
            "description": r["desc"],
        }
        if r.get("extracts"):
            result["content"] = " ".join(r["extracts"])
        results.append(result)

    return json.dumps(results)
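The response-formatting step can also be factored out and unit-tested without network access. A sketch assuming the same response shape as above (`results` entries with `title`, `link`, `desc`, and optional `extracts`):

```python
import json


def format_results(data: dict) -> str:
    """Convert a parsed search response into the JSON string the agent sees."""
    results = []
    for r in data.get("results", []):
        result = {
            "title": r["title"],
            "url": r["link"],
            "description": r["desc"],
        }
        if r.get("extracts"):
            result["content"] = " ".join(r["extracts"])
        results.append(result)
    return json.dumps(results)
```

With this split, `execute_web_search` only handles the HTTP call and delegates to `format_results(response.json())`.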

Putting it together

Here’s how the tool fits into an agent loop:
import openai

client = openai.OpenAI()

messages = [
    {"role": "system", "content": "You are a helpful assistant with web search access. Cite sources with URLs."},
    {"role": "user", "content": "What are the latest developments in quantum computing?"},
]

# Step 1: LLM decides to call the tool
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=[search_tool],
)

message = response.choices[0].message

# Step 2: Execute tool calls
if message.tool_calls:
    # Append the assistant message (with its tool calls) once, before any tool results
    messages.append(message)
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = execute_web_search(**args)

        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })

    # Step 3: LLM generates final answer with search context
    final = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )
    print(final.choices[0].message.content)
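With more than one tool, a dispatch table keeps Step 2 generic instead of hard-coding `execute_web_search`. A sketch; the registration comment is an example, not prescribed API:

```python
import json

# Map tool names (as declared in their schemas) to executor functions,
# e.g. TOOL_EXECUTORS["web_search"] = execute_web_search
TOOL_EXECUTORS = {}


def dispatch_tool_call(name: str, args: dict) -> str:
    """Route a tool call to its executor; unknown tools return a readable error."""
    executor = TOOL_EXECUTORS.get(name)
    if executor is None:
        return json.dumps({"error": f"Unknown tool: {name}"})
    return executor(**args)
```

In the loop above, `result = dispatch_tool_call(tool_call.function.name, args)` then works unchanged as you add tools.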

Using format=context for agents

For agents that pass search results directly into conversation context, format=context returns pre-formatted markdown:
def execute_web_search_context(query: str) -> str:
    """Return search results as markdown text."""
    response = requests.get(
        "https://search-api.andisearch.com/api/v1/search",
        params={"q": query, "format": "context", "limit": 5},
        headers={"x-api-key": api_key},
        timeout=30,
    )
    return response.text
format=context reduces the code in your tool executor — no JSON parsing or formatting needed. The tradeoff is less control over result structure.
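For comparison, producing similar context text by hand from structured results looks roughly like this (the exact layout of format=context output may differ from this sketch):

```python
def results_to_markdown(results: list[dict]) -> str:
    """Render structured results as markdown suitable for conversation context.

    Approximates what format=context does server-side; the actual markdown
    layout returned by the API may differ.
    """
    lines = []
    for r in results:
        lines.append(f"## {r['title']}")
        lines.append(r["url"])
        lines.append(r.get("content") or r["description"])
        lines.append("")  # blank line between results
    return "\n".join(lines)
```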

Other agent frameworks

The same pattern works with any agent framework. See Build with AI agents for MCP server integration, which lets tools like Claude Code and Cursor access the docs directly.

Next steps