The focus is on autonomous agents running as code, not UI-based coding agents like Cursor or Zed.
Of MCP's three primitives (prompts, resources, tool calling), prompts and resources are less relevant for this use case.
Tool calling is highlighted as especially valuable, and as more involved than simply wrapping an OpenAPI spec.
MCP supports dynamic tools, logging during execution, sampling (a powerful but confusingly named capability that lets a server ask the client's LLM for a completion), tracing, and running as a subprocess over standard I/O.
MCP lets agents and tools be composed without knowledge of each other's implementation, much as browsers and websites interoperate through open web protocols.
An MCP server is set up using FastMCP, registering a BigQuery tool that queries PyPI download statistics.
Tool descriptions (from docstrings) are presented to the LLM for selection.
Offloading SQL generation and related instructions to the tool (rather than the central agent) keeps the main agent’s context window smaller and more manageable.
The server runs over standard I/O, so it integrates easily with agent frameworks.
The agent can answer questions like “How many downloads did Pydantic have this year?” with real data (e.g., 1.6 billion downloads).
Observability through Logfire allows inspection of the full workflow, from the outer agent's prompt to the tool decision, SQL query generation, and final results.
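A sketch of wiring this up, assuming a recent Logfire release that ships Pydantic AI instrumentation (`logfire.instrument_pydantic_ai()`); `send_to_logfire=False` keeps traces local:

```python
import logfire

logfire.configure(send_to_logfire=False)  # local only; configure a token to export spans
logfire.instrument_pydantic_ai()  # spans for each agent run, model request, and tool call
```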
The trace shows the chain of client-server-client calls and model usage; the tool returns structured, XML-ish query results that the LLM turns into a natural-language answer.
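That XML-ish payload can be sketched with the standard library; this helper is hypothetical, not the actual wire format:

```python
from xml.etree import ElementTree as ET

def rows_to_xml(rows: list[dict[str, object]]) -> str:
    """Serialize query result rows into an XML-ish blob for the LLM to read."""
    root = ET.Element("results")
    for row in rows:
        el = ET.SubElement(root, "row")
        for key, value in row.items():
            ET.SubElement(el, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

print(rows_to_xml([{"project": "pydantic", "downloads": 1_600_000_000}]))
# → <results><row><project>pydantic</project><downloads>1600000000</downloads></row></results>
```

Tagged structure like this gives the model unambiguous field boundaries, which tends to be easier for an LLM to parse reliably than comma-separated rows.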
Actual SQL generated by the agent can be inspected for correctness and transparency.