How MCP Works: Clients, Servers, Tools, and the Request Lifecycle

6 min read

MCP has three moving parts: a client (your AI app), a server (the integration), and the protocol connecting them. Understanding how they interact helps you choose servers wisely and troubleshoot problems when they arise.

Clients and servers

An MCP client is an AI application — Claude Desktop, Cursor, Windsurf, etc. A client can connect to multiple MCP servers simultaneously. Each server is a separate process that exposes a set of capabilities. When the client starts, it initializes connections to all configured servers and receives a list of their capabilities.
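As a concrete sketch, here is what a server entry looks like in Claude Desktop's claude_desktop_config.json (other clients use a similar but not identical format; the filesystem server and the path shown are just examples):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

Each entry tells the client how to launch one server process; on startup the client spawns every listed server and asks each for its capabilities.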

Tools

A tool is a function the AI can call. Each tool has a name, a description (which the AI reads to decide when to use it), and a JSON schema describing its input parameters. When the AI decides a tool is useful, it sends a tool-call request; the server executes it and returns a result. Examples: execute_query, read_file, search_web.
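As a sketch, here is roughly the shape of a tool definition as a server advertises it (the field names follow the MCP tools/list result; the execute_query tool itself is a hypothetical example):

```python
# Hypothetical tool definition, shaped like one entry in an MCP
# "tools/list" result: a name, a human-readable description the
# model reads, and a JSON Schema describing the input parameters.
execute_query = {
    "name": "execute_query",
    "description": "Run a read-only SQL query and return the rows.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "The SQL to run"},
        },
        "required": ["sql"],
    },
}
```

The description field does double duty: it is documentation for you and the signal the model uses to decide when the tool applies.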

Resources

Resources are read-only data endpoints — think of them like files or API responses the AI can read without executing code. A resource has a URI scheme (e.g., file:// or postgres://) and returns structured content. Resources are less commonly used than tools today, but they're part of the spec.
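A resource descriptor, as a server might list it, looks roughly like this (a sketch; the URI and names are hypothetical examples, the field shapes follow the MCP resources/list result):

```python
# Hypothetical resource descriptor: a stable URI plus metadata.
# Reading the resource returns content directly, without the
# server executing any tool code on the AI's behalf.
schema_resource = {
    "uri": "postgres://localhost/mydb/users/schema",
    "name": "users table schema",
    "mimeType": "application/json",
}
```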

Transports

MCP supports two transports: stdio (the server reads and writes JSON-RPC messages on stdin/stdout; common for local servers) and HTTP with Server-Sent Events (used for remote or hosted servers; newer revisions of the spec evolve this into Streamable HTTP). Most servers you'll install locally use stdio.
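Over stdio, each message is a JSON-RPC 2.0 object on its own line. A minimal sketch of what the client writes to the server's stdin:

```python
import json

# Sketch of stdio framing: one JSON-RPC 2.0 object per line,
# written to the server's stdin; replies come back on stdout
# with a matching "id".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
line = json.dumps(request) + "\n"  # the bytes the client sends

# The server parses the line back into a message object.
parsed = json.loads(line)
```

This is why stdio servers are so easy to run locally: there is no port, no TLS, just a child process and two pipes.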

The request lifecycle

1. You ask your AI a question.
2. The AI determines a tool call is needed.
3. The client sends a JSON-RPC request to the server process.
4. The server executes the tool (queries the DB, reads the file, etc.).
5. The server returns the result.
6. The AI reads the result and continues composing its response.

The protocol overhead (steps 3 and 5) typically takes milliseconds for a local stdio server; the tool's own work in step 4 (a slow query, a large file, a network fetch) can take considerably longer.
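On the wire, the round-trip in steps 3 through 5 looks roughly like this (the tool name, arguments, and result text are hypothetical; the method and field names follow the MCP tools/call shape):

```python
# Step 3: the client sends a JSON-RPC "tools/call" request.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "execute_query",  # hypothetical tool
        "arguments": {"sql": "SELECT count(*) FROM users"},
    },
}

# Step 5: the server replies with a result whose id matches the request.
response = {
    "jsonrpc": "2.0",
    "id": 42,
    "result": {
        "content": [{"type": "text", "text": "1024"}],
        "isError": False,
    },
}
```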

Security model

By default, MCP servers run as local processes you start. They run with your user account's OS permissions and only the credentials you supply in the configuration (API keys, DB connection strings). The AI client cannot call a server's tools unless that server is listed in your config file, so you stay in control of what the AI can access.

Frequently Asked Questions

Can one AI message trigger multiple tool calls?

Yes. An AI can chain tool calls within a single response — for example, querying a database and then fetching a related file. The number of round-trips depends on the client implementation.
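A client's chaining logic can be sketched as a simple loop: keep letting the model respond until it stops asking for tools. Here model_step and call_tool are hypothetical stand-ins for the real model API and the MCP round-trip:

```python
# Minimal sketch of tool-call chaining inside one AI turn.
# model_step(messages) -> {"tool_call": ... or None, "text": ...}
# call_tool(tool_call) -> result string from the MCP server
def run_turn(question, model_step, call_tool, max_rounds=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_rounds):
        reply = model_step(messages)            # ask the model
        if reply.get("tool_call") is None:      # no tool needed: done
            return reply["text"]
        result = call_tool(reply["tool_call"])  # one MCP round-trip
        messages.append({"role": "tool", "content": result})
    return "gave up after too many tool rounds"
```

The max_rounds cap is one way a client bounds the number of round-trips; real clients differ in how (and whether) they limit chaining.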

What language are MCP servers written in?

Most servers are written in TypeScript (Node.js) or Python. The MCP SDK exists for both. Community servers also exist in Go, Rust, and other languages.

How does the AI know which tool to use?

Each tool's description is included in the AI's context. The AI uses those descriptions to decide which tool — if any — is appropriate for a given question. Good tool descriptions lead to better tool selection.

Ready to install your first server?

Browse MCP Servers