The Model Context Protocol — What It Is and When to Reach for It
With MCP now donated to the Linux Foundation and 10,000+ public servers in existence, it is increasingly the default choice for connecting Claude to external data sources and tools. But many developers still reach for bare API tool definitions when MCP would serve them better — or vice versa. The key distinction is scope: if you are building a one-off integration for a single application, raw tool definitions in your API call are simpler. If you are building a data connector that should be reusable across multiple models, applications, or teams, MCP is the right level of abstraction. MCP servers are independently deployable services that expose a standard interface; any MCP-compatible client can consume them without modification.
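To make the contrast concrete, here is what the "bare tool definition" end of the spectrum looks like. This is a minimal sketch of a tool declared inline in a Messages API request body; the `get_weather` tool, its schema, and the model ID are illustrative placeholders, not details from this article:

```python
import json

# A bare tool definition, declared inline following the Messages API
# tool-use shape: name, description, and a JSON Schema for the input.
# The tool name and schema here are invented for illustration.
get_weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# The definition lives inside one application's request payload;
# nothing outside this call can discover or reuse it.
request_body = {
    "model": "claude-sonnet-4-5",  # placeholder model ID
    "max_tokens": 1024,
    "tools": [get_weather_tool],
    "messages": [
        {"role": "user", "content": "What's the weather in Lisbon?"}
    ],
}

print(json.dumps(request_body, indent=2))
```

The coupling is the point: this is quick to write, but the tool exists only inside this one request payload, which is exactly the "tightly coupled to the specific application" case where bare definitions are the right choice.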
When MCP is the right choice
- Reusable connectors: If you want the same database, API, or file-system access to be available to Claude Desktop, your custom Claude application, and a future agentic tool, write it as an MCP server once.
- Sharing within a team: An MCP server can be hosted on a shared endpoint. Every developer on the team connects their Claude environment to it without each maintaining their own integration code.
- Standardised authentication: MCP handles transport and auth negotiation at the protocol level, so individual tools within a server do not each need to re-implement OAuth or API key management.
- Managed remote servers: Cloud providers have begun to offer fully managed MCP servers; Google, for example, has launched Maps and BigQuery MCP servers. Consuming one is as simple as adding the endpoint to your MCP configuration, with no hosting required.
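On the consuming side, connecting a client to both a shared team server and a managed remote one is a configuration entry rather than integration code. A sketch, assuming a Claude Desktop-style `mcpServers` config file; the server names, connection string, and endpoint URL are placeholders, and `mcp-remote` is one common stdio-to-remote bridge, not the only option:

```json
{
  "mcpServers": {
    "team-postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://db.internal.example/analytics"
      ]
    },
    "managed-maps": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.example.com/maps"]
    }
  }
}
```

Each entry here is opaque to the client: whether the server is a reference package run locally or a managed endpoint behind a bridge, the client speaks the same protocol to both, which is what makes the connector reusable across environments.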
Use bare tool definitions for throw-away scripts, one-off prototypes, and applications where the toolset is tightly coupled to the specific application. Use MCP when the integration should outlive a single project or needs to be shared.