Understanding RPC and MCP in Agentic AI
APIs define how software talks across systems. Every click, query, or agent action depends on one. Over time, we’ve developed various methods for clients and servers to communicate. Some rely on URLs and resources. Others call specific functions directly.
Most people know REST or GraphQL, but these aren’t the only options. Remote Procedure Calls, or RPCs, work differently. Instead of requesting a web resource, the client instructs the server to run a specific function. That difference shapes how modern agents, like those in AI systems, interact with tools and data.
Agentic AI depends on fast, structured, and predictable communication. The Model Context Protocol (MCP) uses JSON-RPC to provide exactly that. Understanding RPC helps explain why MCP works the way it does and why it matters for the next generation of AI systems.
What Is an API
An API, or Application Programming Interface, defines how two programs communicate. It is a contract between a client and a server. The client sends a request that follows the API’s rules. The server processes the request and sends a response.
Most APIs use the client–server model. The client runs on a user’s device or another system. The server runs elsewhere, handling computation and data storage. Each request from the client is independent of the others. The server does not remember past interactions unless designed to do so.
Today, most APIs use HTTP as their transport layer. A client sends a request to a specific URL, such as https://api.example.com/users. The request includes a method like GET, POST, PUT, or DELETE. The server runs code that matches that URL and returns a response, usually in JSON format.
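To make that concrete, here is a minimal TypeScript sketch of the request–response cycle using fetch. The endpoint is the example URL above, and the response shape is whatever the API’s contract defines.

// Minimal sketch: request a resource over HTTP and parse the JSON response.
async function listUsers(): Promise<unknown> {
  const response = await fetch("https://api.example.com/users", {
    method: "GET",
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  // The body is JSON; its exact shape depends on the API's contract.
  return response.json();
}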
This model is simple but flexible. It enables different systems, such as web apps, mobile apps, or AI agents, to communicate using shared formats and predictable responses. Understanding how APIs define this request–response pattern is the first step toward understanding RPC and MCP.
Common API Standards
APIs have evolved through several standards. Each reflects a distinct approach to structuring communication between clients and servers.
SOAP came first. It uses XML to send structured messages. SOAP is strict and verbose but supports strong typing, security layers, and protocol independence. Many enterprises still use SOAP because it guarantees consistency and reliability across complex systems. Its rigidity, however, made it hard to adapt to simpler web applications.
REST replaced SOAP for most web APIs. It treats data as resources, each with its own URL. Clients interact with these resources through standard HTTP methods: GET retrieves, POST creates, PUT updates, and DELETE removes. REST’s simplicity, human readability, and use of existing web infrastructure made it the default for public APIs.
GraphQL emerged later to fix REST’s inefficiencies. Instead of multiple endpoints, GraphQL provides a single endpoint where clients define exactly what data they need. The server responds with only that data. This reduces over-fetching and under-fetching, making GraphQL ideal for complex, data-rich front ends. But it introduces a learning curve and requires a schema-driven setup.
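As an illustration, a GraphQL client posts a query naming exactly the fields it wants to a single endpoint. The endpoint and schema below are hypothetical; this is a sketch in TypeScript using fetch.

// A GraphQL client names the fields it wants; the server returns only those fields.
const query = `
  query {
    user(id: 42) {
      name
      email
    }
  }
`;

async function fetchUser(): Promise<unknown> {
  const response = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  // e.g. { data: { user: { name: "...", email: "..." } } }
  return response.json();
}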
Each model reflects a tradeoff. SOAP emphasizes control. REST emphasizes simplicity. GraphQL emphasizes flexibility. RPC takes a different path entirely — it focuses on calling remote functions instead of interacting with resources.
What Is RPC
Remote Procedure Call, or RPC, is a method for running functions on another machine as if they were local. Instead of requesting a resource at a URL, the client sends a message that names a specific function and passes parameters. The server runs that function and returns the result.
In REST, you access data through nouns, such as /users/42. In RPC, you call verbs like getUser or createUser. The client’s intent is not to fetch a resource but to perform an action. Each message defines which procedure to execute, what inputs to use, and where to send the result.
This makes RPC more direct and often faster than REST. It removes the overhead of mapping URLs to actions. The client calls a method, the server executes it, and the response arrives almost immediately. RPC is well-suited for systems where specific operations are more important than the resource structure, such as backend services or AI agents triggering predefined tools.
Because RPC calls mirror local function calls, developers can design them with clear method definitions and predictable outputs. This predictability becomes critical when agents or distributed systems must chain operations together with minimal ambiguity.
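A hypothetical TypeScript sketch of such a contract, using the getUser and createUser verbs from above: each procedure is a typed function with defined inputs and outputs, and the transport behind it stays hidden from the caller.

// A hypothetical RPC contract: each verb is a typed method with a predictable result.
interface User {
  id: number;
  name: string;
}

interface UserService {
  getUser(id: number): Promise<User>;
  createUser(name: string): Promise<User>;
}

// A caller chains operations without worrying about URLs or transport details.
async function demo(service: UserService): Promise<void> {
  const created = await service.createUser("Ada");
  const fetched = await service.getUser(created.id);
  console.log(fetched.name);
}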
Benefits and Use Cases of RPC
RPC focuses on actions, not resources. That focus makes it fast and predictable. Each call maps to a clear function with defined inputs and outputs. This structure reduces ambiguity, which is vital in systems that depend on automation or coordination.
RPC works well in microservice architectures. Services can expose small sets of callable functions to each other, avoiding the complexity of full REST endpoints. Internal systems often favor RPC because it supports strict contracts, version control, and high performance with low latency.
Developers use RPC when clients need to perform precise operations, such as triggering a workflow, running a computation, or updating state across services. It is less suited for broad, data-centric access patterns, such as querying many related objects.
RPC’s simplicity has a tradeoff. It creates tighter coupling between client and server. Changes to method names or parameters require updates to both client and server code. Still, in controlled environments, such as internal applications or AI systems that call known tools, RPC’s directness and speed outweigh that cost.
Message Formats
RPC depends on structured messages that both client and server can parse. The format determines how fast the system runs and how easily humans can read the data.
JSON is the most common format. It is text-based, readable, and supported in every major language. JSON works well for web and AI applications where clarity and simplicity matter more than raw speed. The tradeoff is size and performance — JSON is larger and slower to process than binary data.
Protocol Buffers (Protobuf), created by Google, use a compact binary format. Messages are smaller and faster to encode or decode. Protobuf requires predefined schemas, which adds setup effort but guarantees strong typing and consistency. It powers many high-performance RPC systems, including gRPC.
Other formats exist, such as Avro, Thrift, and MessagePack. Each balances readability, schema enforcement, and speed differently. The key idea is simple: message formats shape how RPC performs. Text formats like JSON favor transparency. Binary formats like Protobuf favor efficiency. MCP, as we’ll see, relies on JSON for accessibility and interoperability.
Modern RPC Frameworks
Modern development relies on frameworks that make RPC easier to implement and maintain. Two of the most common are gRPC and tRPC.
gRPC was developed by Google to support fast, cross-language communication. It uses HTTP/2 for transport and Protocol Buffers for message encoding. Developers define service interfaces in .proto files, which automatically generate client and server code. gRPC supports streaming, authentication, and bidirectional communication, making it ideal for microservices and high-throughput systems. Its strong typing and binary efficiency deliver low latency and consistent behavior across languages like Go, Python, Java, and C++.
tRPC is a newer framework for the TypeScript and JavaScript ecosystem. It takes advantage of TypeScript’s type system to ensure end-to-end type safety without generating separate schema files. Developers define procedures on the server, and clients gain auto-generated, type-safe functions. This simplifies development for full-stack TypeScript projects, where both client and server share a single codebase.
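A minimal sketch of that pattern, assuming tRPC v10 with zod for input validation; the router and procedure names are illustrative, not taken from any particular project.

import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

// The server defines procedures; the router's type is shared with the client.
export const appRouter = t.router({
  getUser: t.procedure
    .input(z.object({ id: z.number() }))
    .query(({ input }) => ({ id: input.id, name: "Ada" })),
});

export type AppRouter = typeof appRouter;

// On the client, the shared AppRouter type yields fully typed calls, e.g.:
//   const client = createTRPCProxyClient<AppRouter>({ links: [httpBatchLink({ url: "/trpc" })] });
//   const user = await client.getUser.query({ id: 42 });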
Both frameworks embody the same principle: direct, structured, and predictable calls between systems. gRPC prioritizes performance and interoperability. tRPC prioritizes simplicity and developer experience. These models show how RPC has evolved from a basic idea into a foundation for modern distributed systems — and how that same foundation now supports agentic AI through JSON-RPC and MCP.
JSON-RPC
JSON-RPC is a lightweight protocol for remote calls that uses JSON as its message format. It follows the same idea as other RPC systems: a client calls a named method on a server and passes parameters. The server runs that method and returns a result.
A typical request includes four fields: jsonrpc, the protocol version ("2.0"); method, the name of the procedure to call; params, the arguments to pass; and id, an identifier that ties the response back to the request.

For example:
{ "jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1 }
The server might respond:
{ "jsonrpc": "2.0", "result": 19, "id": 1 }
JSON-RPC can also batch multiple requests into a single message or send notifications that do not require a reply. It is transport-agnostic and can run over HTTP, WebSocket, or even raw TCP.
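A small TypeScript sketch of a JSON-RPC client over HTTP ties these pieces together. The server URL is hypothetical; the call helper sends a request with an id and waits for a result, while the notify helper omits the id so no reply is expected.

// A minimal JSON-RPC 2.0 client over HTTP; the server URL is hypothetical.
const RPC_URL = "https://rpc.example.com";
let nextId = 1;

async function call(method: string, params: unknown): Promise<unknown> {
  const response = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", method, params, id: nextId++ }),
  });
  const message = await response.json();
  if (message.error) {
    throw new Error(`RPC error ${message.error.code}: ${message.error.message}`);
  }
  return message.result;
}

// A notification omits the id, so the server sends nothing back.
function notify(method: string, params: unknown): Promise<Response> {
  return fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", method, params }),
  });
}

// Usage: const difference = await call("subtract", [42, 23]); // 19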
Because it is simple, stateless, and human-readable, JSON-RPC fits well for AI agents and internal tools. It avoids the overhead of REST while remaining easy to debug and extend. These qualities formed the foundation for the Model Context Protocol, which adapts JSON-RPC for agent-based communication.
The Model Context Protocol (MCP)
The Model Context Protocol, or MCP, defines how AI agents communicate with external tools and data sources. It extends JSON-RPC to create a structured, reliable framework for agent–server interaction.
In an MCP setup, the agent acts as the client. It sends JSON-RPC messages to a server that exposes available tools and resources. The server responds with structured data describing those tools, their input formats, and their expected outputs. The agent can then call these tools through standard JSON-RPC requests.
MCP adds several layers on top of JSON-RPC. It defines standard method names, such as tools/list, tools/call, and resources/read. It supports bidirectional communication, enabling both parties to initiate requests. It includes schemas to validate parameters and results, ensuring that agents call tools safely and predictably.
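As a simplified sketch, here is what a tool invocation might look like on the wire. The tool name and arguments are hypothetical, and the exact message shapes are defined by the MCP specification rather than this example.

// An agent invoking a tool exposed by an MCP server (names and arguments are hypothetical).
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "search_documents", // a tool the server advertised via tools/list
    arguments: { query: "quarterly revenue" },
  },
};

// An illustrative response: the server returns the tool's output as structured content.
const toolCallResponse = {
  jsonrpc: "2.0",
  id: 7,
  result: {
    content: [{ type: "text", text: "3 documents matched the query." }],
  },
};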
This structure enables AI agents to discover, invoke, and coordinate tools with minimal manual setup. JSON-RPC provides the transport. MCP provides the rules that make those interactions consistent. In agentic systems, where reliability and clarity are as important as intelligence, this combination enables scalable and secure cooperation between models, services, and data systems.
Why MCP Uses JSON-RPC
MCP builds on JSON-RPC because it fits the needs of agent communication. JSON-RPC is a lightweight, stateless, and transport-agnostic protocol. It works over any channel where structured text can be exchanged between the client and server. That flexibility enables MCP to operate across local sockets, web connections, or embedded environments without requiring special adaptations.
JSON-RPC’s message structure is simple but expressive. Each call declares a method name, a set of parameters, and an identifier. That pattern matches how agents trigger tools: call a defined function, pass arguments, and wait for results. The agent does not need to manage URLs, query strings, or other web details.
JSON-RPC also supports notifications and batching. These features let agents batch several tool calls into one exchange or send updates that don’t require responses. MCP builds on these behaviors to handle multi-step workflows and streaming updates.
By using JSON-RPC, MCP avoids reinventing transport or serialization logic. It inherits a proven standard and layers an AI-specific structure on top. The result is a clear, consistent way for agents to discover tools, invoke them, and share context safely — a practical bridge between human-readable protocols and machine-driven intelligence.
RPC and the Future of Agentic AI
Agentic AI depends on coordination. Models need to call tools, share results, and make decisions based on live data. RPC gives them a precise way to do that. It turns abstract actions into structured function calls that can run across machines and environments.
MCP utilizes RPC to provide agents with a common language. Each tool becomes a callable method with clear inputs and outputs. Each agent can act as both a client and a server, creating networks of cooperating systems instead of isolated models.
This design scales naturally. Multiple agents can discover tools, exchange capabilities, and delegate tasks through the same JSON-RPC framework. Developers can extend MCP without breaking compatibility simply by defining new methods under a shared contract.
In this way, RPC becomes more than a transport mechanism — it becomes the backbone of digital reasoning. It connects models to the real world through structured action. Understanding RPC helps explain not just how MCP works, but how the next generation of intelligent systems will communicate and evolve.
Closing Thoughts
APIs started as simple bridges between programs. Over time, they evolved from structured XML messages to flexible JSON requests and now to the precise, function-driven calls of RPC. Each step reduced friction and increased clarity in how systems interact.
RPC brings that clarity to AI. It allows agents to treat external capabilities as callable functions, not distant endpoints. JSON-RPC makes this communication simple, portable, and easy to reason about. MCP builds on that base to give agentic systems a shared protocol for discovery, execution, and trust.
Understanding RPC is more than a technical background; it reveals why MCP feels natural to developers and agents alike. Both rely on the same principle: predictable requests, structured responses, and shared understanding. That foundation will guide how AI systems collaborate, scale, and reason in the years ahead.