A2A, MCP, and the Agentic AI Foundation: The Protocols Shaping Agent Interoperability
Google's A2A, Anthropic's MCP, and OpenAI's AGENTS.md are converging under the Linux Foundation. Here is what each protocol does and where trust fits in.
In December 2025, something unprecedented happened: OpenAI, Anthropic, and Google jointly announced the Agentic AI Foundation (AAIF) under the Linux Foundation. Three companies that compete fiercely on model capabilities agreed to collaborate on infrastructure.
The reason is simple. Agent interoperability is a coordination problem, not a competitive advantage. No single company benefits from a fragmented ecosystem where agents from different providers cannot talk to each other.
Three protocols sit at the core of this effort. Each solves a different layer of the agent interoperability stack.
MCP: The Tool Layer
Anthropic's Model Context Protocol, released in late 2024, defines how AI agents connect to external tools and data sources. Think of MCP as a universal adapter between an LLM and the outside world.
Before MCP, every integration was custom. Want your agent to query a database? Write a custom function. Want it to call an API? Write another one. MCP standardizes this into a server-client pattern where tool providers publish MCP servers, and any MCP-compatible agent can discover and use them.
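To make the pattern concrete, the sketch below shows the shape of an MCP-style tool exchange. MCP frames messages as JSON-RPC 2.0, with `tools/list` for discovery and `tools/call` for invocation; the `query_orders` tool and its schema here are hypothetical examples, not part of the spec.

```python
# Sketch of an MCP-style tool exchange, assuming JSON-RPC 2.0 framing.
# The "query_orders" tool and its input schema are hypothetical.

def list_tools_response():
    """What an MCP server might return for a tools/list request."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "tools": [
                {
                    "name": "query_orders",
                    "description": "Run a read-only query against the orders database.",
                    "inputSchema": {
                        "type": "object",
                        "properties": {"sql": {"type": "string"}},
                        "required": ["sql"],
                    },
                }
            ]
        },
    }

def call_tool_request(name, arguments):
    """What an MCP client sends for a tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

request = call_tool_request("query_orders", {"sql": "SELECT count(*) FROM orders"})
print(request["method"])  # tools/call
```

The point is the symmetry: any client that speaks this framing can discover and invoke any server's tools without a bespoke integration.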
Over 10,000 MCP servers have been deployed as of early 2026. Major products including Claude, ChatGPT, Cursor, VS Code, Gemini, and GitHub Copilot support the protocol.
MCP operates at the tool level. It answers the question: "What can this agent do?"
A2A: The Communication Layer
Google's Agent2Agent protocol, introduced in April 2025, operates one level higher. While MCP connects agents to tools, A2A connects agents to other agents.
The core abstraction is the Agent Card, a JSON document that advertises an agent's identity, capabilities, authentication requirements, and supported interaction modes. When Agent A needs to find a specialist for a task, it reads Agent Cards to identify candidates, then initiates a structured task lifecycle.
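An Agent Card might look roughly like the following. The field names track the general shape of the A2A schema, but this is an illustrative sketch with invented values, not a validated document.

```json
{
  "name": "data-analysis-agent",
  "description": "Runs statistical analysis over tabular datasets.",
  "url": "https://agents.example.com/a2a",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "skills": [
    {
      "id": "analyze-csv",
      "name": "CSV analysis",
      "description": "Summary statistics and anomaly detection for CSV files."
    }
  ]
}
```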
A2A defines tasks as first-class objects with states (submitted, working, completed, failed) and supports both synchronous and streaming interactions. Version 0.3, released July 2025, added gRPC support, signed security cards, and extended SDK coverage.
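Treating tasks as first-class objects means their lifecycle can be modeled as a small state machine. The state names below come from the protocol as described above; the specific transition map is an illustrative assumption, not the normative spec.

```python
# Sketch of the A2A task lifecycle as a state machine.
# States (submitted, working, completed, failed) are from the protocol;
# the allowed-transition map is an illustrative assumption.

VALID_TRANSITIONS = {
    "submitted": {"working", "failed"},
    "working": {"completed", "failed"},
    "completed": set(),  # terminal
    "failed": set(),     # terminal
}

class Task:
    def __init__(self, task_id):
        self.task_id = task_id
        self.state = "submitted"

    def advance(self, new_state):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

task = Task("task-001")
task.advance("working")
task.advance("completed")
print(task.state)  # completed
```

Explicit states are what make both synchronous and streaming interactions workable: a caller can poll or subscribe and always knows whether a task is still in flight.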
The protocol now counts over 150 supporting organizations.
A2A operates at the communication level. It answers the question: "How do agents talk to each other?"
AGENTS.md: The Instruction Layer
OpenAI's contribution is more subtle but equally important. AGENTS.md is a specification for providing project-specific instructions to AI coding agents. It is a structured file (similar to a README) that tells agents how to work within a particular codebase or project.
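A minimal AGENTS.md might look like the following. The section headings and commands shown are common conventions for a hypothetical project, not required fields of the spec.

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install`.
- Run `npm test` before every commit.

## Conventions
- TypeScript strict mode is enabled; do not introduce `any`.
- Keep React components in `src/components/`, one per file.

## Boundaries
- Never edit files under `migrations/` by hand.
- Ask a human before changing CI configuration.
```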
Since its release in August 2025, over 60,000 open-source projects have adopted AGENTS.md files. Frameworks including Cursor, Devin, Gemini CLI, GitHub Copilot, and Jules all read them.
AGENTS.md operates at the instruction level. It answers the question: "How should an agent behave in this context?"
The Missing Layer: Trust
These three protocols cover tools, communication, and instructions. But none of them addresses a critical question: should you trust this agent?
MCP tells you what an agent can do. A2A tells you how to reach it. AGENTS.md tells you how it should behave. But none of them tell you whether it actually delivers on its promises.
Consider a realistic scenario. Your orchestrator agent needs to delegate a data analysis task. It discovers three candidate agents via A2A Agent Cards. All three claim to support the required tools via MCP. All three have AGENTS.md files describing their behavior.
How do you choose? Without a trust layer, you are guessing.
This is the gap that behavioral contracts and trust scoring fill. A PactScore tells you how reliably an agent has performed across hundreds or thousands of prior interactions. PactTerms define exactly what the agent commits to. PactEscrow backs those commitments with real value.
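Selection then becomes a filter-and-rank step over the discovered candidates. The sketch below assumes a hypothetical trust-layer lookup (`get_pact_score`) layered on top of A2A discovery; none of these function names or score values come from a published SDK.

```python
# Hypothetical selection logic: rank A2A candidates by trust score.
# get_pact_score() and the candidate records are illustrative stand-ins.

candidates = [
    {"name": "analyst-a", "tools_ok": True},
    {"name": "analyst-b", "tools_ok": True},
    {"name": "analyst-c", "tools_ok": False},  # missing a required MCP tool
]

_SCORES = {"analyst-a": 0.97, "analyst-b": 0.88, "analyst-c": 0.99}

def get_pact_score(agent_name):
    """Stand-in for a trust-layer lookup over verified interactions."""
    return _SCORES[agent_name]

def choose(agents, min_score=0.9):
    eligible = [
        a for a in agents
        if a["tools_ok"] and get_pact_score(a["name"]) >= min_score
    ]
    # Highest trust score wins; capability claims alone are not enough.
    return max(eligible, key=lambda a: get_pact_score(a["name"]))

print(choose(candidates)["name"])  # analyst-a
```

Note that the highest-scoring agent overall (analyst-c) loses on capability, and the remaining tie is broken by track record rather than by guessing.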
How They Fit Together
The full stack looks like this:
| Layer | Protocol | Question Answered |
|---|---|---|
| Tools | MCP | What can this agent do? |
| Communication | A2A | How do agents talk? |
| Instructions | AGENTS.md | How should it behave here? |
| Trust | PactScore + PactTerms | Should I trust this agent? |
| Accountability | PactEscrow | What happens if it fails? |
Interoperability without trust is just connectivity. The protocols being standardized at AAIF create the plumbing. Trust infrastructure creates the confidence to actually use it.
What This Means for Builders
The standardization effort is moving fast. If you are building agents today:
- Publish an MCP server if your agent provides tools or data access to other agents.
- Create an A2A Agent Card to make your agent discoverable in multi-agent workflows.
- Add an AGENTS.md to any project where AI agents contribute code.
- Build a trust record by registering with a trust layer and accumulating verified interactions.
The agents that participate in all four layers will be the ones that get delegated work in production. The ones that only cover the first three will be discoverable but unverifiable, and in high-stakes domains, that means they will not be chosen.