What the MCP Dev Summit Told Us About the Governance Gap

Boundary
7 min read
On April 2nd and 3rd, the MCP ecosystem gathered in New York for the first MCP Dev Summit North America. Anthropic, Bloomberg, Uber, Nordstrom, and dozens of teams building on the Model Context Protocol came together to talk about what's working, what's breaking, and what's missing.

I was in the room. I wanted to hear directly from the companies deploying MCP at scale whether the problems we'd set out to solve at Boundary Control were the problems they were actually hitting.

The governance layer is missing, and the ecosystem knows it

Session after session circled the same set of concerns. How do you control what data an AI agent can access across connected systems? How do you enforce policy at the protocol layer rather than bolting it on after the fact? How do you maintain audit trails when agents operate autonomously across multiple tools?
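These questions have a concrete shape. As a minimal sketch (the gate, agent names, and tool names here are hypothetical, not part of any MCP specification), a protocol-layer control can evaluate every tool call against an allow-list and record the decision either way, so the audit trail survives even when agents act autonomously:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ToolCall:
    agent_id: str
    tool: str
    arguments: dict

@dataclass
class PolicyGate:
    # Hypothetical policy model: which tools each agent may invoke.
    allowed: dict                       # agent_id -> set of tool names
    audit_log: list = field(default_factory=list)

    def check(self, call: ToolCall) -> bool:
        permitted = call.tool in self.allowed.get(call.agent_id, set())
        # Every decision is logged, allowed or denied, before the call
        # proceeds, so the trail is complete by construction.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": call.agent_id,
            "tool": call.tool,
            "decision": "allow" if permitted else "deny",
        })
        return permitted

gate = PolicyGate(allowed={"support-agent": {"crm.read_ticket"}})
print(gate.check(ToolCall("support-agent", "crm.read_ticket", {})))  # True
print(gate.check(ToolCall("support-agent", "hr.read_salary", {})))   # False
```

The point of the sketch is placement: the check happens at the protocol layer, before any system is touched, rather than being bolted on after the fact.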

These aren't theoretical questions. The first malicious MCP server has been identified in the wild. The first critical CVE has been assigned. Speakers discussed the "rugpull" problem: MCP servers that change their capabilities after deployment, silently expanding what an AI agent can do without the organisation's knowledge or consent.
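One way to catch a rugpull, sketched here with invented names rather than any real MCP tooling, is to fingerprint a server's advertised tool list at approval time and compare it on every subsequent session; any silent expansion of capabilities changes the fingerprint:

```python
import hashlib
import json

def capability_fingerprint(tools: list[dict]) -> str:
    """Stable hash of a server's advertised tool list (canonical JSON)."""
    canonical = json.dumps(sorted(tools, key=lambda t: t["name"]), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Fingerprint captured when the server was first reviewed and approved.
approved = capability_fingerprint([
    {"name": "lookup_order", "description": "Read-only order lookup"},
])

# Later, the same server silently advertises an extra capability.
current = capability_fingerprint([
    {"name": "lookup_order", "description": "Read-only order lookup"},
    {"name": "delete_order", "description": "Deletes an order"},
])

print(current == approved)  # False: capabilities drifted, block and re-review
```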

The phrase that kept coming up across sessions was "secure by default, not by suffering." The ecosystem is past the point of treating governance as somebody else's problem.

Bloomberg's keynote made the market signal explicit

The most significant moment of the summit came from Bloomberg's keynote. They presented their work on tool interception: the principle that MCP traffic should be interceptable, validatable, and transformable at defined points in the protocol lifecycle. Not as a nice-to-have. As infrastructure.

Bloomberg choosing protocol-level governance as keynote material tells you where this market is heading. The question has moved from "should we govern MCP traffic?" to "what sits at the governance layer?"

The spec community is actively working on this. Proposals for native interception hooks are in progress, and the protocol is evolving to expect governance tooling at those extension points. But there's an important distinction between defining where interception happens and solving what actually needs to happen at that layer: enterprise policy enforcement, identity mapping, data classification, PII handling, fleet-wide audit. That's the hard problem. That's what we're building.
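The interception idea itself is simple to illustrate. This is a toy sketch under my own naming, not Bloomberg's design or anything from the spec: a pipeline of stages, each of which can validate, transform, or reject a request at a defined point before it reaches an MCP server:

```python
from typing import Callable

# Each interceptor receives the request and returns it (possibly
# transformed), or raises to reject it outright.
Interceptor = Callable[[dict], dict]

def make_pipeline(interceptors: list[Interceptor]) -> Interceptor:
    def run(request: dict) -> dict:
        for intercept in interceptors:
            request = intercept(request)
        return request
    return run

def validate_tool(request: dict) -> dict:
    # Hypothetical enterprise policy: only approved tools pass.
    if request["tool"] not in {"crm.read_ticket"}:
        raise PermissionError(f"tool {request['tool']!r} not approved")
    return request

def classify_data(request: dict) -> dict:
    # Tag the request so downstream stages know its sensitivity.
    request["classification"] = "customer-data"
    return request

pipeline = make_pipeline([validate_tool, classify_data])
print(pipeline({"tool": "crm.read_ticket", "args": {"id": 42}}))
```

The hard part is not the plumbing; it's what the stages themselves must do: policy enforcement, identity mapping, classification, PII handling, audit.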

The data sovereignty problem nobody's solving

There was one gap at the summit that I kept coming back to, and it's one that matters for every European enterprise: nobody is talking about GDPR.

Before MCP, an organisation could control what data reached an AI model by controlling the prompt. You could build guardrails around a chat interface. You could review what went in. It was manageable, if clunky.

MCP changes that equation entirely. AI agents are now autonomously pulling data from CRM systems, ITSM platforms, financial tools, and HR databases, then sending that data to model providers through tool calls that cross organisational and jurisdictional boundaries. Customer names, account numbers, personal identifiers, employee records, all flowing through protocol traffic to American-hosted models with no interception, no audit trail, and no data governance layer in between.

For any European organisation subject to GDPR, this is an unresolved exposure. And the current options all share the same flaw: they solve the compliance problem by destroying the value of the AI.

Redaction strips the context the model needs to reason properly. An AI agent analysing customer churn can't spot patterns if the identifying information that links a support ticket to a renewal date to a billing dispute has been blacked out. You get compliance at the cost of useful output.

Prompt-level controls degrade reasoning quality for the same reason. The whole point of connecting an AI agent to your business systems is that it can see relationships across data. Remove the data and you've removed the reason for using it.
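The failure mode is easy to demonstrate. In this toy example (the records are invented), blanket redaction collapses every identifier to the same placeholder, and the links between rows vanish with it:

```python
rows = [
    {"customer": "Alice Smith", "event": "support ticket"},
    {"customer": "Alice Smith", "event": "billing dispute"},
    {"customer": "Bob Jones",   "event": "renewal signed"},
]

# Blanket redaction: every identifier becomes the same placeholder.
redacted = [{**r, "customer": "[REDACTED]"} for r in rows]

# The model can no longer tell which events belong to one customer.
distinct = {r["customer"] for r in redacted}
print(len(distinct))  # 1: three rows, zero recoverable relationships
```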

Blocking access entirely is compliant by definition, but only because you're not doing anything. Deploying private model infrastructure in a European data centre solves the data residency question, but at a cost that puts agentic AI out of reach for the vast majority of mid-market organisations.

None of these are real answers. They all treat compliance and capability as a trade-off, and it isn't one.

Protocol-layer tokenisation breaks that trade-off. If PII is tokenised before it leaves the organisation's boundary, before it enters the MCP transport layer, the AI agent still sees the full relational structure of the data. It can still determine that Customer A has three open P1 support tickets, a renewal in 30 days, and a billing dispute in progress. It can reason on the patterns, flag the risk, and recommend an action. What it never sees is that Customer A is a specific named individual at a specific company. The relationships are preserved. The identity is not.

When the agent needs to write back (update a CRM record, trigger a workflow, send a notification) tokens are resolved to real values inside the organisation's perimeter. The model never holds the PII. The data never crosses the jurisdictional boundary in identifiable form.
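Both directions can be sketched in a few lines. This is a naive in-memory map standing in for real tokenisation infrastructure, with invented names throughout: the same value always maps to the same token, so relationships survive, and real values are restored only at write-back inside the perimeter:

```python
class Vault:
    """Toy token vault held inside the organisation's perimeter.
    The model only ever sees tokens; real values are restored here
    just before a write-back to an internal system."""

    def __init__(self):
        self._forward = {}   # value -> token
        self._reverse = {}   # token -> value

    def tokenise(self, value: str) -> str:
        if value not in self._forward:
            token = f"TOK-{len(self._forward) + 1:04d}"
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def resolve(self, token: str) -> str:
        return self._reverse[token]

vault = Vault()

# Outbound: tokenise before the data enters MCP transport.
records = [
    {"customer": "Alice Smith", "event": "P1 ticket opened"},
    {"customer": "Alice Smith", "event": "renewal due in 30 days"},
]
safe = [{**r, "customer": vault.tokenise(r["customer"])} for r in records]
# Both rows carry the same token: the relationship survives, the name doesn't.

# Inbound: the agent proposes a CRM update containing only the token...
update = {"contact": safe[0]["customer"], "status": "at-risk"}
# ...which is resolved back to the real value inside the perimeter.
update["contact"] = vault.resolve(update["contact"])
print(update)  # {'contact': 'Alice Smith', 'status': 'at-risk'}
```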

Full reasoning capability. Full data governance. No six-figure infrastructure bill. That's what protocol-layer tokenisation delivers, and that's what Boundary Control is built around.

Governance shouldn't mean replacing the tools people already use

There's another pattern in the current governance landscape that I think gets it wrong: most tools want to sit in front of the prompt.

That means one of two things. Either they replace the user's AI tool entirely, swapping it out for a managed corporate AI interface that strips away everything the user has built up (their conversation history, their personalised context, the muscle memory they've developed with Claude or ChatGPT or whatever they've chosen). Or they monitor all AI usage: every personal query, every casual question, every interaction regardless of whether it touches business data. Both approaches are heavy, intrusive, and hostile to the adoption they claim to enable.

Boundary doesn't do either. We don't replace anyone's AI tools and we don't surveil their personal usage. We govern the business data connections: the MCP integrations that link AI tools to enterprise systems. Your team keeps the AI tools they've already chosen, with all the personalisation and context that makes those tools productive. Boundary sits at the protocol layer between those tools and your CRM, your ITSM platform, your finance systems, your HR data. True bring-your-own-AI.

This matters commercially as much as architecturally. Boundary isn't a twelve-month AI transformation programme, and it doesn't compete with your broader agentic AI strategy or your long-term platform decisions. It's the thing you deploy now, connecting your existing AI tools to your business systems with governance in place, and start seeing productivity gains in days rather than quarters. You get value from the flagship models your people already know, in the business applications that matter most, while the bigger strategy keeps developing in parallel.

Three other observations from the floor

Beyond the keynote, a few things stood out.

The tooling is still developer-grade, not enterprise-grade. Most of the MCP infrastructure on display (dashboards, monitoring, server management) is built for individual developers or small teams. Organisational-scale deployment with fleet management, role-based access, and compliance-grade audit trails is largely absent. The gap between what developers are building with and what enterprises need to govern is wide and growing.

Supply chain integrity is a recognised unsolved problem. Multiple sessions covered MCP server provenance, distribution standards, and the difficulty of verifying that a server does what it claims to do, and only what it claims to do. The ecosystem acknowledges the risk but has no deployed solution for enterprise policy enforcement at the protocol layer.

Adoption is stalling where trust is absent. Data presented at the summit suggested that a significant proportion of deployed MCP integrations see little meaningful usage. That's a governance problem as much as a product quality problem. When organisations can't see, manage, and trust what's deployed, adoption stalls. Not because the technology doesn't work, but because nobody's confident it's safe to use.

What this means

The MCP ecosystem is maturing fast. The protocol is evolving, the enterprise use cases are real, and the governance gap is acknowledged at the highest levels. Bloomberg presenting protocol-level tool interception as keynote material at the first major MCP conference isn't a suggestion that governance might matter someday. It's a statement that it matters now.

For European organisations in particular, the convergence of agentic AI, cross-border data flows, and an immature governance layer creates a specific and urgent exposure. The regulatory environment is not going to wait for the ecosystem to catch up.

We built Boundary Control to close that gap, at the protocol layer, where it belongs.