By Lars Kroehl, Managing Director, CryptoKRI GmbH / MolTrust
The headline and the part that isn't the headline
On April 17, 2026, PYMNTS Intelligence and Trulioo released Scale Amplification: How Revenue Amplifies Agent-Driven Identity. The headline number: 71% of firms with more than $1 billion in annual revenue report agent-driven identity threats.
The other 29% probably just haven't noticed yet.
The report — a clean, well-executed piece of research built on a survey of 350 global companies across financial services, gig platforms, marketplaces, retail, software, and travel — is polite about what the numbers mean. Its two most interesting findings sit in the middle of the document and describe an architectural problem that the industry has so far treated as a tooling problem. Those two findings deserve a less polite reading.
Finding #1: Threats scale with revenue. With better equipment.
Bigger firms are not just attacked more often. They're attacked with better tooling. Deepfakes, AI-generated identity documents, automated scraping — the premium product line of agent fraud — are concentrated at the top of the revenue distribution. Synthetic identity fraud, the Frankenstein-ID classic, stays with the smaller firms. There is, apparently, market segmentation in fraud too.
There's nothing random about it. As a company grows across products, channels, geographies, and integrations, its attack surface grows with it. More APIs. More partner agents. More jurisdictions. Each verification point is a potential gap, and at scale there are a lot of them.
Revenue doesn't cause fraud. Revenue buys complexity, and complexity ships with free exposure.
Finding #2: In-house is the most expensive option
Here the report becomes interesting in a way the press release is not.
Companies that keep identity management strictly internal report the highest rates of Know-Your-Agent incidents and losses. Hybrid and third-party models report fewer.
The largest firms are also the most likely to keep KYC and KYB in-house — for reasons that sound entirely sensible in a boardroom: control, compliance lineage, data residency, the CISO would prefer it. And the largest firms report the most incidents.
This is not a coincidence. It's a design flaw with nicer carpeting.
An internal-only identity stack is optimized for a closed world where every legitimate counterparty is known, stable, and managed by the same organization. Autonomous agents don't live in that world. They arrive at your verification endpoint without a prior relationship, without a human in the loop at the moment it matters, and often without a shared identity provider with anyone you've ever met. A system that was built to trust only what it issued itself has exactly two responses to something new: reject it, or approve it with too much confidence.
The same report confirms both happening at once. Larger firms report rising false positives, rising decline rates, and rising onboarding delays. The defense isn't failing silently. It's failing loudly, and it's costing customers.
This is an architecture problem. Tools can't fix it.
The reflex on reading a report like this is to go shopping. Better deepfake detection. Better document forensics. Better behavioral biometrics.
Buy them. They matter. But they solve the wrong half of the problem.
An agent-identity question has two parts:
- Is this agent real? — detection (deepfakes, synthetic docs, bot patterns)
- Is this agent authorized — by whom, to do what, for how long? — authority
Detection lives inside a platform. It doesn't travel. When the same agent shows up at a different platform tomorrow, the verification work starts over, and any signal the first platform learned about that agent stays locked inside the first platform. Great for the vendor. Suboptimal for the agent, and for anyone trying to transact with it.
Question #2 is not a detection problem at all. It's a credential problem. Does this agent carry a verifiable, portable, revocable, time-bound mandate from a known principal? You cannot pattern-match your way to that answer. You can only check a credential the agent presents against an anchored root of trust.
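The difference is mechanical: a detection model outputs a score, a credential check is a deterministic comparison against a root of trust. A minimal sketch of that check, assuming a registry of known principals whose hash is anchored publicly — all names and field choices here are illustrative, not the MolTrust SDK API:

```typescript
import { createHash } from "crypto";

// Hypothetical anchored root of trust: a hash of the known-issuer registry,
// published somewhere tamper-evident so any verifier can compare it locally.
const ANCHORED_REGISTRY_HASH = createHash("sha256")
  .update(JSON.stringify(["did:example:acme-corp", "did:example:globex"]))
  .digest("hex");

// The verifier proves its local copy of the registry matches the anchor.
function registryMatchesAnchor(localRegistry: string[]): boolean {
  const localHash = createHash("sha256")
    .update(JSON.stringify(localRegistry))
    .digest("hex");
  return localHash === ANCHORED_REGISTRY_HASH;
}

// The authority question: did a known principal issue this mandate,
// and is the mandate still live? No pattern matching anywhere.
function isAuthorized(
  issuerDid: string,
  expiresAt: number,
  revoked: boolean,
  localRegistry: string[]
): boolean {
  return (
    registryMatchesAnchor(localRegistry) && // registry is the anchored one
    localRegistry.includes(issuerDid) &&    // issuer is a known principal
    !revoked &&                             // mandate not revoked
    Date.now() < expiresAt                  // mandate not expired
  );
}
```

Note that every input to the decision is either carried by the agent or held by the verifier; no call-back to the issuer is needed at verification time.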
Internal-only cannot solve this. Not because internal engineering is bad — but because authority between organizations is, by definition, a cross-organizational question. A firewall is the wrong shape of answer.
What portable agent trust actually looks like
At MolTrust we've been building this layer for the past year — not as a replacement for detection tooling, but as the layer underneath it. The stack is deliberately boring and deliberately open:
- W3C Decentralized Identifiers (DIDs) — every agent gets a resolvable, cryptographic identity not controlled by any single platform.
- W3C Verifiable Credentials (VCs) — the agent's principal issues a signed, revocable mandate.
- On-chain anchoring (Base L2) — any verifier can check against a public, tamper-evident root of trust, without calling back to the issuer.
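Concretely, a mandate credential follows the ordinary W3C Verifiable Credential shape. A hypothetical example — the envelope structure (`@context`, `issuer`, `credentialSubject`, `proof`) is from the W3C VC Data Model, but every identifier, type name, and value below is invented for illustration:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "AgentMandateCredential"],
  "issuer": "did:moltrust:principal-acme",
  "issuanceDate": "2026-04-01T00:00:00Z",
  "expirationDate": "2026-05-01T00:00:00Z",
  "credentialSubject": {
    "id": "did:moltrust:agent-7f3a",
    "mandate": "purchase",
    "constraints": { "maxSpend": "500.00 CHF" }
  },
  "proof": {
    "type": "Ed25519Signature2020",
    "verificationMethod": "did:moltrust:principal-acme#key-1",
    "proofValue": "z..."
  }
}
```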
On top of that foundation sits the Agent Authorization Envelope (AAE), which packages the three things a verifier actually needs to make a decision:
Agent Authorization Envelope
- MANDATE — what the agent is allowed to do
- CONSTRAINTS — the limits (spending, counterparties, required signals)
- VALIDITY — when the mandate starts, when it expires, when it was revoked
This turns the authority question into a yes-or-no answer a verifier computes locally, in milliseconds, without a central gatekeeper and without platform lock-in.
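In code, that local decision reduces to a small pure function over the envelope. A sketch under assumed field names — the real AAE schema lives in TechSpec v0.8; these types are illustrative, not the spec:

```typescript
// Illustrative AAE shape; field names are assumptions, not the TechSpec schema.
interface AgentAuthorizationEnvelope {
  mandate: { actions: string[] };                               // what the agent may do
  constraints: { maxAmount: number; counterparties: string[] }; // the limits
  validity: { notBefore: number; notAfter: number; revokedAt?: number }; // time bounds
}

// The whole authority question, answered locally: a pure yes/no,
// no network call, no central gatekeeper.
function authorize(
  env: AgentAuthorizationEnvelope,
  request: { action: string; amount: number; counterparty: string },
  now: number = Date.now()
): boolean {
  const { mandate, constraints, validity } = env;
  return (
    mandate.actions.includes(request.action) &&            // MANDATE
    request.amount <= constraints.maxAmount &&             // CONSTRAINTS
    constraints.counterparties.includes(request.counterparty) &&
    now >= validity.notBefore &&                           // VALIDITY
    now <= validity.notAfter &&
    validity.revokedAt === undefined
  );
}
```

The point of the pure-function shape is auditability: the same envelope and the same request always produce the same answer, on any verifier.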
It also happens to map cleanly onto where regulators are heading. Singapore's IMDA Model AI Governance Framework for Agentic AI (January 22, 2026) describes exactly this pattern: delegated authority, explicit scope, verifiable provenance. MolTrust is currently the only production reference implementation with public partners, on-chain anchoring, and conformance to emerging A2A specifications.
This wasn't planned that way. It's just what happens when you build the thing before announcing the thing.
On Trulioo, since someone will ask
Trulioo is doing the most disciplined work defining the Know-Your-Agent category, and the PYMNTS reports are the cleanest industry data on where agent trust actually breaks. None of that is competitive with an open trust layer. It's the evidence that the problem is real, large, and no longer theoretical.
What the reports don't quite say out loud, but will: detection inside a platform is necessary and insufficient. The firms that are already lowering their incident rates with hybrid and third-party models are doing so because they've started to accept trust signals their own platform didn't generate. That is step one toward a portable trust layer. An open, standards-based, cryptographically anchored layer is the eventual endpoint.
You can get there in increments. You can even get there while the vendor pitch you picked last year keeps running in parallel. What you cannot do is get there by writing another internal runbook.
If the 71% includes you
- Spec: TechSpec v0.8, on-chain at Base L2 Block 44745864
- SDK: @moltrust/sdk v1.1.0 on npm — middleware, register, verify (14/14 tests passing)
- Conformance: A2A v0.3 Conformance on-chain (Blocks 44543913, 44543919)
- Resolver: uresolver.moltrust.ch (DIF Universal Resolver driver for did:moltrust)
- Pricing: Developer CHF 29 · Startup CHF 149 · Business CHF 499
- Contact: lars@moltrust.ch
The architecture is built, the anchor is public, the code is shipping. No decks, no call-backs, no please-register-for-the-webinar. Read the spec, install the SDK, or don't.
Either way, the 71% isn't going to improve on its own.
Sources: PYMNTS Intelligence & Trulioo, "Scale Amplification: How Revenue Amplifies Agent-Driven Identity" (April 2026); "How Enterprises Can Build a 'Know Your Agent' Defense" (March 2026); "Identity at Scale" (March 2026); IMDA Singapore, "Model AI Governance Framework for Agentic AI" (January 2026).
Build portable agent trust into your stack
The architecture is open. The spec is on-chain. The SDK is on npm.