What happened
A GitHub account named AnalogIguana began impersonating the OpenClaw project this week, posting fake discussions offering a "CLAW Token" distribution. The messages looked like official OpenClaw communications. Users who followed the link landed on token-claw.xyz — a wallet drainer. The account has since been reported to GitHub's abuse team, and the OpenClaw maintainers are aware.
This is not a sophisticated attack. It doesn't need to be. The reason it works is structural: there is no way to verify that a GitHub account, a plugin, or a skill is published by the same identity you trusted yesterday. Anyone can impersonate anyone. The only defense is vigilance — and vigilance doesn't scale.
What OpenClaw did about it
In the same week, OpenClaw's latest release shipped GHSA-99qw-6mr3-36qr: a fix that disables implicit workspace plugin auto-load, so cloned repositories can no longer execute plugin code without an explicit trust decision from the user.
That's the right call. It raises the bar for unauthenticated code execution — the kind of attack that cost developers data and credentials in the ClawHavoc and ToxicSkills incidents earlier this year.
But it doesn't solve the identity problem. The auto-load fix stops a repository from running code silently. It doesn't tell you who published the code you're about to explicitly trust. The CLAW token scam doesn't need auto-load to work — it needs users who can't verify identity.
🐝 What MolTrust built
We've been building the identity layer. This week, the Swarm Intelligence Protocol went live on our network — the first peer-propagated trust system for AI agents built on W3C DIDs and Verifiable Credentials, anchored on Base L2.
Here's the live network state right now:
```json
// Live — api.moltrust.ch/swarm/stats
{
  "total_agents": 13,
  "total_endorsements": 9,
  "seed_agents": [
    { "label": "TrustScout", "score": 85.0, "grade": "A" },
    { "label": "Ambassador", "score": 77.4, "grade": "B" }
  ],
  "avg_trust_score": 81.2
}
```
Two seed agents are active. Endorsements are growing organically — TrustScout endorses Ambassador after every scan cycle (~12x/day), and Ambassador endorses TrustScout after each verified post. Every endorsement is signed with Ed25519, timestamped, and verifiable on-chain.
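To make the endorsement mechanics concrete, here is a minimal sketch of a signed, timestamped endorsement record. The payload field names are assumptions for illustration, and it uses the `cryptography` package — this is not MolTrust's actual code.

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_endorsement(key: Ed25519PrivateKey, endorser: str, endorsee: str) -> dict:
    """Build an endorsement payload and sign its canonical JSON with Ed25519."""
    payload = {
        "endorser": endorser,
        "endorsee": endorsee,
        "timestamp": int(time.time()),
    }
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(message).hex()}

# Anyone holding the endorser's public key can verify the record:
key = Ed25519PrivateKey.generate()
record = sign_endorsement(key, "did:web:trustscout", "did:web:ambassador")
message = json.dumps(record["payload"], sort_keys=True).encode()
key.public_key().verify(bytes.fromhex(record["signature"]), message)  # raises if tampered
```

Because the payload is serialized canonically (`sort_keys=True`) before signing, any third party can reconstruct the exact signed bytes from the record alone and check the signature independently.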
How the trust formula works
The Phase 2 score combines three weighted signals plus two adjustments:
- Direct score (60%) — peer endorsements weighted by endorser credibility
- Propagated score (30%) — average score of your endorsers. Trusted endorsers propagate trust downward.
- Cross-vertical bonus (10%) — agents verified across multiple verticals (shopping, travel, skills, prediction) score higher
- Interaction bonus — verified on-chain interaction proofs contribute up to +10 points
- Sybil penalty — Jaccard cluster detection identifies collusion rings and penalizes accordingly
Seed agents bootstrap the network. As organic endorsements accumulate, seed weight decreases and the score becomes a genuine reflection of observed behavior — not an assigned label.
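The weighting above can be sketched in a few lines. The 60/30/10 split, the +10 interaction cap, and Jaccard-based cluster detection come from the post; the function names, the 0–100 scale, and how the penalty is applied are illustrative assumptions.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap of two agents' endorser sets — near-identical sets
    are the signature of a collusion ring (Sybil cluster)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def trust_score(direct: float, propagated: float, cross_vertical: float,
                interaction_points: float, sybil_penalty: float) -> float:
    """Combine the Phase 2 signals. Inputs are on a 0-100 scale;
    the interaction bonus is capped at +10 and the result clamped."""
    base = 0.6 * direct + 0.3 * propagated + 0.1 * cross_vertical
    bonus = min(interaction_points, 10.0)
    return max(0.0, min(100.0, base + bonus - sybil_penalty))
```

For example, an agent with a direct score of 85, propagated score of 80, cross-vertical score of 90, and 5 interaction points scores 0.6·85 + 0.3·80 + 0.1·90 + 5 = 89 before any Sybil penalty.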
What this means for OpenClaw users
We've proposed a registerTrustProvider hook for the OpenClaw plugin API in RFC #49971. The idea: any trust provider can plug in, verify agent DIDs before install or delegation, and return a structured result. MolTrust ships as the reference implementation via @moltrust/openclaw.
With that hook, a user installing a skill could ask: is the publisher of this skill the same identity that published the last version I trusted? The CLAW token scam works precisely because that question currently has no answer. With the hook in place, the answer is one API call.
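A hypothetical sketch of the check such a hook could enable. The result fields, thresholds, and decision labels are illustrative assumptions, not the interface proposed in RFC #49971.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustResult:
    publisher_did: str   # W3C DID of the skill's publisher
    trust_score: float   # 0-100, as in the live network stats

def install_decision(result: TrustResult,
                     previously_trusted_did: Optional[str],
                     min_score: float = 60.0) -> str:
    """Return 'block', 'warn', or 'allow' for a skill install or update."""
    if previously_trusted_did and result.publisher_did != previously_trusted_did:
        return "block"  # publisher identity changed since the last trusted version
    if result.trust_score < min_score:
        return "warn"   # identity is continuous, but peer trust is low
    return "allow"
```

The identity-continuity check comes first by design: a high trust score for the wrong identity is exactly the failure mode the CLAW scam exploits.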
The CLAW scam, the ClawHavoc report, the ToxicSkills research, the Oasis Security vulnerability — these are not isolated incidents. They are the same structural gap, expressed in different forms. Agents that can transact cannot yet prove who they are.
🐝 Swarm Intelligence is live
Verify any agent DID. Check trust scores. Watch the network grow.
Live network stats · Read the whitepaper · RFC #49971

Written by the MolTrust Team (CryptoKRI GmbH, Zurich). Follow @MolTrust on X for updates.