04 — Verdict

What we conclude

We reviewed 10 verified contracts on Snowscan, decompiled the remaining 16, traced 65 transactions, and read every event log. Here is our assessment; we aim to be fair.

What's genuinely impressive

We want to be clear: there is real engineering here. This is not a fork of Uniswap with a new logo. The protocol addresses a genuine gap in DeFi — structured credit products don't exist on-chain in any meaningful way. And the architecture, while incomplete, shows thoughtful design.

Tranched yield waterfall

The 3-tranche waterfall (Senior 70% / Mezzanine 20% / Equity 10%) with Synthetix-style O(1) claiming is correctly implemented, and the priority-of-payments logic is sound. Reentrancy protections on invest() and withdraw(), which atomically update yield state, prevent dilution attacks. This is the most complete subsystem.
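The Synthetix-style claiming pattern can be sketched as follows. This is our own minimal model for illustration, not the protocol's code: the class and method names are hypothetical, and Solidity fixed-point math is replaced with Python floats.

```python
class Tranche:
    """Toy model of O(1) yield claiming: one global accumulator and
    per-investor debt snapshots, with no per-investor loops."""

    def __init__(self):
        self.total_shares = 0
        self.acc_yield_per_share = 0.0   # cumulative yield per share, ever
        self.shares = {}                 # investor -> share balance
        self.debt = {}                   # investor -> yield excluded at entry

    def distribute(self, amount):
        # O(1): each distribution bumps the accumulator once,
        # instead of crediting every investor individually
        if self.total_shares:
            self.acc_yield_per_share += amount / self.total_shares

    def invest(self, who, amount):
        # snapshot the accumulator so yield earned before entry is not
        # claimable by the new shares (the anti-dilution property)
        self.shares[who] = self.shares.get(who, 0) + amount
        self.debt[who] = self.debt.get(who, 0.0) + amount * self.acc_yield_per_share
        self.total_shares += amount

    def claimable(self, who):
        return self.shares.get(who, 0) * self.acc_yield_per_share - self.debt.get(who, 0.0)
```

With this bookkeeping, a late entrant cannot dilute earlier yield: if A invests 100, 10 in yield arrives, B invests 100, and another 10 arrives, A can claim 15 and B only 5.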

CDS factory pattern

The CDSRegistry creating standalone CDS contracts via CREATE is a clean design. Each CDS has a proper lifecycle (fund → buyProtection → payPremium → settle/expire). CDS #0 with 9 premium payments shows the mechanics actually work. The pattern of a factory producing autonomous financial instruments is architecturally sound.
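That lifecycle can be modeled as a small state machine. The state names and transition table below are our own assumptions for illustration; the deployed contracts' internal state encoding is not verified.

```python
from enum import Enum, auto

class CDSState(Enum):
    CREATED = auto()
    FUNDED = auto()    # seller has posted the payout pool
    ACTIVE = auto()    # buyer has bought protection; premiums accrue
    SETTLED = auto()   # credit event triggered a payout
    EXPIRED = auto()   # reached maturity without a credit event

# fund -> buyProtection -> payPremium (repeatable) -> settle | expire
ALLOWED = {
    CDSState.CREATED: {CDSState.FUNDED},
    CDSState.FUNDED:  {CDSState.ACTIVE},
    CDSState.ACTIVE:  {CDSState.ACTIVE, CDSState.SETTLED, CDSState.EXPIRED},
    CDSState.SETTLED: set(),
    CDSState.EXPIRED: set(),
}

def transition(state, nxt):
    """Advance the CDS, rejecting any move the lifecycle does not allow."""
    if nxt not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state.name} -> {nxt.name}")
    return nxt
```

Replaying CDS #0's observed history (fund, buyProtection, then nine payPremium calls) traverses this machine without hitting an illegal transition.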

Hub-spoke collateral model

The VaultRegistry (hub) and CollateralManager (spoke) design with 3-layer message authentication for cross-chain operations is well-thought-out. The 110% collateral ratio with 500bps liquidation penalty and 300bps buffer on the spoke side shows understanding of cross-chain timing risks. The design is good even though it's never been exercised.
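The quoted parameters imply concrete liquidation math, sketched here in integer basis-point arithmetic. The constants come from the audit findings above; the function shapes are our own assumptions.

```python
BPS = 10_000
MIN_RATIO_BPS = 11_000     # 110% minimum collateral ratio
LIQ_PENALTY_BPS = 500      # 5% liquidation penalty
SPOKE_BUFFER_BPS = 300     # extra 3% buffer on the spoke for cross-chain lag

def is_liquidatable(collateral_value, debt_value, on_spoke=False):
    # a position is unsafe when collateral / debt falls below the threshold
    threshold = MIN_RATIO_BPS + (SPOKE_BUFFER_BPS if on_spoke else 0)
    return collateral_value * BPS < debt_value * threshold

def seizure_value(repaid_debt):
    # a liquidator repaying `repaid_debt` may seize collateral worth
    # the repaid amount plus the 5% penalty
    return repaid_debt * (BPS + LIQ_PENALTY_BPS) // BPS
```

The buffer means the same position can be safe on the hub but liquidatable on the spoke: collateral worth 112 against debt of 100 clears the 110% hub threshold but not the 113% spoke threshold, giving the spoke room to act before cross-chain state catches up.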

AI governance timelocks

The pattern of AI agents proposing actions that go through timelocks with governance veto is the right approach for AI-in-the-loop DeFi. Rate limiting on credit event reports, minimum confidence thresholds, and proposal expiry show mature thinking about AI safety in financial systems.
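The propose-timelock-veto pattern can be sketched as below. The delay, confidence threshold, and expiry values here are illustrative assumptions; we could not verify the actual parameters on-chain.

```python
TIMELOCK_DELAY = 24 * 3600        # assumed 24h execution delay
MIN_CONFIDENCE_BPS = 8_000        # assumed 80% minimum agent confidence
PROPOSAL_TTL = 7 * 24 * 3600      # assumed 7-day window after the delay

class AgentProposal:
    """Toy model: an AI agent queues an action; governance can veto it
    during the delay; stale proposals expire unexecuted."""

    def __init__(self, action, confidence_bps, now):
        if confidence_bps < MIN_CONFIDENCE_BPS:
            raise ValueError("agent confidence below minimum threshold")
        self.action = action
        self.eta = now + TIMELOCK_DELAY       # earliest execution time
        self.expiry = self.eta + PROPOSAL_TTL
        self.vetoed = False

    def veto(self):
        # governance can cancel any queued agent action
        self.vetoed = True

    def execute(self, now):
        if self.vetoed:
            raise PermissionError("vetoed by governance")
        if now < self.eta:
            raise RuntimeError("timelock has not elapsed")
        if now > self.expiry:
            raise RuntimeError("proposal expired unexecuted")
        return self.action()
```

The key property is that no agent action reaches the chain state without both a confidence gate at submission and a human-exercisable window before execution.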

What doesn't match reality

"AMM bonding curve pricing" for CDS

This is the clearest factual discrepancy. The marketing site says CDS has "Bonding curve pricing auto-adjusts with demand." The deployed CDS contracts use a fixed rate of 250 basis points set at creation time. No AMM, bonding curve, or dynamic pricing logic exists in the decompiled bytecode (CDSRegistry is not verified on Snowscan). This feature may be planned for a future version, but it is not deployed.
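For concreteness, the pricing that is actually deployed amounts to a one-line fixed-rate formula. This is our sketch of the decompiled behavior, with an illustrative notional.

```python
RATE_BPS = 250  # fixed at CDS creation; no repricing logic found in the bytecode

def annual_premium(notional):
    # a flat 2.5% of notional, independent of demand, utilization,
    # or any bonding-curve state
    return notional * RATE_BPS // 10_000
```

A 1,000,000-unit notional pays 25,000 per year whether the pool is empty or fully utilized; a bonding curve would instead make this rate a function of demand.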

"Cross-chain margin engine via ICM/Teleporter"

The Messenger contract is custom-built (not the standard Avalanche Teleporter, though it implements the receiver interface). It has sent zero messages. Both hub and spoke are on the same chain. The architecture is designed for cross-chain — but describing it as a working cross-chain engine goes beyond what's deployed. Deploying to a second L1 in a 6-week hackathon may not have been feasible, but the marketing doesn't make that distinction.

"A live frontend you can interact with today"

As of four days after finals, the frontend redirects to a waitlist. The "Try the Live App" button leads to an email form reading "you@institution.com." The contracts are on a public testnet and can be called directly. The team may be planning a staged rollout, but the marketing claim of a live, interactive frontend does not match the current experience.

Illustrative TVL in UI mockup

The marketing site shows a UI mockup displaying "Total TVL: $1.7M" with Senior $1.0M, Mezzanine $500K, Equity $200K. The actual testnet TVL is ~$206K (200K in Senior, 6K in Equity, zero in Mezzanine). Real investments exist, but the mockup numbers are illustrative and roughly 8x the on-chain reality.

What we couldn't find

Full repository & test suite

10 of 26 contracts have verified source on Snowscan, but no GitHub repository, test suite, or build artifacts are publicly linked. The claimed 692 tests and 10K fuzz runs therefore remain unverifiable outside the Build Games judging process.

HedgeRouter

Listed as a feature ("Atomic invest-and-hedge router") but not found among deployed contracts. May exist in source but wasn't deployed during the competition.

Broad user participation

One test user (0x1e3f) completed tranche investments (200K Senior, 6K Equity). The other four hold MockUSDC but did not invest. Only one address exercised the core invest flow.

AI agent activity

Zero on-chain proposals, detections, or governance actions. The infrastructure is deployed but the AI integration layer hasn't produced on-chain activity.

External oracle feeds

All prices are admin-set via AssetRegistry. No Chainlink, no TWAP, no external price source — appropriate for testnet but notable for a protocol targeting institutional use.

Security audit

No security audit mentioned on the website. Expected for a 6-week hackathon project, but relevant context for anyone considering the protocol's maturity.

What we'd ask the team

  1. Will the source code be made public? A GitHub repository was submitted for Build Games judging. Publishing it would let the community independently verify the claims of 692 tests and 10K fuzz runs.
  2. What does "57 contracts" count? We found 29 on-chain. If the remaining 28 are interfaces, libraries, or abstract contracts, clarifying this would address a common question.
  3. What's the timeline for AMM/bonding curve pricing? The deployed CDS contracts use fixed-rate pricing. Is dynamic pricing on the roadmap, or was it described aspirationally?
  4. When will the frontend open? The contracts are on a public testnet. Is there a planned timeline for removing the waitlist gate?
  5. Has the cross-chain system been tested off-chain? Zero messages on-chain through the Messenger. Was cross-chain functionality validated in a local or staging environment?

The bottom line

The architecture is real. Some of the marketing claims go beyond what's on-chain.

Meridian Protocol demonstrates genuine understanding of structured credit mechanics, built in a 6-week competition window. The yield waterfall, CDS lifecycle, and hub-spoke collateral model are well-designed. The Build Games judges evaluated this on "long-term potential" and "technical execution relative to the project's stage" — and the architectural ambition is clear.

Where we see a gap is between the marketing language and the on-chain reality. The CDS "AMM bonding curve" is a fixed rate in bytecode. The "cross-chain margin engine" has sent zero messages on a same-chain deployment. The "live frontend" is behind a waitlist. These may reflect roadmap aspirations presented as current features, or features that exist in source code but weren't deployed to testnet.

This audit presents what we found. The contracts are on a public testnet and every finding here is independently verifiable. We encourage the team to publish their source code so the full picture — including the 692 tests and features not yet deployed — can be evaluated alongside the on-chain evidence.

All data sourced from Avalanche Fuji testnet (chain 43113). Every finding is independently verifiable. Contract addresses are complete and untruncated throughout this site.