Fall Risk AI · Research Program

Runtime model identity, artifact identity, and signed verification infrastructure.

Each paper opened a question the previous one couldn't answer. Together they trace the same line: a neural network's structural identity is mathematically distinct from its outputs, its weights' bytes, and its agent credentials — and that distinction is measurable, formally verifiable, and operationally useful.

13 research papers · 4 technical notes · 4 patents · 0 retracted · All open-access on Zenodo

Reading paths

Four entry points through the corpus, by audience and purpose. Each path is three works long.

Core research
13 papers · publication order

Each paper extends a previous question. The natural reading path is in publication order — the program's questions unfolded that way for a reason.

Technical notes
4 notes · operational and definitional

Shorter artifacts. Threat models, formal results, and category-defining clarifications adjacent to the core papers.

Tools and infrastructure

Verify whether a local model artifact's bytes match a signed enrollment record. Free, open source, on PyPI.
211 enrolled models from 22+ publishers across 7 jurisdictions. Two evidence classes.
Signed authority. The single source of truth for every enrolled model record.
Programmatic verification. Five endpoints. Auto-refreshing from the canonical record.
Independent reproduction of every cryptographic claim, from JWKS, to per-record JWS, to manifest digest.
First public end-to-end Trustfall Lite scan. 220 GB of local models classified honestly.
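The core of a byte-level artifact check like the one described above can be sketched in a few lines of Python. This is illustrative only, not the published tool: the record structure and the `sha256` field name are assumptions, and a real verifier would first validate the enrollment record's JWS signature against the publisher's JWKS before trusting the digest it carries.

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB model artifacts
    never need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def matches_enrollment(path: str, record: dict) -> bool:
    """Compare a local artifact's bytes to the digest in an
    enrollment record.

    `record` is a hypothetical structure, e.g. {"sha256": "<hex>"}.
    In a real flow the record's signature would be verified first;
    only then is the digest it carries worth comparing against.
    """
    expected = record.get("sha256", "").lower()
    return bool(expected) and sha256_of_file(path) == expected
```

The streaming read is the important design choice: local model files routinely run to tens of gigabytes, so the digest must be computed incrementally rather than via a single `read()`.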