My father asked me a simple question.

“How do you know the models on your own laptop are the models they say they are?”

He wasn’t trying to be deep. He’s a careful person who runs models locally the same way I do. He wanted a straight answer.

I did not have one.

I knew the names. meta-llama/Llama-3.2-1B-Instruct. Qwen/Qwen2.5-14B-Instruct. tiiuae/falcon-mamba-7b-instruct. I knew where they came from in the abstract: Hugging Face Hub, Ollama Library, a few ollama pull commands months ago I no longer remembered specifically. I knew enough to build with them.

I did not know — and could not, on the spot, show — that the bytes sitting on my disk under those names matched a signed Fall Risk registry record. I knew the names and trusted the path that delivered them. That’s not the same as knowing.

So I built the tool to answer the question. Then I ran it on my own machine.

The first scan

I ran the Hugging Face cache scan first, because that’s the obvious place.

I expected maybe a dozen models. I had downloaded Qwen, Llama, TinyLlama, some test models for the research program, a couple of mamba variants. I remembered most of them.

Trustfall found 40 model groups spanning 63 artifacts taking up 220 GB of disk.

Forty.

I had not pulled forty models. The HF cache had pulled forty models on my behalf — through huggingface-cli calls I’d forgotten, through transitive dependencies of libraries I’d installed for research experiments, through fine-tune snapshots that left their base models behind, through alternate revisions of the same model_id that Hugging Face had cached when I’d run something at a specific revision. The 40 was the accumulated state of a year’s worth of local AI work, none of which I had been deliberately tracking.

Of those 40, 8 were verified against the Fall Risk signed registry — known artifacts, signed registry records, exact hash match. 32 were unknown variants — artifacts whose model_ids the scanner could parse from the cache layout but whose sha256 didn’t match any signed record we had on file.

That was the moment. Not the gigabyte count. The unknown count.

I built the registry. I built the scanner. I had thought, going in, that my own laptop would be the gold standard — every model accounted for, every hash matching. The actual answer was: 8 of 40.

The second discovery

I almost shipped Trustfall as a Hugging Face cache scanner and called it done.

Then I added Ollama.

Hugging Face and Ollama do not store models the same way. Hugging Face uses cache snapshots — models--Org--Name/snapshots/<revision>/model.safetensors laid out under your home directory in a structure that mirrors the public Hub. Ollama uses an OCI-style content-addressed store — manifests describe “images,” each manifest references “layers” by sha256, and the actual weight blobs sit in a blobs/sha256-<hex> directory addressed by their content hash. Two completely different worldviews about what a local model is.
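To make the difference concrete, here is a minimal sketch of what a scanner has to reconcile. Default install locations are assumed, and everything about the real scanner's internals is omitted:

from pathlib import Path

HF_HUB = Path.home() / ".cache" / "huggingface" / "hub"  # default HF cache
OLLAMA = Path.home() / ".ollama" / "models"              # default Ollama store

# Hugging Face: identity lives in the directory layout.
# models--Org--Name/snapshots/<revision>/<files>
for model_dir in HF_HUB.glob("models--*"):
    model_id = model_dir.name.removeprefix("models--").replace("--", "/")
    for snapshot in (model_dir / "snapshots").glob("*"):
        print("hf:", model_id, "@", snapshot.name)

# Ollama: identity lives in manifests; the bytes sit in a flat
# content-addressed store, each blob named by its own sha256.
for blob in (OLLAMA / "blobs").glob("sha256-*"):
    print("ollama blob:", blob.name.removeprefix("sha256-"))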

Trustfall Lite v0.2.0 covers both.

I ran it again on the same machine, with Ollama support enabled.

Ollama added another 48 model groups, 48 artifacts, 380 GB.

Combined total on a single developer laptop:

88 model groups · 111 artifacts · 600.45 GB

I built the tool, and even I did not know that much local model state was there.

The Hugging Face surface I had at least partially remembered. The Ollama surface I had not. ollama pull is so frictionless that I had pulled models I no longer even remembered existed. dolphin-mixtral. falcon3:7b. gemma2:9b and gemma2:27b separately. A codestral-mamba-7b-q4_k_m:latest that I must have pulled to evaluate something and then forgotten about.

A developer’s local AI surface is not one folder. It is a fragmented supply chain across tools — each with its own assumptions, its own naming conventions, its own reasons for not telling you what’s actually there.

This is not a theoretical concern. Ollama users have explicitly asked how to verify locally that the weights they pulled match the advertised model artifact (Ollama issue #13080). Trustfall does not close that issue, but it addresses the same need: making the local model surface inspectable rather than opaque.

I built the tool to answer my father’s question. The first answer it gave back was: there’s more local model state on your machine than you remembered.

The rerun

The first scan gave me a queue, not a verdict.

After that first run, the registry grew. Hugging Face coverage expanded. Ollama became a signed artifact-identity lane. The public registry went from 75 records to 211: 165 structural-evidence records and 46 artifact-identity records.

Then I ran Trustfall again on the same machine.

The Hugging Face side moved from 8 verified groups to 15 of 40. That is the apples-to-apples comparison: same local HF surface, larger registry.

The full launch scan was larger. Across Hugging Face and Ollama together, Trustfall found 93 model groups, 116 artifacts, and 629.18 GB of model weights. Sixty-six groups verified: 15 from Hugging Face and 51 from Ollama. Twenty-seven remained unknown variants.

That is not a perfect score. It is something more useful: a map.

The first scan found the gap. The second scan showed the infrastructure closing it.

The Ollama result deserves precision. It is not structural identity. It is artifact identity: local Ollama blobs matching signed registry records. That is exactly what Trustfall Lite is supposed to verify. Structural-evidence records and artifact-identity records answer different questions. The registry declares the difference instead of hiding it.

The April scan was preserved as a locked summary, not as a full JSON artifact, so this comparison is summary-level rather than artifact-by-artifact:

April prototype:
  75 registry records
  88 local model groups
  111 artifacts
  600.45 GB
  8 verified

May launch:
  211 registry records
  93 local model groups
  116 artifacts
  629.18 GB
  66 verified

That is the shape of the change. The machine did not become simple. The registry became more capable of telling the truth about it.

The rerun was verified against:

Registry v0.2.3 · 211 signed records
  165 structural-evidence records
   46 artifact-identity records
Manifest 0568fe38fc3fb4801b016450d23d2fce963f523204eb105db59fa4755ff13846
Issuer  fallrisk-96cd5e6a01e1

The first scan did not prove the tool was finished. It proved the gap was real. The rerun did not make the gap disappear. It made the gap smaller, typed, and measurable.

What the scan actually shows

Trustfall reports each model group with one of four states:

Verified artifact. The local sha256 matches a signed Fall Risk registry record under the issuer key fallrisk-96cd5e6a01e1. The local bytes match the enrolled artifact exactly.

Unknown variant. The local artifact’s model_id can be inferred — from the HF cache directory name or the Ollama manifest — but its sha256 doesn’t match any signed registry record. This usually means: an alternate revision, a post-enrollment update, a different quantization, a custom Modelfile composition, or simply an artifact that hasn’t been enrolled yet.

Not enrolled. The artifact is recognizable as model weights but the scanner cannot infer a model_id, and its sha256 isn’t anywhere in the registry.

Pilot enrollment available. A path exists for getting this artifact added to the registry.
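In code, the decision is small. A sketch of that logic — the mapping shape and names here are my own illustration, not the scanner's internals:

# "Pilot enrollment available" is not a hash outcome; it is the offer
# attached to artifacts that land in the two unverified states.
def classify(sha256: str, model_id: str | None, enrolled: dict[str, str]) -> str:
    """`enrolled` is assumed to map sha256 -> model_id for signed records."""
    if sha256 in enrolled:
        return "verified artifact"   # bytes match a signed record exactly
    if model_id is not None:
        return "unknown variant"     # name inferred, hash not enrolled
    return "not enrolled"            # no inferable name, no matching hash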

Of my 93 model groups, 66 verified, 27 unknown variants, 0 not enrolled.

Here’s the sentence I want to be very clear about:

Unknown variant does not mean unsafe. It means the local artifact or Ollama blob is not yet represented by a signed Fall Risk registry record.

The 27 unknowns on my laptop are not malicious, poisoned, or compromised. Most are Hugging Face revisions, older checkpoints, sharded snapshots, embedding models outside the current enrollment scope, or architecture families still waiting on engine patches. Two are local Ollama quantized variants that were not enrolled in the v0.2.3 artifact lane. That’s the registry’s job to close, not the scanner’s job to flag as a problem.

The scan tells you what’s there. The registry tells you which artifacts have signed Fall Risk records. The gap between the two is the enrollment queue.

What Trustfall Lite is

Trustfall Lite is a small, open command-line tool. You install it with:

pipx install fallrisk-trustfall

You get five commands:

scan      Scan paths for model artifacts and verify against the registry.
verify    Look up a single SHA-256 in the registry.
registry  Inspect and manage the local registry snapshot.
diff      Compare two scan-output JSON files and report changes.
version   Print version, snapshot version, and issuer kid.
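The verify command is the single-artifact path. If you already have a file in hand, the shape of that flow (exact argument form assumed here, not documented) is:

sha256sum model.safetensors
trustfall verify <sha256>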

You run:

trustfall scan

It walks your Hugging Face cache and your Ollama store, hashes every model artifact, and verifies each hash against the Fall Risk signed registry — a JWS-signed JSON document published at https://attest.fallrisk.ai/registry.json under issuer key fallrisk-96cd5e6a01e1.

Output is grouped by source, with a global summary at the bottom that deduplicates shared blobs (Ollama tags often share underlying weights). Default behavior verifies bytes by hashing every blob; a fast-path mode (--trust-ollama-filenames) reads the digest from Ollama’s content-addressed filenames instead, for users who care more about speed than about catching local filesystem corruption.
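The difference between the two modes is easy to see in miniature. A sketch, assuming the blobs/sha256-<hex> naming described above; the function names are mine, not Trustfall's:

import hashlib
from pathlib import Path

def digest_by_hashing(blob: Path) -> str:
    """Default mode: read every byte. Slower, but catches on-disk corruption."""
    h = hashlib.sha256()
    with blob.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def digest_by_filename(blob: Path) -> str:
    """Fast path: trust the content-addressed filename. If the bytes
    were corrupted after download, this will not notice."""
    return blob.name.removeprefix("sha256-")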

JSON output is machine-readable for CI, audit, or tooling. By default it does not surface filesystem paths — local home directories should not leak into bug reports. Pass --include-paths to opt in.
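That makes a CI gate a few lines. A sketch only — the field names below are my guesses at the output schema, not its documented shape:

import json, sys

with open("scan.json") as f:   # saved output of a JSON-mode trustfall scan
    report = json.load(f)

# Hypothetical schema: a list of groups, each carrying a state string.
unknown = [g for g in report["groups"] if g["state"] == "unknown variant"]
for g in unknown:
    print("unverified:", g.get("model_id", "?"), file=sys.stderr)
sys.exit(1 if unknown else 0)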

The registry is freely queryable. The JWKS is public. The verification path is inspectable.

That is the Lite product.

What Trustfall Lite is not

This release does not verify runtime structural identity, and is not intended to. The clean way to draw the line:

Lite verifies artifacts. Deep verifies runtime structural identity.

Lite is free, open, and runs locally on your laptop. Large Ollama stores may take several minutes to hash; the fast path is explicit. Deep is the runtime structural identity layer behind the Fall Risk research program: published research, formal verification artifacts, and protected implementation details. It answers a different question than Lite does, and it is not part of this release.

Why this matters now

Two things are happening simultaneously.

One: local models are being wired into agents, tools, IDE extensions, code-completion daemons, document analyzers, voice assistants, browser plugins, and small business workflows at a pace that exceeds anyone’s ability to track manually. The number of models running locally on developer machines today is materially higher than it was twelve months ago. Most of those models came down through pip, huggingface-cli, ollama pull, or some IDE’s plugin manager — none of which surface what they pulled in a way you could audit later.

Two: the same period has seen a sharp increase in techniques for modifying open-weights models after publication: abliteration tools that remove safety behavior at the activation-geometry level, custom distillation recipes that pull behavior from larger models into smaller ones, fine-tunes published under names that resemble base models, and quantization pipelines whose packaging produces bytes different from the originally published artifact.

The combination means: more local model state, less of it traceable to the artifact a publisher originally released, and no easy way for most users to see the gap between what they think they're running and what's actually on disk.

Trustfall Lite is the smallest possible utility that closes that specific gap: what is on this machine, and which of those things have signed Fall Risk registry records.

It is not the whole story. The whole story includes runtime measurement (Trustfall Deep), abliteration detection (the broader research program), and registry expansion to cover more of the open-weights ecosystem. But the first and cheapest step is making the local surface visible, and that’s what this release does.

What comes next

The scans generated their own roadmap. The 27 unknown variants on my laptop are the first cohort of the next enrollment queue — the older HF revisions, the embedding models, the Mamba/SSM family checkpoints waiting on engine patches, and the local Ollama quantized variants not yet in the artifact lane. Closing the gap between “names recognized” and “bytes signed” is the immediate priority for the next 90 days.

Beyond that, the larger program already named above: runtime measurement (Trustfall Deep), abliteration detection, and registry expansion across more of the open-weights ecosystem.

Run it on your own machine

If you have Hugging Face models or Ollama models locally, the install is one line and the scan is one command.

pipx install fallrisk-trustfall
trustfall scan

The scan is local. The registry lookup is via signed JSON over HTTPS. No model bytes leave your machine. By default, Trustfall may send artifact hashes to the verification API for lookup; local-only verification is available after refreshing the signed registry snapshot (trustfall registry --refresh, then trustfall scan --local-only). Filesystem paths are not sent unless you pass --include-paths. No telemetry, no analytics, no account.
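If you want nothing to leave the machine at all, the local-only flow is two commands, in order:

trustfall registry --refresh
trustfall scan --local-only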

You will probably find more on your machine than you remembered.

If you do, write me. anthony@fallrisk.ai. I want to hear what other people’s first scans turned up — particularly the cases where the scanner sees something the registry doesn’t yet cover. Those are the cases that drive what gets enrolled next.

The point of the tool is to make the local model supply chain visible, not to grade it. The grading takes care of itself once you can see what’s there.