Deliverables

What a buyer actually gets.

Package contents, hosted outputs, and the technical contract that stays stable across every listing.

Public proof

Start from a real site, then inspect what comes with it.

The public sample proves the site is real. From there, the buyer can decide whether to get the full site package or run hosted evaluation on that same facility.

  • Every artifact starts from real capture of a real facility.
  • The site package gives your team everything it needs to run its own stack on that site.
  • Hosted evaluation lets you rerun, review failures, and export results without moving files first.

Sample artifact

This reel shows current capture and product surfaces. Additional views are added as the product develops.

Walkthrough reference from the public Blueprint demo

Sample artifact

Walkthrough surface

The walkthrough is the first proof layer. Buyers use it to confirm the facility, the lane, and the physical context before they decide how much access they need.

Runtime reference view from the public Blueprint demo

Sample artifact

Runtime reference

The hosted side keeps the team on the same site. Reruns, checkpoint comparison, failure review, and exports all happen here.

Illustrative site package diagram

Illustrative preview

Site package

Everything your team needs to run its own world model on that facility: walkthrough media, geometry, metadata, and rights.

  • Walkthrough video, timestamps, and camera poses tied to one real facility
  • Intrinsics, depth, and geometry artifacts when the source capture supports them
  • Site notes, provenance, privacy, and rights metadata
  • Package manifest and reference material for grounding your own world model
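To make the package contents above concrete, here is a minimal sketch of what grounding against a package manifest could look like. The key names, file names, and layout are illustrative assumptions, not the actual package schema — always check the manifest shipped with your listing.

```python
# Hypothetical site-package manifest -- every key and file name here is an
# assumption for illustration; the real schema ships with each listing.
manifest = {
    "site_id": "facility-001",                      # placeholder listing id
    "walkthrough": {
        "clips": [
            {"video": "walkthrough_01.mp4", "start_ts": 0.0, "end_ts": 92.5},
        ],
        "camera_poses": "poses.jsonl",              # per-frame pose file
    },
    "geometry": {"depth_maps": None},               # absent when capture lacks depth
    "rights": {"license": "per-listing terms", "privacy_review": True},
    "provenance": {"captured_at": "2024-01-01", "device": "handheld capture rig"},
}

def depth_available(m):
    """True only if the listing actually shipped depth artifacts."""
    return bool(m.get("geometry", {}).get("depth_maps"))

print("depth available:", depth_available(manifest))
```

Because depth and geometry ship only when the source capture supports them, a check like `depth_available` belongs at the start of any pipeline that assumes depth coverage.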

Illustrative hosted export bundle diagram

Sample artifact

Hosted evaluation

Blueprint runs the site for you. Rerun tasks, review failures, compare checkpoints, and export results without moving data into your own stack first.

  • Repeatable runs on the same exact site
  • Rollout video, failure review, and checkpoint comparison
  • Dataset, raw bundle, and export generation tied to the listing
  • A browser-accessible runtime session, no local setup needed

Technical reference

What stays stable, what ships per site.

For technical buyers: the stable product contract versus the details that change per listing, so your team knows what to assume and what to verify on the actual site.

Stable contract

These parts of the product stay the same regardless of which site or runtime backend is used.

  • Capture truth: walkthrough media, timestamps, poses, and device metadata
  • Rights, privacy, consent, and provenance metadata
  • Site package manifests and hosted-session contracts
  • Buyer-facing licensing, export, and access rules
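One way to read the stable contract is as a set of guarantees a buyer can check mechanically on any listing. The sketch below expresses the four areas above as required keys; the key names are assumptions for illustration only, not Blueprint's actual field names.

```python
# The four stable areas of the contract, expressed as required top-level keys.
# Key names are illustrative assumptions, not the product's actual schema.
STABLE_KEYS = ("capture", "rights", "manifest", "access")

def missing_contract_keys(package):
    """Return the stable-contract keys a package dict fails to provide."""
    return [k for k in STABLE_KEYS if k not in package]

package = {"capture": {}, "rights": {}, "manifest": {}, "access": {}}
print("missing:", missing_contract_keys(package))  # prints "missing: []"
```

The point of the stable contract is that a check like this passes on every listing, regardless of site or runtime backend; only the listing-specific details below vary.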

What varies by listing

Not every site has the same artifacts or export options. Check the listing before assuming every lane supports the same depth of work.

  • Depth and geometry coverage
  • Available scenario variations and start states
  • Robot assumptions and sensor requirements
  • Export set, freshness state, and any restricted zones

Site package contents

The site package gives your team everything it needs to run its own world model stack on that facility.

  • Walkthrough video, timestamps, and camera poses tied to one real facility
  • Intrinsics, depth, and geometry artifacts when the source capture supports them
  • Site notes, provenance, privacy, and rights metadata
  • Package manifest and reference material for building your own world model

Hosted evaluation outputs

Hosted evaluation is a managed runtime session on one exact site. Your team can run, review, and export without moving data into your own stack first.

  • Repeatable runs on the same exact site
  • Rollout video, failure review, and checkpoint comparison
  • Dataset, raw bundle, and export generation tied to the listing
  • A browser-accessible runtime session, no local setup required
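Checkpoint comparison over exported runs is the shape of review this surface supports. As a rough sketch, assuming a hypothetical export format where each run carries a checkpoint label and an outcome (not the actual export schema):

```python
# Hypothetical rows from a hosted-evaluation export -- field names are
# illustrative assumptions, not the product's actual export schema.
runs = [
    {"run_id": 1, "checkpoint": "ckpt-a", "outcome": "success"},
    {"run_id": 2, "checkpoint": "ckpt-a", "outcome": "failure"},
    {"run_id": 3, "checkpoint": "ckpt-b", "outcome": "success"},
]

def success_rate_by_checkpoint(rows):
    """Aggregate run outcomes per checkpoint for side-by-side comparison."""
    totals, wins = {}, {}
    for r in rows:
        ck = r["checkpoint"]
        totals[ck] = totals.get(ck, 0) + 1
        wins[ck] = wins.get(ck, 0) + (r["outcome"] == "success")
    return {ck: wins[ck] / totals[ck] for ck in totals}

print(success_rate_by_checkpoint(runs))  # {'ckpt-a': 0.5, 'ckpt-b': 1.0}
```

Because reruns happen on the same site, differences between checkpoints in a comparison like this reflect the policy, not the environment.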

Sample eval path

How a buyer uses these surfaces in practice.

1. A robot team opens one listing before a customer deployment sprint.
2. The team confirms the facility, the workflow lane, and whether the package has the evidence needed to ground its own stack.
3. If the team needs runtime evidence instead, it opens hosted evaluation on the same site and exports the results it needs.