
Sample artifact
Walkthrough surface
The walkthrough is the first proof layer. Buyers use it to confirm the facility, the lane, and the physical context before they decide how much access they need.
Deliverables
Package contents, hosted outputs, and the technical contract that stays stable across every listing.
Public proof
The public sample proves the site is real. From there, the buyer can decide whether to get the full site package or run hosted evaluation on that same facility.
This reel shows current capture and product surfaces. Additional views are added as the product develops.

Sample artifact
The hosted side keeps the team on the same site. Reruns, checkpoint comparison, failure review, and exports all happen here.
Illustrative preview
Everything your team needs to run its own world model on that facility: walkthrough media, geometry, metadata, and rights.
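As a rough sketch, a delivered package might be shaped like the following. The file names and fields here are illustrative assumptions, not the actual package format:

    # Hypothetical shape of a delivered site package (illustrative only;
    # the real layout and field names may differ per listing).
    site_package = {
        "walkthrough": ["media/walkthrough_cam0.mp4"],  # walkthrough media
        "geometry": "geometry/facility.glb",            # captured site geometry
        "metadata": "metadata/site.json",               # facility, lane, capture details
        "rights": "rights/license.txt",                 # usage rights for your team
    }
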
Sample artifact
Blueprint runs the site for you. Rerun tasks, review failures, compare checkpoints, and export results without moving data into your own stack first.
Technical reference
For technical buyers: the stable product contract versus the details that change per listing, so your team knows what to assume and what to verify on the actual site.
The stable parts of the product stay the same regardless of which site or runtime backend is used.
Not every site has the same artifacts or export options. Check the listing before assuming every lane supports the same depth of work.
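One way a team might encode that check before committing to a lane, sketched in Python. The listing fields (artifacts, export_options) are assumptions for illustration, not the real listing schema:

    # Hypothetical pre-purchase check: confirm a listing carries the artifacts
    # and export options your workflow needs (field names are assumed).
    REQUIRED_ARTIFACTS = {"walkthrough", "geometry", "metadata"}
    REQUIRED_EXPORTS = {"rollout_logs"}

    def listing_supports_workflow(listing: dict) -> bool:
        have_artifacts = set(listing.get("artifacts", []))
        have_exports = set(listing.get("export_options", []))
        return REQUIRED_ARTIFACTS <= have_artifacts and REQUIRED_EXPORTS <= have_exports
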
The site package gives your team everything it needs to run its own world model stack on that facility.
Hosted evaluation is a managed runtime session on one exact site. Your team can run, review, and export without moving data into your own stack first.
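Sketched as client code, a session might look like the stub below. The class and method names are invented for illustration and are not Blueprint's actual interface:

    # Hypothetical hosted-evaluation session (illustrative stub, not a real API).
    class HostedSession:
        def __init__(self, site_id: str):
            self.site_id = site_id            # one managed session per exact site
            self.runs: list[dict] = []

        def rerun(self, task: str, checkpoint: str) -> dict:
            run = {"task": task, "checkpoint": checkpoint, "status": "queued"}
            self.runs.append(run)             # rerun a task against a checkpoint
            return run

        def failures(self) -> list[dict]:
            return [r for r in self.runs if r["status"] == "failed"]

        def export(self, path: str) -> None:
            pass                              # pull results out; data never enters your stack

    session = HostedSession(site_id="example-facility")
    session.rerun(task="dock-approach", checkpoint="ckpt-042")
    session.rerun(task="dock-approach", checkpoint="ckpt-051")  # checkpoint comparison
    session.export("results/dock-approach.jsonl")
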
Sample eval path
A robot team opens one listing before a customer deployment sprint.
The team confirms the facility, the workflow lane, and whether the package has the evidence needed to ground its own stack.
If the team needs runtime evidence instead, it opens hosted evaluation on the same site and exports the results it needs.
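The same decision, reduced to a sketch. The listing fields and the helper are hypothetical, but they mirror the path above: confirm the site, then choose between the package and hosted evaluation:

    # Hypothetical decision flow for the eval path above (names are illustrative).
    def plan_access(listing: dict, needs_runtime_evidence: bool) -> str:
        if not listing.get("facility_verified") or not listing.get("lane_matches"):
            return "skip"                  # wrong facility or lane: move on
        if needs_runtime_evidence:
            return "hosted_evaluation"     # run on the managed runtime, export results
        return "site_package"              # ground your own stack on the package

    plan_access({"facility_verified": True, "lane_matches": True},
                needs_runtime_evidence=True)   # -> "hosted_evaluation"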