# LTtestMay10 — per-clip stride=1 30 fps test set
218 self-contained clips for pose / depth evaluation. Each clip is 41 frames at 30 fps, stride 1 (1.37 s of real time; 2.56 s when played back at 16 fps), undistorted to an 832×480 pinhole model. All 6 surround cameras are included; DepthCrafter depth is provided for the front camera only (matching the original LongtailTest).

Sourced from 126 KEEP UUIDs of NVIDIA's PhysicalAI-Autonomous-Vehicles dataset across chunks 234–237. The pipeline (download raw → ftheta undistort → DepthCrafter) is identical to the original LongtailTest, but applied at the native 30 fps cadence (no temporal downsampling).
## Layout

```
chunk_234.tar          # all kept clips for chunk 234 (~1.5–2 GB each)
chunk_235.tar
chunk_236.tar
chunk_237.tar
manifest_clips.jsonl   # 218 entries {chunk, uuid, clip_id, window_start, displacement_m}
kept_clips.txt         # 218 lines: chunk\tuuid\tclip_id
dropped_clips.jsonl    # 34 entries (displacement < 2 m within 1.37 s)
README.md
```
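A minimal sketch for reading the two sidecar manifests with the standard library (field names follow the layout above; the helper names are ours, not part of the dataset):

```python
import json


def load_manifest(path):
    """Read manifest_clips.jsonl: one JSON object per line with keys
    chunk, uuid, clip_id, window_start, displacement_m."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]


def load_kept(path):
    """Read kept_clips.txt: tab-separated chunk, uuid, clip_id per line."""
    rows = []
    with open(path) as f:
        for line in f:
            chunk, uuid, clip_id = line.rstrip("\n").split("\t")
            rows.append((chunk, uuid, clip_id))
    return rows
```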
After extracting all four chunk tars:

```
chunk_NNN/<uuid>/clip_NNNNNN/
├── front.mp4        # 41 frames, 832×480, h264 CRF 18, 30 fps
├── cross_left.mp4   # same spec
├── cross_right.mp4
├── rear_left.mp4
├── rear_right.mp4
├── rear_tele.mp4
├── front_depth.pt   # {'depth_sequence': (41, 1, 512, 832) fp16,
│                    #  'source_indices': [start..start+40]}
├── pose.pt          # {'T_anchor_front': (11, 4, 4) world_from_cam in OpenCV,
│                    #    first anchor = identity (FRONT camera),
│                    #  'T_anchor_all': (6, 11, 4, 4) — same convention, one per view,
│                    #    ordered by sensor_order; T_anchor_all[0] == T_anchor_front,
│                    #  'sensor_order': list of 6 camera names}
└── meta.pt          # K (front, 3×3), K_all (6, 3, 3) target pinhole intrinsics,
                     # E_rig_front (4, 4), E_all (6, 4, 4) sensor extrinsics,
                     # sensor_order [6 names], view_files [6 mp4 names],
                     # frame_indices, timestamps_us, anchor_clip_idx,
                     # anchor_src_idx, anchor_t_us, src_fps=30, stride=1,
                     # window_start, chunk, uuid, clip_id, anchor_displacement_m
```
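Unpacking the tars into this layout can be sketched with the standard library, assuming all chunk_NNN.tar files sit in one directory (the function name is ours):

```python
import tarfile
from pathlib import Path


def extract_chunks(tar_dir, out_dir):
    """Extract every chunk_NNN.tar in tar_dir into out_dir, producing the
    chunk_NNN/<uuid>/clip_NNNNNN/ directory layout described above."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for tar_path in sorted(Path(tar_dir).glob("chunk_*.tar")):
        with tarfile.open(tar_path) as tf:
            tf.extractall(out)
```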
## Per-UUID windows

Two 41-frame windows per UUID at 30 fps source frames:

| clip_id | window_start | end (exclusive) | real time start |
|---|---|---|---|
| clip_000000 | 0 | 41 | 0.00 s |
| clip_000001 | 300 | 341 | 10.00 s |

The two windows are 10 s apart in real time, providing diverse trajectories per UUID.
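The window arithmetic above (start frame → end frame, and the real-time offset at 30 fps source frames) can be sketched as follows; the constants and helper name are ours:

```python
SRC_FPS = 30      # source cadence
WINDOW_LEN = 41   # frames per clip


def window(clip_idx):
    """Return (window_start, end_exclusive, real_time_start_s) for the two
    fixed windows: clip_000000 at frame 0, clip_000001 at frame 300."""
    start = 0 if clip_idx == 0 else 300
    return start, start + WINDOW_LEN, start / SRC_FPS
```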
## Pose anchors

T_anchor_front has 11 anchors at clip-frame indices 0, 4, 8, …, 40 (i.e. every 4th clip frame). At stride 1 from 30 fps this gives a real-time spacing of ≈ 0.133 s and a total span of 1.37 s. Anchors are world_from_cam in the OpenCV camera frame, with the first anchor forced to identity:

```
T_anchor_front[i] = inv(T_world_front[0]) @ T_world_front[i]
T_world_front(t)  = T_world_rig(t) @ E_rig_front
```
T_world_rig(t) is interpolated from egomotion.offline.parquet via SLERP (rotation) + linear (translation) at the 11 anchor camera timestamps. E_rig_front comes from sensor_extrinsics.offline.parquet.
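As a hedged sketch of that interpolation scheme (the production pipeline's exact code isn't reproduced here), SLERP for rotation plus linear interpolation for translation can be written in pure Python with unit quaternions:

```python
import math


def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:                    # nearly parallel: linear fallback
        q = [a + t * (b - a) for a, b in zip(q0, q1)]
    else:
        theta = math.acos(max(-1.0, min(1.0, dot)))
        s0 = math.sin((1 - t) * theta) / math.sin(theta)
        s1 = math.sin(t * theta) / math.sin(theta)
        q = [s0 * a + s1 * b for a, b in zip(q0, q1)]
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]           # renormalize


def interp_pose(t0, t1, q0, q1, p0, p1, t_query):
    """SLERP the rotation and linearly interpolate the translation
    at t_query in [t0, t1]."""
    u = (t_query - t0) / (t1 - t0)
    q = slerp(q0, q1, u)
    p = [a + u * (b - a) for a, b in zip(p0, p1)]
    return q, p
```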
meta.K is the target pinhole intrinsics (constant across UUIDs):

```
fx = 400.0, fy = 411.0, cx = 415.0, cy = 338.0   # output 832×480
```
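Projecting a camera-frame point with these intrinsics follows the standard pinhole equations u = fx·X/Z + cx, v = fy·Y/Z + cy; a minimal sketch (the helper name is ours):

```python
FX, FY, CX, CY = 400.0, 411.0, 415.0, 338.0  # meta.K for the 832x480 output


def project(X, Y, Z):
    """Project a camera-frame point (OpenCV convention, +Z forward) to pixels."""
    return FX * X / Z + CX, FY * Y / Z + CY
```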
## Motion filter
Clips with ‖T_anchor_front[10][:3,3] − T_anchor_front[0][:3,3]‖ < 2.0 m are excluded from the tarballs (not present at all). The dropped 34 clip ids are listed in dropped_clips.jsonl for transparency. 124 of the 126 UUIDs have at least one clip that passes the filter; 2 UUIDs (in chunk 236) have both clips dropped (parked / heavy traffic).
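The filter criterion can be sketched as follows (hypothetical helper operating on nested-list 4×4 world_from_cam matrices; the real pipeline presumably works on tensors):

```python
def passes_motion_filter(T_anchor_front, min_disp_m=2.0):
    """True if the translation between the first and last pose anchors
    (last column of the 4x4 world_from_cam matrices) is >= min_disp_m."""
    p0 = [T_anchor_front[0][i][3] for i in range(3)]
    p1 = [T_anchor_front[-1][i][3] for i in range(3)]
    return sum((a - b) ** 2 for a, b in zip(p1, p0)) ** 0.5 >= min_disp_m
```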
## Counts
| chunk | UUIDs | clips total | kept (≥ 2 m) | dropped |
|---|---|---|---|---|
| 234 | 30 | 60 | (see manifest) | (see dropped_clips) |
| 235 | 35 | 70 | (see manifest) | (see dropped_clips) |
| 236 | 34 | 68 | (see manifest) | (see dropped_clips) |
| 237 | 27 | 54 | (see manifest) | (see dropped_clips) |
| total | 126 | 252 | 218 | 34 |
## Loading example

```python
from huggingface_hub import snapshot_download
import torch
import av

snapshot_download("luuuulinnnn/LTtestMay10", repo_type="dataset",
                  local_dir="LTtestMay10",
                  allow_patterns=["chunk_234.tar", "kept_clips.txt"])

# extract chunk tars locally, then:
clip_dir = "LTtestMay10/chunk_234/<uuid>/clip_000000"
pose = torch.load(f"{clip_dir}/pose.pt")["T_anchor_front"]          # (11, 4, 4)
depth = torch.load(f"{clip_dir}/front_depth.pt")["depth_sequence"]  # (41, 1, 512, 832) fp16
meta = torch.load(f"{clip_dir}/meta.pt")                            # K, E_rig_front, ...
container = av.open(f"{clip_dir}/front.mp4")
```
## Provenance

- Raw video: nvidia/PhysicalAI-Autonomous-Vehicles, chunks 0234–0237 (camera_front_wide_120fov, 30 fps, 1920×1080 ftheta)
- KEEP UUIDs: filtered on Qwen2.5-VL-7B captions (sunny / bright_day / clear); 126 UUIDs across 4 chunks
- ftheta undistort: own reproduction of the original ray_wan pipeline (math verified pixel-equivalent within the mpeg4 noise floor)
- DepthCrafter: tencent/DepthCrafter, max_res=1024, 5 inference steps, window_size=110, overlap=25, run on the 30 fps undistorted videos
- Pose GT: SLERP + linear interpolation of egomotion.offline.parquet at per-clip anchor timestamps; mean translation diff vs the original LongtailTest GT is ~1.4 m on ~80 m trajectories (≈ 1.7 %, within egomotion sub-frame timestamp accuracy)