
UAVid-3D-Scenes

UAVid-3D-Scenes is a depth-estimation-centric extension of the UAVid semantic segmentation dataset. It reorganizes the original sequences by the larger scenes they were captured in, providing undistorted RGB frames paired with metric depth maps obtained from scaled COLMAP reconstructions.

📃 This dataset accompanies the paper TanDepth: Leveraging Global DEMs for Metric Monocular Depth Estimation in UAVs

License: CC BY-NC-SA 4.0 (Creative Commons Attribution-NonCommercial-ShareAlike 4.0) - see attached License.txt

What’s Included

At this time, the dataset contains:

  • undistorted_rgb/scene{ID}/{seq}_{frame:05d}.png
    Undistorted RGB frames, 512x1024 resolution
  • depth/scene{ID}/{seq}_{frame:05d}_depth.png
    Dense depth maps encoded into 3×8‑bit channels
  • intrinsics.json Per-scene pinhole camera intrinsics (in pixels)
  • poses.json
    Per-frame extrinsics exported from each scene’s COLMAP model, stored as 4×4 matrices. Structure:
    • top-level keys: scene0 … scene11
    • per-scene keys: {seq}_{frame:05d}
    • value: 4×4 matrix [[R|t],[0,0,0,1]]
  • colmap_cam_calib.txt Per-scene camera model parameters estimated in the COLMAP reconstructions (for the original UAVid frames), used to perform undistortion.
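Given the intrinsics and per-frame 4×4 poses described above, depth pixels can be lifted to 3D world points. The sketch below assumes pinhole intrinsics named fx, fy, cx, cy (the actual field names in intrinsics.json may differ) and that the 4×4 pose maps camera coordinates to world coordinates; verify the convention against your COLMAP export before relying on it.

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy, T_world_cam):
    """Lift pixel (u, v) with metric depth (meters) to a 3D world point.

    Assumes a pinhole camera and a 4x4 camera-to-world pose matrix
    [[R|t],[0,0,0,1]]; check the pose direction in poses.json.
    """
    # Pinhole model: camera-frame coordinates scaled by depth
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    p_cam = np.array([x, y, depth_m, 1.0])
    # Transform homogeneous camera point into the world frame
    return (T_world_cam @ p_cam)[:3]
```

The per-scene intrinsics and per-frame pose would be read from intrinsics.json and poses.json (e.g. with json.load) using the keys listed above.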

Depth Encoding (CARLA-style 24‑bit PNG)

Each depth map is stored as an RGB PNG whose 24 bits encode a scalar depth value in meters (range [0, 1000]), compatible with CARLA’s standard packing/unpacking.

Let (B,G,R) be the 8‑bit channels of the PNG (as read by OpenCV: BGR order). The decoded depth is:

d = (R + 256·G + 256²·B) / (256³ − 1) · 1000

where d is depth in meters. A value of 0 indicates invalid / missing depth.

Minimal decoding snippet

import numpy as np, cv2

def read_depth_png(path):
    """Decode a CARLA-style 24-bit depth PNG to metric depth in meters."""
    bgr = cv2.imread(path, cv2.IMREAD_COLOR)
    if bgr is None:
        raise FileNotFoundError(path)
    bgr = bgr.astype(np.float32)
    B, G, R = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    # d = (R + 256*G + 256^2*B) / (256^3 - 1) * 1000; zeros mean invalid depth
    depth_m = (R + 256 * G + 256 * 256 * B) / (256 * 256 * 256 - 1) * 1000.0
    return depth_m
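For completeness, the inverse packing can be sketched as follows. This is an assumption-level round-trip of the formula above, not a tool shipped with the dataset; quantization error is bounded by one code step, about 0.06 mm over the [0, 1000] m range.

```python
import numpy as np

def encode_depth_png(depth_m):
    """Pack metric depth (meters, clipped to [0, 1000]) into CARLA-style
    24-bit BGR channels, inverting the decoding formula above."""
    d = np.clip(np.asarray(depth_m, dtype=np.float64), 0.0, 1000.0)
    # Quantize to a 24-bit integer code
    code = np.round(d / 1000.0 * (256**3 - 1)).astype(np.uint32)
    R = (code % 256).astype(np.uint8)          # low byte
    G = ((code // 256) % 256).astype(np.uint8)  # middle byte
    B = (code // 256**2).astype(np.uint8)       # high byte
    return np.stack([B, G, R], axis=-1)  # BGR order, ready for cv2.imwrite
```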

Notes

  • Some sequences feature zero-padded left/right frame borders to align frames to a common width of 1024 px while keeping the original vertical field of view.
  • 13 frames have been excluded from scene4 due to reconstruction artifacts.
  • scene9 (original sequence 9) has been excluded entirely due to poor reconstruction quality.
  • Depth maps for seq523 and seq624 (part of scene2) have been edited via manual invalidation masks (set to zero) to remove observed errors (distant regions with falsely "close" depth values); we recommend similar masking for the other scene2 sequences.
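The recommended masking for the remaining scene2 sequences can be sketched as below, assuming a hand-made single-channel mask image where 0 marks pixels to invalidate (this convention is an assumption, mirroring the zeroing applied to seq523/seq624).

```python
import numpy as np

def apply_invalidation_mask(depth_m, mask):
    """Zero out depth pixels flagged invalid by a binary mask.

    `mask` is a single-channel array (e.g. loaded with
    cv2.imread(path, cv2.IMREAD_GRAYSCALE)); 0 marks invalid pixels.
    """
    out = depth_m.copy()
    out[mask == 0] = 0.0  # 0 means invalid/missing, matching the encoding
    return out
```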

Recommended Citation

If you use this dataset in academic work, please cite the TanDepth paper and the original UAVid dataset.

TanDepth (IEEE J-STARS, 2025)

@ARTICLE{TanDepth2025,
  author={Florea, Horatiu and Nedevschi, Sergiu},
  journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
  title={TanDepth: Leveraging Global DEMs for Metric Monocular Depth Estimation in UAVs},
  year={2025},
  volume={18},
  pages={5445-5459},
  doi={10.1109/JSTARS.2025.3531984},
  url={https://ieeexplore.ieee.org/abstract/document/10848130}
}

UAVid (source RGB data)

@article{LYU2020108,
    author = "Ye Lyu and George Vosselman and Gui-Song Xia and Alper Yilmaz and Michael Ying Yang",
    title = "UAVid: A semantic segmentation dataset for UAV imagery",
    journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
    volume = "165",
    pages = "108 - 119",
    year = "2020",
    issn = "0924-2716",
    doi = "10.1016/j.isprsjprs.2020.05.009",
    url = {http://www.sciencedirect.com/science/article/pii/S0924271620301295},
}