Importing Data in the 3D Multi-Sensor Fusion Labeling Tool

Note: 3DMSFT data must be in a bucket you have connected to Ango Hub through storage integrations, and Ango Hub must have read and write permissions to that bucket.

Note: To import pre-annotations, please follow this documentation page instead.

To import assets into the Ango Hub 3D Multi-Sensor Fusion Tool (3DMSFT), you will need to prepare a folder for each asset you wish to import. This folder must adhere to the format outlined on this page and must be located in a cloud storage bucket you have previously connected to Ango Hub through storage integrations.

You will then place the link to that folder in a JSON file and upload it to the Ango Hub platform, either by dragging and dropping it in the UI, through the SDK, or via Ango Hub's File Explorer.

JSON Format

Here is a sample JSON used to import one 3DMSFT asset:

[
  {
    "data": "storage-name/folder/folder",
    "externalId": "my_external_id_of_choice",
    "editorType": "pct"
  }
]

The only differences from a standard import JSON are that you must add the editorType property and set it to pct, and that the data URL points to a folder rather than a single file.

Folder Format

LiDAR Data (Required)

LiDAR data in the .las format must be placed in a subfolder named lidar. In the following example, and throughout the rest of this section, we will build an example folder for an asset with two frames.
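Assuming the frames are numbered sequentially (mirroring the 1.json, 2.json naming used for per-frame files later on this page), the folder would look like this, where my_asset is an illustrative asset folder name:

my_asset/
  lidar/
    1.las
    2.las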

Note: The 3DMSFT only supports point cloud (radar, LiDAR) files in the .las format.

Image Data

You can add corresponding image data in either the .jpg or .png format.

With the exception of the file extension, each image must have the same filename as its corresponding .las file.

Please place your image files in subfolders, one per camera.
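As an illustration, assuming each subfolder is named after the camera identifiers used in the calibration file (top_left and top_mid are illustrative names), the two-frame example would look like this:

my_asset/
  lidar/
    1.las
    2.las
  top_left/
    1.jpg
    2.jpg
  top_mid/
    1.jpg
    2.jpg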

Calibration Data

If calibration data is available, you may provide it in two ways:

When calibration information is the same for all frames

Create a subfolder named calibration and, within it, add calibration information in a file named calibration.json.
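Continuing the two-frame example:

my_asset/
  lidar/
    1.las
    2.las
  calibration/
    calibration.json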

When calibration information is different for each frame

Create a subfolder named calibration and provide per-frame calibration .json files using the same filenames as their respective .las files.
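For the two-frame example:

my_asset/
  lidar/
    1.las
    2.las
  calibration/
    1.json
    2.json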

Calibration Data Format

The following is a sample calibration .json file in the format accepted by the 3DMSFT:

Sample calibration .json
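Below is a minimal sketch assembled from the field descriptions that follow; the top-level layout (an array with one entry per sensor) and all numeric values are illustrative assumptions, not values to copy:

[
  {
    "name": "las",
    "extrinsic": {
      "elements": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
    }
  },
  {
    "name": "top_left",
    "extrinsic": {
      "elements": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1.2, 0.0, 1.6, 1]
    },
    "intrinsic": {
      "type": "pinhole",
      "focal_length": [1266.4, 1266.4],
      "principal_point": [816.3, 491.5],
      "distortion_model": "brown",
      "distortion_coeffs": [-0.35, 0.12, 0.0, 0.0, -0.02],
      "cut_angle_lower": [],
      "cut_angle_upper": []
    }
  }
]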

The following is an explanation of each field to provide in the calibration .json file:

Sensor Identifier

name

Type: string

Description: Unique identifier of the sensor within the system.

Examples:

  • las: Reference sensor (typically LiDAR)

  • top_left: Camera mounted on the top-left of the vehicle

  • top_mid: Camera mounted on the top-middle of the vehicle

  • top_right: Camera mounted on the top-right of the vehicle

Extrinsic Parameters

extrinsic

Defines the rigid body transformation from the sensor coordinate system to the reference coordinate system (usually the LiDAR frame).

extrinsic.elements

Type: array of 16 numbers

Description: A 4 × 4 homogeneous transformation matrix (cam_to_velo) stored in column-major order. This matrix encodes the rotation and translation of the sensor relative to the reference frame:

[ R11  R12  R13  Tx ]
[ R21  R22  R23  Ty ]
[ R31  R32  R33  Tz ]
[ 0    0    0    1  ]

where:

  • Rij : Elements of the 3 × 3 rotation matrix

  • Tx, Ty, Tz : Translation offsets (in meters)

Interpretation: This transformation converts a 3D point expressed in the sensor coordinate frame into the reference (LiDAR) coordinate frame.
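For example, a camera with no rotation relative to the reference frame and a translation of (Tx, Ty, Tz) = (1.2, 0.0, 1.6) meters (illustrative values) would be encoded column by column, with the translation occupying positions 12 through 14 of the array:

{
  "elements": [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    1.2, 0.0, 1.6, 1
  ]
}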

Intrinsic Parameters

intrinsic

Defines the internal camera model used to project 3D points onto the 2D image plane.


1. Camera Model

intrinsic.type

Type: string

Description: Camera projection model.

Supported values:

  • "pinhole" : Standard pinhole camera model


2. Focal Length

intrinsic.focal_length

Type: array [fx, fy]

Description: Focal lengths of the camera expressed in pixels.

  • fx : Focal length in the x-direction

  • fy : Focal length in the y-direction


3. Principal Point

intrinsic.principal_point

Type: array [cx, cy]

Description: Optical center of the image in pixel coordinates.

  • cx : x-coordinate of the principal point

  • cy : y-coordinate of the principal point


4. Distortion Model

intrinsic.distortion_model

Type: string

Description: Lens distortion model used by the camera.

Supported values:

  • "brown" : Brown–Conrady distortion model


5. Distortion Coefficients

intrinsic.distortion_coeffs

Type: array

Description: Distortion coefficients corresponding to the selected distortion model.

For the Brown–Conrady model, coefficients are ordered as:

[k1, k2, p1, p2, k3]

where:

  • k1, k2, k3 : Radial distortion coefficients

  • p1, p2 : Tangential distortion coefficients


6. Angular Cut-Off (Optional)

intrinsic.cut_angle_lower, intrinsic.cut_angle_upper

Type: array

Description: Optional angular limits used to restrict projection based on viewing angle.

  • If set to an empty array [], no angular filtering is applied.

  • If specified, only points within the defined angular range are projected.

Ego Vehicle Data

If available, you may include motion data about the ego vehicle. This will enable the 3DMSFT's "Merge Point Cloud" functionality, allowing you to see all frames at once.

Ego vehicle data must be placed in .json files in a subfolder named ego_data. Each file must have the same filename as its corresponding .las file.
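For the two-frame example:

my_asset/
  lidar/
    1.las
    2.las
  ego_data/
    1.json
    2.json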

Ego Vehicle Data Format

The following is a sample ego vehicle data .json file in the format accepted by the 3DMSFT:

Sample ego vehicle data .json
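Below is a minimal sketch based on the field explanations that follow; the identity transformationMatrix and the timestamp are illustrative, and the UTM fields are zeroed because a complete transformationMatrix is provided:

{
  "utmX_m": 0,
  "utmY_m": 0,
  "utmZ_m": 0,
  "utmHeading_deg": 0,
  "transformationMatrix": [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1
  ],
  "timestamp_epoch_ns": 1651500000000000000
}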

The following is an explanation of each field in the ego vehicle .json file:

Ego vehicle data file field explanation

utmX_m: The distance the ego vehicle has moved along the x-axis (in meters) with respect to the first frame.

utmY_m: The distance the ego vehicle has moved along the y-axis (in meters) with respect to the first frame.

utmZ_m: The distance the ego vehicle has moved along the z-axis (in meters) with respect to the first frame.

utmHeading_deg: Rotation angle about the z-axis (yaw), in degrees.

transformationMatrix: The displacement of the current frame's LiDAR coordinate system from the reference frame's LiDAR coordinate system. The matrix is stored in row-major order.

If a complete transformationMatrix is provided, utmX_m, utmY_m, utmZ_m, and utmHeading_deg are ignored; hence, in the sample above, those values are set to zero.

timestamp_epoch_ns: Timestamp of this frame's capture, in nanoseconds since the Unix epoch.


A folder with all optional data included would therefore look like the following (the camera subfolder names are illustrative):
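my_asset/
  lidar/
    1.las
    2.las
  top_left/
    1.jpg
    2.jpg
  top_mid/
    1.jpg
    2.jpg
  calibration/
    calibration.json
  ego_data/
    1.json
    2.json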

Importing PCT data using the File Explorer

From your project, click on Add Data, then enter the File Explorer tab. Navigate to the folder containing your PCT sub-folders and data. Click on it to select it. Enable the PCT Upload toggle, then click on Upload. Your folder will be imported as PCT data.

Importing Pre-labelled Annotations

Create a Folder for Pre-Labelled Annotations

  1. Prepare the data folder structure.

  2. Store the pre-labelled annotations in the lidar_annotation folder.

For example, if you have 100 frames, the corresponding files should be named 1.json, 2.json, 3.json, and so on, up to 100.json.
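For the two-frame example, the folder would contain:

my_asset/
  lidar/
    1.las
    2.las
  lidar_annotation/
    1.json
    2.json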

Note: Pre-labelled annotations are stored as JSON files, one JSON file per frame.

The JSON schema for pre-labelled files supports cuboids, 2D bounding boxes, and 3D polylines.

JSON schema for pre-labelled files

Sample snippets of cuboid, 2D bounding box, and 3D polyline annotations
