Importing Data in the 3D Multi-Sensor Fusion Labeling Tool

The Point Cloud (LiDAR) labeling tool is currently in alpha testing and not yet available for production use. Thank you for your patience.

To import assets into the Ango Hub 3D multi-sensor fusion (3DMSF) tool, you will need to prepare one folder for each asset you wish to import. This folder must adhere to the format outlined on this page and must be hosted in cloud storage.

You will then place the link to that folder in a JSON file and upload it to the Ango Hub platform, either by dragging and dropping it in the UI or by uploading it through the SDK.

JSON Format

Here is a sample JSON used to import one 3DMSF asset:

[
  {
    "data": "https://url-to-storage/folder/",
    "externalId": "my_external_id_of_choice",
    "editorType": "pct"
  }
]

The only differences from a standard import JSON are that you must add the editorType property and set it to pct, and that the data URL points to a folder rather than a single file.
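If you generate the import file programmatically, a minimal sketch using Python's standard library looks like the following. The URL and external ID are placeholders; substitute your own values.

```python
import json

# Placeholder values; substitute your own cloud storage folder URL and external ID.
assets = [
    {
        "data": "https://url-to-storage/folder/",  # link to the asset folder, not a single file
        "externalId": "my_external_id_of_choice",
        "editorType": "pct",                       # marks the asset for the 3DMSF tool
    }
]

# Write the import JSON, ready to drag and drop in the UI or upload via the SDK.
with open("import.json", "w") as f:
    json.dump(assets, f, indent=2)
```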

Folder Format

LiDAR Data (Required)

LiDAR data in the .las format must be placed in a subfolder named lidar. For the rest of this section, we will build an example folder for an asset with two frames.

ROOT

└──lidar

   ├──00000-ca9a282c9e77460f8360f564131a8af5.las
   └──00001-39586f9d59004284a7114a68825e8eec.las

The 3DMSF tool only supports point cloud (radar, LiDAR) files in the .las format.

Image Data

You can add corresponding image data, in either the .jpg or .png format.

Except for the file extension, each image must have the same filename as its corresponding .las file.

Please place your image files in subfolders with the following names:

ROOT

├──lidar
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.las
│  └──00001-39586f9d59004284a7114a68825e8eec.las

├──CAM_BACK
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_BACK_LEFT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_BACK_RIGHT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_FRONT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_FRONT_LEFT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

└──CAM_FRONT_RIGHT

   ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
   └──00001-39586f9d59004284a7114a68825e8eec.jpg
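Since every image must share its filename stem with a .las file, a quick local check before uploading can catch misnamed frames. The sketch below (check_frame_alignment is a hypothetical helper, standard library only) compares each CAM_* subfolder against the lidar folder:

```python
from pathlib import Path

def check_frame_alignment(root):
    """Return, per camera subfolder, the filename stems that do not
    line up with the lidar .las files (an empty dict means all aligned)."""
    root = Path(root)
    lidar_stems = {f.stem for f in (root / "lidar").glob("*.las")}
    mismatches = {}
    for cam_dir in root.iterdir():
        if not cam_dir.is_dir() or not cam_dir.name.startswith("CAM_"):
            continue
        image_stems = {f.stem for f in cam_dir.iterdir()
                       if f.suffix in (".jpg", ".png")}
        if image_stems != lidar_stems:
            # symmetric difference: stems present on one side only
            mismatches[cam_dir.name] = image_stems ^ lidar_stems
    return mismatches
```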

Calibration Data

If calibration data is available, you may provide it in two ways:

When calibration information is the same for all frames

Create a subfolder named calibration and within it, add calibration information in a file named calibration.json:

ROOT

├──lidar
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.las
│  └──00001-39586f9d59004284a7114a68825e8eec.las

├──...<image data and other subfolders>

└──calibration

   └──calibration.json

When calibration information is different for each frame

Create a subfolder named calibration and provide per-frame calibration .json files using the same filenames as their respective .las files:

ROOT

├──lidar
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.las
│  └──00001-39586f9d59004284a7114a68825e8eec.las

├──<other subfolders>

└──calibration

   ├──00000-ca9a282c9e77460f8360f564131a8af5.json
   └──00001-39586f9d59004284a7114a68825e8eec.json
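If you start from a single shared calibration.json but want the per-frame layout above, one way to produce it is to copy the shared file once per frame. This is a sketch; expand_calibration is a hypothetical helper, not part of the platform.

```python
import shutil
from pathlib import Path

def expand_calibration(root):
    """Copy calibration/calibration.json into one per-frame .json file,
    named after each lidar .las file; returns the filenames written."""
    root = Path(root)
    shared = root / "calibration" / "calibration.json"
    written = []
    for las in sorted((root / "lidar").glob("*.las")):
        target = root / "calibration" / (las.stem + ".json")
        shutil.copy(shared, target)
        written.append(target.name)
    return written
```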

Calibration Data Format

The following is a sample calibration .json file in the format accepted by the 3DMSF tool:

Sample calibration .json
{
    "calibration": [
        {
            "name": "las"
        },
        {
            "name": "top_left",
            "extrinsic": {
                "elements": [
                    -0.45932028878157577,
                    0.8881625791580727,
                    0.013860205513981646,
                    0,
                    0.013487377588441811,
                    0.02257518929768711,
                    -0.9996541659363805,
                    0,
                    -0.8881683190473507,
                    -0.45897450235438314,
                    -0.02234822653254135,
                    0,
                    -0.3624692129261298,
                    -0.3264286767132519,
                    -0.14261184674068425,
                    1.0000000000000002
                ]
            },
            "intrinsic": {
                "type": "pinhole",
                "focal_length": [
                    2368.3885645558503,
                    2364.775446260659
                ],
                "principal_point": [
                    1023.5,
                    767.5
                ],
                "distortion_model": "brown",                
                "distortion_coeffs": [
                    -0.18073668989736824,
                    0.12354259489517272,
                    0.0010549736922921442,
                    -0.00003499145176613645,
                    0.04489462478983391
                ],
                "cut_angle_lower": [],
                "cut_angle_upper": []
            }
        },
        {
            "name": "top_mid",
            "extrinsic": {
                "elements": [
                    0.04606459694705569,
                    0.9988330364531003,
                    0.014512690928439211,
                    0,
                    0.02776824380672721,
                    0.013242141819897059,
                    -0.9995266731387968,
                    0,
                    -0.998552441058553,
                    0.046445785275962485,
                    -0.027125845352793818,
                    0,
                    -0.07214367387671071,
                    0.21658406915669223,
                    -0.2675007838890805,
                    1
                ]
            },
            "intrinsic": {
                "type": "pinhole",
                "focal_length": [
                    2422.2559456651516,
                    2416.3771161529075
                ],
                "principal_point": [
                    1023.5,
                    767.5
                ],
                "distortion_model": "brown",
                "distortion_coeffs": [
                    -0.18556961331502048,
                    0.10583281806661132,
                    -0.001467792676586933,
                    -0.000699103163397734,
                    0.1153640967021715
                ],
                "cut_angle_lower": [],
                "cut_angle_upper": []
            }
        },
        {
            "name": "top_right",
            "extrinsic": {
                "elements": [
                    0.5257683318264426,
                    0.8506074342518913,
                    -0.005886768540918364,
                    0,
                    -0.0030026807511701883,
                    -0.005064559610083951,
                    -0.9999826669219133,
                    0,
                    -0.8506225044969986,
                    0.5257768947294758,
                    -0.00010868248799881679,
                    0,
                    -0.061373007191811796,
                    0.6639156039020208,
                    -0.13078671127573838,
                    1.0000000000000002
                ]
            },
            "intrinsic": {
                "type": "pinhole",
                "focal_length": [
                    2365.420466392311,
                    2361.2991996525857
                ],
                "principal_point": [
                    1023.5,
                    767.5
                ],
                "distortion_model": "brown",
                "distortion_coeffs": [
                    -0.17409497767201468,
                    0.08401455091460913,
                    0.00011858287852733618,
                    0.0002621776167037798,
                    0.10365871528250893
                ],
                "cut_angle_lower": [],
                "cut_angle_upper": []
            }
        }
    ]
}
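To sanity-check a calibration file before import, you can parse it and reshape each camera's 16-element extrinsic list into a 4x4 matrix. The sketch below reshapes row by row; note that in the sample above the translation components occupy the last four elements, so verify the row- versus column-major convention against your own data before relying on the matrices.

```python
import json

def load_calibration(path):
    """Parse a calibration .json and return, per camera, the extrinsic as a
    4x4 nested list (reshaped row by row) plus the intrinsic dict as-is."""
    with open(path) as f:
        doc = json.load(f)
    cameras = {}
    for entry in doc["calibration"]:
        if "extrinsic" not in entry:  # skip the bare "las" entry
            continue
        e = entry["extrinsic"]["elements"]
        assert len(e) == 16, f"expected 16 extrinsic elements, got {len(e)}"
        cameras[entry["name"]] = {
            "extrinsic": [e[i:i + 4] for i in range(0, 16, 4)],
            "intrinsic": entry["intrinsic"],
        }
    return cameras
```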

Ego Vehicle Data

If available, you may include motion data for the ego vehicle. This enables the 3DMSF "Merge Point Cloud" functionality, allowing you to see all frames at once.

Ego vehicle data must be placed in a .json file in a subfolder named ego_data. Each file must have the same filename as the related .las file.

ROOT

├──lidar
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.las
│  └──00001-39586f9d59004284a7114a68825e8eec.las

├──<other subfolders>

└──ego_data

   ├──00000-ca9a282c9e77460f8360f564131a8af5.json
   └──00001-39586f9d59004284a7114a68825e8eec.json

Ego Vehicle Data Format

The following is a sample ego vehicle data .json file in the format accepted by the 3DMSF tool:

Sample ego vehicle data .json
{
    "ego": {
        "utmHeading_deg": 0,
        "utmX_m": 0,
        "utmY_m": 0,
        "utmZ_m": 0,
        "transformationMatrix": [
            0.9999929255939899,
            4.728399942020017e-05,
            -0.003761186806903719,
            11.892164028919069,
            -3.5267784361846035e-05,
            0.9999948960054691,
            0.003194795642111974,
            -0.010200761971645989,
            0.0037613186725421227,
            -0.003194640392105233,
            0.9999878233031683,
            0.0471942646689083,
            0.0,
            0.0,
            0.0,
            1.0
        ],
        "timestamp_epoch_ns": 1725652171522264150
    }
}

The following is an explanation of each field in the ego vehicle .json file:

Ego vehicle data file field explanation

utmX_m: The distance the ego vehicle has moved along the x-axis (in meters) with respect to the first frame.

utmY_m: The distance the ego vehicle has moved along the y-axis (in meters) with respect to the first frame.

utmZ_m: The distance the ego vehicle has moved along the z-axis (in meters) with respect to the first frame.

utmHeading_deg: The angle of rotation around the z-axis (yaw), in degrees.

transformationMatrix: The displacement of the current frame's LiDAR coordinate system from the reference frame's LiDAR coordinate system. The matrix is in row-major order.

Element Order : [ R11, R12, R13, Tx,
                  R21, R22, R23, Ty,
                  R31, R32, R33, Tz,
                  0,   0,   0,   1 ]

If a complete transformationMatrix is provided, utmX_m, utmY_m, utmZ_m, and utmHeading_deg are ignored; this is why those values are set to zero in the sample above.

timestamp_epoch_ns: The epoch timestamp of this frame's capture, in nanoseconds.
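Following the element order above, applying the row-major matrix to a point is a straightforward dot product. The helper below is a hypothetical sketch mapping a point from the current frame into the reference frame's coordinate system:

```python
def transform_point(elements, point):
    """Apply a flat, row-major 4x4 transformation
    ([R11, R12, R13, Tx, R21, ...]) to an (x, y, z) point."""
    m = elements
    x, y, z = point
    return (
        m[0] * x + m[1] * y + m[2] * z + m[3],
        m[4] * x + m[5] * y + m[6] * z + m[7],
        m[8] * x + m[9] * y + m[10] * z + m[11],
    )

# Example: identity rotation with a 5 m shift along x moves (1, 2, 3) to (6, 2, 3).
identity_shift_x = [1, 0, 0, 5,
                    0, 1, 0, 0,
                    0, 0, 1, 0,
                    0, 0, 0, 1]
```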


A folder with all optional data included would therefore look like the following:

ROOT

├──lidar
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.las
│  └──00001-39586f9d59004284a7114a68825e8eec.las

├──CAM_BACK
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_BACK_LEFT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_BACK_RIGHT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_FRONT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_FRONT_LEFT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──CAM_FRONT_RIGHT
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.jpg
│  └──00001-39586f9d59004284a7114a68825e8eec.jpg

├──ego_data
│  │
│  ├──00000-ca9a282c9e77460f8360f564131a8af5.json
│  └──00001-39586f9d59004284a7114a68825e8eec.json

└──calibration

   ├──00000-ca9a282c9e77460f8360f564131a8af5.json
   └──00001-39586f9d59004284a7114a68825e8eec.json
