Labeled datasets contain all of the ground truth labels for your dataset
Overview
Labeled datasets contain your ground truth labels. A labeled dataset belongs to a Project and consists of multiple labeled frames.
A labeled frame is one logical "frame" of data, such as an image from a camera stream. Each frame can contain one or more media/sensor inputs, zero or more ground truth labels, and arbitrary user-provided metadata.
For example, in a 2D classification case, a frame would contain the image, its label, and any associated metadata, whereas in a 3D object detection use case a frame might contain multiple images, point clouds, labels, and metadata.
For real examples of uploading labeled data, please look at our quickstart guides!
Prerequisites to Uploading Labeled Data
To ensure the following steps go smoothly, this guide assumes you already:
Have URLs for your raw data (images, point clouds, etc.)
See our data sharing docs for more details on URL requirements.
Have access to your labels for your raw data
To view your labeled data once uploaded, you will have to make sure that you have selected and set up the appropriate data sharing method for your team.
Creating and Formatting Your Labeled Data
To ingest a labeled dataset, there are two main objects you'll work with: a LabeledDataset, which represents the dataset as a whole, and LabeledFrames, which represent individual datapoints.
For each datapoint, we create a LabeledFrame and add it to the LabeledDataset in order to build the dataset that we upload into Aquarium.
This usually means looping through your data and creating LabeledFrames to add to the LabeledDataset object.
If you have generated your own embeddings and want to use them during your labeled data uploads, please also see this section for additional guidance!
Defining these objects looks like this:
labeled_dataset = al.LabeledDataset()

for frame_id, frame_data in my_list_of_data:
    # Frames must have a unique frame_id
    frame = al.LabeledFrame(frame_id=frame_id)
    ...
    labeled_dataset.add_frame(frame)
Once you've defined your frame, you need to associate some data with it! In the next sections, we show you how to add your main form of input data to the frame (images, point clouds, etc.), and then how to associate ground truth labels with that frame.
Adding Data to Your Labeled Frame
Each LabeledFrame in your dataset can contain one or more input pieces of data. In many computer vision tasks, this may be a single image. In a robotics or self-driving task, this may be a full suite of camera images, lidar point clouds, and radar scans.
Here are some common data types, their expected formats, and how to work with them in Aquarium:
Your ML task utilizes images and you would like to add an image to your labeled data
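A minimal sketch using the add_image call, which is covered in more detail later on this page (the URL here is a placeholder):
# Attach a single image to the frame by URL
labeled_frame.add_image(image_url='https://example.com/imgs/frame_001.jpg')
Frames can also carry arbitrary user-provided metadata, added with add_user_metadata, which you can later filter and query on: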
labeled_frame.add_user_metadata(
    key='deployment_id',
    val=value
)

# In the case of nullable values, you can also provide an explicit type
labeled_frame.add_user_metadata(
    key='nullable_field',
    val=maybe_null,
    val_type='int'
)
View the Python API docs for more information on the parameters to add_user_metadata
Your ML task utilizes geospatial data and you want to add the data as context for analysis and filtering
Geospatial metadata is indexed on Aquarium's side, so make sure you aren't sharing any private information.
# EPSG:4326 WGS84 Latitude/Longitude coordinates
labeled_frame.add_geo_latlong_data(
    lat=37.044030,
    lon=-112.526130
)
View the Python API docs for more information on the parameters to add_geo_latlong_data
If your model also works against spectrograms, you can provide both an audio data input and an image. The Aquarium UI will then present both alongside each other.
Aquarium supports audio files that are natively playable in browsers. For maximum compatibility, we recommend providing .mp3 files.
labeled_frame.add_audio(
    # A URL to load the mp3 file from
    audio_url='',
    # Optional: ISO-formatted date-time string
    date_captured=''
)
View the Python API docs for more information on the parameters to add_audio
Because point cloud formats haven't standardized as much as image formats yet, we currently support two formats. Please reach out if you have a different representation; we'd be more than happy to support your in-house representation.
PCL / PCD
Aquarium supports the *.pcd file format used by the PCL library, including the binary and compressed binary encodings. Numeric values for the following column names are expected: x, y, z, intensity (optional), range (optional).
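A sketch of what adding a PCD point cloud to a frame might look like, assuming a method named add_point_cloud_pcd (consult the Python API docs for the exact name and signature):
frame.add_point_cloud_pcd(
    # A unique name to refer to this sensor by (assumed parameter)
    sensor_id='lidar_1',
    # A URL to load the .pcd file from (assumed parameter name)
    pcd_url='https://example.com/scans/scan_001.pcd',
    # Optional: coordinate frame to interpret the points in (assumed)
    coord_frame_id='lidar_1_coordinate_frame'
)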
KITTI-style Binary Files
Similar to the raw KITTI lidar format, we can also take in raw, dense binary files of little-endian values. This is in many ways more fragile, but requires no third-party libraries.
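A sketch of what this might look like, assuming a method named add_point_cloud_bins with separate URLs per channel (consult the Python API docs for the exact name and signature):
frame.add_point_cloud_bins(
    # A unique name to refer to this sensor by (assumed parameter)
    sensor_id='lidar_1',
    # URL to a dense little-endian binary file of xyz points (assumed)
    pointcloud_url='https://example.com/scans/scan_001_points.bin',
    # Optional: URLs to matching binary files of per-point values (assumed)
    intensity_url='https://example.com/scans/scan_001_intensity.bin',
    range_url='https://example.com/scans/scan_001_range.bin'
)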
In robotics applications, you often have multiple sensors in multiple coordinate frames. Aquarium supports specifying different coordinate frames, which will be used when interpreting 3D data inputs and labels.
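As a sketch, registering a 3D coordinate frame might look like this, assuming a method named add_coordinate_frame_3d that mirrors the add_coordinate_frame_2d call shown below (check the Python API docs for the exact signature):
frame.add_coordinate_frame_3d(
    # String identifier for this coordinate frame (assumed parameter)
    coord_frame_id='lidar_1_coordinate_frame',
    # Optional: Dict of the form {x, y, z} (assumed)
    position={'x': 0.0, 'y': 0.0, 'z': 1.6},
    # Optional: Quaternion rotation dict of the form {w, x, y, z} (assumed)
    orientation={'w': 1.0, 'x': 0.0, 'y': 0.0, 'z': 0.0},
    # Optional: String id of the parent coordinate frame (assumed)
    parent_frame_id='world'
)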
To add a 2D bounding box label to an image on the frame:
# top, left, width, and height are in pixels
labeled_frame.add_label_2d_bbox(
    label_id='unique_id_for_this_label',
    classification='dog',
    top=200,
    left=300,
    width=250,
    height=150
)
View the Python API docs for more information on the parameters to add_label_2d_bbox
Aquarium supports 3D cuboid labels, with 6-DOF position and orientation.
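A sketch of what adding a cuboid label might look like, assuming a method named add_label_3d_cuboid (consult the Python API docs for the exact name and signature):
frame.add_label_3d_cuboid(
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='car',
    # Center position, as a dict of the form {x, y, z} (assumed)
    position={'x': 4.0, 'y': 1.5, 'z': 0.5},
    # Extent along each axis, in the units of your point cloud (assumed)
    dimensions={'x': 4.5, 'y': 1.8, 'z': 1.5},
    # Quaternion rotation dict of the form {w, x, y, z} (assumed)
    rotation={'w': 1.0, 'x': 0.0, 'y': 0.0, 'z': 0.0},
    # Optional: coordinate frame this label is expressed in (assumed)
    coord_frame_id='world'
)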
If you have images attached to your frames alongside 3D cuboids, we can project the cuboids onto the images! First you will need to add a 2D coordinate frame:
frame.add_coordinate_frame_2d(
    # String identifier for this coordinate frame.
    coord_frame_id="camera_1_coordinate_frame",
    # Focal length x in pixels.
    fx=fx,
    # Focal length y in pixels.
    fy=fy,
    # Optional: Either "fisheye" for the fisheye model,
    # or "brown_conrady" for the pinhole model with
    # Brown-Conrady distortion.
    camera_model=camera_model,
    # Optional: Dict of the form {x, y, z}.
    position=position,
    # Optional: Quaternion rotation dict of the form {w, x, y, z}.
    orientation=orientation,
    # Optional: 4x4 row-major order camera matrix mapping
    # 3D world space to camera space (x right, y down, z forward).
    # Keep in mind, if you pass in the camera matrix it will stack
    # on top of the position/orientation you pass in as well. This
    # is only needed if you cannot properly represent your camera
    # using the position/orientation parameters.
    camera_matrix=camera_matrix,
    # Optional: optical center pixel x coordinate.
    cx=cx,
    # Optional: optical center pixel y coordinate.
    cy=cy,
    # Optional: k1 radial distortion coefficient (Brown-Conrady, fisheye).
    k1=k1,
    # Optional: k2 radial distortion coefficient (Brown-Conrady, fisheye).
    k2=k2,
    # Optional: k3 radial distortion coefficient (Brown-Conrady, fisheye).
    k3=k3,
    # Optional: k4 radial distortion coefficient (Brown-Conrady, fisheye).
    k4=k4,
    # Optional: k5 radial distortion coefficient (Brown-Conrady).
    k5=k5,
    # Optional: k6 radial distortion coefficient (Brown-Conrady).
    k6=k6,
    # Optional: p1 tangential distortion coefficient (Brown-Conrady).
    p1=p1,
    # Optional: p2 tangential distortion coefficient (Brown-Conrady).
    p2=p2,
    # Optional: s1 thin prism distortion coefficient (Brown-Conrady).
    s1=s1,
    # Optional: s2 thin prism distortion coefficient (Brown-Conrady).
    s2=s2,
    # Optional: s3 thin prism distortion coefficient (Brown-Conrady).
    s3=s3,
    # Optional: s4 thin prism distortion coefficient (Brown-Conrady).
    s4=s4,
    # Optional: camera skew coefficient (fisheye).
    skew=skew,
    # Optional: String id of the parent coordinate frame.
    parent_frame_id=parent_frame_id
)
View the Python API docs for more information on the parameters to add_coordinate_frame_2d
And then you can use this 2D coordinate frame when adding your image to the frame:
frame.add_image(
    # A unique name to refer to this image by
    sensor_id='camera_1',
    # A URL to load the image by
    image_url='',
    # A URL to a compressed form of the image for faster loading in browsers.
    # It must be the same pixel dimensions as the original image.
    preview_url='',
    # Optional: ISO-formatted date-time string
    date_captured='',
    # Optional: width of image in pixels, will be inferred otherwise
    width=1280,
    # Optional: height of image in pixels, will be inferred otherwise
    height=720,
    # Optional: 2D coordinate frame to use for this image
    coord_frame_id="camera_1_coordinate_frame",
)
Now you will be able to see your 3D cuboids projected onto your images and have camera distortion properly accounted for!
2D Semantic Segmentation labels are represented by an image mask, where each pixel is assigned an integer value in the range of [0,255]. For efficient representation across both servers and browsers, Aquarium expects label masks to be encoded as grey-scale PNGs of the same dimension as the underlying image.
If you have your label masks in the form of a numpy ndarray, we recommend using the pillow python library to convert it into a PNG:
!pip3 install pillow
from PIL import Image
...
# 2D array, where each value is [0,255] corresponding to a class_id
# in the project's label_class_map.
int_arr = your_2d_ndarray.astype('uint8')
Image.fromarray(int_arr).save(f"{imagename}.png")
Because this will be loaded dynamically by the web-app for visualization, this image mask will need to be hosted somewhere. To upload it as an asset to Aquarium, you can use the following utility:
# The image mask needs to be a grey-scale PNG
mask_url = al_client.upload_asset_from_filepath(
    project_id='',
    dataset_id='',
    filepath=''
)
View the Python API docs for more information on the parameters to upload_asset_from_filepath
This utility hosts and stores a copy of the label mask (not the underlying RGB image) with Aquarium. If you would like your label masks to remain outside of Aquarium, chat with us and we'll help figure out a good setup.
Now, we add the label to the frame like any other label type:
frame.add_label_2d_semseg(
    # The sensor id of the image this label corresponds to
    sensor_id='some_camera',
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    # Expected to be a PNG, with values in [0,255] that correspond
    # to the class_id of classes in the label_class_map
    mask_url='url_to_greyscale_png'
)
View the Python API docs for more information on the parameters to add_label_2d_semseg
Aquarium represents instance segmentation labels as 2D Polygon Lists. Each label is represented by one or more polygons, which do not need to be connected.
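A sketch of what this might look like, assuming a method named add_label_2d_polygon_list that takes a list of polygons, each defined by its vertices (check the Python API docs for the exact name and signature):
frame.add_label_2d_polygon_list(
    # The sensor id of the image this label corresponds to
    sensor_id='some_camera',
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog',
    # One or more polygons; they do not need to be connected.
    # Each vertex is an (x, y) pixel coordinate. (assumed structure)
    polygons=[
        {'vertices': [(110, 210), (305, 215), (300, 420), (105, 410)]},
        {'vertices': [(330, 220), (410, 225), (400, 330)]},
    ]
)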
When you add metadata fields to your labels, one extra step is needed so that you can query and search those fields in the Analysis view!
Add the code below to your script, right after you call create_dataset():
# The next section goes into detail on how to call this;
# create the dataset before updating the metadata schema
al_client.create_dataset(
    PROJECT_NAME,
    DATASET_NAME,
    dataset=labeled_dataset
)

# This method takes a list of dicts, where each dict provides the
# name of the field and the type of the field
al_client.update_dataset_object_metadata_schema(
    PROJECT_NAME,
    DATASET_NAME,
    [
        {"name": 'METADATA_FIELD_NAME_1', "type": "STRING"},
        {"name": 'METADATA_FIELD_NAME_2', "type": "STRING"},
        {"name": 'METADATA_FIELD_NAME_3', "type": "STRING"}
    ]
)
You can also run update_dataset_object_metadata_schema() on its own after an upload to make the metadata queryable.
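For example, a standalone call would reuse the same project, dataset, and field names as in the snippet above:
al_client.update_dataset_object_metadata_schema(
    PROJECT_NAME,
    DATASET_NAME,
    [
        {"name": 'METADATA_FIELD_NAME_1', "type": "STRING"},
        {"name": 'METADATA_FIELD_NAME_2', "type": "STRING"}
    ]
)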
In the API docs you can see the other operations associated with a LabeledFrame.
Now that we've discussed the general steps for adding labeled data, here is what this would look like for a 2D classification example:
# Create the frame with a unique frame_id (as shown earlier)
labeled_frame = al.LabeledFrame(frame_id=frame_id)

# Add an image to the frame
image_url = "https://storage.googleapis.com/aquarium-public/quickstart/pets/imgs/" + entry['file_name']
labeled_frame.add_image(image_url=image_url)

# Add the ground truth classification label to the frame
label_id = frame_id + '_gt'
labeled_frame.add_label_2d_classification(
    label_id=label_id,
    classification=entry['class_name']
)

# Once you have created the frame, add it to the dataset you created
labeled_dataset.add_frame(labeled_frame)
Uploading Your Labeled Dataset
Now that we have everything all set up, let's submit your new labeled dataset to Aquarium!
Aquarium does some processing of your data after submission, like indexing metadata and possibly calculating embeddings, so you may see a delay before frames show up in the UI. You can view some examples of what to expect, as well as how to troubleshoot your upload, here!
Submitting Your Dataset
You can submit your LabeledDataset to Aquarium by calling .create_dataset().
To spot-check your data immediately, you can set the preview_first_frame flag to True; a link to a preview frame will then appear in the console, which allows you to make sure your data and labels look right.
This is an example of what the create_dataset() call will look like:
DATASET_NAME = 'labels_v1'

# In order to create a dataset in Aquarium you must provide:
# - the name of your project
# - the name you would like for your labeled dataset
# - the LabeledDataset object you have created and added frames to
al_client.create_dataset(
    PROJECT_NAME,
    DATASET_NAME,
    dataset=labeled_dataset
)
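To get the preview link mentioned above, you can pass the preview_first_frame flag; here we assume it is passed as a keyword argument to create_dataset():
al_client.create_dataset(
    PROJECT_NAME,
    DATASET_NAME,
    dataset=labeled_dataset,
    # Prints a link in the console to a preview frame so you can
    # spot-check that data and labels look right (assumed keyword)
    preview_first_frame=True
)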
After kicking off your upload, processing can take anywhere from minutes to multiple hours depending on your dataset size.
You can monitor your uploads under the "Streaming Uploads" tab in the project view. Here is a guide on how to find that page.
Once the upload completes, you'll be able to see your project on the Project page with an updated count of how many labeled datasets have been added to the Project (the count also includes the number of unlabeled datasets).
Additional Features
Multiple Sensor IDs
Sensor IDs are used to reference data points that exist on a frame. They are usually omitted from frames and labels, but become necessary when more than one data point of the same type exists on a single frame. A good example of this is a frame with multiple camera viewpoints.
labeled_frame.add_image(sensor_id='camera_front', image_url='')
labeled_frame.add_image(sensor_id='camera_right', image_url='')
labeled_frame.add_image(sensor_id='camera_left', image_url='')

# 2D BBOX label on the `camera_front` image
labeled_frame.add_label_2d_bbox(
    sensor_id='camera_front',
    label_id='unique_id_for_this_label',
    classification='dog',
    top=200,
    left=300,
    width=250,
    height=150
)

# 2D BBOX label on the `camera_left` image
# (note that each label needs its own unique label_id)
labeled_frame.add_label_2d_bbox(
    sensor_id='camera_left',
    label_id='another_unique_id_for_this_label',
    classification='cat',
    top=200,
    left=300,
    width=250,
    height=150
)

# Inferences MUST match the same sensor id as the
# corresponding base frame sensor id.
inference_frame.add_inference_2d_bbox(
    sensor_id='camera_front',
    label_id='abcd_inference',
    classification='cat',
    top=200,
    left=300,
    width=250,
    height=150,
    confidence=0.85
)
Quickstart Examples
For examples of how to upload labeled datasets, check out our quickstart examples.