Unlabeled datasets consist of unlabeled datapoints, with or without model results/predictions.
Overview
Unlabeled datasets can be uploaded to take advantage of Aquarium's Collection Campaign functionality. Collection Campaigns allow you to group together elements within a dataset, and then use that group to search through the unlabeled dataset to find similar examples.
Uploading an Unlabeled Dataset is a very similar process to uploading a Labeled Dataset, but instead of LabeledDataset and LabeledFrame, you will use UnlabeledDataset and UnlabeledFrame. Because of this, much of the upload documentation below will look similar to the documentation for Labeled Datasets.
Unlabeled datasets can also contain inferences. For example, your team may have collected images for your specific task and run them through a model to produce predictions (such as bounding box crops), even though the images have not been labeled. In this case, the upload still looks very similar to a labeled dataset upload, but you use the model predictions to populate the required fields like 'classification'.
Steps to upload an unlabeled dataset:
1. Ensure the base labeled dataset has already been uploaded
2. (Optional) Acquire inferences for the unlabeled data
3. Wait for the labeled dataset upload to complete, then find its embedding version (see next section)
4. Create an UnlabeledDataset
5. Add UnlabeledFrames, created with the inference data, to the UnlabeledDataset
6. Upload the UnlabeledDataset
Locating The Embedding Version
An important difference when uploading an unlabeled dataset is that you add the embedding version during the data upload. You'll need the embedding version that was issued for the base labeled dataset you are working with; you pass this in as an argument to the .create_dataset() call.
You will call .create_dataset() just like you did when uploading a LabeledDataset object, but now you'll pass in your UnlabeledDataset object along with the embedding version (see the full example under Submitting Your Dataset below).
Because you need the embedding version to upload unlabeled data, you need to wait until the value has been generated before you can upload your unlabeled dataset.
Where Do I Find The Embedding Version?
In order for similarity search to work, your search dataset and your seed dataset must have compatible embedding spaces. To specify this explicitly, you will be using embedding versions (represented by UUIDs).
Note that the Get Version button will be disabled if your seed dataset is still post-processing. This is why the usual order is to upload the labeled dataset first, then retrieve the embedding version and upload the unlabeled dataset.
To determine the embedding version for your seed dataset, go to the Project Details page and select the Embeddings tab.
Select the name of your seed dataset from the dropdown and click Get Version.
The UUID that appears is the embedding version that you will use in the following section, when uploading an unlabeled indexed dataset via the Python client.
Creating and Formatting Your Unlabeled Data
To ingest an unlabeled dataset, there are two main objects you'll work with: UnlabeledDataset and UnlabeledFrame.
For each datapoint, we create an UnlabeledFrame and add it to the UnlabeledDataset in order to build up the dataset that we upload into Aquarium.
This usually means looping through your data, creating an unlabeled frame for each datapoint, and adding it to your unlabeled dataset.
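As a minimal sketch of that loop (file_names here is a hypothetical list of image file names, and al is the imported aquariumlearning client):

import aquariumlearning as al

unlabeled_dataset = al.UnlabeledDataset()
for file_name in file_names:
    # frame ids must be unique across the dataset
    frame_id = file_name.split('.jpg')[0]
    unlabeled_frame = al.UnlabeledFrame(frame_id=frame_id)
    # ... attach input data and inference-derived labels here (see below) ...
    unlabeled_dataset.add_frame(unlabeled_frame)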
If you have generated your own embeddings and want to use them during your unlabeled data uploads, please also see this section for additional guidance!
Defining these objects looks like this:
# defining the UnlabeledDataset object
unlabeled_dataset = al.UnlabeledDataset()

# defining the UnlabeledFrame object
# frames must have a unique id
# unlabeled frames should contain new data like images or point clouds,
# so the frame id should be completely unique and uncoupled from any
# labeled or inference data
frame_id = FILE_OR_IMAGE_NAME.split('.jpg')[0]
unlabeled_frame = al.UnlabeledFrame(frame_id=frame_id)
Once you've defined your frame, we need to associate some data with it! In the next sections, we show you how to add your main form of input data to the frame (images, point clouds, etc).
Adding Data to Your Unlabeled Frame
Each UnlabeledFrame in your dataset can contain one or more input pieces of data. In many computer vision tasks, this may be a single image. In a robotics or self-driving task, this may be a full suite of camera images, lidar point clouds, and radar scans.
Here are some common data types, their expected formats, and how to work with them in Aquarium:
unlabeled_frame.add_image(
    # A URL to load the image by
    image_url='',
    # A URL to a compressed form of the image for faster loading in browsers.
    # It must be the same pixel dimensions as the original image.
    preview_url='',
    # Optional: ISO formatted date-time string
    date_captured='',
    # Optional: width of image in pixels, will be inferred otherwise
    width=1280,
    # Optional: height of image in pixels, will be inferred otherwise
    height=720
)
unlabeled_frame.add_user_metadata('deployment_id', value)

# In the case of nullable values, you can also provide an explicit type
unlabeled_frame.add_user_metadata('nullable_field', maybe_null, val_type='int')
Metadata is indexed on Aquarium's side, so make sure you aren't sharing any private information.
Geospatial metadata is likewise indexed on Aquarium's side, so make sure you aren't sharing any private information.
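As a sketch, attaching a latitude/longitude to a frame looks like the following (this assumes the client's add_geo_latlong_data helper; check the Python API docs for the exact signature):

unlabeled_frame.add_geo_latlong_data(
    # latitude and longitude in decimal degrees (placeholder values)
    lat=37.7749,
    lon=-122.4194
)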
Aquarium supports audio files that are natively playable in browsers. For maximum compatibility, we recommend providing .mp3 files.
If your model also works against spectrograms, you can provide both an audio data input and an image. The Aquarium UI will then present both alongside each other.
unlabeled_frame.add_audio(
    # A URL to load the mp3 file from
    audio_url='',
    # Optional: ISO formatted date-time string
    date_captured=''
)
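For the spectrogram case mentioned above, a minimal sketch (with placeholder URLs) is simply adding both inputs to the same frame:

# pair the audio clip with its spectrogram image on one frame
unlabeled_frame.add_audio(audio_url='https://example.com/clip.mp3')
unlabeled_frame.add_image(image_url='https://example.com/clip_spectrogram.png')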
Unfortunately, point cloud formats haven't standardized as much as image data yet. These are our currently supported formats, but please reach out if you have a different representation. We'd be more than happy to support your in-house representation.
PCL / PCD
Aquarium supports the *.pcd file format used by the PCL library, including the binary and compressed binary encodings. Numeric values for the following column names are expected: x, y, z, intensity (optional), range (optional).
unlabeled_frame.add_point_cloud_pcd(
    pcd_url='',
    # Optional: If your point cloud is relative to a specific
    # coordinate frame, you can reference it by name here.
    coord_frame_id="",
    # Optional: ISO formatted date-time string
    date_captured=''
)
KITTI-like binary files
Similar to the raw KITTI lidar formats, we can also take in raw, dense binary files of little-endian values. This is in many ways more fragile, but also requires no third party libraries.
unlabeled_frame.add_point_cloud_bins(
    # URL for the point positions:
    # float32 [x1, y1, z1, x2, y2, z2, ...]
    point_cloud_url='',
    # URL for the point intensities:
    # unsigned int32 [i1, i2, i3, ...]
    intensity_url='',
    # URL for the point ranges:
    # float32 [r1, r2, r3, ...]
    range_url='',
    # Optional: If your point cloud is relative to a specific
    # coordinate frame, you can reference it by name here.
    coord_frame_id="",
    # Optional: ISO formatted date-time string
    date_captured=''
)
Aquarium supports rendering basic 3D geometry meshes. Please reach out if you have any needs that aren't captured here.
.OBJ (Wavefront) Files
unlabeled_frame.add_obj(
    # A URL to a *.obj formatted text file
    obj_url='',
    # Optional: If your object geometry is relative to a specific
    # coordinate frame, you can reference it by name here.
    coord_frame_id="",
    # Optional: ISO formatted date-time string
    date_captured=''
)
In robotics applications, you often have multiple sensors in multiple coordinate frames. Aquarium supports specifying different coordinate frames, which will be used when interpreting 3D data inputs and labels.
unlabeled_frame.add_coordinate_frame_3d(
    coord_frame_id='robot_ego_frame',
    # Position offset of this coordinate frame
    position={'x': 0, 'y': 0, 'z': 0},
    # Rotation/Orientation of this coordinate frame,
    # represented as a quaternion
    orientation={'w': 1, 'x': 0, 'y': 0, 'z': 0},
    # Optional: string ID of the parent coordinate frame
    # that this one is relative to.
    parent_frame_id=""
)
Adding Inferences to Your Unlabeled Frame
Each unlabeled frame requires at least one label in order to be uploaded. In the case of unlabeled data, however, the data populating the label is model inference data generated on your unlabeled samples.
If you don't have the ability to generate inferences for the unlabeled data, you will have to create some kind of placeholder "label" appropriate to your task: for example, a fake classification or an arbitrary bounding box, as sketched below.
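A minimal sketch of such a placeholder, using the add_label_2d_classification call shown below ('unknown' is just an arbitrary stand-in class, and frame_id is the id you created for this frame):

# a placeholder "label" for when no inferences are available
unlabeled_frame.add_label_2d_classification(
    label_id=frame_id + '_placeholder',
    # arbitrary stand-in class; use a class from your project's label class map
    classification='unknown'
)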
Note that when attaching inference results to the unlabeled frame, we use the functions for labels, not inferences, even though the classifications/bounding boxes/etc. are being populated with inference data. For example, we use:
unlabeled_frame.add_label_2d_classification(
    label_id="unique_id_for_this_label",
    classification=PREDICTED_INFERENCE_CLASSIFICATION
)

unlabeled_frame.add_label_2d_bbox(
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification=PREDICTED_INFERENCE_CLASSIFICATION,
    # Coordinates are in absolute pixel space
    top=INFERENCE_VALUE_TOP,
    left=INFERENCE_VALUE_LEFT,
    width=INFERENCE_VALUE_WIDTH,
    height=INFERENCE_VALUE_HEIGHT
)
Here are some common label types, their expected formats, and how to work with them in Aquarium:
# Standard 2D case
unlabeled_frame.add_label_2d_classification(
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog'
)

# 3D classification
unlabeled_frame.add_label_3d_classification(
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog',
    # Optional, defaults to implicit WORLD coordinate frame
    coord_frame_id='robot_ego_frame',
)
unlabeled_frame.add_label_2d_bbox(
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog',
    # Coordinates are in absolute pixel space
    top=200,
    left=300,
    width=250,
    height=150
)
Aquarium supports 3D cuboid labels, with 6-DOF position and orientation.
unlabeled_frame.add_label_3d_cuboid(
    label_id="unique_id_for_this_label",
    classification="car",
    # XYZ dimensions of this cuboid
    dimensions=[1.0, 0.5, 0.5],
    # XYZ position of the center of this object
    position=[2.0, 2.0, 1.0],
    # An XYZW ordered object rotation quaternion
    rotation=[0.0, 0.0, 0.0, 1.0],
    # Optional: If your cuboid is relative to a specific
    # coordinate frame, you can reference it by name here.
    coord_frame_id="robot_ego_frame"
)
3D Cuboid Image Projection (Optional)
If you have images attached to your frames alongside 3D cuboids, we can project the cuboids onto the images! First you will need to add a 2D coordinate frame:
unlabeled_frame.add_coordinate_frame_2d(
    # String identifier for this coordinate frame.
    coord_frame_id="camera_1_coordinate_frame",
    # focal length x in pixels.
    fx=fx,
    # focal length y in pixels.
    fy=fy,
    # Optional: Either "fisheye" for the fisheye model,
    # or "brown_conrady" for the pinhole model with
    # Brown-Conrady distortion.
    camera_model=camera_model,
    # Optional: Dict of the form {x, y, z}.
    position=position,
    # Optional: Quaternion rotation dict of the form {w, x, y, z}.
    orientation=orientation,
    # Optional: 4x4 row major order camera matrix mapping
    # 3d world space to camera space (x right, y down, z forward).
    # Keep in mind, if you pass in the camera matrix it will stack
    # on top of the position/orientation you pass in as well. This
    # is only needed if you cannot properly represent your camera
    # using the position/orientation parameters.
    camera_matrix=camera_matrix,
    # Optional: optical center pixel x coordinate.
    cx=cx,
    # Optional: optical center pixel y coordinate.
    cy=cy,
    # Optional: k1 radial distortion coefficient (Brown-Conrady, fisheye).
    k1=k1,
    # Optional: k2 radial distortion coefficient (Brown-Conrady, fisheye).
    k2=k2,
    # Optional: k3 radial distortion coefficient (Brown-Conrady, fisheye).
    k3=k3,
    # Optional: k4 radial distortion coefficient (Brown-Conrady, fisheye).
    k4=k4,
    # Optional: k5 radial distortion coefficient (Brown-Conrady).
    k5=k5,
    # Optional: k6 radial distortion coefficient (Brown-Conrady).
    k6=k6,
    # Optional: p1 tangential distortion coefficient (Brown-Conrady).
    p1=p1,
    # Optional: p2 tangential distortion coefficient (Brown-Conrady).
    p2=p2,
    # Optional: s1 thin prism distortion coefficient (Brown-Conrady).
    s1=s1,
    # Optional: s2 thin prism distortion coefficient (Brown-Conrady).
    s2=s2,
    # Optional: s3 thin prism distortion coefficient (Brown-Conrady).
    s3=s3,
    # Optional: s4 thin prism distortion coefficient (Brown-Conrady).
    s4=s4,
    # Optional: camera skew coefficient (fisheye).
    skew=skew,
    # Optional: String id of the parent coordinate frame.
    parent_frame_id=parent_frame_id
)
View the Python API docs for more information on the parameters to add_coordinate_frame_2d.
And then you can use this 2D coordinate frame when adding your image to the frame:
unlabeled_frame.add_image(
    # A unique name to refer to this image by
    sensor_id='camera_1',
    # A URL to load the image by
    image_url='',
    # A URL to a compressed form of the image for faster loading in browsers.
    # It must be the same pixel dimensions as the original image.
    preview_url='',
    # Optional: ISO formatted date-time string
    date_captured='',
    # Optional: width of image in pixels, will be inferred otherwise
    width=1280,
    # Optional: height of image in pixels, will be inferred otherwise
    height=720,
    # Optional: 2D coordinate frame to use for this image
    coord_frame_id="camera_1_coordinate_frame",
)
Now you will be able to see your 3D cuboids projected onto your images and have camera distortion properly accounted for!
2D Semantic Segmentation labels are represented by an image mask, where each pixel is assigned an integer value in the range of [0,255]. For efficient representation across both servers and browsers, Aquarium expects label masks to be encoded as grey-scale PNGs of the same dimension as the underlying image.
If you have your label masks in the form of a numpy ndarray, we recommend using the Pillow Python library to convert them into PNGs:
! pip3 install pillow

from PIL import Image
...
# 2D array, where each value is [0,255] corresponding to a class_id
# in the project's label_class_map.
int_arr = your_2d_ndarray.astype('uint8')
Image.fromarray(int_arr).save(f"{imagename}.png")
Because this will be loaded dynamically by the web app for visualization, the image mask will need to be hosted somewhere. You can upload it as an asset to Aquarium using the asset upload utility in the Python client.
This utility hosts and stores a copy of the label mask (not the underlying RGB image) with Aquarium. If you would like your label masks to remain outside of Aquarium, chat with us and we'll help figure out a good setup.
Now, we add the label to the frame like any other label type:
unlabeled_frame.add_label_2d_semseg(
    # The sensor id of the image this label corresponds to
    sensor_id='some_camera',
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    # Expected to be a PNG, with values in [0,255] that correspond
    # to the class_id of classes in the label_class_map
    mask_url='url_to_greyscale_png'
)
Aquarium represents instance segmentation labels as 2D Polygon Lists. Each label is represented by one or more polygons, which do not need to be connected.
unlabeled_frame.add_label_2d_polygon_list(
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog',
    # All coordinates are in absolute pixel space
    #
    # These are polygon vertices, not a line string. This means
    # that no vertices are duplicated in the lists.
    polygons=[
        {'vertices': [(x1, y1), (x2, y2), ...]},
        {'vertices': [(x1, y1), (x2, y2), ...]}
    ],
    # Optional: indicate the center position of the object
    center=[center_x, center_y]
)
Putting It All Together
We offer a variety of options when it comes to working with unlabeled data and we've elaborated on some of the more nuanced operations in another section below.
In the API docs you can see the other operations associated with an UnlabeledFrame.
Now that we've discussed the general steps for adding unlabeled data, here is what this would look like for a 2D classification task:
# Add an image to the frame
image_url = "https://storage.googleapis.com/aquarium-public/quickstart/pets/imgs/" + entry['file_name']
unlabeled_frame.add_image(image_url=image_url)

# Add the classification label to the frame, populated
# with the model's predicted class
label_id = frame_id + '_gt'
unlabeled_frame.add_label_2d_classification(
    label_id=label_id,
    classification="CLASSIFICATION_FROM_INFERENCE"
)

# once you have created the frame, add it to the dataset you created
unlabeled_dataset.add_frame(unlabeled_frame)
Uploading Your Unlabeled Dataset
Now that we have everything all set up, let's submit your new unlabeled dataset to Aquarium!
Aquarium does some processing of your data after it's submitted, like indexing metadata and possibly calculating embeddings, so you may see a delay before frames show up in the UI. You can view some examples of what to expect, as well as how to troubleshoot your upload, here!
Submitting Your Dataset
You can submit your UnlabeledDataset to be uploaded into Aquarium by calling .create_dataset(). It is the same function we use for LabeledDataset uploads.
Please use a unique name for your unlabeled dataset. Do not use the same name as an existing labeled dataset.
This is an example of what the create_dataset() call will look like:
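A minimal sketch, assuming a client object named al_client and placeholder project/dataset names; the exact keyword for the embedding version may differ from what's shown here, so check the Python API docs:

al_client.create_dataset(
    'your_project_name',
    'your_unique_unlabeled_dataset_name',
    dataset=unlabeled_dataset,
    # the embedding version UUID retrieved from the Embeddings tab;
    # this keyword name is an assumption, consult the API docs
    embedding_version_uuid='YOUR_EMBEDDING_VERSION_UUID',
)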