Confusion Matrix Scripting
In addition to exploring model failures in the UI, you can script directly against the entries that make up the confusion matrix.
Full Python client documentation is available; a brief example follows.
import aquariumlearning as al

PROJECT = 'YOUR_PROJECT'
INF_ID = 'experiment_1'
DATASET_ID = 'dataset_name'

al_client = al.Client()
al_client.set_credentials(api_key="YOUR_API_KEY")
metrics_manager = al_client.get_metrics_manager(PROJECT)
# Specify your queries here. The union of the query results will be returned.
# See the Python client docs for the exhaustive list of query types.
queries = [
    # All confusions
    metrics_manager.make_confusions_query(),
    # A specific cell: a ground truth class confused as an inference class
    metrics_manager.make_cell_query('gt_class', 'inf_class'),
    # A full row/column: everything confused as a given inference class
    metrics_manager.make_confused_as_query('inf_class')
]
confusions_opts = {
    'confidence_threshold': 0.5,
    'iou_threshold': 0.5,
    'queries': queries,
    'ordering': metrics_manager.ORDER_CONF_DESC
}
confusions = metrics_manager.fetch_confusions(DATASET_ID, INF_ID, confusions_opts) # type: ignore
print('num_results:', len(confusions['rows']))
print(confusions['rows'][0])
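The structure of each returned row is defined in the Python client docs. As a quick sketch, assuming each row is a dict with hypothetical 'gt_label' and 'inf_label' keys (print a row first, as above, to confirm the real field names), you could tally the most frequent confusion pairs like this:

from collections import Counter

# Tally confusions per (ground truth, inference) class pair.
# NOTE: 'gt_label' and 'inf_label' are hypothetical keys used for
# illustration; inspect confusions['rows'][0] to confirm the actual
# schema returned by your client version.
pair_counts = Counter(
    (row.get('gt_label'), row.get('inf_label'))
    for row in confusions['rows']
)

# Print the ten most common confusion pairs.
for (gt, inf), count in pair_counts.most_common(10):
    print(f'{gt} -> {inf}: {count}')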