Export Labels

Export labeled datasets to use in ML projects

Exporting Labels

  1. In your labeling project, navigate to the "Review" sub-tab.

  2. Click the "Export Project" button in the top-right.

  3. Select the desired export type. See the next section for the supported export formats.

  4. Decide how you want to split your dataset.

  5. If there's an option, select which label types to export (rectangles, polygons, etc.).

  6. If there's an option, select which label statuses to export (Submitted, Approved, etc.). See Approving or Rejecting labels.

  7. Click "Export Now".

You will be notified by email and in-app notification when your export is ready for download.

Supported Data Export Formats

Sense supports several popular formats for exporting labeled datasets.

  • Sense's own JSON format, used to import project data into another Sense account.

  • Create ML - Apple's framework for creating and training machine learning models. Only Rectangle, Polygon, and Classification label types can be exported to this format.

  • COCO - the annotation format of a large-scale object detection, segmentation, and captioning dataset. All label types supported by Sense can be exported to this format (see the reading sketch after this list).

  • YOLO - a real-time object detection algorithm. Only Rectangle and Polygon label types can be exported to this format.

  • Pascal VOC - the “Pattern Analysis, Statistical Modeling and Computational Learning Visual Object Classes” annotation format, widely used for object detection. Rectangle and Polygon (converted to Rectangle) label types are supported.
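
As a concrete example, a COCO export is a single JSON file whose top-level keys include "images", "annotations", and "categories". This structure is part of the standard COCO format (assumed here to apply to Sense's export as well); the file name below is hypothetical. A minimal Python sketch for reading one:

    import json
    from collections import defaultdict

    # Load a COCO-format export (hypothetical file name).
    with open("export_coco.json") as f:
        coco = json.load(f)

    # Map category ids to human-readable label names.
    categories = {c["id"]: c["name"] for c in coco["categories"]}

    # Group annotations by the image they belong to.
    annotations_by_image = defaultdict(list)
    for ann in coco["annotations"]:
        annotations_by_image[ann["image_id"]].append(ann)

    for img in coco["images"]:
        for ann in annotations_by_image[img["id"]]:
            # COCO bounding boxes are [x, y, width, height] in pixels.
            print(img["file_name"], categories[ann["category_id"]], ann["bbox"])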

Split Datasets

Sense allows you to split your dataset into separate sets when exporting your labels.

Why split datasets?

When training a computer vision or deep learning model, it's common practice to use three separate datasets: train, validation, and test.

Train datasets are composed of the data that's actually used to train a model. This is the data you want the model to see and learn from.

Validation datasets are used to see how your model is doing while it trains. Usually, after a certain number of epochs (full passes over the training data), the model is run on the validation dataset and returns an accuracy/loss score. Because the model never trains on this data, accuracy going up and loss going down here is a good indicator that the model is learning real patterns and will generalize to data it has never seen. You can adjust hyperparameters based on the model's performance on this data; a hyperparameter is a parameter whose value controls the learning process itself, such as the learning rate or batch size.
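
To make this concrete, here is a small, runnable illustration using scikit-learn (not Sense itself; the dataset is synthetic) that checks validation accuracy after each epoch:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a labeled dataset.
    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = SGDClassifier(random_state=0)
    classes = np.unique(y_train)
    for epoch in range(5):
        # One pass over the training data...
        model.partial_fit(X_train, y_train, classes=classes)
        # ...then score on held-out validation data the model never trains on.
        print(f"epoch {epoch}: validation accuracy = {model.score(X_val, y_val):.3f}")

If validation accuracy plateaus or falls while training accuracy keeps rising, that is the classic sign of overfitting, and a cue to revisit your hyperparameters.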

Test datasets are holdout data that should be used only once you have completely trained a model and want to verify that it works on data it has never seen. This is different from the validation dataset: you should not tweak hyperparameters to try to fit the model to the test data.
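
If you export an unsplit dataset and want to create these splits yourself, here is a minimal sketch of a common 80/10/10 recipe using scikit-learn's train_test_split (the images/labels lists are hypothetical stand-ins for your exported data):

    from sklearn.model_selection import train_test_split

    # Hypothetical stand-ins for exported samples and their labels.
    images = [f"image_{i}.jpg" for i in range(100)]
    labels = [i % 2 for i in range(100)]

    # First carve off a 10% test holdout...
    X_rest, X_test, y_rest, y_test = train_test_split(
        images, labels, test_size=0.10, random_state=0)

    # ...then split the remainder into train and validation
    # (10 of the original 100 samples, i.e. ~11% of the remaining 90).
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=10 / len(X_rest), random_state=0)

    print(len(X_train), len(X_val), len(X_test))  # 80 10 10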
