Building Custom Models

Prerequisites

Custom models are supported in the Sense Professional and Sense Enterprise plans.

  • An available Sense Node connected to at least one camera

  • The Sense Node protocol buffer file

  • gRPC installed

Overview

To build a custom model, you will need to integrate with the Sense Node API, a gRPC API that gives your containerized model access to a Node's associated video streams and provides endpoints to push detections and metadata back into the Sense platform.

Each container is also provided with two environment variables upon deployment: SENSE_NODE_ADDR, the address of the Node API, and SENSE_MODULE_KEY, a key used to authenticate requests.
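
For example, a Python module can read these variables and open a channel to the Node API as shown in the sketch below. The generated node_pb2_grpc module comes from the Python quick start covered later in this guide; the use of an insecure channel is an assumption for illustration.

import os

import grpc

import node_pb2_grpc  # generated from node.proto (see below)

# Both variables are injected into the container by the Sense platform.
node_addr = os.environ["SENSE_NODE_ADDR"]
module_key = os.environ["SENSE_MODULE_KEY"]

# Assumption: the Node API is reachable over a plaintext channel within the node.
channel = grpc.insecure_channel(node_addr)
stub = node_pb2_grpc.ApiStub(channel)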

This guide explores the inner workings of the Sense Node API, shows how to use it when developing your model, and walks you through deploying that model to your node in the Sense platform.

The Sense Node API

The Node API is made up of five functions that give your code access to the data it needs to analyze, along with the ability to respond with detections and various data points. The usage of each of these functions is expanded on throughout this section.

Configure(ConfigReq)

This function takes a ConfigReq message used to configure and manage the streams associated with this module. It returns an empty message.

The ConfigReq message is structured as follows:

message ConfigReq {
  // Module key to identify this configuration.
  string module_key = 1;
  // Should this set of videos be saved to disk?
  bool save_video = 2;
  // Should this set of videos be streamed in memory?
  bool stream_video = 3;
  // Set to 0 for no backlog; otherwise hold at most this many frames (per camera)
  // in memory, where the oldest frames are overwritten if the consumer falls behind.
  int32 stream_backlog = 4;
  // Set to 0 for native FPS.
  double fps = 5;
  // Set to 0 for native bitrate.
  int32 bitrate = 6;
  // Output resolution of frames/saved video.
  Resolution resolution = 7;
  // Output image format for frames ONLY.
  ImageFormat image_format = 8;
}

The Resolution and ImageFormat messages are further broken down as follows:

enum ImageFormat {
  // PNG image format, this is the default.
  PNG = 0;
  JPEG = 1;
  BITMAP = 2;
}

message Resolution {
  int32 width = 1;
  int32 height = 2;
}
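
Putting these together, a Configure call from Python might look like the following sketch, where stub and module_key come from the earlier snippet and the field values are illustrative:

import node_pb2

config = node_pb2.ConfigReq(
    module_key=module_key,      # from SENSE_MODULE_KEY
    save_video=False,           # do not persist video to disk
    stream_video=True,          # stream frames in memory
    stream_backlog=30,          # hold at most 30 frames per camera
    fps=0,                      # 0 = native FPS
    bitrate=0,                  # 0 = native bitrate
    resolution=node_pb2.Resolution(width=1280, height=720),
    image_format=node_pb2.JPEG,
)
stub.Configure(config)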

Stream(StreamReq)

This function takes a StreamReq message and returns a gRPC stream of video streams. Each message in the stream carries a batch of frames, with the batch size set by the num_frames field of the StreamReq message.

The StreamReq message structure is as follows:

message StreamReq {
  // The key to authenticate this module request.
  string module_key = 1;
  // The number of frames to pull per stream in a single batch.
  int32 num_frames = 2;
}
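
In Python, the returned stream can be consumed with a simple loop, as in the sketch below. The fields of the response message are defined in node.proto and are not reproduced in this guide, so handle_batch is a hypothetical stand-in for your own frame handling:

def handle_batch(video_batch):
    # Decode the batch's frames and run your model here; the response
    # message's fields are defined in node.proto.
    pass

request = node_pb2.StreamReq(module_key=module_key, num_frames=8)
for video_batch in stub.Stream(request):
    # Each response carries a batch of up to num_frames frames per stream.
    handle_batch(video_batch)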

Detections(stream DetectionBatch)

This function handles ingestion of frame annotation data for a given batch of frames. It takes a gRPC stream of DetectionBatch messages and returns an empty message. Each DetectionBatch message is structured as follows:

message DetectionBatch {
  // The key to authenticate this module request.
  string module_key = 1;
  // The list of detections for this batch.
  repeated Detection detections = 2;
}

Each Detection message is structured as follows:

message Detection {
  // The device ID associated with this annotation.
  string device_id = 1;
  // The timecode of this annotation in the stream.
  double timecode = 2;
  // The timestamp of this annotation in the stream.
  int64 timestamp = 3;
  // The image bytes used to generate this annotation.
  bytes image_data = 4;
  // The actual labels that represent the annotation itself.
  bytes labels = 5;
}
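
Because Detections is a client-streaming RPC, the Python stub accepts an iterator of DetectionBatch messages. The sketch below makes some assumptions for illustration: the shape of the results tuples, the label encoding, and the use of milliseconds since the epoch for the timestamp field.

import time

def detection_batches(results):
    # results is an iterable of (device_id, timecode, image_bytes, label_bytes)
    # tuples produced by your model; the tuple shape here is illustrative.
    for device_id, timecode, image_bytes, label_bytes in results:
        yield node_pb2.DetectionBatch(
            module_key=module_key,
            detections=[node_pb2.Detection(
                device_id=device_id,
                timecode=timecode,
                timestamp=int(time.time() * 1000),  # assumption: ms since epoch
                image_data=image_bytes,
                labels=label_bytes,
            )],
        )

results = [
    ("camera-1", 12.5, b"<frame bytes>", b'{"label": "person"}'),  # illustrative
]
stub.Detections(detection_batches(results))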

DataPoints(stream DataPointBatch)

This function handles ingestion of all non-frame annotation data for a given batch of frames. It takes a gRPC stream of DataPointBatch messages and returns an empty message. Each DataPointBatch message is structured as follows:

message DataPointBatch {
  // The key to authenticate this module request.
  string module_key = 1;
  // The list of data points for this batch.
  repeated DataPoint data_points = 2;
}

And each DataPoint message is structured as follows:

message DataPoint {
  // The device ID associated with this data point.
  string device_id = 1;
  // The timecode of this data point in the stream.
  double timecode = 2;
  // The timestamp of this data point in the stream.
  int64 timestamp = 3;
  // The arbitrary data that represents this data point.
  bytes data = 4;
}
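
As with Detections, the Python stub takes an iterator of DataPointBatch messages. Since the data field is arbitrary bytes, one option is to JSON-encode your payload; the payload shape, device ID, and millisecond timestamp below are illustrative assumptions.

import json
import time

point = node_pb2.DataPoint(
    device_id="camera-1",               # illustrative device ID
    timecode=12.5,
    timestamp=int(time.time() * 1000),  # assumption: ms since epoch
    data=json.dumps({"people_count": 3}).encode("utf-8"),
)
batch = node_pb2.DataPointBatch(module_key=module_key, data_points=[point])
stub.DataPoints(iter([batch]))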

SingleDataPoint(DataPointBatch)

This function handles ingestion of a single non-frame annotation data point for a given frame. It takes a single DataPointBatch message and returns an empty message.

The DataPointBatch and DataPoint messages are structured exactly as described for DataPoints above.
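
Since this is a unary RPC, the Python call takes the message directly rather than an iterator:

stub.SingleDataPoint(node_pb2.DataPointBatch(
    module_key=module_key,
    data_points=[point],  # a single node_pb2.DataPoint, as built above
))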

Using the Node.proto File

If you do not have access to the node.proto file, please contact your Sixgill Professional Services representative or email [email protected].

The node.proto file can be compiled into auto-generated classes in C++, Java, Python, Go, Ruby, C#, and PHP. See the protocol buffers documentation for details on how to use the protocol buffer file in these languages.

Following the gRPC quick start instructions for Python, compile the protocol buffer file into an auto-generated class. Once you have generated the gRPC code and have the resulting node_pb2.py and node_pb2_grpc.py files, copy them to your working environment and proceed.
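
For example, with node.proto in the current directory, the following commands install the gRPC tooling and generate those two files:

python -m pip install grpcio grpcio-tools
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. node.proto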

Container Code Structure

This section outlines Sixgill's current best practice for structuring the code in your container to process the streaming video data through your model; a minimal end-to-end sketch follows the list below.

  • Import your needed packages as well as the auto-generated class from the protocol buffer file

  • Build a SenseModule class that, when called, initializes the configuration options for the ConfigReq message to the Sense Node API

    • Build a stream_loop function that pulls in data from the Sense Node, processes the data, and returns detections

      • Connect to the Node and initialize an instance of the ApiStub class from the protocol buffer auto-generated class

      • Loop through the gRPC stream of video streams and process the batches of frames, responding with your detections and other annotations
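
The sketch below ties these steps together. It is a minimal outline, not a definitive implementation: run_model is a hypothetical placeholder for your inference code, the ConfigReq values are illustrative, and the fields of the Stream response are defined in node.proto rather than shown here.

import os

import grpc

import node_pb2
import node_pb2_grpc


class SenseModule:
    def __init__(self):
        # Environment variables injected by the Sense platform at deployment.
        self.module_key = os.environ["SENSE_MODULE_KEY"]
        channel = grpc.insecure_channel(os.environ["SENSE_NODE_ADDR"])
        self.stub = node_pb2_grpc.ApiStub(channel)
        # Configure the streams associated with this module (values illustrative).
        self.stub.Configure(node_pb2.ConfigReq(
            module_key=self.module_key,
            stream_video=True,
            stream_backlog=30,
            image_format=node_pb2.JPEG,
        ))

    def stream_loop(self):
        request = node_pb2.StreamReq(module_key=self.module_key, num_frames=8)
        for video_batch in self.stub.Stream(request):
            detections = list(self.run_model(video_batch))
            if detections:
                # Detections is client-streaming, so pass an iterator of batches.
                self.stub.Detections(iter([node_pb2.DetectionBatch(
                    module_key=self.module_key,
                    detections=detections,
                )]))

    def run_model(self, video_batch):
        # Hypothetical placeholder: decode the batch's frames, run inference,
        # and yield node_pb2.Detection messages.
        return []


if __name__ == "__main__":
    SenseModule().stream_loop()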

Deploying a Container

This section will guide you through connecting your container on Docker Hub to Sense and deploying it to cameras on a node.

Adding a Custom Model

  • In the "ML" tab, click "Add Model"

  • Under Model Type, select the "Custom Model" option.

  • Under Model Options, enter the Docker Hub credentials for the container you will be using.

  • Under Model Info, enter a name for your model and optional description.

  • Click "Create." You will be taken to your model's overview page where you can manage versions of your model, build, and deploy to your cameras.

Adding a Version

In the "Versions" tab for your model, click "Add Version"

  • Enter an optional "Version Description"

  • Enter the Dockerhub URL of the image that contains the model. If this is a private container, it must be accessible using the Dockerhub credentials entered in "Add Model"

  • If this is a GPU-enabled model, make sure "Model Requires GPU Resources" is checked

  • Click "Save Version"

Deploying Your Model

  • In your model's "Deployment" tab, click "New Deployment".

  • Select the Model Version you want to deploy.

  • Optionally, use the "Pre-select Cameras from a previous deployment" drop-down if you wish to deploy to cameras that were already used in a previous version. This will automatically select the cameras used in that version. You will be able to add additional cameras in the next step.

  • Click "Continue to Select Cameras for Deployment"

  • Select the cameras you want to deploy your model to. If you want to run multiple versions of the model simultaneously on a camera, check the "Run this model version simultaneously with camera's current version" checkbox.

  • Click "Continue to Advanced Configuration". Set the labels you want to deploy and various stream settings.

  • Click "Continue to Deployment Check". This step checks the available resources on your node prior to deploying.

  • The Deployment Check will display an indicator of your node's resource status:

    • A yellow status circle indicates that the node is low on resources.

    • A green status circle indicates that the node has enough resources for deployment.

  • Click "Save & Deploy" to deploy this version.

Once deployed, it may take a few minutes for the model to start up and begin reporting detections.