
Clara Deploy DeepStream Batch Pipeline [Deprecated]

Description
Clara Deploy DeepStream Batch Pipeline
Publisher
NVIDIA
Latest Version
0.8.1-2108.1
Modified
April 4, 2023
Compressed Size
52.02 MB

Clara Deploy SDK is being consolidated into Clara Holoscan SDK.

More info: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/collections/claradeploy

DeepStream Batch Pipeline

This asset requires the Clara Deploy SDK. Follow the instructions on the Clara Ansible page to install the Clara Deploy SDK.

Overview

The DeepStream Batch pipeline is one of the reference pipelines provided with Clara Deploy SDK.

The pipeline is bundled with an organ detection model that runs on top of the DeepStream SDK's reference application. It accepts an .mp4 file in H.264 format and performs object detection to find the stomach and intestines in the input video. The pipeline outputs a rendered video (with labeled bounding boxes overlaid on the original video) in H.264 format (output.mp4), as well as the primary detector output in a modified KITTI metadata format (.txt files).

Pipeline Definition

The DeepStream Batch pipeline is defined in the Clara Deploy pipeline definition language and consists of a single operator.

The full pipeline definition follows, with comments describing the operator's function as well as its input and output.

api-version: 0.4.0
name: deepstream-batch-pipeline
parameters:
  DS_CONFIG: configs/config.txt
  DS_INPUT:  # if empty, any .mp4 file in /input folder would be used.
  DS_OUTPUT: output.mp4

operators:
- name: deepstream
  description: DeepStream Operator
  container:
    image: clara/app-deepstream
    tag: latest
    command: ["/workspace/launch_deepstream.sh"]
  variables:
    DS_CONFIG: ${{DS_CONFIG}}
    DS_INPUT: ${{DS_INPUT}}
    DS_OUTPUT: ${{DS_OUTPUT}}
  requests:
    gpu: 1
    memory: 8192
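  # Input: the job's input payload (the input video plus the configs/models folders) is made available under /input/.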
  input:
  - path: /input/
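  # Output: files written under /output/ become the job's output payload.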
  output:
  - path: /output/
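
The values under variables are exposed to the container as environment variables. The actual launch script (/workspace/launch_deepstream.sh) ships inside the clara/app-deepstream image and is not reproduced here; the snippet below is only a minimal sketch of how such a script could resolve those variables, assuming the fallback behavior described in the DS_INPUT comment above:

#!/bin/bash
# Hypothetical sketch only; the real launch script inside the image may differ.
CONFIG="${DS_CONFIG:-configs/config.txt}"
# If DS_INPUT is empty, fall back to the first .mp4 file found in /input.
INPUT="${DS_INPUT:-$(ls /input/*.mp4 2>/dev/null | head -n 1)}"
OUTPUT="${DS_OUTPUT:-output.mp4}"
echo "Running DeepStream reference app: config=${CONFIG} input=${INPUT} output=${OUTPUT}"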

Data Input

The input is a folder containing the following folders and files:

.
├── configs               # <required>: folder for configuration (name depends on `${DS_CONFIG}`)
│   └── config.txt        # <required>: a configuration file (name depends on `${DS_CONFIG}`)
├── models                # <required>: folder for models/labels (used by the configuration file)
│   ├── calibration.bin   # calibration data needed for the model
│   ├── labels.txt        # label data
│   ├── model.etlt        # device-independent model file
│   └── model.engine      # device-specific model file
└── test_input.mp4        # <required>: input video file (.mp4 file in H.264 format)

The bundled model (app_ds_torso-model_v1.zip) includes the configs and models folders, so only an input video file (.mp4 in H.264 format) needs to be added to the input folder.

The bundled input (app_ds_torso-input_v1.zip) includes a sample input video file that you can use for testing.
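
As an example, assuming the two bundles have been downloaded from NGC into the current directory, the input payload folder could be assembled roughly as follows (the folder layout matches the one used by download_input.sh later on this page; the exact archive contents may differ by version):

mkdir -p input/app_ds_torso
# configs/ and models/ come from the model bundle
unzip app_ds_torso-model_v1.zip -d input/app_ds_torso
# the sample H.264 .mp4 comes from the input bundle
unzip app_ds_torso-input_v1.zip -d input/app_ds_torso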

Data Output

With the bundled model (app_ds_torso), the output is the rendered video (with labeled bounding boxes overlaid on the original video) in H.264 format (output.mp4), as well as the primary detector output in a modified KITTI metadata format (.txt files).
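
To quickly confirm that the rendered output.mp4 was encoded as H.264, you can inspect it with ffprobe (this check is not part of the pipeline; it assumes the ffmpeg tools are installed on the machine where you download the result):

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name \
  -of default=noprint_wrappers=1:nokey=1 output.mp4
# Expected output: h264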

Performance Implications

The DeepStream App Operator is a wrapper around the DeepStream SDK's reference application, and the container image used by the operator does not include the models or configurations for the application. Instead, the model and configuration files are uploaded along with the input video file as part of the input payload every time a job is triggered, which can degrade performance.

In addition, the bundled TensorRT model (resnet18_detector.etlt_b4_fp16.engine) is a device-specific model, optimized from the device-independent model (resnet18_detector.etlt) on a GV100 Volta GPU (32 GB). If loading the device-specific model file fails, the application creates an optimized device-specific model from the device-independent model at runtime, which can delay startup.

To mitigate this performance degradation, build a custom Docker image from the source code by copying the models and configs folders into the container and updating the paths, image, and tag, as shown below:

clara pull pipeline clara_deepstream_batch_pipeline
cd clara_deepstream_batch_pipeline
# Unzip source code
unzip source.zip
# Unzip app_ds_torso-model_v1.zip and app_ds_torso-input_v1.zip into `input/app_ds_torso` folder
./download_input.sh
# Update Dockerfile to add configs/models folder to the container
echo "COPY ./input/app_ds_torso/configs /configs
COPY ./input/app_ds_torso/models /models" >> Dockerfile
# Convert '/input/configs/' to '/configs/'
sed -i -e 's#/input/configs/#/configs/#' input/app_ds_torso/configs/config.txt
# Convert '/input/models/' to '/models/'
sed -i -e 's#/input/models/#/models/#' input/app_ds_torso/configs/dslhs_nvinfer_config.txt
# Convert 'configs/config.txt' to '/configs/config.txt'
sed -i -e 's#configs/config.txt#/configs/config.txt#' deepstream-batch-pipeline.yaml
# Update the image used in the pipeline definition to 'clara/app_deepstream:latest'
sed -i -e 's#image: .*#image: clara/app_deepstream#' deepstream-batch-pipeline.yaml
sed -i -e 's#tag: .*#tag: latest#' deepstream-batch-pipeline.yaml
# Build local image: clara/app_deepstream:latest
./build.sh

# Now you can create and trigger the pipeline using 'deepstream-batch-pipeline.yaml' ...
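
A minimal sketch of that final step is shown below. The exact clara CLI subcommands and flags differ between Clara Deploy SDK releases, and the pipeline/job IDs are placeholders, so follow the CLI help or the Clara Deploy SDK documentation for your installed version:

# Register the pipeline and note the PIPELINE_ID it returns (subcommand names may vary by CLI version).
clara create pipelines -p deepstream-batch-pipeline.yaml
# Create a job from the pipeline, using the prepared input folder as the payload.
clara create jobs -n deepstream-batch-test -p <PIPELINE_ID> -f input/app_ds_torso
# Start the job using the JOB_ID returned by the previous command.
clara start jobs -j <JOB_ID>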

License

An End User License Agreement is included with the product. By pulling and using the Clara Deploy asset on NGC, you accept the terms and conditions of these licenses.

Suggested Reading

Release Notes, the Getting Started Guide, and the SDK itself are available on the NVIDIA Developer website: https://developer.nvidia.com/clara.

For answers to any questions you may have about this release, visit the NVIDIA Devtalk forum: https://devtalk.nvidia.com/default/board/362/clara-sdk/.