
DashCamNet

Description: 4-class object detection network to detect cars in an image.
Publisher: NVIDIA
Latest Version: pruned_onnx_v1.0.4
Modified: April 2, 2024
Size: 5.13 MB

DashCamNet Model Card

Model Overview

The model described in this card detects one or more physical objects from four categories within an image and returns a box around each object, as well as a category label for each object. The four categories of objects detected by this model are car, person, road sign, and bicycle.

Model Architecture

The model is based on the NVIDIA DetectNet_v2 detector with ResNet18 as the feature extractor. This architecture, also known as GridBox object detection, uses bounding-box regression on a uniform grid over the input image. The GridBox system divides the input image into a grid; each cell predicts four normalized bounding-box parameters (xc, yc, w, h) and a confidence value per output class. The raw normalized bounding-box and confidence detections need to be post-processed by a clustering algorithm such as DBSCAN or NMS to produce final bounding-box coordinates and category labels.
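As a concrete illustration of this decoding step, the sketch below converts raw GridBox tensors into per-class candidate boxes in input-image pixels. The 16-pixel stride, the offset-from-cell-center convention, and the confidence threshold are assumptions made for illustration; TAO and DeepStream perform this decoding and the subsequent clustering internally.

import numpy as np

def decode_gridbox(bbox, cov, stride=16, conf_thresh=0.2):
    """Decode raw GridBox outputs into per-class candidate boxes.

    bbox: array of shape (num_classes * 4, H, W) holding normalized
          (xc, yc, w, h) values per grid cell and class.
    cov:  array of shape (num_classes, H, W) holding confidence values.
    Returns (class_id, x1, y1, x2, y2, confidence) tuples that still need
    DBSCAN or NMS clustering to become final detections.
    """
    num_classes, gh, gw = cov.shape
    bbox = bbox.reshape(num_classes, 4, gh, gw)
    # Pixel coordinates of every grid-cell center (assumed stride of 16:
    # a 960x544 input maps to a 60x34 grid).
    rows, cols = np.mgrid[0:gh, 0:gw]
    cell_cx = (cols + 0.5) * stride
    cell_cy = (rows + 0.5) * stride
    candidates = []
    for c in range(num_classes):
        keep = cov[c] > conf_thresh
        ox, oy, w, h = (bbox[c, i][keep] for i in range(4))
        # Assumed convention: (xc, yc) are offsets from the cell center and
        # (w, h) are box sizes, both expressed in units of the cell stride.
        cx = cell_cx[keep] + ox * stride
        cy = cell_cy[keep] + oy * stride
        for x, y, bw, bh, s in zip(cx, cy, w * stride, h * stride, cov[c][keep]):
            candidates.append((c, x - bw / 2, y - bh / 2,
                               x + bw / 2, y + bh / 2, float(s)))
    return candidates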

Training Algorithm

This model was trained using the DetectNet_v2 entrypoint in TAO. The training algorithm optimizes the network to minimize the localization and confidence loss for the objects. The training is carried out in two phases. In the first phase, the network is trained with regularization to facilitate pruning. Following the first phase, we prune the network, removing channels whose kernel norms fall below the pruning threshold. In the second phase, the pruned network is retrained; regularization is not included during this phase.
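The channel-selection rule described above can be sketched in a few lines. This is only an illustration of the norm-threshold criterion; the actual pruning is performed by the TAO prune tool, which also rewires downstream layers. The choice of L2 norm and the normalization against the largest channel are assumptions.

import numpy as np

def channels_to_keep(conv_weight, threshold=0.1):
    """Pick output channels whose kernel norm is at or above the pruning
    threshold, per the rule described above.

    conv_weight: (out_channels, in_channels, kh, kw) convolution kernels.
    """
    norms = np.sqrt((conv_weight ** 2).sum(axis=(1, 2, 3)))  # per-channel L2 norm
    norms = norms / norms.max()  # assumed: threshold is relative to the largest norm
    return np.where(norms >= threshold)[0]

# Illustration on a random 64-channel 3x3 convolution layer.
w = np.random.randn(64, 32, 3, 3).astype(np.float32)
keep = channels_to_keep(w, threshold=0.1)
pruned = w[keep]  # the retained channels; the network is then retrained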

Training Data

The DashCamNet v1.0 model was trained on a proprietary dataset with more than 2 million objects for the car class. Most of the training data was collected and labeled in-house from images captured by a variety of dashcams, plus a small seed dataset of images from traffic cameras in a US city. This content was chosen to improve detection accuracy for images from a dashcam in a moving car.

Object Distribution
Environment              Images     Cars    Persons    Road Signs    Two-Wheelers
Dashcam (5 ft height)    128,000    1.7M    720,000    354,127       54,000
Traffic signal content    50,000    1.1M     53,500    184,000       11,000
Total                    178,000    2.8M    773,500    538,127       65,000
Training Data Ground-truth Labeling Guidelines
  • All objects that fall under one of the four classes (car, person, two-wheeler, road_sign) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label.
  • Occlusion: Partially occluded objects that are approximately 60% or more visible are labeled with a bounding box around the visible part of the object and are marked as partially occluded. Objects under 60% visibility are not annotated.
  • Truncation: Objects at the edge of the frame that are 60% or more visible are labeled and marked with the truncation flag.
  • A frame is not required to contain any objects. (The size and visibility rules above are restated as a simple predicate in the sketch below.)
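A minimal restatement of these rules, purely for illustration. The helper itself is hypothetical and is not NVIDIA's labeling tooling; the thresholds and class names come from the guidelines above.

def should_label(cls, width_px, height_px, visibility):
    """Hypothetical restatement of the labeling rules above. Sizes are in
    pixels at 1920x1080 resolution; visibility is the approximate visible
    fraction of the object (0.0-1.0)."""
    if cls not in ("car", "person", "two-wheeler", "road_sign"):
        return False
    if height_px < 10 and width_px < 10:  # below the smallest bounding-box limit
        return False
    if visibility < 0.6:  # under 60% visibility: not annotated
        return False
    return True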

Performance

Evaluation Data

Dataset

The inference performance of the DashCamNet v1.0 model was measured against 19,000 proprietary images across a variety of environments. The frames are high-resolution 1920x1080-pixel images, resized to 960x544 pixels before being passed to the DashCamNet detection model.

Methodology and KPI

True positives, false positives, and false negatives are calculated using an intersection-over-union (IOU) criterion greater than 0.5. The KPIs for the evaluation data are reported in the table below. The model is evaluated on precision, recall, and accuracy.
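A minimal sketch of this evaluation, assuming a common greedy matching protocol; the card does not specify the exact matching scheme, nor its definition of accuracy (one common choice is TP / (TP + FP + FN)).

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def evaluate(preds, gts, iou_thresh=0.5):
    """Greedy matching: a prediction is a true positive if it overlaps an
    unmatched ground-truth box with IOU greater than the threshold;
    leftover predictions are false positives and leftover ground truths
    are false negatives. preds are assumed sorted by confidence."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gts):
            overlap = iou(p, g)
            if i not in matched and overlap > best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matched.add(best)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall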

The intended use of this model is to detect cars from a moving vehicle. With that in mind, the model evaluation is conducted and key performance indicators (KPIs) are calculated for the car class only. The other classes (road signs, two-wheelers, and persons) are not factored into the model evaluation.

Model: DashCamNet

Content      Precision (%)    Recall (%)    Accuracy (%)
Dashcam      83.65            88.45         80

Real-time Inference Performance

The inference is run on the pruned model at INT8 precision; on the Jetson Nano, FP16 precision is used. The inference is run on Jetson Nano, AGX Xavier, and an NVIDIA T4 GPU, with the Jetson Nano and AGX Xavier running at the Max-N configuration for maximum GPU frequency. Inference is measured at batch size 1 for the lowest latency and at batch size N for maximum real-time throughput, where N is the largest batch size that keeps inference time per batch under 33 ms (30 fps).
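The batch-size selection described above amounts to a simple search. measure_latency_ms below is a hypothetical stand-in for a real TensorRT timing harness; only the 33 ms / 30 fps budget comes from the card.

def max_realtime_batch(measure_latency_ms, budget_ms=33.0, cap=64):
    """Largest batch size whose per-batch inference time stays under the
    33 ms (30 fps) budget. measure_latency_ms(n) is a hypothetical
    callable that times one batch of size n on the target device."""
    best = 1
    for n in range(1, cap + 1):
        if measure_latency_ms(n) < budget_ms:
            best = n
        else:
            break  # latency grows with batch size, so stop at the first miss
    return best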


How to use this model

This model needs to be used with NVIDIA hardware and software. For hardware, the model can run on any NVIDIA GPU, including NVIDIA Jetson devices. The model can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.

There are two flavors of the model:

  • unpruned
  • pruned

The unpruned model is intended for training with the TAO Toolkit using the user's own dataset. This can provide high-fidelity models that are adapted to the use case. The Jupyter notebook available as part of the TAO container can be used to re-train.

The pruned model is intended for efficient deployment on the edge using DeepStream SDK or TensorRT. This model accepts 960x544x3 input tensors and outputs a 60x34x16 bbox coordinate tensor and a 60x34x4 class confidence tensor. DeepStream provides a toolkit to create efficient video analytics pipelines to capture, decode, and pre-process the data before running inference. DeepStream then post-processes the output bbox coordinate and class confidence tensors with NMS or DBSCAN clustering to produce the final bounding boxes. The sample application and config file to run this model are provided in the DeepStream SDK.
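When deploying outside DeepStream, the clustering step has to be reproduced by hand. Below is a minimal NMS pass over decoded candidates, reusing the iou helper from the evaluation sketch above; the 0.5 overlap threshold is an assumption, and the DBSCAN alternative used by DeepStream groups boxes by density instead.

def nms(boxes, scores, iou_thresh=0.5):
    """Minimal per-class non-maximum suppression: keep the highest-scoring
    box, drop lower-scoring boxes that overlap it too much, repeat.
    boxes: list of (x1, y1, x2, y2); scores: matching confidences."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep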

The unpruned and pruned models are encrypted and will only operate with the following key:

  • Model load key: tlt_encode

Please make sure to use this as the key for all TAO commands that require a model load key.

Input

  • RGB image of dimensions 960 x 544 x 3 (W x H x C)
  • Channel ordering of the input: NCHW, where N = batch size, C = number of channels (3), H = height of the images (544), W = width of the images (960)
  • Input scale: 1/255.0
  • Mean subtraction: None
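Putting the specification above into practice, a frame might be prepared as follows. The interpolation mode is an assumption, since the card does not specify one.

import numpy as np
from PIL import Image

def preprocess(path):
    """Prepare one frame per the input spec: RGB, 960x544, scaled by
    1/255.0, no mean subtraction, NCHW layout."""
    img = Image.open(path).convert("RGB").resize((960, 544), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 255.0  # input scale: 1/255.0
    chw = arr.transpose(2, 0, 1)                     # HWC -> CHW
    return chw[np.newaxis, ...]                      # batch dim -> (1, 3, 544, 960)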

Output

A category label (car, person, road sign, or bicycle) and bounding-box coordinates for each detected object in the input image.

[Figure: example input image and corresponding output image with detected bounding boxes]

Instructions to use unpruned model with TAO

To use this model as pretrained weights for transfer learning, use the snippet below as a template for the model_config component of the experiment spec file when training a DetectNet_v2 model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.

model_config {
  num_layers: 18                          # ResNet-18 backbone
  pretrained_model_file: "/path/to/the/model.tlt"
  use_batch_norm: true
  objective_set {
    bbox {                                # bounding-box regression objective
      scale: 35.0
      offset: 0.5
    }
    cov {                                 # coverage (confidence) objective
    }
  }
  training_precision {
    backend_floatx: FLOAT32
  }
  arch: "resnet"
  all_projections: true
}

Instructions to deploy this model with DeepStream

To create the entire end-to-end video analytics application, deploy this model with the DeepStream SDK. DeepStream SDK is a streaming analytics toolkit to accelerate deployment of AI-based video analytics applications. The pruned model included here can be integrated directly into DeepStream by following the instructions below.

  1. Run the default deepstream-app included in the DeepStream docker container by executing the commands below.

    ## Download Model:
    
    mkdir -p $HOME/dashcamnet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/dashcamnet/versions/pruned_v1.0/files/resnet18_dashcamnet_pruned.etlt \
    -O $HOME/dashcamnet/resnet18_dashcamnet_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/dashcamnet/versions/pruned_v1.0/files/dashcamnet_int8.txt \
    -O $HOME/dashcamnet/dashcamnet_int8.txt
    mkdir -p $HOME/vehiclemakenet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/vehiclemakenet/versions/pruned_v1.0/files/resnet18_vehiclemakenet_pruned.etlt \
    -O $HOME/vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/vehiclemakenet/versions/pruned_v1.0/files/vehiclemakenet_int8.txt \
    -O $HOME/vehiclemakenet/vehiclemakenet_int8.txt
    mkdir -p $HOME/vehicletypenet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/vehicletypenet/versions/pruned_v1.0/files/resnet18_vehicletypenet_pruned.etlt \
    -O $HOME/vehicletypenet/resnet18_vehicletypenet_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/vehicletypenet/versions/pruned_v1.0/files/vehicletypenet_int8.txt \
    -O $HOME/vehicletypenet/vehicletypenet_int8.txt
    
    ## Run Application
    
    xhost +
    sudo docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME:/opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models \
    -w /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models nvcr.io/nvidia/deepstream:5.1-21.02-samples \
    deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt
    
  2. Install DeepStream on your local host and run the deepstream-app.

    Download and install the DeepStream SDK. The installation instructions are provided in the DeepStream Development Guide. The config files for the purpose-built models are located in:

    /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models
    

    /opt/nvidia/deepstream is the default DeepStream installation directory. This path will differ if you installed DeepStream in a different directory.

    You will need two config files and one label file. These files are provided in the tlt_pretrained_models directory.

    deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt - Main config file for DeepStream app
    config_infer_primary_dashcamnet.txt - File to configure inference settings
    labels_dashcamnet.txt - Label file with the 4 detection classes
    

    Note: The deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt file configures 3 models: DashCamNet as the primary detector, and VehicleMakeNet and VehicleTypeNet as secondary classifiers. The classification models are typically run after initial object detection. To disable the secondary classifiers, set the enable flag under each [secondary-gie*] section to 0:

    [secondary-gie0]
    enable=0
    ...
    
    [secondary-gie1]
    enable=0
    

    Key Parameters in config_infer_primary_dashcamnet.txt

    tlt-model-key
    tlt-encoded-model
    labelfile-path
    int8-calib-file
    input-dims
    num-detected-classes
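    Filled in, these keys might look like the following. The values are illustrative (the paths depend on where the models were downloaded, and input-dims is C;H;W followed by an input-order flag); consult the config_infer_primary_dashcamnet.txt shipped with DeepStream for the authoritative settings.

    [property]
    tlt-model-key=tlt_encode
    tlt-encoded-model=../../models/tlt_pretrained_models/dashcamnet/resnet18_dashcamnet_pruned.etlt
    labelfile-path=labels_dashcamnet.txt
    int8-calib-file=../../models/tlt_pretrained_models/dashcamnet/dashcamnet_int8.txt
    input-dims=3;544;960;0
    num-detected-classes=4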
    

    Run deepstream-app:

    deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt
    

    Documentation to deploy with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO User Guide.

Limitations

Very Small Objects

The NVIDIA DashCamNet v1.0 model was trained to detect objects larger than 20x20 pixels, so it may not detect objects smaller than 20x20 pixels.

Occluded Objects

When objects are occluded or truncated such that less than 40% of the object is visible, they may not be detected by the DashCamNet model. For the car class, the model will detect occluded cars as long as approximately 40% of the car is visible.

Night-time, Monochrome or Infrared Camera Images

The DashCamNet model was trained on RGB images captured in good lighting conditions. Images captured in dark conditions, monochrome images, or IR camera images may therefore not yield good detection results.

Warped Images

The DashCamNet model was not trained on fisheye-lens cameras, so it may not perform well on warped images.

Persons, Two-Wheelers, and Road Signs

Although the person, two-wheeler, and road sign classes are included in the model, their accuracy is much lower than that of the car class. Some re-training will be required to improve accuracy on these classes.

Location

This model is trained on images from traffic cameras and dash cameras in the US. Accuracy may be lower on images from other countries, especially countries with very different traffic patterns. For such cases, it is recommended to re-train the unpruned model with TAO on your own dataset.

Model versions

  • unpruned_v1.0 - ResNet18 based pre-trained model.
  • pruned_v1.0 - ResNet18 deployment model. Contains a common INT8 calibration cache for GPU and DLA.

References

Citations

  • Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You Only Look Once: Unified, Real-Time Object Detection. In: CVPR (2016)
  • Erhan, D., Szegedy, C., Toshev, A., Anguelov, D.: Scalable Object Detection Using Deep Neural Networks. In: CVPR (2014)
  • He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. In: CVPR (2015)


License

License to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.

Ethical Considerations

The training and evaluation dataset consists mostly of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.