Overview Object Detection Node

The Object Detection Node detects a wide range of objects out of the box with pre-trained or custom deep learning models, running on CPU, GPU, VPU (Intel Movidius Myriad X), or TPU (Google Coral).


Input and Output
  1. Input: Frame from a video file, IP or USB camera.
  2. Output: MQTT message containing the detection results.
  3. Supported architecture: Currently supported on amd64 devices.
Result Structure
Each detection has the following JSON structure:
  {
    "class_id": ...,
    "label": ...,
    "confidence": ...,
    "rect": [x, y, w, h],
    "roi_id": ...
  }
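Downstream nodes or external consumers typically subscribe to the node's MQTT output and read these fields from the payload. The following is a minimal sketch using paho-mqtt; the broker address, topic name, and payload envelope are placeholders and assumptions, not values defined by this node:

    # Minimal consumer sketch (paho-mqtt 1.x style callbacks).
    # Broker host, topic, and payload envelope are placeholders; adjust to your deployment.
    import json
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # Payload assumed to be a JSON list of detections; the actual envelope may differ.
        detections = json.loads(msg.payload)
        for d in detections:
            # Fields follow the result structure documented above.
            print(d["label"], d["confidence"], d["rect"], d["roi_id"])

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)             # placeholder broker
    client.subscribe("object-detection/results")  # placeholder topic
    client.loop_forever()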
Node Sections
The Object Detection node consists of two main parts:
  1. Detection: Select the device type, preferred model, and objects of interest, or link your own custom deep learning model. Set model parameters such as the detection and overlap thresholds.
  2. General Settings: Set the colors of the detection output boxes and define whether the output boxes should be displayed.
Node Parameters
The following parameters are used in the Object Detection node.

Name: Input the node name used in a specific flow.
  1. default: object-detection
  2. type: string
Device: Select the target device to be used for detection. Currently available options are CPU, GPU, VPU, and TPU.
  1. default: CPU
  2. type: string
If you select VPU as the target device, an additional field is displayed where you can specify how many VPU devices your models should run on, or whether to run the edge inference on both CPU and VPU. For example, entering MYRIAD,MYRIAD,CPU runs the inference on 2x VPUs and 1x CPU.
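The comma-separated device string mirrors the pattern used by OpenVINO's multi-device plugin. Assuming the node runs on an OpenVINO backend (an assumption suggested by the Myriad X support, not stated in this article), a comparable device string could be passed to the runtime as follows; the model path is illustrative:

    # Hedged sketch: load a model on 2x VPU + 1x CPU via OpenVINO's MULTI plugin.
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("yolov3.xml")  # illustrative model path
    # "MYRIAD,MYRIAD,CPU" = two Myriad X VPUs plus the CPU.
    compiled_model = core.compile_model(model, "MULTI:MYRIAD,MYRIAD,CPU")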

Model Name: Select one of the most popular public models for object detection or link a custom trained model.
  1. Available models: See here
  2. default: yolov3
  3. type: string
Detection Labels: Select the target object(s) to be detected. One or multiple objects can be selected.
  1. default: person
  2. type: string
Custom Model: You can link your own custom model by adding the model URL. Please refer to this guide for detailed instructions.
Detection Score Threshold: Refers to the threshold value for confidence score filtering.
  1. default: 0.5
  2. range: [0.0, 1.0]
Detection Overlap Threshold: Refers to the threshold value for overlap (non-maximum suppression) filtering; see the sketch after this list.
  1. default: 0.7
  2. range: [0.0, 1.0]
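To illustrate how the score and overlap thresholds interact, here is a minimal sketch of confidence filtering followed by greedy non-maximum suppression. The function names are illustrative and not the node's internal API; the code only assumes detections shaped like the result structure above:

    # Sketch: score filtering + non-maximum suppression (NMS) over the documented result structure.

    def iou(a, b):
        """Intersection-over-union of two [x, y, w, h] boxes."""
        ax2, ay2 = a[0] + a[2], a[1] + a[3]
        bx2, by2 = b[0] + b[2], b[1] + b[3]
        iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
        ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
        inter = iw * ih
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    def filter_detections(detections, score_threshold=0.5, overlap_threshold=0.7):
        # 1) Drop detections below the confidence score threshold.
        kept = [d for d in detections if d["confidence"] >= score_threshold]
        # 2) Greedy NMS: keep the highest-scoring box, drop boxes that overlap it too much.
        kept.sort(key=lambda d: d["confidence"], reverse=True)
        result = []
        for d in kept:
            if all(iou(d["rect"], r["rect"]) <= overlap_threshold for r in result):
                result.append(d)
        return result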
Detection Size Width / Height: Refers to the threshold values for size filtering; see the sketch after this list.
  1. range: [0.0, 1.0]
  2. default:
    min_width: 0.01, max_width: 0.5
    min_height: 0.01, max_height: 0.5
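A corresponding sketch of the size filter: it keeps only boxes whose width and height, relative to the frame dimensions, fall inside the configured range. That the rect values are given in pixels and normalized against the frame size here is an assumption for illustration:

    # Sketch: size filtering with the default thresholds.
    def filter_by_size(detections, frame_w, frame_h,
                       min_width=0.01, max_width=0.5,
                       min_height=0.01, max_height=0.5):
        kept = []
        for d in detections:
            _, _, w, h = d["rect"]
            rel_w, rel_h = w / frame_w, h / frame_h   # assumes rect is in pixels
            if min_width <= rel_w <= max_width and min_height <= rel_h <= max_height:
                kept.append(d)
        return kept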
Requests: Since pipelining relies on the availability of parallel slack, running multiple inference requests in parallel is essential. If you are using multiple VPUs, it is recommended to set the number of requests equal to the number of target VPU devices; see the sketch after this list.
  1. default: 1
  2. type: integer
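Assuming an OpenVINO backend (see the note on the device string above), the Requests value would correspond to the number of parallel inference requests created for the compiled model, roughly along these lines; the model path and input handling are illustrative:

    # Hedged sketch: two parallel inference requests, one per VPU device.
    from openvino.runtime import Core, AsyncInferQueue

    core = Core()
    compiled = core.compile_model(core.read_model("yolov3.xml"), "MULTI:MYRIAD,MYRIAD")

    results = []
    infer_queue = AsyncInferQueue(compiled, 2)  # Requests = 2
    infer_queue.set_callback(lambda request, userdata: results.append(request.results))

    frames = []  # replace with preprocessed input tensors
    for frame in frames:
        infer_queue.start_async({0: frame})
    infer_queue.wait_all()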
Show Result: Defines whether the detection output boxes are shown.
  1. default: true
  2. type: boolean
If you are using object detection in combination with Object Tracking or Counting, it is recommended to disable the "Show Result" functionality, as it might impact the tracking quality in your application.

Text Color: Refers to the color of the label text shown on the detection boxes in the preview video stream.
  1. default: [255, 255, 100]
Line Color: Refers to the color of the box outlines that show the detection results in the preview video stream.
  1. default: [0, 0, 200]
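Both values are plain color triplets of the kind passed to a drawing library. Below is a sketch of how such an overlay could be rendered with OpenCV; this is illustrative, not the node's actual drawing code, and whether the triplets are interpreted as RGB or BGR is an assumption:

    # Illustrative overlay drawing with OpenCV.
    import cv2

    TEXT_COLOR = (255, 255, 100)  # "Text Color" default
    LINE_COLOR = (0, 0, 200)      # "Line Color" default (channel order assumed)

    def draw_detections(frame, detections):
        for d in detections:
            x, y, w, h = (int(v) for v in d["rect"])
            cv2.rectangle(frame, (x, y), (x + w, y + h), LINE_COLOR, 2)
            label = f'{d["label"]} {d["confidence"]:.2f}'
            cv2.putText(frame, label, (x, max(y - 5, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, TEXT_COLOR, 1)
        return frame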



Related Articles
    • Overview Object Counting Node
    • Overview Object Flow Node
    • Overview Object Tracking Node
    • Overview Object Segmentation Node
    • Overview Fall Detection Node