YOLOv5 Object Detection
This sample program runs inference with a YOLOv5 model for object detection.
Cross-Compile the YOLO Program on a PC
Duo256M YOLOv5 code location: sample_yolov5.cpp
Compilation method
Refer to the previous Introduction section for how to compile the sample programs using the provided methods. After compilation completes, the required sample_yolov5 program will be generated in the sample/cvi_yolo/ directory.
Obtain cvimodel
You can download the precompiled yolov5s INT8 symmetric quantized cvimodel directly, or convert the model manually as described in Model Compilation.
Download Precompiled cvimodels
- Duo256M
# INT8 symmetric model
wget https://github.com/milkv-duo/cvitek-tdl-sdk-sg200x/raw/main/cvimodel/yolov5_cv181x_int8_sym.cvimodel
Model Compilation
Export yolov5s.onnx Model
- First, clone the YOLOv5 official repository. The repository link is: ultralytics/yolov5: YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
git clone https://github.com/ultralytics/yolov5.git
- Configure the working environment
cd yolov5
pip3 install -r requirements.txt
pip3 install onnx
- Obtain the .pt format model for yolov5; for example, download the yolov5s model: yolov5s
wget https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt
- Copy cvitek-tdl-sdk-sg200x/sample/yolo_export/yolov5_export.py to the yolov5 repository directory.
Use yolov5_export.py to replace the forward function so that post-processing runs on the RISC-V CPU, and export the model in onnx format.
Parameter explanation:
- --weights: path to the PyTorch model
- --img-size: input image size
python3 yolov5_export.py --weights ./yolov5s.pt --img-size 640 640
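The replaced forward exports the raw detection-head outputs, leaving box decoding to CPU-side post-processing. As an illustration (not the SDK's actual code), the sketch below shows the standard YOLOv5 decode for a single grid cell in NumPy; all input numbers are made up:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One raw prediction vector (tx, ty, tw, th) from one grid cell -- illustrative values.
t = np.array([0.2, -0.1, 0.3, 0.5], dtype=np.float32)
grid_xy = np.array([7, 4], dtype=np.float32)     # grid-cell indices
stride = 32.0                                    # stride of this detection head
anchor_wh = np.array([116, 90], dtype=np.float32)  # anchor for this head

# YOLOv5 decode: center offset in [-0.5, 1.5] around the cell, scaled by stride;
# width/height as (2*sigmoid)^2 times the anchor.
xy = (sigmoid(t[:2]) * 2.0 - 0.5 + grid_xy) * stride
wh = (sigmoid(t[2:]) * 2.0) ** 2 * anchor_wh
print(xy, wh)
```

In the exported model, this decoding (plus confidence filtering and NMS) happens after TPU inference, on the CPU.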
TPU-MLIR Model Conversion
Please refer to the TPU-MLIR documentation to set up the TPU-MLIR working environment; it also describes the parameters used below in detail.
The specific implementation steps are divided into three parts:
- model_transform.py: converts the onnx model to the mlir intermediate format. onnx -> model_transform.py -> mlir
- run_calibration.py: generates the int8 quantization calibration table. calibration_set -> run_calibration.py -> calibration_table
- model_deploy.py: generates the cvimodel for TPU inference from the mlir file and the int8 calibration table. mlir + calibration_table -> model_deploy.py -> cvimodel
onnx to MLIR
model_transform.py \
--model_name yolov5s \
--model_def yolov5s.onnx \
--input_shapes [[1,3,640,640]] \
--mean 0.0,0.0,0.0 \
--scale 0.0039216,0.0039216,0.0039216 \
--keep_aspect_ratio \
--pixel_format rgb \
--test_input ../image/dog.jpg \
--test_result yolov5s_top_outputs.npz \
--mlir yolov5s.mlir
After conversion to the mlir file, a yolov5s_in_f32.npz file is also generated; this is the preprocessed model input, used later as the --test_input for model_deploy.py.
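The --mean 0.0,0.0,0.0 and --scale 0.0039216 (≈ 1/255) values above describe the model's expected preprocessing. A minimal sketch, assuming TPU-MLIR applies y = (x - mean) * scale per channel:

```python
import numpy as np

pixel = np.array([0, 128, 255], dtype=np.float32)  # example 8-bit RGB values
mean = 0.0
scale = 0.0039216  # approximately 1/255

# Assumed preprocessing formula: (x - mean) * scale, mapping [0, 255] to [0, 1]
normalized = (pixel - mean) * scale
print(normalized)
```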
MLIR to INT8 Model (Supports INT8 Quantization Only)
Before quantizing to an INT8 model, run run_calibration.py to obtain the calibration table. Prepare around 100~1000 images; in this example, 100 images from the COCO2017 dataset are used for demonstration.
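As a sketch of preparing the calibration set, the hypothetical helper below copies the first N images from a COCO2017 folder into a dataset directory (the helper name and paths are assumptions, not part of the SDK; the demo uses a temporary directory in place of real COCO images):

```python
import os
import shutil
import tempfile

def make_calibration_set(src_dir, dst_dir, num=100):
    """Copy the first `num` images from src_dir into dst_dir; return the count copied."""
    os.makedirs(dst_dir, exist_ok=True)
    images = sorted(f for f in os.listdir(src_dir)
                    if f.lower().endswith((".jpg", ".jpeg", ".png")))[:num]
    for name in images:
        shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, name))
    return len(images)

# Demo with a temporary directory standing in for COCO2017
src = tempfile.mkdtemp()
for i in range(5):
    open(os.path.join(src, f"img_{i}.jpg"), "w").close()
print(make_calibration_set(src, tempfile.mkdtemp(), num=3))  # -> 3
```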
run_calibration.py yolov5s.mlir \
--dataset ../COCO2017 \
--input_num 100 \
-o yolov5s_cali_table
Then use the calibration table to generate the int8 symmetric cvimodel:
model_deploy.py \
--mlir yolov5s.mlir \
--quant_input --quant_output \
--quantize INT8 \
--calibration_table yolov5s_cali_table \
--processor cv181x \
--test_input yolov5s_in_f32.npz \
--test_reference yolov5s_top_outputs.npz \
--tolerance 0.85,0.45 \
--model yolov5_cv181x_int8_sym.cvimodel
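The two --tolerance values set the minimum similarity between the int8 model's outputs and the fp32 reference in yolov5s_top_outputs.npz (assumed here to be cosine and euclidean similarity, per TPU-MLIR's comparison). A quick sketch of the cosine-similarity part with made-up numbers:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

ref = np.array([1.0, 2.0, 3.0])      # illustrative fp32 reference output
quant = np.array([1.1, 1.9, 3.2])    # illustrative int8 model output
sim = cosine_similarity(ref, quant)
print(sim > 0.85)  # deployment comparison passes when similarity exceeds the threshold
```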