YOLOv5 Object Detection
This program runs inference with a YOLOv5 model for object detection.
Cross-Compile the YOLO Program on a PC
Duo256M YOLOv5 code location: sample_yolov5.cpp
Compilation method
Refer to the previous section, Introduction, for the methods provided to compile the sample programs. After compilation completes, the sample_yolov5 program we need is generated in the sample/cvi_yolo/ directory (a sketch of copying it to the board and running it follows the model download below).
Obtain cvimodel
You can directly download the precompiled yolov5s INT8 symmetric quantized cvimodel, or convert the model manually as described in Model Compilation.
Download Precompiled cvimodels
- Duo256M
# INT8 symmetric model
wget https://github.com/milkv-duo/cvitek-tdl-sdk-sg200x/raw/main/cvimodel/yolov5_cv181x_int8_sym.cvimodel
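With the cross-compiled sample_yolov5 binary and this cvimodel in hand, one possible way to deploy and run them on the Duo256M is sketched below. The board address 192.168.42.1 (the Duo's usual USB network address), the test image dog.jpg, and the argument order of sample_yolov5 are assumptions, not taken from this document.
# On the PC: copy the binary and the model to the board (adjust the address if yours differs)
scp sample/cvi_yolo/sample_yolov5 root@192.168.42.1:/root/
scp yolov5_cv181x_int8_sym.cvimodel dog.jpg root@192.168.42.1:/root/
# On the board: run detection on the test image (assumed argument order: model, then image)
./sample_yolov5 ./yolov5_cv181x_int8_sym.cvimodel dog.jpg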
Model Compilation
Export yolov5s.onnx Model
- First, clone the official YOLOv5 repository (ultralytics/yolov5):
git clone https://github.com/ultralytics/yolov5.git
- Configure the working environment
cd yolov5
pip3 install -r requirements.txt
pip3 install onnx
- Obtain a YOLOv5 model in .pt format, for example the yolov5s model:
wget https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt
- Copy cvitek-tdl-sdk-sg200x/sample/yolo_export/yolov5_export.py to the yolov5 repository directory.
Use yolov5_export.py to replace the model's forward function, so that post-processing is left to the RISC-V CPU, and export the model in ONNX format.
Parameter explanation:
- --weights: path to the PyTorch model
- --img-size: input image size
python3 yolov5_export.py --weights ./yolov5s.pt --img-size 640 640
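After export, a quick sanity check with the onnx package installed above can confirm the model loads cleanly and report its input names. The yolov5s.onnx file name follows from this section's title; the check itself is optional and not part of the original flow.
python3 -c "import onnx; m = onnx.load('yolov5s.onnx'); onnx.checker.check_model(m); print([i.name for i in m.graph.input])"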
TPU-MLIR Model Conversion
Please refer to the TPU-MLIR documentation to set up the TPU-MLIR working environment and for details on the parameters used below.
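A minimal sketch of one common way to set up that environment, assuming the Sophgo tpuc_dev Docker image and the tpu_mlir Python package (both are assumptions here; follow the TPU-MLIR documentation for the authoritative steps):
# Start the TPU-MLIR development container with the current directory mounted as the workspace
docker run --rm -it -v $(pwd):/workspace sophgo/tpuc_dev:latest bash
# Inside the container: install the TPU-MLIR tools
pip install tpu_mlir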
The specific implementation is divided into three steps (example invocations are sketched after this list):
- model_transform.py: converts the onnx model into the mlir intermediate format (onnx -> model_transform.py -> mlir)
- run_calibration.py: generates the int8 quantization calibration table (calibration_set -> run_calibration.py -> calibration_table)
- model_deploy.py: generates the cvimodel for TPU inference from the mlir model and the int8 calibration table (mlir + calibration_table -> model_deploy.py -> cvimodel)
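A minimal sketch of these three steps for the yolov5s.onnx exported above, targeting the cv181x TPU used by the Duo256M. The calibration dataset directory (./COCO2017), the preprocessing mean/scale values, and the input count are assumptions, and flag names can differ between TPU-MLIR releases (for example, newer versions use --processor instead of --chip), so check them against the TPU-MLIR documentation.
# onnx -> mlir: describe the model and its preprocessing (mean/scale assume 1/255-scaled RGB input)
model_transform.py \
  --model_name yolov5s \
  --model_def yolov5s.onnx \
  --input_shapes [[1,3,640,640]] \
  --mean 0.0,0.0,0.0 \
  --scale 0.0039216,0.0039216,0.0039216 \
  --keep_aspect_ratio \
  --pixel_format rgb \
  --mlir yolov5s.mlir

# mlir + calibration images -> int8 calibration table
run_calibration.py yolov5s.mlir \
  --dataset ./COCO2017 \
  --input_num 100 \
  -o yolov5s_cali_table

# mlir + calibration table -> INT8 symmetric cvimodel for the cv181x TPU
model_deploy.py \
  --mlir yolov5s.mlir \
  --quantize INT8 \
  --calibration_table yolov5s_cali_table \
  --chip cv181x \
  --model yolov5_cv181x_int8_sym.cvimodel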