YOLO11 Object Detection
This test program runs inference with the YOLO11 model to perform object detection; the detection results are only printed to the terminal.
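The printed results come from standard detection post-processing: low-confidence boxes are filtered out, then overlapping boxes of the same class are removed by non-maximum suppression (NMS). The following is a minimal pure-Python sketch of that step with hypothetical box data; it is illustrative only, not the sample's actual code.

```python
# Illustrative post-processing sketch (not the sample's exact code):
# confidence filtering plus IoU-based NMS, then print each detection.
# Box format: (x1, y1, x2, y2) in pixels.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(dets, conf_thres=0.25, iou_thres=0.45):
    """dets: list of (box, score, cls). Returns the detections kept after NMS."""
    dets = [d for d in dets if d[1] >= conf_thres]
    dets.sort(key=lambda d: d[1], reverse=True)   # highest score first
    kept = []
    for d in dets:
        # Suppress a box only if it heavily overlaps a kept box of the same class.
        if all(iou(d[0], k[0]) < iou_thres for k in kept if k[2] == d[2]):
            kept.append(d)
    return kept

if __name__ == "__main__":
    # Hypothetical raw detections: two overlapping boxes of class 0, one of class 1.
    dets = [((10, 10, 110, 110), 0.9, 0),
            ((12, 12, 112, 112), 0.8, 0),     # overlaps the first -> suppressed
            ((200, 200, 300, 300), 0.7, 1)]
    for box, score, cls in nms(dets):
        print(f"class={cls} score={score:.2f} box={box}")
```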
Download the precompiled cvimodel
git clone https://github.com/zwyzwm/yolo11-obiect-dection.git
Cross-compile the YOLO program on a PC
The yolov8 program can be used directly for YOLO11. On the Duo256M, the yolov8 code is located in sample_yolov8.cpp.
Compilation method
Refer to the method in the previous chapter, Introduction, to compile the sample program. After compilation completes, the sample_yolov8 program we need is generated in the sample/cvi_yolo/ directory.
Model compilation
If you have already downloaded the yolov8 repository, a git pull is enough.
Export yolo11.onnx model
- Download the official YOLO11 repository code from: https://github.com/ultralytics/ultralytics
git clone https://github.com/ultralytics/ultralytics.git
- Download the corresponding YOLO11 model file; yolo11n is used as the example here. After downloading, copy it to the ultralytics directory.
- Export the yolo11n.onnx model
Download the latest version of Miniconda; refer to: https://docs.anaconda.com/miniconda/
Use Python 3.8 or above and PyTorch 2.0.1 or above; the latest versions are preferred.
Create and activate the environment (for example, Python 3.8 and torch 2.0.1):
conda create -n py3.8 python==3.8.2
conda activate py3.8
python -m venv .venv
source .venv/bin/activate
pip3 install --upgrade pip
pip3 install torch==2.0.1
Note: Execute in the ultralytics directory.
Copy the yolo_export/yolov8_export.py code to the yolo11 repository (yolo11 is compatible with yolov8, so the same export script works).
python3 yolov8_export.py --weights ./yolo11n.pt --img-size 640 640
Tip: when running this command, if an error similar to ModuleNotFoundError: No module named 'x' appears, just pip install x.
This generates yolo11n.onnx in the current directory.
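As an alternative to yolov8_export.py, the Ultralytics package can export ONNX directly through its Python API. This is only a sketch, assuming the ultralytics package is installed; the stock exporter's output layout may differ from what yolov8_export.py produces, so verify the graph before feeding it to TPU-MLIR.

```python
# Alternative export sketch via the Ultralytics Python API.
# Assumptions: `pip install ultralytics` has been run, and yolo11n.pt is in
# the current directory. The resulting graph layout may differ from the one
# produced by yolov8_export.py; verify before converting with TPU-MLIR.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")               # the weights downloaded above
model.export(format="onnx", imgsz=640)   # writes yolo11n.onnx next to the .pt
```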
Parameter explanation:
- --weights: path to the PyTorch model
- --img-size: image input size
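Since --img-size fixes the network input at 640x640, frames of other sizes are typically letterboxed: scaled to fit while preserving aspect ratio, with the remainder padded. A small sketch of that arithmetic follows; it is illustrative only, since the real preprocessing happens inside the sample and the TPU-MLIR pipeline.

```python
# Letterbox arithmetic sketch (illustrative, not the sample's actual code):
# fit a source image into a fixed square network input while keeping the
# aspect ratio, padding the leftover area.

def letterbox_params(src_w, src_h, dst=640):
    """Return (scale, new_w, new_h, pad_x, pad_y) for an aspect-preserving fit."""
    scale = min(dst / src_w, dst / src_h)        # shrink to the tighter dimension
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2                   # left/right padding in pixels
    pad_y = (dst - new_h) // 2                   # top/bottom padding in pixels
    return scale, new_w, new_h, pad_x, pad_y

if __name__ == "__main__":
    # A hypothetical 1920x1080 frame maps to 640x360 with vertical padding.
    print(letterbox_params(1920, 1080))
```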
TPU-MLIR conversion model
Please refer to the TPU-MLIR documentation to configure the TPU-MLIR working environment; the same documentation explains the parameters used below.
After configuring the working environment, create a model_yolo11n directory in the same directory as this project and put the model and image files into it.
mkdir model_yolo11n && cd model_yolo11n
cp ${REGRESSION_PATH}/yolo11n.onnx .
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
The operation is divided into three steps:
- model_transform.py converts the ONNX model into the MLIR intermediate format:
onnx -> model_transform.py -> mlir
- run_calibration.py generates the INT8 quantization calibration table:
calibration_set -> run_calibration.py -> calibration_table
- model_deploy.py uses the MLIR model and the INT8 calibration table to generate the cvimodel for TPU inference:
mlir + calibration_table -> model_deploy.py -> cvimodel
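The three steps above might look like the following sketch. The flag values here (mean/scale, chip name, input shape, file names) are assumptions for a yolo11n model on a cv181x-based Duo256M; check the TPU-MLIR documentation for the exact parameters before running.

```shell
# Sketch only: flag values are assumptions; consult the TPU-MLIR docs.

# 1. ONNX -> MLIR
model_transform.py \
  --model_name yolo11n \
  --model_def yolo11n.onnx \
  --input_shapes [[1,3,640,640]] \
  --mean 0.0,0.0,0.0 \
  --scale 0.0039216,0.0039216,0.0039216 \
  --keep_aspect_ratio \
  --pixel_format rgb \
  --mlir yolo11n.mlir

# 2. Calibration set -> INT8 calibration table
run_calibration.py yolo11n.mlir \
  --dataset COCO2017 \
  --input_num 100 \
  -o yolo11n_cali_table

# 3. MLIR + calibration table -> cvimodel
model_deploy.py \
  --mlir yolo11n.mlir \
  --quantize INT8 \
  --calibration_table yolo11n_cali_table \
  --chip cv181x \
  --model yolo11n_cv181x_int8.cvimodel
```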