RK3588: Converting a YOLOv10 PT Model to an RKNN Model

Part 1: PT to ONNX

Modify the YOLOv10 source code

1. Edit head.py: add the following to the forward method of class v10Detect(Detect):

# added for ONNX export
y = []
for i in range(self.nl):
    t1 = self.one2one_cv2[i](x[i])
    t2 = self.one2one_cv3[i](x[i])
    y.append(t1)
    y.append(t2)
return y
# added for ONNX export
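For context, a minimal sketch of where this snippet sits inside the modified forward (the attribute names come from the snippet above; the exact surrounding code depends on your ultralytics version, so treat this as an illustration rather than a drop-in file). After the change, the exported ONNX has 6 outputs: for each of the 3 detection heads, a regression branch (4 x 16 = 64 channels of DFL logits) and a classification branch (one channel per class), which is exactly the layout the post-processing code in Part 3 expects.

class v10Detect(Detect):
    def forward(self, x):
        # For ONNX export only: return the raw regression/classification branches of each head
        y = []
        for i in range(self.nl):
            t1 = self.one2one_cv2[i](x[i])  # regression branch, 4 * 16 = 64 channels (DFL logits)
            t2 = self.one2one_cv3[i](x[i])  # classification branch, num_classes channels
            y.append(t1)
            y.append(t2)
        return y
        # Restore the original forward (or guard this block with a flag) for training/normal inference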

2. Edit exporter.py to add an 'rknn' export format that produces an RKNN-friendly ONNX. The code below is based on the official Ultralytics exporter.py; the RKNN-specific additions are the 'RKNN' row in export_formats(), the rknn flag handling in __call__(), and the new export_rknn() method. Sections that are identical to the official file are elided with comments, so apply the additions to your local copy of exporter.py.

# Ultralytics YOLO 🚀, AGPL-3.0 license
"""
Export a YOLOv8 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit

Format                  | `format=argument`         | Model
---                     | ---                       | ---
PyTorch                 | -                         | yolov8n.pt
TorchScript             | `torchscript`             | yolov8n.torchscript
ONNX                    | `onnx`                    | yolov8n.onnx
OpenVINO                | `openvino`                | yolov8n_openvino_model/
TensorRT                | `engine`                  | yolov8n.engine
CoreML                  | `coreml`                  | yolov8n.mlpackage
TensorFlow SavedModel   | `saved_model`             | yolov8n_saved_model/
TensorFlow GraphDef     | `pb`                      | yolov8n.pb
TensorFlow Lite         | `tflite`                  | yolov8n.tflite
TensorFlow Edge TPU     | `edgetpu`                 | yolov8n_edgetpu.tflite
TensorFlow.js           | `tfjs`                    | yolov8n_web_model/
PaddlePaddle            | `paddle`                  | yolov8n_paddle_model/
ncnn                    | `ncnn`                    | yolov8n_ncnn_model/

Requirements:
    $ pip install "ultralytics[export]"

Python:
    from ultralytics import YOLO
    model = YOLO('yolov8n.pt')
    results = model.export(format='onnx')

CLI:
    $ yolo mode=export model=yolov8n.pt format=onnx

Inference:
    $ yolo predict model=yolov8n.pt                 # PyTorch
                         yolov8n.torchscript        # TorchScript
                         yolov8n.onnx               # ONNX Runtime or OpenCV DNN with dnn=True
                         yolov8n_openvino_model     # OpenVINO
                         yolov8n.engine             # TensorRT
                         yolov8n.mlpackage          # CoreML (macOS-only)
                         yolov8n_saved_model        # TensorFlow SavedModel
                         yolov8n.pb                 # TensorFlow GraphDef
                         yolov8n.tflite             # TensorFlow Lite
                         yolov8n_edgetpu.tflite     # TensorFlow Edge TPU
                         yolov8n_paddle_model       # PaddlePaddle

TensorFlow.js:
    $ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
    $ npm install
    $ ln -s ../../yolov5/yolov8n_web_model public/yolov8n_web_model
    $ npm start
"""
import json
import os
import shutil
import subprocess
import time
import warnings
from copy import deepcopy
from datetime import datetime
from pathlib import Path
import cv2
import torch

from ultralytics.cfg import get_cfg
from ultralytics.nn.autobackend import check_class_names
from ultralytics.nn.modules import C2f, Detect, RTDETRDecoder
from ultralytics.nn.tasks import DetectionModel, SegmentationModel
from ultralytics.utils import (ARM64, DEFAULT_CFG, LINUX, LOGGER, MACOS, ROOT, WINDOWS, __version__, callbacks,
                               colorstr, get_default_args, yaml_save)
from ultralytics.utils.checks import check_imgsz, check_requirements, check_version
from ultralytics.utils.downloads import attempt_download_asset, get_github_assets
from ultralytics.utils.files import file_size, spaces_in_path
from ultralytics.utils.ops import Profile
from ultralytics.utils.torch_utils import get_latest_opset, select_device, smart_inference_mode


def export_formats():
    """YOLOv8 export formats."""
    import pandas
    x = [
        ['PyTorch', '-', '.pt', True, True],
        ['TorchScript', 'torchscript', '.torchscript', True, True],
        ['ONNX', 'onnx', '.onnx', True, True],
        ['OpenVINO', 'openvino', '_openvino_model', True, False],
        ['TensorRT', 'engine', '.engine', False, True],
        ['CoreML', 'coreml', '.mlpackage', True, False],
        ['TensorFlow SavedModel', 'saved_model', '_saved_model', True, True],
        ['TensorFlow GraphDef', 'pb', '.pb', True, True],
        ['TensorFlow Lite', 'tflite', '.tflite', True, False],
        ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', True, False],
        ['TensorFlow.js', 'tfjs', '_web_model', True, False],
        ['PaddlePaddle', 'paddle', '_paddle_model', True, True],
        ['ncnn', 'ncnn', '_ncnn_model', True, True],
        ['RKNN', 'rknn', '_rknnopt.torchscript', True, False],  # added for RKNN export
    ]
    return pandas.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU'])


def gd_outputs(gd):
    """TensorFlow GraphDef model output node names."""
    name_list, input_list = [], []
    for node in gd.node:  # tensorflow.core.framework.node_def_pb2.NodeDef
        name_list.append(node.name)
        input_list.extend(node.input)
    return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp'))


def try_export(inner_func):
    """YOLOv8 export decorator, i.e. @try_export."""
    inner_args = get_default_args(inner_func)

    def outer_func(*args, **kwargs):
        """Export a model."""
        prefix = inner_args['prefix']
        try:
            with Profile() as dt:
                f, model = inner_func(*args, **kwargs)
            LOGGER.info(f"{prefix} export success ✅ {dt.t:.1f}s, saved as '{f}' ({file_size(f):.1f} MB)")
            return f, model
        except Exception as e:
            LOGGER.info(f'{prefix} export failure ❌ {dt.t:.1f}s: {e}')
            raise e

    return outer_func


class Exporter:
    """
    A class for exporting a model.

    Attributes:
        args (SimpleNamespace): Configuration for the exporter.
        save_dir (Path): Directory to save results.
    """

    def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
        """
        Initializes the Exporter class.

        Args:
            cfg (str, optional): Path to a configuration file. Defaults to DEFAULT_CFG.
            overrides (dict, optional): Configuration overrides. Defaults to None.
            _callbacks (list, optional): List of callback functions. Defaults to None.
        """
        self.args = get_cfg(cfg, overrides)
        self.callbacks = _callbacks or callbacks.get_default_callbacks()
        callbacks.add_integration_callbacks(self)

    @smart_inference_mode()
    def __call__(self, model=None):
        """Returns list of exported files/dirs after running callbacks."""
        self.run_callbacks('on_export_start')
        t = time.time()
        format = self.args.format.lower()  # to lowercase
        if format in ('tensorrt', 'trt'):  # 'engine' aliases
            format = 'engine'
        if format in ('mlmodel', 'mlpackage', 'mlprogram', 'apple', 'ios'):  # 'coreml' aliases
            format = 'coreml'
        fmts = tuple(export_formats()['Argument'][1:])  # available export formats
        flags = [x == format for x in fmts]
        if sum(flags) != 1:
            raise ValueError(f"Invalid export format='{format}'. Valid formats are {fmts}")
        jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, ncnn, rknn = flags  # export booleans

        # ... (device selection, argument checks, dummy-input creation, model fusing, dry runs,
        # warning filters and metadata assembly are identical to the official exporter.py and
        # are omitted here)

        # Exports
        f = [''] * len(fmts)  # exported filenames
        if jit or ncnn:  # TorchScript
            f[0], _ = self.export_torchscript()
        if engine:  # TensorRT required before ONNX
            f[1], _ = self.export_engine()
        if onnx or xml:  # OpenVINO requires ONNX
            f[2], _ = self.export_onnx()
        if xml:  # OpenVINO
            f[3], _ = self.export_openvino()
        if coreml:  # CoreML
            f[4], _ = self.export_coreml()
        if any((saved_model, pb, tflite, edgetpu, tfjs)):  # TensorFlow formats
            self.args.int8 |= edgetpu
            f[5], s_model = self.export_saved_model()
            if pb or tfjs:  # pb prerequisite to tfjs
                f[6], _ = self.export_pb(s_model)
            if tflite:
                f[7], _ = self.export_tflite(s_model, nms=False, agnostic_nms=self.args.agnostic_nms)
            if edgetpu:
                f[8], _ = self.export_edgetpu(tflite_model=Path(f[5]) / f'{self.file.stem}_full_integer_quant.tflite')
            if tfjs:
                f[9], _ = self.export_tfjs()
        if paddle:  # PaddlePaddle
            f[10], _ = self.export_paddle()
        if ncnn:  # ncnn
            f[11], _ = self.export_ncnn()
        if rknn:  # RKNN (added)
            f[12], _ = self.export_rknn()

        # ... (result filtering, summary logging and the 'on_export_end' callback are identical to
        # the official exporter.py and are omitted here)
        return f  # return list of exported files/dirs

    @try_export
    def export_torchscript(self, prefix=colorstr('TorchScript:')):
        """YOLOv8 TorchScript model export."""
        LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')
        f = self.file.with_suffix('.torchscript')
        ts = torch.jit.trace(self.model, self.im, strict=False)
        extra_files = {'config.txt': json.dumps(self.metadata)}  # torch._C.ExtraFilesMap()
        if self.args.optimize:  # https://pytorch.org/tutorials/recipes/mobile_interpreter.html
            LOGGER.info(f'{prefix} optimizing for mobile...')
            from torch.utils.mobile_optimizer import optimize_for_mobile
            optimize_for_mobile(ts)._save_for_lite_interpreter(str(f), _extra_files=extra_files)
        else:
            ts.save(str(f), _extra_files=extra_files)
        return f, None

    @try_export
    def export_rknn(self, prefix=colorstr('RKNN:')):
        """YOLOv8 RKNN model export (actually exports an RKNN-friendly ONNX file)."""
        LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')

        # ts = torch.jit.trace(self.model, self.im, strict=False)
        # f = str(self.file).replace(self.file.suffix, f'_rknnopt.torchscript')
        # torch.jit.save(ts, str(f))

        f = str(self.file).replace(self.file.suffix, '.onnx')
        opset_version = self.args.opset or get_latest_opset()  # note: opset is fixed to 12 below, this value is unused
        torch.onnx.export(
            self.model,
            self.im[0:1, :, :, :],
            f,
            verbose=False,
            opset_version=12,
            do_constant_folding=True,  # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False
            input_names=['images'])

        LOGGER.info(f'\n{prefix} feed {f} to RKNN-Toolkit or RKNN-Toolkit2 to generate RKNN model.\n'
                    'Refer https://github.com/airockchip/rknn_model_zoo/tree/main/models/CV/object_detection/yolo')
        return f, None

    # ... (export_onnx, export_openvino, export_paddle, export_ncnn, export_coreml, export_engine,
    # export_saved_model, export_pb, export_tflite, export_edgetpu, export_tfjs,
    # _add_tflite_metadata, _pipeline_coreml, add_callback and run_callbacks, plus the
    # iOSDetectModel class and the export() entry point, are identical to the official
    # Ultralytics exporter.py and are omitted here)

3. Convert to ONNX

yolo export model=/your_path/best.pt format=rknn
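The same export can also be triggered from Python; below is a small sketch (paths are placeholders), followed by an optional check with the onnx package that the exported file really exposes the 6 expected outputs:

from ultralytics import YOLO

model = YOLO('/your_path/best.pt')
model.export(format='rknn')  # with the patched exporter.py this actually writes /your_path/best.onnx

# Optional sanity check: the RKNN-friendly ONNX should have 6 outputs
# (3 detection heads x regression/classification branches)
import onnx

m = onnx.load('/your_path/best.onnx')
for out in m.graph.output:
    dims = [d.dim_value for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)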

Part 2: ONNX to RKNN

The procedure is the same as converting a YOLOv8 ONNX model to RKNN; the reference link is below:

https://blog.csdn.net/weixin_49824703/article/details/140180413?spm=1001.2014.3001.5502
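If you prefer not to follow the link, a minimal conversion sketch with rknn-toolkit2 on a PC looks like the following (the mean/std values, target platform and quantization settings are assumptions, adjust them to your setup; DATASET points to a text file listing calibration image paths, matching the constants used in the post-processing script in Part 3):

from rknn.api import RKNN

ONNX_MODEL = './yolov10n_zq.onnx'
RKNN_MODEL = './yolov10n_zq.rknn'
DATASET = './dataset.txt'  # one calibration image path per line

rknn = RKNN(verbose=True)
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')

ret = rknn.load_onnx(model=ONNX_MODEL)
assert ret == 0, 'load_onnx failed'

ret = rknn.build(do_quantization=True, dataset=DATASET)
assert ret == 0, 'build failed'

ret = rknn.export_rknn(RKNN_MODEL)
assert ret == 0, 'export_rknn failed'

rknn.release()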

 

Part 3: Post-processing code (rough version)

import os
import urllib
import traceback
import time
import sys
import numpy as np
import cv2
from rknnlite.api import RKNNLite
from math import exp

ONNX_MODEL = './yolov10n_zq.onnx'
RKNN_MODEL = './yolov10n_zq.rknn'
DATASET = './dataset.txt'

QUANTIZE_ON = True

CLASSES = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
           'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
           'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
           'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
           'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
           'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
           'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
           'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
           'hair drier', 'toothbrush']

meshgrid = []

class_num = len(CLASSES)
head_num = 3
strides = [8, 16, 32]
map_size = [[80, 80], [40, 40], [20, 20]]
object_thresh = 0.25

input_height = 640
input_width = 640

topK = 50


class DetectBox:
    def __init__(self, classId, score, xmin, ymin, xmax, ymax):
        self.classId = classId
        self.score = score
        self.xmin = xmin
        self.ymin = ymin
        self.xmax = xmax
        self.ymax = ymax


def GenerateMeshgrid():
    for index in range(head_num):
        for i in range(map_size[index][0]):
            for j in range(map_size[index][1]):
                meshgrid.append(j + 0.5)
                meshgrid.append(i + 0.5)


def TopK(detectResult):
    if len(detectResult) <= topK:
        return detectResult
    else:
        predBoxs = []
        sort_detectboxs = sorted(detectResult, key=lambda x: x.score, reverse=True)
        for i in range(topK):
            predBoxs.append(sort_detectboxs[i])
        return predBoxs


def sigmoid(x):
    return 1 / (1 + exp(-x))


def postprocess(out, img_h, img_w):
    print('postprocess ... ')

    detectResult = []
    output = []
    for i in range(len(out)):
        output.append(out[i].reshape((-1)))

    scale_h = img_h / input_height
    scale_w = img_w / input_width

    gridIndex = -2
    cls_index = 0
    cls_max = 0

    for index in range(head_num):
        reg = output[index * 2 + 0]  # regression branch of this head
        cls = output[index * 2 + 1]  # classification branch of this head

        for h in range(map_size[index][0]):
            for w in range(map_size[index][1]):
                gridIndex += 2

                if 1 == class_num:
                    cls_max = sigmoid(cls[0 * map_size[index][0] * map_size[index][1] + h * map_size[index][1] + w])
                    cls_index = 0
                else:
                    for cl in range(class_num):
                        cls_val = cls[cl * map_size[index][0] * map_size[index][1] + h * map_size[index][1] + w]
                        if 0 == cl:
                            cls_max = cls_val
                            cls_index = cl
                        else:
                            if cls_val > cls_max:
                                cls_max = cls_val
                                cls_index = cl
                    cls_max = sigmoid(cls_max)

                if cls_max > object_thresh:
                    # DFL decode: softmax over 16 bins, then expectation, for each of the 4 box sides
                    regdfl = []
                    for lc in range(4):
                        sfsum = 0
                        locval = 0
                        for df in range(16):
                            temp = exp(reg[((lc * 16) + df) * map_size[index][0] * map_size[index][1] + h * map_size[index][1] + w])
                            reg[((lc * 16) + df) * map_size[index][0] * map_size[index][1] + h * map_size[index][1] + w] = temp
                            sfsum += temp
                        for df in range(16):
                            sfval = reg[((lc * 16) + df) * map_size[index][0] * map_size[index][1] + h * map_size[index][1] + w] / sfsum
                            locval += sfval * df
                        regdfl.append(locval)

                    x1 = (meshgrid[gridIndex + 0] - regdfl[0]) * strides[index]
                    y1 = (meshgrid[gridIndex + 1] - regdfl[1]) * strides[index]
                    x2 = (meshgrid[gridIndex + 0] + regdfl[2]) * strides[index]
                    y2 = (meshgrid[gridIndex + 1] + regdfl[3]) * strides[index]

                    xmin = x1 * scale_w
                    ymin = y1 * scale_h
                    xmax = x2 * scale_w
                    ymax = y2 * scale_h

                    xmin = xmin if xmin > 0 else 0
                    ymin = ymin if ymin > 0 else 0
                    xmax = xmax if xmax < img_w else img_w
                    ymax = ymax if ymax < img_h else img_h

                    box = DetectBox(cls_index, cls_max, xmin, ymin, xmax, ymax)
                    detectResult.append(box)

    # topK
    print('before topK num is:', len(detectResult))
    predBox = TopK(detectResult)

    return predBox


def export_rknn_inference(img):
    # Create RKNN object
    rknn = RKNNLite()

    ret = rknn.load_rknn("model/yolov10n_zq.rknn")
    if ret != 0:
        print('Load RKNNLite model failed')
        exit(ret)
    print('done')

    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')

    # Inference
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])
    rknn.release()
    print('done')

    return outputs


if __name__ == '__main__':
    print('This is main ...')
    GenerateMeshgrid()

    img_path = '622c9d4fa0e3043b8dbb1caa68e7e4b.jpg'
    src_img = cv2.imread(img_path)
    img_h, img_w = src_img.shape[:2]

    input_img = cv2.resize(src_img, (input_width, input_height))
    input_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2RGB)
    input_img = np.expand_dims(input_img, 0)

    outputs = export_rknn_inference(input_img)

    out = []
    for i in range(len(outputs)):
        out.append(outputs[i])
    predbox = postprocess(out, img_h, img_w)

    print(len(predbox))

    for i in range(len(predbox)):
        xmin = int(predbox[i].xmin)
        ymin = int(predbox[i].ymin)
        xmax = int(predbox[i].xmax)
        ymax = int(predbox[i].ymax)
        classId = predbox[i].classId
        score = predbox[i].score

        cv2.rectangle(src_img, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
        title = CLASSES[classId] + ":%.2f" % (score)
        cv2.putText(src_img, title, (xmin, ymin), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2, cv2.LINE_AA)

    cv2.imwrite('test_rknn_result.jpg', src_img)
