[Effective RT-DETR Improvement] Replacing the RepC3 Structure with YOLOv9's GELAN Module (Lightweight Version + Efficient Accuracy-Boosting Version + Hand-Drawn Structure Diagrams)

I. Introduction

This post presents an improvement mechanism that uses the GELAN module proposed in YOLOv9 (released on 2024/02/21) to replace the RepC3 structure in RT-DETR. GELAN fuses the CSPNet and ELAN mechanisms and makes use of RepConv, so it extracts more effective features during training while re-parameterizing into a single-branch structure at inference time, leaving inference speed unaffected. This post provides two versions: one with fewer parameters and a slightly weaker accuracy gain (on the HGNet-l variant it reduces the parameter count by about 8.5M, with a computational cost of 77 GFLOPs), and one with somewhat more parameters but a better accuracy gain than the low-parameter version (both organized by me personally). Two versions are offered to suit readers with different needs; choose whichever fits your requirements, as both are provided in this article. The structure also offers many opportunities for secondary innovation, which I will provide later.

Column table of contents: RT-DETR Effective Improvements Series Directory | covering hundreds of innovative mechanisms for convolutions, backbones, RepC3, attention mechanisms, and the neck

Column link: the RT-DETR paper-focused column, continuously reproducing content from top conferences: Paper Harvester RT-DETR

II. How GELAN Works

2.1 Generalized ELAN

In this section we describe the proposed network architecture, GELAN. By combining CSPNet and ELAN, two architectures designed with gradient path planning, we designed the Generalized Efficient Layer Aggregation Network (GELAN), which takes light weight, inference speed, and accuracy into account. Its overall architecture is shown in Figure 4. We generalize the capability of ELAN, which originally only stacked convolutional layers, into a new architecture that can use any computational block.

This figure (Figure 4) shows the architecture of the Generalized Efficient Layer Aggregation Network (GELAN) and how it evolved from the CSPNet and ELAN architectures, both of which are designed with gradient path planning.

a) CSPNet: In CSPNet, the input is split into two parts by a transition layer, and each part then passes through an arbitrary computational block. The branches are afterwards merged (via concatenation) and passed through another transition layer.

b) ELAN: Compared with CSPNet, ELAN uses stacked convolutional layers, where the output of each layer is combined with the input of the next layer and then processed by a convolution.

c) GELAN: GELAN combines the designs of CSPNet and ELAN. It adopts CSPNet's split-and-recombine concept and introduces ELAN's layered convolutional processing into each part. The difference is that GELAN is not limited to convolutional layers: it can use any computational block, making the network more flexible and customizable to different application needs.

GELAN's design takes light weight, inference speed, and accuracy into account in order to improve the model's overall performance. The optional modules and partitions shown in the figure further increase the network's adaptability and customizability. Because this structure supports many types of computational blocks, GELAN can better adapt to a wide range of computational requirements and hardware constraints.

In short, the GELAN architecture aims to provide a more general and efficient network that can handle everything from lightweight to complex deep learning tasks while maintaining or improving computational efficiency and performance. In this way, GELAN seeks to address the limitations of existing architectures and to offer a scalable solution for future developments in deep learning.

One glance at the figure shows what it fuses: it combines CSPNet's way of stacking anyBlock modules with the ELAN design. A minimal sketch of this split-and-extend pattern follows.
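To make that pattern concrete, here is a minimal PyTorch sketch of the generic GELAN idea (my own illustration, not the paper's implementation; the plain 3x3 convolutions stand in for the paper's "any computational block", and all names are made up for this example):

import torch
import torch.nn as nn

class GELANSketch(nn.Module):
    """Illustrative only: CSPNet-style split plus ELAN-style progressive stacking."""
    def __init__(self, c_in, c_out, n_stages=2):
        super().__init__()
        half = c_out // 2
        self.stem = nn.Conv2d(c_in, c_out, 1)  # transition layer before the split
        # each stage plays the role of "any computational block"; a plain 3x3 conv here
        self.stages = nn.ModuleList(nn.Conv2d(half, half, 3, padding=1) for _ in range(n_stages))
        # final transition layer fuses the two split halves plus every stage output
        self.fuse = nn.Conv2d(c_out + n_stages * half, c_out, 1)

    def forward(self, x):
        y = list(self.stem(x).chunk(2, dim=1))  # CSPNet: split into two halves
        for stage in self.stages:                # ELAN: keep extending from the newest branch
            y.append(stage(y[-1]))
        return self.fuse(torch.cat(y, dim=1))    # merge all branches

x = torch.rand(1, 32, 64, 64)
print(GELANSketch(32, 64)(x).shape)  # torch.Size([1, 64, 64, 64])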


2.2 Generalized ELAN Structure Diagram

The main YOLOv9 innovation available so far is its GELAN structure; I analyzed its code and drew the structure diagram based on the paper.

The file below is YOLOv9's yaml file. You can see that it proposes a structure named RepNCSPELAN4. In my structure diagram I did not label the channel count after the concat, because the intermediate channel variables are computed from user-configurable settings (a worked channel trace follows the code below).

Its code and structure diagram are shown below!

class RepNCSPELAN4(nn.Module):
    # csp-elan
    def __init__(self, c1, c2, c5=1):  # c5 = repeats
        super().__init__()
        c3 = int(c2 / 2)
        c4 = int(c3 / 2)
        self.c = c3 // 2
        self.cv1 = Conv(c1, c3, 1, 1)
        self.cv2 = nn.Sequential(RepNCSP(c3 // 2, c4, c5), Conv(c4, c4, 3, 1))
        self.cv3 = nn.Sequential(RepNCSP(c4, c4, c5), Conv(c4, c4, 3, 1))
        self.cv4 = Conv(c3 + (2 * c4), c2, 1, 1)

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, 1))
        y.extend((m(y[-1])) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))

    def forward_split(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))
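As a worked example of the channel arithmetic above (my own numbers, assuming c2 = 256):

# Worked channel trace for RepNCSPELAN4, assuming c2 = 256:
c2 = 256
c3 = int(c2 / 2)    # 128: width after cv1
c4 = int(c3 / 2)    # 64: width of each stage branch (cv2, cv3)
# cv1's output is chunked into two halves of c3 // 2 = 64 channels each;
# cv2 and cv3 each append one c4-channel branch, so the concat feeding cv4 has
print(c3 + 2 * c4)  # 256 channels, which cv4 projects back to c2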


III. Core Code

See Section IV for how to use the core code!

import torch
import torch.nn as nn
import numpy as np

__all__ = ['RepNCSPELAN4', 'RepNCSPELAN4_high']


class RepConvN(nn.Module):
    """RepConv is a basic rep-style block, including training and deploy status
    This code is based on https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py
    """
    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=3, s=1, p=1, g=1, d=1, act=True, bn=False, deploy=False):
        super().__init__()
        assert k == 3 and p == 1
        self.g = g
        self.c1 = c1
        self.c2 = c2
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
        self.bn = None
        self.conv1 = Conv(c1, c2, k, s, p=p, g=g, act=False)
        self.conv2 = Conv(c1, c2, 1, s, p=(p - k // 2), g=g, act=False)

    def forward_fuse(self, x):
        """Forward process"""
        return self.act(self.conv(x))

    def forward(self, x):
        """Forward process"""
        id_out = 0 if self.bn is None else self.bn(x)
        return self.act(self.conv1(x) + self.conv2(x) + id_out)

    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.conv1)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.conv2)
        kernelid, biasid = self._fuse_bn_tensor(self.bn)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid

    def _avg_to_3x3_tensor(self, avgp):
        channels = self.c1
        groups = self.g
        kernel_size = avgp.kernel_size
        input_dim = channels // groups
        k = torch.zeros((channels, input_dim, kernel_size, kernel_size))
        k[np.arange(channels), np.tile(np.arange(input_dim), groups), :, :] = 1.0 / kernel_size ** 2
        return k

    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])

    def _fuse_bn_tensor(self, branch):
        if branch is None:
            return 0, 0
        if isinstance(branch, Conv):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        elif isinstance(branch, nn.BatchNorm2d):
            if not hasattr(self, 'id_tensor'):
                input_dim = self.c1 // self.g
                kernel_value = np.zeros((self.c1, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.c1):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def fuse_convs(self):
        if hasattr(self, 'conv'):
            return
        kernel, bias = self.get_equivalent_kernel_bias()
        self.conv = nn.Conv2d(in_channels=self.conv1.conv.in_channels,
                              out_channels=self.conv1.conv.out_channels,
                              kernel_size=self.conv1.conv.kernel_size,
                              stride=self.conv1.conv.stride,
                              padding=self.conv1.conv.padding,
                              dilation=self.conv1.conv.dilation,
                              groups=self.conv1.conv.groups,
                              bias=True).requires_grad_(False)
        self.conv.weight.data = kernel
        self.conv.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__('conv1')
        self.__delattr__('conv2')
        if hasattr(self, 'nm'):
            self.__delattr__('nm')
        if hasattr(self, 'bn'):
            self.__delattr__('bn')
        if hasattr(self, 'id_tensor'):
            self.__delattr__('id_tensor')


class RepNBottleneck(nn.Module):
    # Standard bottleneck
    def __init__(self, c1, c2, shortcut=True, g=1, k=(3, 3), e=0.5):  # ch_in, ch_out, shortcut, kernels, groups, expand
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = RepConvN(c1, c_, k[0], 1)
        self.cv2 = Conv(c_, c2, k[1], 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class RepNCSP(nn.Module):
    # CSP Bottleneck with 3 convolutions
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.Sequential(*(RepNBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))

    def forward(self, x):
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    # Pad to 'same' shape outputs
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)
    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        return self.act(self.conv(x))


class RepNCSPELAN4(nn.Module):
    # csp-elan (low-parameter version: internal width c3 is half of c2)
    def __init__(self, c1, c2, c5=1):  # c5 = repeats
        super().__init__()
        c3 = int(c2 / 2)
        c4 = int(c3 / 2)
        self.c = c3 // 2
        self.cv1 = Conv(c1, c3, 1, 1)
        self.cv2 = nn.Sequential(RepNCSP(c3 // 2, c4, c5), Conv(c4, c4, 3, 1))
        self.cv3 = nn.Sequential(RepNCSP(c4, c4, c5), Conv(c4, c4, 3, 1))
        self.cv4 = Conv(c3 + (2 * c4), c2, 1, 1)

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, 1))
        y.extend((m(y[-1])) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))

    def forward_split(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))


class RepNCSPELAN4_high(nn.Module):
    # csp-elan (high-parameter version: internal width c3 equals c2)
    def __init__(self, c1, c2, c5=1):  # c5 = repeats
        super().__init__()
        c3 = c2
        c4 = int(c3 / 2)
        self.c = c3 // 2
        self.cv1 = Conv(c1, c3, 1, 1)
        self.cv2 = nn.Sequential(RepNCSP(c3 // 2, c4, c5), Conv(c4, c4, 3, 1))
        self.cv3 = nn.Sequential(RepNCSP(c4, c4, c5), Conv(c4, c4, 3, 1))
        self.cv4 = Conv(c3 + (2 * c4), c2, 1, 1)

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, 1))
        y.extend((m(y[-1])) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))

    def forward_split(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))


if __name__ == "__main__":
    # Generate a sample image
    image_size = (1, 24, 224, 224)
    image = torch.rand(*image_size)

    # Model: RepNCSPELAN4 takes (c1, c2, c5=repeats)
    model = RepNCSPELAN4(24, 24, 1)
    out = model(image)
    print(out.size())
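To compare the two versions concretely, you can extend the test block above with a quick parameter count (my own illustrative snippet; the 256-channel setting is arbitrary):

    # Quick parameter comparison of the two variants (illustrative):
    low = RepNCSPELAN4(256, 256, 1)        # c3 = c2 / 2 internally: lightweight version
    high = RepNCSPELAN4_high(256, 256, 1)  # c3 = c2 internally: higher-capacity version
    print(sum(p.numel() for p in low.parameters()), 'vs',
          sum(p.numel() for p in high.parameters()))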


IV. Step-by-Step: Adding the GELAN Mechanism

4.1 Modification 1

Step one, as usual, is to create the file. Locate the 'ultralytics/nn/modules' folder and create a directory inside it named 'Addmodules' (if you are using the files from the group, it already exists and there is no need to create it)! Then create a new py file inside it and copy-paste the core code into it; the resulting layout is sketched below.
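The layout you end up with should look roughly like this ('GELAN.py' is a file name I chose for illustration; any name works as long as the import in 4.2 matches):

ultralytics/nn/modules/Addmodules/
    __init__.py      # re-exports the new modules (see 4.2)
    GELAN.py         # paste the core code from Section III here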


4.2 Modification 2

In the second step, create a new py file named '__init__.py' in that directory (if you are using the files from the group, it already exists and there is no need to create it), then import our module inside it, as shown in the figure below; a text sketch of the import is also given here.
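A minimal sketch of the import line, assuming the file name 'GELAN.py' from 4.1:

# ultralytics/nn/modules/Addmodules/__init__.py
from .GELAN import RepNCSPELAN4, RepNCSPELAN4_high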


4.3 Modification 3

In the third step, go to the file 'ultralytics/nn/tasks.py' to import and register our module (if you are using the files from the group, it is already imported; skip straight to step four). A sketch of the import is shown below.
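A minimal sketch of that import near the top of 'ultralytics/nn/tasks.py' (assuming the Addmodules package layout from 4.1):

# in ultralytics/nn/tasks.py
from ultralytics.nn.modules.Addmodules import RepNCSPELAN4, RepNCSPELAN4_high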

From today on, all tutorials will be unified into this format, because I assume everyone is modifying the files from my group!!


4.4 Modification 4

Just add the module inside parse_model as I show; a sketch of the branch is given below.
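For reference, the branch inside parse_model looks roughly like the following sketch, modeled on how Ultralytics handles C2f-style blocks (your group files may already contain an equivalent; make_divisible, max_channels, and width are names taken from the surrounding parse_model code):

        elif m in (RepNCSPELAN4, RepNCSPELAN4_high):
            c1, c2 = ch[f], args[0]
            c2 = make_divisible(min(c2, max_channels) * width, 8)  # scale the output width
            args = [c1, c2, *args[1:]]
            args.insert(2, n)  # forward the yaml repeat count as c5
            n = 1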

That completes the modifications; you can copy one of the yaml files below and run it.


V. GELAN yaml Files and Run Records

5.1 yaml Files for the Low-Parameter GELAN Version

5.1.1 ResNet18 Version

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 0-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 1
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 2
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 3-P2
  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 4
  - [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 5-P3
  - [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 6-P4
  - [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 7-P5

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 8 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 10, Y5, lateral_convs.0
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 11
  - [6, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 12 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepNCSPELAN4, [256]]  # 14, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 15, Y4, lateral_convs.1
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 16
  - [5, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 17 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 18 cat backbone P4
  - [-1, 3, RepNCSPELAN4, [256]]  # X3 (19), fpn_blocks.1
  - [-1, 1, Conv, [256, 3, 2]]  # 20, downsample_convs.0
  - [[-1, 15], 1, Concat, [1]]  # 21 cat Y4
  - [-1, 3, RepNCSPELAN4, [256]]  # F4 (22), pan_blocks.0
  - [-1, 1, Conv, [256, 3, 2]]  # 23, downsample_convs.1
  - [[-1, 10], 1, Concat, [1]]  # 24 cat Y5
  - [-1, 3, RepNCSPELAN4, [256]]  # F5 (25), pan_blocks.1
  - [[19, 22, 25], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]]  # Detect(P3, P4, P5)



5.1.2 ResNet50 Version

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 0-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 1
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 2
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 3-P2
  - [-1, 3, Blocks, [64,  BottleNeck, 2, False]] # 4
  - [-1, 4, Blocks, [128, BottleNeck, 3, False]] # 5-P3
  - [-1, 6, Blocks, [256, BottleNeck, 4, False]] # 6-P4
  - [-1, 3, Blocks, [512, BottleNeck, 5, False]] # 7-P5

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 8 input_proj.2
  - [-1, 1, AIFI, [1024, 8]] # 9
  - [-1, 1, Conv, [256, 1, 1]]  # 10, Y5, lateral_convs.0
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 11
  - [6, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 12 input_proj.1
  - [[-2, -1], 1, Concat, [1]] # 13
  - [-1, 3, RepNCSPELAN4, [256]]  # 14, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]   # 15, Y4, lateral_convs.1
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 16
  - [5, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 17 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 18 cat backbone P4
  - [-1, 3, RepNCSPELAN4, [256]]    # X3 (19), fpn_blocks.1
  - [-1, 1, Conv, [256, 3, 2]]   # 20, downsample_convs.0
  - [[-1, 15], 1, Concat, [1]]  # 21 cat Y4
  - [-1, 3, RepNCSPELAN4, [256]]    # F4 (22), pan_blocks.0
  - [-1, 1, Conv, [256, 3, 2]]   # 23, downsample_convs.1
  - [[-1, 10], 1, Concat, [1]]  # 24 cat Y5
  - [-1, 3, RepNCSPELAN4, [256]]    # F5 (25), pan_blocks.1
  - [[19, 22, 25], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 6]]  # Detect(P3, P4, P5)



5.1.3 HGNetv2 Version

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, HGStem, [32, 48]]  # 0-P2/4
  - [-1, 6, HGBlock, [48, 128, 3]]  # stage 1
  - [-1, 1, DWConv, [128, 3, 2, 1, False]]  # 2-P3/8
  - [-1, 6, HGBlock, [96, 512, 3]]  # stage 2
  - [-1, 1, DWConv, [512, 3, 2, 1, False]]  # 4-P3/16
  - [-1, 6, HGBlock, [192, 1024, 5, True, False]]  # cm, c2, k, light, shortcut
  - [-1, 6, HGBlock, [192, 1024, 5, True, True]]
  - [-1, 6, HGBlock, [192, 1024, 5, True, True]]  # stage 3
  - [-1, 1, DWConv, [1024, 3, 2, 1, False]]  # 8-P4/32
  - [-1, 6, HGBlock, [384, 2048, 5, True, False]]  # stage 4

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 10 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 12, Y5, lateral_convs.0
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [7, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 14 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepNCSPELAN4, [256]]  # 16, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 17, Y4, lateral_convs.1
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [3, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 19 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, RepNCSPELAN4, [256]]  # X3 (21), fpn_blocks.1
  - [-1, 1, DiverseBranchBlock, [384, 3, 2]]  # 22, downsample_convs.0
  - [[-1, 17], 1, Concat, [1]]  # cat Y4
  - [-1, 3, RepNCSPELAN4, [256]]  # F4 (24), pan_blocks.0
  - [-1, 1, DiverseBranchBlock, [384, 3, 2]]  # 25, downsample_convs.1
  - [[-1, 12], 1, Concat, [1]]  # cat Y5
  - [-1, 3, RepNCSPELAN4, [256]]  # F5 (27), pan_blocks.1
  - [[21, 24, 27], 1, RTDETRDecoder, [nc]]  # Detect(P3, P4, P5)



5.2 yaml Files for the High-Parameter GELAN Version

5.2.1 ResNet18 Version

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers,  3157200 parameters,  3157184 gradients,   8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients,  28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients,  79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 1, RepNCSPELAN4_high, [128]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 1, RepNCSPELAN4_high, [256]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 1, RepNCSPELAN4_high, [512]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 1, RepNCSPELAN4_high, [1024]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 1, RepNCSPELAN4_high, [512]]  # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 1, RepNCSPELAN4_high, [256]]  # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 1, RepNCSPELAN4_high, [512]]  # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 1, RepNCSPELAN4_high, [1024]]  # 21 (P5/32-large)
  - [[15, 18, 21], 1, Detect, [nc]]  # Detect(P3, P4, P5)

5.2.2 ResNet50 Version

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 0-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 1
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 2
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 3-P2
  - [-1, 3, Blocks, [64,  BottleNeck, 2, False]] # 4
  - [-1, 4, Blocks, [128, BottleNeck, 3, False]] # 5-P3
  - [-1, 6, Blocks, [256, BottleNeck, 4, False]] # 6-P4
  - [-1, 3, Blocks, [512, BottleNeck, 5, False]] # 7-P5

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 8 input_proj.2
  - [-1, 1, AIFI, [1024, 8]] # 9
  - [-1, 1, Conv, [256, 1, 1]]  # 10, Y5, lateral_convs.0
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 11
  - [6, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 12 input_proj.1
  - [[-2, -1], 1, Concat, [1]] # 13
  - [-1, 3, RepNCSPELAN4_high, [256]]  # 14, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]   # 15, Y4, lateral_convs.1
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 16
  - [5, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 17 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 18 cat backbone P4
  - [-1, 3, RepNCSPELAN4_high, [256]]    # X3 (19), fpn_blocks.1
  - [-1, 1, Conv, [256, 3, 2]]   # 20, downsample_convs.0
  - [[-1, 15], 1, Concat, [1]]  # 21 cat Y4
  - [-1, 3, RepNCSPELAN4_high, [256]]    # F4 (22), pan_blocks.0
  - [-1, 1, Conv, [256, 3, 2]]   # 23, downsample_convs.1
  - [[-1, 10], 1, Concat, [1]]  # 24 cat Y5
  - [-1, 3, RepNCSPELAN4_high, [256]]    # F5 (25), pan_blocks.1
  - [[19, 22, 25], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 6]]  # Detect(P3, P4, P5)



5.2.3 HGNetv2 Version

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, HGStem, [32, 48]]  # 0-P2/4
  - [-1, 6, HGBlock, [48, 128, 3]]  # stage 1
  - [-1, 1, DWConv, [128, 3, 2, 1, False]]  # 2-P3/8
  - [-1, 6, HGBlock, [96, 512, 3]]  # stage 2
  - [-1, 1, DWConv, [512, 3, 2, 1, False]]  # 4-P3/16
  - [-1, 6, HGBlock, [192, 1024, 5, True, False]]  # cm, c2, k, light, shortcut
  - [-1, 6, HGBlock, [192, 1024, 5, True, True]]
  - [-1, 6, HGBlock, [192, 1024, 5, True, True]]  # stage 3
  - [-1, 1, DWConv, [1024, 3, 2, 1, False]]  # 8-P4/32
  - [-1, 6, HGBlock, [384, 2048, 5, True, False]]  # stage 4

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 10 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 12, Y5, lateral_convs.0
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [7, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 14 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepNCSPELAN4_high, [256]]  # 16, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 17, Y4, lateral_convs.1
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [3, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 19 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, RepNCSPELAN4_high, [256]]  # X3 (21), fpn_blocks.1
  - [-1, 1, DiverseBranchBlock, [384, 3, 2]]  # 22, downsample_convs.0
  - [[-1, 17], 1, Concat, [1]]  # cat Y4
  - [-1, 3, RepNCSPELAN4_high, [256]]  # F4 (24), pan_blocks.0
  - [-1, 1, DiverseBranchBlock, [384, 3, 2]]  # 25, downsample_convs.1
  - [[-1, 12], 1, Concat, [1]]  # cat Y5
  - [-1, 3, RepNCSPELAN4_high, [256]]  # F5 (27), pan_blocks.1
  - [[21, 24, 27], 1, RTDETRDecoder, [nc]]  # Detect(P3, P4, P5)

5.3 Training Code

You can create a py file, copy-paste the code I provide into it, configure your own file paths, and run it.

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('ultralytics/cfg/models/v8/yolov8-C2f-FasterBlock.yaml')  # replace with the path to the GELAN yaml you saved
    # model.load('yolov8n.pt') # loading pretrain weights
    model.train(data=r'replace with the path to your dataset yaml file',
                # for other tasks, find 'task' in 'ultralytics/cfg/default.yaml' and change it to detect, segment, classify or pose
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # whether this is single-class detection
                batch=4,
                close_mosaic=10,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='', # to resume training, set this to the path of last.pt
                amp=False,  # disable amp if the training loss becomes NaN
                project='runs/train',
                name='exp',
                )
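Since this article targets RT-DETR, here is a minimal sketch of the RT-DETR flavor of the same script (the model yaml path is a placeholder; point it at the GELAN yaml you saved from section 5.1 or 5.2):

import warnings
warnings.filterwarnings('ignore')
from ultralytics import RTDETR

if __name__ == '__main__':
    # 'rtdetr-GELAN.yaml' is a hypothetical name; use the path of your saved yaml
    model = RTDETR('ultralytics/cfg/models/rt-detr/rtdetr-GELAN.yaml')
    model.train(data=r'replace with the path to your dataset yaml file',
                imgsz=640,
                epochs=150,
                batch=4,
                workers=0,
                device='0',
                )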


5.4 GELAN Training Screenshots

5.4.1 Low-Parameter Version


5.4.2 High-Parameter Version


VI. Conclusion

This concludes the formal content of this post. Here I would like to recommend my RT-DETR effective-improvement column. The column is newly opened with an average quality score of 98; going forward I will reproduce papers from the latest top conferences and also supplement some older improvement mechanisms. If this article helped you, subscribe to the column and follow for more updates~

Column link: the RT-DETR paper-focused column, continuously reproducing content from top conferences: Paper Harvester RT-DETR
