LeNet: the classic convolutional neural network you cannot skip
- 1. Basic architecture
- 2. LeNet 5
- 3. LeNet 5 code
1. Basic architecture
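A convolutional network's basic architecture stacks convolution and pooling stages and finishes with fully connected layers, which is exactly the pattern LeNet 5 follows below. A minimal sketch of that pipeline in PyTorch, with purely illustrative channel counts and a 10-class output:

import torch
from torch import nn

# conv -> pool -> fully connected; all sizes here are illustrative choices.
net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution adds channels: 1 -> 8
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),                # pooling halves the size: 32 -> 16
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 10),                 # fully connected layer outputs class scores
)

print(net(torch.randn(1, 1, 32, 32)).shape)     # torch.Size([1, 10])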

2. LeNet 5

- LeNet 5: the "5" refers to its 5 core layers: 2 convolutional layers and 3 fully connected layers.
- Core weight layers are the convolutional, fully connected, and recurrent layers; BatchNorm / Dropout and the like are auxiliary layers.
- Convolutions, 32×32 → 28×28: after the convolution the feature map shrinks by 4 pixels per side, because the kernel_size is 5×5 and no zero padding was used in that era (the layer-by-layer size arithmetic is worked out after this list).
- Subsampling: also known as pooling; each pooling step halves the spatial size of the feature map while the number of channels stays the same.
- Convolution gives the feature map more and more channels, pooling makes it smaller and smaller; at the end, fully connected layers output the class scores.
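The sizes claimed above can be checked with the usual output-size formula, output = (input − kernel + 2 × padding) / stride + 1, applied to every convolution and pooling step. A short sketch of that arithmetic (the helper name out_size is just an illustrative choice):

def out_size(size, kernel, stride=1, padding=0):
    # same formula for convolution and pooling:
    # floor((size - kernel + 2 * padding) / stride) + 1
    return (size - kernel + 2 * padding) // stride + 1

s = 32
s = out_size(s, kernel=5)            # conv 5x5, no padding: 32 -> 28
s = out_size(s, kernel=2, stride=2)  # 2x2 subsampling:      28 -> 14
s = out_size(s, kernel=5)            # conv 5x5, no padding: 14 -> 10
s = out_size(s, kernel=2, stride=2)  # 2x2 subsampling:      10 -> 5
print(s, 16 * s * s)                 # 5 400  (400 feeds the first fully connected layer)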
3. LeNet 5 code
import torch
from torch import nn


class ConvBlock(nn.Module):
    """
    One convolution block:
      - convolution layer
      - batch normalization layer
      - activation layer
    """
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=in_channels,
                              out_channels=out_channels,
                              kernel_size=kernel_size,
                              stride=stride,
                              padding=padding)
        self.bn = nn.BatchNorm2d(num_features=out_channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x


class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature_extractor = nn.Sequential(
            # 1 x 32 x 32 -> 6 x 28 x 28
            ConvBlock(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=0),
            # 6 x 28 x 28 -> 6 x 14 x 14
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
            # 6 x 14 x 14 -> 16 x 10 x 10
            ConvBlock(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=0),
            # 16 x 10 x 10 -> 16 x 5 x 5
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features=400, out_features=120),  # 400 = 16 * 5 * 5
            nn.ReLU(),
            nn.Linear(in_features=120, out_features=84),
            nn.ReLU(),
            nn.Linear(in_features=84, out_features=10),
        )

    def forward(self, x):
        x = self.feature_extractor(x)
        x = self.classifier(x)
        return x


if __name__ == "__main__":
    model = LeNet()
    print(model)
    x = torch.randn(1, 1, 32, 32)
    y = model(x)
    print(y.shape)  # torch.Size([1, 10])
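The __main__ block above only checks the output shape. As a rough sketch of how this LeNet could be trained, assuming torchvision is installed and using MNIST padded from 28×28 to the 32×32 input LeNet expects; the data path "./data", batch size, learning rate, and epoch count are arbitrary illustrative choices, and the LeNet class is the one defined above:

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Pad MNIST's 28x28 digits to the 32x32 input size LeNet expects.
transform = transforms.Compose([
    transforms.Pad(2),
    transforms.ToTensor(),
])

# "./data", batch_size=64, lr=1e-3 and 3 epochs are illustrative choices.
train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = LeNet().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")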