What is the CIFAR-10 dataset?
CIFAR-10 is a dataset of 60,000 32x32-pixel color images in 10 classes, with 6,000 images per class, split into 50,000 training images and 10,000 test images. CIFAR-10 dataset page: http://www.cs.toronto.edu/~kriz/cifar.html
The data comes as 5 training batches and 1 test batch, each containing 10,000 images.
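Under the hood these batches are plain Python pickle files. If you ever want to inspect them directly rather than through torchvision, a minimal sketch could look like this (assuming the archive has already been extracted to ./data/cifar-10-batches-py, which is where torchvision unpacks it):
import pickle

def unpickle(path):
    # each CIFAR-10 batch file is a pickled dict with byte-string keys
    with open(path, 'rb') as f:
        return pickle.load(f, encoding='bytes')

batch = unpickle('./data/cifar-10-batches-py/data_batch_1')
print(batch[b'data'].shape)   # (10000, 3072): one flattened 3x32x32 image per row
print(len(batch[b'labels']))  # 10000 integer labels in the range 0-9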
Step 1: download the dataset and load it into memory. The images go through two transforms. ToTensor() converts pixel values from the 0-255 range to the 0-1 range, which already helps the network converge faster; an image with pixel values in [0, 1] is still a valid image and can be displayed normally. Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) then applies (x - 0.5) / 0.5 to each channel, mapping the values from (0, 1) to (-1, 1); centering the inputs around zero like this speeds up training and tends to improve generalization.
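A quick way to convince yourself of that mapping (a small sketch, separate from the code below):
import torch
import torchvision.transforms as transforms

norm = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
fake_img = torch.rand(3, 32, 32)            # values in [0, 1], as ToTensor() would produce
out = norm(fake_img)                        # applies (x - 0.5) / 0.5 per channel
print(out.min().item(), out.max().item())   # roughly -1 and 1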
import torch
import torchvision  # datasets, image transforms and other vision utilities
import torchvision.transforms as transforms
N = 64
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=N, shuffle=True, num_workers=0)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=N, shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
Output:
Using downloaded and verified file: ./data\cifar-10-python.tar.gz
Extracting ./data\cifar-10-python.tar.gz to ./data
Files already downloaded and verified
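A quick sanity check on what was just loaded (a sketch, not part of the original code):
print(len(trainset), len(testset))          # 50000 10000
img, label = trainset[0]                    # a single (tensor, label) pair
print(img.shape, classes[label])            # torch.Size([3, 32, 32]) and the class name
print(img.min().item(), img.max().item())   # roughly -1.0 and 1.0 after normalization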
Step 2: view a random batch of images
import matplotlib.pyplot as plt
import numpy as np
# pixel values in the 0-1 range still form a valid image, so it can be displayed directly
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()
# get some random training images
dataiter = iter(trainloader)
print(type(dataiter))
images, labels = next(dataiter)    # use the built-in next(); dataiter.next() no longer exists in recent PyTorch
print(images.shape, labels.shape)  # torch.Size([N, 3, 32, 32]) torch.Size([N])
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(N)))
Step 3: define the convolutional neural network. You have to keep careful track of how the spatial size of the feature maps changes through each layer: for example, a 5x5 convolution with stride 1 and no padding turns a 32x32 input into a 28x28 output.
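The "32 -> 28" arithmetic follows the standard output-size formula for convolution and pooling layers; a small helper (for illustration only) makes it explicit:
# output size = floor((input + 2*padding - kernel) / stride) + 1
def out_size(n, kernel, stride=1, pad=0):
    return (n + 2 * pad - kernel) // stride + 1

n = 32
n = out_size(n, kernel=5)             # conv1: 32 -> 28
n = out_size(n, kernel=2, stride=2)   # pool1: 28 -> 14
n = out_size(n, kernel=5)             # conv2: 14 -> 10
n = out_size(n, kernel=2, stride=2)   # pool2: 10 -> 5
print(n)                              # 5, which is why fc1 takes 16*5*5 inputs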
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)    # 32 -> 28
        self.pool1 = nn.MaxPool2d(2)       # 28 -> 14
        self.conv2 = nn.Conv2d(6, 16, 5)   # 14 -> 10
        self.pool2 = nn.MaxPool2d(2)       # 10 -> 5
        self.fc1 = nn.Linear(16*5*5, 120)  # flattened feature size: 16 channels * 5 * 5
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)       # 10 classes

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)             # flatten
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
net = Net()
print(net)
Output:
Net(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
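To get a feel for the size of this LeNet-style network, the parameters can be counted directly (a sketch; the numbers follow from the layer shapes printed above):
for name, p in net.named_parameters():
    print(name, tuple(p.shape))                  # weights and biases of each layer
print(sum(p.numel() for p in net.parameters()))  # 62006 trainable parameters in total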
Step 4: define the loss function and train the network. Since this is a classification task, cross-entropy loss is used; Adam is chosen as the optimizer.
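nn.CrossEntropyLoss takes the raw logits of shape [batch, 10] together with the integer class labels, and internally combines log-softmax with negative log-likelihood, so the network itself does not need a softmax layer. A tiny standalone illustration (a sketch, independent of the training code below):
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 10)              # raw, unnormalized scores for a batch of 4 images
labels = torch.tensor([3, 0, 9, 1])      # ground-truth class indices
print(criterion(logits, labels).item())  # a single scalar; smaller when the logits favor the correct classes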
import torch.optim as optim
criterion = nn.CrossEntropyLoss()  # cross-entropy loss for classification
# torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
optimizer = optim.Adam(net.parameters())  # all other hyperparameters are left at their defaults
for epoch in range(3):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data                # inputs: <class 'torch.Tensor'> of shape torch.Size([N, 3, 32, 32])
        optimizer.zero_grad()                # clear the gradients from the previous step
        output = net(inputs)                 # forward pass
        loss = criterion(output, labels)     # compute the loss
        loss.backward()                      # backward pass
        running_loss += loss.item()          # accumulate the loss
        optimizer.step()                     # update the network parameters
        if i % 2000 == 1999:                 # note: with batch_size N = 64 there are only ~782 batches per epoch, so reduce the batch size or the print interval if you want to see this message
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))  # average loss over the last 2000 iterations
            running_loss = 0.0
print('Finished Training')
print('Finished Training')
Output:
[1, 2000] loss: 1.644
[2, 2000] loss: 1.421
[3, 2000] loss: 1.211
Finished Training
Step 5: save the trained model. PyTorch supports two ways of saving:
- save only the model parameters (the state_dict)
- save the complete model (structure plus parameters)
WEIGHT = './cifar_net_weights.pth'
MODEL = './cifar_net_model.pth'
torch.save(net.state_dict(), WEIGHT)  # save only the model parameters (state_dict)
torch.save(net, MODEL)                # save the whole model (structure plus parameters)
Opening the model file and the weights file in Netron shows the difference between the two.
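For completeness, loading from the weights-only file works differently from loading the full model (shown in Step 6): the network has to be constructed first and the parameters loaded into it. A minimal sketch (net2 is just an illustrative name):
net2 = Net()                              # build the network structure first
net2.load_state_dict(torch.load(WEIGHT))  # then fill in the saved parameters
net2.eval()                               # switch to inference mode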
Step 6: run inference from the saved model file
import torch,torchvision
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as transforms
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)    # 32 -> 28
        self.pool1 = nn.MaxPool2d(2)       # 28 -> 14
        self.conv2 = nn.Conv2d(6, 16, 5)   # 14 -> 10
        self.pool2 = nn.MaxPool2d(2)       # 10 -> 5
        self.fc1 = nn.Linear(16*5*5, 120)  # flattened feature size: 16 channels * 5 * 5
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)       # 10 classes

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)             # flatten
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
MODEL = './cifar_net_model.pth'
net = torch.load(MODEL)  # load the full pickled model; on PyTorch 2.6+ you may need torch.load(MODEL, weights_only=False)
print(net)
N = 16
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=N, shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(labels.size(0)):   # iterate over the whole batch, not just its first 4 samples
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1
for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
Output:
Net(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
Files already downloaded and verified
Accuracy of the network on the 10000 test images: 58 %
Accuracy of plane : 53 %
Accuracy of car : 70 %
Accuracy of bird : 44 %
Accuracy of cat : 41 %
Accuracy of deer : 42 %
Accuracy of dog : 40 %
Accuracy of frog : 81 %
Accuracy of horse : 66 %
Accuracy of ship : 80 %
Accuracy of truck : 69 %
Puzzle: loading the saved model file still requires the definition of the Net class? That doesn't seem to make sense!
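The reason is that torch.save(net, MODEL) serializes the object with pickle, and pickle stores only a reference to the Net class (its module path and name), not its source code, so the class must still be importable or defined when the file is loaded. If a truly self-contained file is needed, TorchScript is one option; a sketch (the file name is just an example):
scripted = torch.jit.script(net)                    # compile the model, including its forward() logic
scripted.save('./cifar_net_scripted.pt')
loaded = torch.jit.load('./cifar_net_scripted.pt')  # no Net class definition required here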
Step 7: speed up training with a GPU
- net.to(device)  # move the network onto the GPU
- inputs, labels = data[0].to(device), data[1].to(device)  # move each batch of data onto the GPU
In practice, GPU training did not bring much of a speedup here, because the network in this example is very shallow and narrow; once the network is made wider and deeper, the benefit of the GPU becomes apparent. The combined changes are sketched below.
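Putting those two changes together, the relevant part of the training loop would look roughly like this (a sketch of the modification, not the original code):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = Net().to(device)                      # move the network's parameters onto the GPU

for epoch in range(3):
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data[0].to(device), data[1].to(device)  # move each batch onto the GPU
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()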