PyTorch Implementation of Linear Regression

1. Prerequisite Formulas

Linear model: ŷ = x * w + b
Loss function: loss = (ŷ - y)²
Weight update: w = w - α * d(loss)/dw
Backpropagation
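
As a warm-up, the formulas above can be implemented by hand before bringing in PyTorch. The following is a minimal sketch in plain Python (not from the original post); the gradient d(loss)/dw = 2 * x * (ŷ - y) follows from the chain rule, and the variable names and learning rate are illustrative choices:

# Hand-rolled gradient descent for ŷ = x * w + b (a sketch; names are illustrative)
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w, b, lr = 1.0, 0.0, 0.01

for epoch in range(100):
    for x, y in zip(x_data, y_data):
        y_hat = x * w + b             # linear model
        grad_w = 2 * x * (y_hat - y)  # d(loss)/dw by the chain rule
        grad_b = 2 * (y_hat - y)      # d(loss)/db
        w -= lr * grad_w              # weight update
        b -= lr * grad_b

print(w, b)  # should approach w=2, b=0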

Building a neural network is, in essence, constructing a multi-layer computational graph.
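
To make the computational-graph idea concrete, here is a small example (an illustration added here, not from the original) in which PyTorch records the graph for a single sample and autograd reproduces the gradient derived by hand above:

import torch

w = torch.tensor(1.0, requires_grad=True)   # leaf node tracked by autograd
x, y = torch.tensor(2.0), torch.tensor(4.0)

y_hat = x * w                # each operation adds a node to the graph
loss = (y_hat - y) ** 2
loss.backward()              # walk the graph backwards, accumulating gradients

print(w.grad)                # tensor(-8.) == 2 * x * (y_hat - y)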


2. Implementing Linear Regression in PyTorch

import torch

x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)  # input dimension is 1, output dimension is 1

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

criterion = torch.nn.MSELoss(reduction='sum')  # sum per-sample losses instead of averaging over the batch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # updates w and b, learning rate 0.01

for epoch in range(1000):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())  # .item() extracts the scalar; otherwise you get a tensor still attached to the graph

    optimizer.zero_grad()  # clear accumulated gradients before backward
    loss.backward()
    optimizer.step()

print("w=", model.linear.weight.item())
print("b=", model.linear.bias.item())

x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print("y_pred=", y_test.data)


Last updated: 2024-09-01