Adaline (ADAptive LInear NEuron) was proposed by Widrow and Hoff in 1960.
f(x) = σ(w·x + b), where the activation σ is the identity function, σ(z) = z.

Let's try implementing it with PyTorch.
```python
import torch


class Adaline():
    def __init__(self, num_features):
        self.num_features = num_features
        self.weight = torch.zeros(self.num_features, 1, dtype=torch.float)
        self.bias = torch.zeros(1, dtype=torch.float)

    def forward(self, x):
        # Linear activation: the net input is the output
        netinputs = torch.add(torch.mm(x, self.weight), self.bias)
        activations = netinputs
        return activations.view(-1)

    def backward(self, x, yhat, y):
        # Gradients of the MSE loss via the chain rule
        grad_loss_yhat = 2 * (y - yhat)
        grad_yhat_weights = -x
        grad_yhat_bias = -1.
        grad_loss_weights = torch.mm(grad_yhat_weights.t(),
                                     grad_loss_yhat.view(-1, 1)) / y.size(0)
        grad_loss_bias = torch.sum(grad_yhat_bias * grad_loss_yhat) / y.size(0)
        # Return negative gradients so they can be added in the update step
        return (-1) * grad_loss_weights, (-1) * grad_loss_bias
```
The class implements the forward pass and the backward pass.
In the forward pass:

netinputs = torch.add(torch.mm(x, self.weight), self.bias)

corresponds to w·x + b; the result is then flattened to a 1-D tensor and returned.
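As a quick sanity check of that computation (the data and parameter values here are made up), the forward pass is just a matrix multiply plus a bias:

```python
import torch

# Hypothetical toy data: 4 samples, 2 features
x = torch.tensor([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
weight = torch.tensor([[0.5], [-0.5]])  # shape: (num_features, 1)
bias = torch.tensor([1.0])

# Same computation as the forward pass above: w*x + b, flattened to 1-D
netinputs = torch.add(torch.mm(x, weight), bias)
activations = netinputs.view(-1)
print(activations)  # tensor([0.5000, 0.5000, 0.5000, 0.5000])
```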
In the backward pass, the method computes the gradients of the mean squared error loss with respect to the weights and the bias, and returns their negatives so they can be added directly during the parameter update.
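To make the update rule concrete, here is a minimal training sketch that adds the negative gradients returned by backward as a plain gradient-descent step. The toy data, learning rate, and epoch count are all made up for illustration:

```python
import torch


class Adaline():
    def __init__(self, num_features):
        self.num_features = num_features
        self.weight = torch.zeros(self.num_features, 1, dtype=torch.float)
        self.bias = torch.zeros(1, dtype=torch.float)

    def forward(self, x):
        netinputs = torch.add(torch.mm(x, self.weight), self.bias)
        return netinputs.view(-1)

    def backward(self, x, yhat, y):
        grad_loss_yhat = 2 * (y - yhat)
        grad_yhat_weights = -x
        grad_yhat_bias = -1.
        grad_loss_weights = torch.mm(grad_yhat_weights.t(),
                                     grad_loss_yhat.view(-1, 1)) / y.size(0)
        grad_loss_bias = torch.sum(grad_yhat_bias * grad_loss_yhat) / y.size(0)
        return (-1) * grad_loss_weights, (-1) * grad_loss_bias


# Made-up toy regression data generated from y = 2*x1 - x2 + 0.5
x = torch.tensor([[1., 0.], [0., 1.], [1., 1.], [2., 1.]])
y = 2 * x[:, 0] - x[:, 1] + 0.5

model = Adaline(num_features=2)
lr = 0.1  # hypothetical learning rate
for epoch in range(500):
    yhat = model.forward(x)
    neg_grad_w, neg_grad_b = model.backward(x, yhat, y)
    # backward already returns negative gradients, so we add them
    model.weight += lr * neg_grad_w
    model.bias += lr * neg_grad_b

loss = torch.mean((model.forward(x) - y) ** 2)
print(loss)
```

Since the data is exactly linear, the MSE loss should approach zero as training proceeds.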
Complete code: https://github.com/xxg1413/MatrixSlow/tree/master/example/ch01