RuntimeError: expected scalar type Double but found Float

I'm new to PyTorch, and I'm getting the following error from my CNN layer: "RuntimeError: expected scalar type Double but found Float". I converted each element with .astype(np.double), but the error message persists. I then tried .double() after converting to a Tensor, and the error message appeared again. Here is my code for better understanding:

import torch
import torch.nn as nn

class CNN(nn.Module):

    # Constructor
    def __init__(self, shape):
        super(CNN, self).__init__()
        self.cnn1 = nn.Conv1d(in_channels=shape, out_channels=32, kernel_size=3)
        self.act1 = torch.nn.ReLU()

    # Prediction
    def forward(self, x):
        x = self.cnn1(x)
        x = self.act1(x)
        return x

X_train_reshaped = np.zeros([X_train.shape[0], int(X_train.shape[1]/depth), depth])

for i in range(X_train.shape[0]):
    for j in range(X_train.shape[1]):
        X_train_reshaped[i][int(j/3)][j%3] = X_train[i][j].astype(np.double)

X_train = torch.tensor(X_train_reshaped)
y_train = torch.tensor(y_train)

# Dataset w/o any transformations
train_dataset_normal = CustomTensorDataset(tensors=(X_train, y_train), transform=None)
train_loader = torch.utils.data.DataLoader(train_dataset_normal, shuffle=True, batch_size=16)

model = CNN(X_train.shape[1]).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

# Train the model
#how to implement batch_size??
total_step = len(train_loader)
for epoch in range(epochno):
    for i, (dataX, labels) in enumerate(train_loader):
        dataX = dataX.to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(dataX)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, epochno, i+1, total_step, loss.item()))

Below is the error I received:

RuntimeError                              Traceback (most recent call last)
<ipython-input-39-d99b62b3a231> in <module>
     14 
     15         # Forward pass
---> 16         outputs = model(dataX.double())
     17         loss = criterion(outputs, labels)
     18 

~\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

<ipython-input-27-7510ac2f1f42> in forward(self, x)
     22     # Prediction
     23     def forward(self, x):
---> 24         x = self.cnn1(x)
     25         x = self.act1(x)

~\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~\torch\nn\modules\conv.py in forward(self, input)
    261 
    262     def forward(self, input: Tensor) -> Tensor:
--> 263         return self._conv_forward(input, self.weight, self.bias)
    264 
    265 

~\torch\nn\modules\conv.py in _conv_forward(self, input, weight, bias)
    257                             weight, bias, self.stride,
    258                             _single(0), self.dilation, self.groups)
--> 259         return F.conv1d(input, weight, bias, self.stride,
    260                         self.padding, self.dilation, self.groups)
    261 

RuntimeError: expected scalar type Double but found Float
Accepted answer

I don't know if it's me or PyTorch, but the error message somehow has the conversion backwards. So I solved the problem in the forward pass by casting dataX to float, like this: outputs = model(dataX.float())
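A minimal sketch of the mismatch behind this fix, with hypothetical shapes (a batch of 4 sequences, 3 channels, length 10): numpy arrays default to float64, so torch.tensor(...) produces a Double tensor, while nn.Conv1d weights default to float32 (Float).

```python
import numpy as np
import torch
import torch.nn as nn

# numpy defaults to float64, so this tensor is Double
x = torch.tensor(np.zeros([4, 3, 10]))   # dtype: torch.float64
conv = nn.Conv1d(in_channels=3, out_channels=32, kernel_size=3)

# conv(x) would raise "expected scalar type Double but found Float"
# because the layer's weights are float32. Casting the input down
# to float32 makes the dtypes match:
out = conv(x.float())
print(out.dtype)   # torch.float32
```

The converse also works: model.double() casts the weights up to float64 so the Double input is accepted, at the cost of extra memory and typically slower compute.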


Answer:

I agree with aysebilgegunduz. This must be an issue in PyTorch, because I ran into the same error message as well.

Simply changing the type to another one fixes the problem.

You can check the type of your input tensor with:

data.type()

Some useful functions for changing the type:

data.float()
data.double()
data.long()
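For example, checking and converting a hypothetical Double tensor with those calls (each conversion returns a new tensor; the original is unchanged):

```python
import torch

t = torch.zeros(3, dtype=torch.float64)
print(t.type())   # 'torch.DoubleTensor'

f = t.float()     # torch.float32
d = t.double()    # torch.float64
l = t.long()      # torch.int64
```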
