Why does BatchNorm1d fail with batch size 1 in training mode?

I am training a small PyTorch model and want to use nn.BatchNorm1d.
When the batch size is 1 and the model is in training mode, I get the error below:

ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 20])

import torch
import torch.nn as nn

class BNModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 20)
        self.bn1 = nn.BatchNorm1d(20)
        self.fc2 = nn.Linear(20, 1)

    def forward(self, x):
        x = self.fc1(x)
        x = self.bn1(x)
        x = torch.relu(x)
        x = self.fc2(x)
        return x

model = BNModel()

# batch size = 1
x = torch.randn(1, 10)
model.train()
print(model(x))

If I increase the batch size to 4 (x = torch.randn(4, 10)), it runs without error. I also tried evaluation mode (model.eval()) with batch size 1, and that runs without error too. Both cases are shown below.
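For reference, this is how I checked both cases, continuing from the model defined above:

# batch size = 4 in training mode: runs fine
model.train()
print(model(torch.randn(4, 10)))

# batch size = 1 in eval mode: also runs fine (normalizes with running statistics)
model.eval()
print(model(torch.randn(1, 10)))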

I'm struggling to understand why BatchNorm requires more than one sample in training mode. What is the correct approach when training with batch size 1?
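My current guess is that BatchNorm1d normalizes each of the 20 channels using the mean and variance computed across the batch dimension, and with a single sample the per-channel variance is degenerate. A quick check of that statistic (just my own illustration, not PyTorch internals):

import torch

x = torch.randn(1, 20)               # one sample, 20 channels
mean = x.mean(dim=0)                 # per-channel mean over the batch
var = x.var(dim=0, unbiased=False)   # per-channel (biased) variance over the batch
print(var)                           # all zeros: each channel has only one value

Is this zero variance the reason training mode rejects a batch of 1?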
