PyTorch registers a layer's parameters when the layer is assigned as a module attribute, which normally happens in `__init__`. The error occurs because your model has no registered parameters at the moment the optimizer is created.
In your code, `self.linear` is initialized as `None` (when `input_dim=None`) and only created inside `forward()`. But the optimizer is built with:

```python
optim.SGD(model.parameters(), lr=0.001)
```
At that time `self.linear` does not exist yet, so `model.parameters()` yields nothing, and the optimizer raises:

```
ValueError: optimizer got an empty parameter list
```
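A minimal sketch reproducing the problem (the class name and data here are made up for illustration):

```python
import torch
import torch.nn as nn
import torch.optim as optim

class BrokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = None  # no nn.Linear yet -> nothing is registered

    def forward(self, X):
        if self.linear is None:
            # Layer is created lazily here -- too late for an optimizer
            # that was built before the first forward pass.
            self.linear = nn.Linear(X.shape[1], 1)
        return self.linear(X)

model = BrokenModel()
print(list(model.parameters()))  # prints [] -- no registered parameters
```

Calling `optim.SGD(model.parameters(), lr=0.001)` on this model raises the `ValueError` above, because the parameter list is empty until `forward()` has run at least once.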
To fix this, initialize the layer inside `__init__()`, not inside `forward()`:
```python
class LogisticRegressionModel(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        torch.manual_seed(9)
        self.linear = nn.Linear(input_dim, 1)

    def forward(self, X):
        return self.linear(X)  # return raw logits, no sigmoid
```

One last point: since you're using `nn.BCEWithLogitsLoss()`, remove `torch.sigmoid()` from `forward()`. `BCEWithLogitsLoss` already applies the sigmoid internally (in a numerically stable way).
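With the layer created in `__init__`, the optimizer sees the parameters immediately. A quick end-to-end check, with a made-up feature count and random batch just to show the pieces fit together:

```python
import torch
import torch.nn as nn
import torch.optim as optim

class LogisticRegressionModel(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        torch.manual_seed(9)
        self.linear = nn.Linear(input_dim, 1)

    def forward(self, X):
        return self.linear(X)  # raw logits; BCEWithLogitsLoss adds the sigmoid

model = LogisticRegressionModel(input_dim=4)        # 4 features, illustrative
optimizer = optim.SGD(model.parameters(), lr=0.001) # parameters() is non-empty now
criterion = nn.BCEWithLogitsLoss()

X = torch.randn(8, 4)                    # dummy batch of 8 samples
y = torch.randint(0, 2, (8, 1)).float()  # binary targets as floats

optimizer.zero_grad()
loss = criterion(model(X), y)  # logits go straight into the loss
loss.backward()
optimizer.step()
```

Note that the model output is passed to the loss without a sigmoid; if you later need probabilities (e.g. at inference time), apply `torch.sigmoid()` to the logits there instead.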
