First, unstable training is often caused by poor-quality training data or an overly complex model. Check the training data and make sure it is formatted correctly, and tune the model's hyperparameters (for example the learning rate and batch size) until training converges stably.
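One concrete lever not shown in the original answer is gradient clipping combined with a smaller learning rate. The sketch below is self-contained and uses a toy model; the values (lr=1e-5, max_norm=1.0) are illustrative assumptions that would need tuning for a real run:

import torch
import torch.nn as nn

# Toy model and data purely for illustration.
toy_model = nn.Linear(10, 2)
optimizer = torch.optim.Adam(toy_model.parameters(), lr=1e-5)  # a smaller learning rate often improves stability
criterion = nn.CrossEntropyLoss()
inputs = torch.randn(8, 10)
targets = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(toy_model(inputs), targets)
loss.backward()
# Clip the global gradient norm before the update to damp exploding gradients.
torch.nn.utils.clip_grad_norm_(toy_model.parameters(), max_norm=1.0)
optimizer.step()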
Second, a high loss paired with small gradients usually means optimization has stalled (for example on a plateau, or because gradients are vanishing) rather than that the model is overfitting; overfitting shows up as a low training loss alongside a much higher validation loss. If overfitting is the problem, common regularization methods such as dropout or L1/L2 regularization can mitigate it.
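As a rough sketch (the values are illustrative assumptions, not recommendations), L2 regularization can be applied through the optimizer's weight_decay argument, and the dropout probability on the classification head can be raised; the toy module below stands in for the BERT classifier defined later:

import torch
import torch.nn as nn

# Stand-in module; in practice this would be the BERT classifier shown below.
toy_head = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Dropout(0.3), nn.Linear(256, 2))

# weight_decay adds L2 regularization to every parameter in the group.
optimizer = torch.optim.AdamW(toy_head.parameters(), lr=2e-5, weight_decay=0.01)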
In addition, it helps to add more training data and to try adaptive optimizers such as Adam or RMSProp to speed up convergence (an optimizer and warmup-scheduler variant is sketched after the training loop below). Finally, here is a code example:
import torch
import torch.nn as nn
from transformers import BertModel

class BERTBinaryClassifier(nn.Module):
    def __init__(self, pretrained_model_name='bert-base-uncased'):
        super().__init__()
        # Load the pretrained BERT encoder and add a dropout + linear head for 2 classes.
        self.bert = BertModel.from_pretrained(pretrained_model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.bert.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Use the pooled [CLS] representation for classification.
        pooled_output = outputs.pooler_output
        pooled_output = self.dropout(pooled_output)
        logits = self.classifier(pooled_output)
        return logits
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# Instantiate the classifier (the custom module has no from_pretrained method;
# the pretrained weights are loaded inside __init__ instead).
model = BERTBinaryClassifier('bert-base-uncased').to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
criterion = nn.CrossEntropyLoss()
# train_loader is assumed to be a DataLoader yielding tokenized batches with
# 'input_ids', 'attention_mask', and 'label' tensors.
for epoch in range(3):
    for i, batch in enumerate(train_loader):
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['label'].to(device)

        optimizer.zero_grad()
        logits = model(input_ids, attention_mask)
        loss = criterion(logits.view(-1, 2), labels.view(-1))
        loss.backward()
        optimizer.step()

        if i % 100 == 0:
            print(f'Epoch: {epoch}, Step: {i}, Loss: {loss.item()}')
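If Adam does not converge well, RMSProp (mentioned above) is one alternative, and a linear warmup schedule from the transformers library is a common way to stabilize the first steps of BERT fine-tuning. The snippet below reuses the `model` and `train_loader` from the training code above; the hyperparameters (lr=1e-5, alpha=0.99, 100 warmup steps) are illustrative assumptions:

from transformers import get_linear_schedule_with_warmup

# RMSProp as an alternative optimizer; hyperparameters are placeholders to tune.
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-5, alpha=0.99)

# Linear warmup then decay over the full run; call scheduler.step() right after optimizer.step().
num_training_steps = 3 * len(train_loader)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=num_training_steps)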