BertForMaskedLM is a BERT-based model for predicting masked tokens. Its forward pass returns, as the first element of its output, the prediction scores (logits) over the vocabulary for every input position, a tensor of shape (batch_size, sequence_length, vocab_size). It does not directly return token IDs or probabilities; those are derived from the logits, e.g. via top-k/argmax for IDs and softmax for probabilities.
Code example:
import torch
from transformers import BertTokenizer, BertForMaskedLM

# Load the model and tokenizer
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model.eval()

# Text to predict; [MASK] marks the position to fill in
text = "I have a [MASK] and a cat."

# Convert the text to model input ([CLS]/[SEP] wrap the sentence,
# and the masked position is looked up rather than hard-coded)
tokenized_text = ['[CLS]'] + tokenizer.tokenize(text) + ['[SEP]']
masked_index = tokenized_text.index('[MASK]')
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0] * len(tokenized_text)  # single sentence -> segment 0

# Convert the data to tensors and run prediction
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
with torch.no_grad():
    outputs = model(tokens_tensor, token_type_ids=segments_tensors)
predictions = outputs[0][0, masked_index].topk(5)

# Print the top-5 predicted tokens and their IDs
print("Top 5 predictions:")
for i in range(5):
    predicted_index = predictions.indices[i].item()
    predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
    print("{:2d}: {:20} (id={})".format(i + 1, predicted_token, predicted_index))
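To make the shape of the returned logits concrete without downloading the model, here is a minimal sketch that uses a random dummy tensor in place of the real `outputs[0]` (the vocabulary size 30522 matches bert-base-uncased; the values are random, so the resulting token IDs are meaningless and only the shapes matter):

```python
import torch

# Dummy stand-in for outputs[0]: (batch_size, sequence_length, vocab_size)
batch, seq_len, vocab_size = 1, 8, 30522
logits = torch.randn(batch, seq_len, vocab_size)

masked_index = 3
# Top-5 scores and token IDs at the masked position
top = logits[0, masked_index].topk(5)
print(top.indices)  # 5 candidate token IDs, highest-scoring first
print(top.values)   # their raw logit scores

# Logits are not probabilities; apply softmax over the vocab
# dimension to get a distribution that sums to 1
probs = torch.softmax(logits[0, masked_index], dim=-1)
print(probs.sum())
```

This is also why `predictions.indices` and `predictions.values` work in the example above: `topk` returns a named tuple of (values, indices).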