
How do you fine-tune a model on custom data with the Hugging Face transformers pipeline?


Replies

Reply 1:

from transformers import pipeline, set_seed

# Load the GPT-2 model
model_name = "gpt2"
nlp = pipeline("text-generation", model=model_name)

# Define the fine-tuning data
train_data = [
    {"text": "This is the first sentence. ", "new_text": "This is the first sentence. This is the second sentence. "},
    {"text": "Another example. ", "new_text"...

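The snippet above is cut off, and a pipeline object only runs inference; it has no training loop. Fine-tuning GPT-2 on custom text is normally done with the Trainer API, after which the fine-tuned checkpoint can be loaded back into a pipeline. Below is a minimal sketch of that workflow; the corpus file my_corpus.txt, the output directory, and the hyperparameters are assumptions for illustration.

# Minimal causal-LM fine-tuning sketch for GPT-2 with Trainer (not pipeline).
# "my_corpus.txt" and the hyperparameters below are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling,
                          TrainingArguments, Trainer, pipeline)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# One training example per line in the text file
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects the standard left-to-right language-modeling objective
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="./gpt2-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_strategy="epoch",
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], data_collator=collator)
trainer.train()
trainer.save_model("./gpt2-finetuned")

# The fine-tuned weights can then be used for generation through a pipeline
nlp = pipeline("text-generation", model="./gpt2-finetuned", tokenizer="./gpt2-finetuned")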

Reply 2:

transformers API reference: https://huggingface.co/docs/transformers/v4.21.2/en/training

train.py

from datasets import load_dataset
from transformers import AutoTokenizer, AutoConfig
from transformers import DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
import os
import json

#from datasets import load_metric

os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3,4,5,6,7"

# Load the dataset (training and test splits) from CSV files
dataset = load_dataset("csv", data_files={"train": "./weibo_train.csv", "test": "./weibo_test.csv"}, cache_dir="./cache")
dataset = dataset.class_encode_column("label")  # encode the label column; this collects the label set from the training split

# Build a label map (label -> id) from the loaded dataset, used for training and later for inference and accuracy computation
def generate_label_map(dataset):
    labels=dataset['train'].features['label'].names
    label2id=dict()
    for idx,label in enumerate(labels):
        label2id[label]=idx
    return label2id

def save_label_map(dataset,label_map_file):
    # only take the labels of the training data for the label set of the model.
    label2id=generate_label_map(dataset)
    with open(label_map_file,'w',encoding='utf-8') as fout:
        json.dump(label2id,fout)

# Save the label map
label_map_file='label2id.json'
save_label_map(dataset,label_map_file)

# Read back the label map (note: in multi-GPU training, reading the file this way may cause errors)
#label2id={}
#with open(label_map_fi...

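The script is truncated here. Rather than guessing the original continuation, the following is a hedged sketch of how the remaining steps usually look according to the training guide linked above: tokenize the text column, build the classification model with the label map, and run Trainer. The checkpoint bert-base-chinese, the CSV column name "text", and the hyperparameters are assumptions for illustration; it reuses the imports and the dataset/label-map objects defined earlier in train.py.

# Hedged continuation sketch (not the original code): tokenization, model, Trainer.
# "bert-base-chinese", the "text" column name, and the hyperparameters are assumptions.
checkpoint = "bert-base-chinese"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Reuse the label map built above so config.label2id / id2label match the training labels
label2id = generate_label_map(dataset)
id2label = {i: label for label, i in label2id.items()}

model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=len(label2id),
    label2id=label2id,
    id2label=id2label,
)

training_args = TrainingArguments(
    output_dir="./weibo_cls",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    evaluation_strategy="epoch",
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("./weibo_cls")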
