How do I fine-tune a model on custom data with the Hugging Face transformers pipeline?
Replies
from transformers import pipeline, set_seed

# Load the GPT-2 model
model_name = "gpt2"
nlp = pipeline("text-generation", model=model_name)

# Define the fine-tuning data
train_data = [
    {"text": "This is the first sentence. ", "new_text": "This is the first sentence. This is the second sentence. "},
    {"text": "Another example. ", "new_text"... (remainder of this reply truncated in the source)
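Note that the pipeline API is for inference; actual fine-tuning goes through the Trainer (see the next reply). For causal-LM fine-tuning such as GPT-2, the data preparation boils down to one idea, sketched below with hypothetical token ids (not real tokenizer output): the labels are the input ids themselves, and the model predicts each token from the tokens before it.

```python
# Toy token ids standing in for a GPT-2 tokenizer's output (hypothetical values).
input_ids = [10, 11, 12, 13]

# For causal-LM fine-tuning the labels are the input ids themselves;
# the one-position shift is applied inside the model's loss computation.
labels = list(input_ids)

# What the model effectively learns, position by position:
# predict each token from everything that precedes it.
pairs = [(input_ids[:i], labels[i]) for i in range(1, len(input_ids))]
print(pairs[0])  # → ([10], 11)
```

This is why, unlike sequence classification, no separate target column is needed: the training text is its own label.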
transformers API reference: https://huggingface.co/docs/transformers/v4.21.2/en/training
train.py
from datasets import load_dataset
from transformers import AutoTokenizer, AutoConfig
from transformers import DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
import os
import json
#from datasets import load_metric

os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3,4,5,6,7"

# Load the dataset (training and test splits)
dataset = load_dataset("csv", data_files={"train": "./weibo_train.csv", "test": "./weibo_test.csv"}, cache_dir="./cache")
dataset = dataset.class_encode_column("label")  # encode the label column; this collects the label set from the training split

# Build a label map (label -> id) from the loaded dataset, for training and for
# later inference and accuracy computation
def generate_label_map(dataset):
    labels = dataset['train'].features['label'].names
    label2id = dict()
    for idx, label in enumerate(labels):
        label2id[label] = idx
    return label2id

def save_label_map(dataset, label_map_file):
    # only take the labels of the training data for the label set of the model
    label2id = generate_label_map(dataset)
    with open(label_map_file, 'w', encoding='utf-8') as fout:
        json.dump(label2id, fout)

# Save the label map
label_map_file = 'label2id.json'
save_label_map(dataset, label_map_file)

# Read the label map back (note: when training on multiple GPUs, reading a
# file this way may cause errors)
#label2id = {}
#with open(label_map_fi... (remainder of this reply truncated in the source)
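The truncated read-back step can be sketched in pure Python. The label set below is a hypothetical two-class example, not taken from weibo_train.csv; in the script above, the real mapping is produced by generate_label_map() from the dataset's label names.

```python
import json

# Hypothetical two-class mapping standing in for generate_label_map()'s output.
label2id = {"negative": 0, "positive": 1}

# Persist the mapping, as save_label_map() does above.
with open("label2id.json", "w", encoding="utf-8") as fout:
    json.dump(label2id, fout)

# Read the mapping back and invert it, so that model outputs
# (integer ids) can be turned into human-readable labels.
with open("label2id.json", "r", encoding="utf-8") as fin:
    label2id = json.load(fin)
id2label = {idx: label for label, idx in label2id.items()}

print(id2label[1])  # → positive
```

Passing both dictionaries to the model (e.g. `AutoModelForSequenceClassification.from_pretrained(..., num_labels=len(label2id), id2label=id2label, label2id=label2id)`) makes inference outputs self-describing.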


