
Randeng-T5-784M-QA-Chinese

T5 for Chinese Question Answering

Brief Introduction

This T5-Large model is the first pretrained generative question-answering model for Chinese on Hugging Face. It was pretrained on the Wudao 180G corpus with the Fengshen framework, and finetuned on a translated Chinese SQuAD and the CMRC 2018 reading-comprehension dataset. Given a passage and a question, it produces a fluent and accurate answer.

Model Taxonomy

| Demand  | Task                                  | Series  | Model | Parameter | Extra                                 |
|---------|---------------------------------------|---------|-------|-----------|---------------------------------------|
| General | Natural Language Transformation (NLT) | Randeng | T5    | 784M      | Chinese Generative Question Answering |

Performance

CMRC 2018 dev set (the original task is start/end span prediction; we cast it as a generative QA task)

| Model                 | Contain Answer Rate | RougeL | BLEU-4 | F1   | EM   |
|-----------------------|---------------------|--------|--------|------|------|
| Ours                  | 76.0                | 82.7   | 61.1   | 77.9 | 57.1 |
| MacBERT-Large (SOTA)  | -                   | -      | -      | 88.9 | 70.0 |

Our model achieves high generation quality and accuracy: 76% of the generated answers contain the ground truth (Contain Answer Rate). The high RougeL and BLEU-4 scores reflect the overlap between the generated results and the ground truth. Our model has a lower EM because it tends to generate complete sentences, while the gold answers are usually sentence fragments.
Note that the SOTA model only predicts start and end positions; this extractive MRC setting is much easier than generative QA.
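
As a rough illustration (not the authors' evaluation script), the sketch below shows how Contain Answer Rate, EM, and a character-level F1 could be computed from prediction/gold pairs. The normalization choices and the use of character-level F1 are assumptions; RougeL and BLEU-4 would come from standard libraries, and the example strings are hypothetical.

from collections import Counter

def contain_answer(pred: str, gold: str) -> bool:
    # a generated answer counts as containing the gold span if the span is a substring
    return gold.strip() in pred.strip()

def exact_match(pred: str, gold: str) -> bool:
    return pred.strip() == gold.strip()

def char_f1(pred: str, gold: str) -> float:
    # character-level F1 overlap, a common choice for Chinese MRC evaluation
    pred_chars = list(pred.strip())
    gold_chars = list(gold.strip())
    common = Counter(pred_chars) & Counter(gold_chars)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_chars)
    recall = overlap / len(gold_chars)
    return 2 * precision * recall / (precision + recall)

# hypothetical prediction / gold pairs, for illustration only
preds = ["美国建筑师通过将哥特式细节应用于木结构住宅,创造了维多利亚哥特式建筑。"]
golds = ["将哥特式细节应用于木结构住宅"]

n = len(preds)
print("Contain Answer Rate:", sum(contain_answer(p, g) for p, g in zip(preds, golds)) / n)
print("EM:", sum(exact_match(p, g) for p, g in zip(preds, golds)) / n)
print("F1:", sum(char_f1(p, g) for p, g in zip(preds, golds)) / n)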

Cases

Here are some randomly picked samples:
example1

In the picture, pred denotes the generated result and target denotes the ground truth.

If the picture fails to display, you can find it under Files and versions.

Usage

from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Build a text2text-generation pipeline with the Randeng QA checkpoint.
pipeline_ins = pipeline(
    Tasks.text2text_generation,
    model='Fengshenbang/Randeng-T5-784M-QA-Chinese',
    model_revision='v1.0.0'
)

# The input is prefixed with "question:".
print(pipeline_ins('question:美国建筑师是怎样创造维多利亚哥特式建筑的?'))
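
The checkpoint can presumably also be loaded directly with Hugging Face transformers. The sketch below is a minimal illustration rather than an official example: the Hugging Face model id, the MT5ForConditionalGeneration class, and the question/knowledge prompt format are assumptions, so verify them against the model card before relying on them.

# Minimal sketch using Hugging Face transformers (not the official example).
# The model id and prompt format below are assumptions; verify against the model card.
from transformers import T5Tokenizer, MT5ForConditionalGeneration

model_id = 'IDEA-CCNL/Randeng-T5-784M-QA-Chinese'  # assumed Hugging Face id
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

question = '美国建筑师是怎样创造维多利亚哥特式建筑的?'
context = '...'  # the passage that should contain the answer
prompt = f'question:{question}knowledge:{context}'  # assumed finetuning input format

inputs = tokenizer(prompt, return_tensors='pt', truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))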

Citation

If you use our model in your work, please cite our paper:

@article{fengshenbang,
  author    = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
  title     = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal   = {CoRR},
  volume    = {abs/2209.02970},
  year      = {2022}
}

You can also cite our website:

@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}