Text recognition: given an image containing text, the model recognizes the characters in the image and outputs the corresponding string. Welcome to try it!
This model is intended for single-line text recognition. If you want to try multi-line text in everyday scenes, such as signs, text on clothing, or multi-line handwriting, please visit our ModelScope Studio demo: the OFA Chinese OCR experience space.
We also provide other OCR models that you are welcome to try.
Getting OFA running takes no more than the 6 lines of code below. If that is still not convenient enough, click the Notebook button in the upper-right corner: we provide a GPU-equipped environment where you can simply paste the code into the notebook and start playing with OFA.
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
from modelscope.outputs import OutputKeys
# ModelScope Library >= 1.2.0
ocr_recognize = pipeline(Tasks.ocr_recognition, model='damo/ofa_ocr-recognition_document_base_zh', model_revision='v1.0.1')
result = ocr_recognize('https://xingchen-data.oss-cn-zhangjiakou.aliyuncs.com/maas/ocr/ocr_document_demo.png')
print(result[OutputKeys.TEXT])  # expected output: 就是去看个热
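The pipeline also works with images on disk, not only URLs. The snippet below is a minimal sketch under that assumption; the file names are hypothetical placeholders, so replace them with your own single-line text images.
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
from modelscope.outputs import OutputKeys

# Build the same pipeline as above; local file paths are assumed to be
# accepted as input in addition to URLs.
ocr_recognize = pipeline(Tasks.ocr_recognition, model='damo/ofa_ocr-recognition_document_base_zh', model_revision='v1.0.1')

# Hypothetical local file names; replace them with your own images.
for image_path in ['line_01.png', 'line_02.png']:
    result = ocr_recognize(image_path)
    print(image_path, result[OutputKeys.TEXT])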
OFA (One-For-All) is a unified multimodal pretrained model that uses a simple sequence-to-sequence learning framework to unify modalities (cross-modal, vision, and language) and tasks (such as image generation, visual grounding, image captioning, image classification, and text generation). For details, see our ICML 2022 paper, OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework, and our official GitHub repository: https://github.com/OFA-Sys/OFA.
Github | Paper | Blog
OFA's text recognition was evaluated on public benchmarks (including RCTW, ReCTS, LSVT, ArT, and CTW) and achieves state-of-the-art accuracy. The results (accuracy, %) are as follows:
| Model | Scene | Web | Document | Handwriting | Avg |
|:------------:|:-----:|:----:|:--------:|:-----------:|:----:|
| SAR | 62.5 | 54.3 | 93.8 | 31.4 | 67.3 |
| TransOCR | 63.3 | 62.3 | 96.9 | 53.4 | 72.8 |
| MaskOCR-base | 73.9 | 74.8 | 99.3 | 63.7 | 80.8 |
| OFA-OCR | 82.9 | 81.7 | 99.1 | 69.0 | 86.0 |
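For reference, the sketch below shows one way to measure exact-match line accuracy of the released pipeline on a small slice of the FudanVI benchmark. It is not the official evaluation protocol; the 'document' subset name, the split size, and the 'image'/'label' column names are assumptions (the 'label' column also appears in the finetuning example further down).
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks, DownloadMode
from modelscope.outputs import OutputKeys
from modelscope.msdatasets import MsDataset

ocr_recognize = pipeline(Tasks.ocr_recognition, model='damo/ofa_ocr-recognition_document_base_zh', model_revision='v1.0.1')

# Assumed: the 'document' subset with 'image' and 'label' columns;
# a small test slice keeps the run cheap.
ds = MsDataset.load(
    'ocr_fudanvi_zh',
    subset_name='document',
    namespace='modelscope',
    split='test[:50]',
    download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS)

correct, total = 0, 0
for sample in ds:
    pred = ocr_recognize(sample['image'])[OutputKeys.TEXT]
    if isinstance(pred, list):  # some versions return a single-element list
        pred = pred[0]
    correct += int(pred.strip() == sample['label'].strip())
    total += 1
print(f'exact-match accuracy on {total} samples: {correct / total:.3f}')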
The training data of this model comes from the Chinese text recognition benchmark released by the Visual Intelligence Lab of Fudan University (FudanVI). Data link: https://github.com/FudanVI/benchmarking-chinese-text-recognition
Sample images from the scene subset:
For details on the model and finetuning, please refer to Section 1.4 of the OFA Tutorial.
import tempfile

from modelscope.msdatasets import MsDataset
from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer
from modelscope.utils.constant import DownloadMode

# Load small train/test slices of the FudanVI benchmark and rename the
# 'label' column to 'text', the field name used by the OFA trainer.
train_dataset = MsDataset(
    MsDataset.load(
        'ocr_fudanvi_zh',
        subset_name='scene',
        namespace='modelscope',
        split='train[:100]',
        download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS).remap_columns({
            'label': 'text'
        }))
test_dataset = MsDataset(
    MsDataset.load(
        'ocr_fudanvi_zh',
        subset_name='scene',
        namespace='modelscope',
        split='test[:20]',
        download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS).remap_columns({
            'label': 'text'
        }))

# The training configuration can be adjusted in code via cfg_modify_fn.
def cfg_modify_fn(cfg):
    cfg.train.hooks = [{
        'type': 'CheckpointHook',
        'interval': 2
    }, {
        'type': 'TextLoggerHook',
        'interval': 1
    }, {
        'type': 'IterTimerHook'
    }]
    cfg.train.max_epochs = 2
    return cfg

args = dict(
    model='damo/ofa_ocr-recognition_document_base_zh',
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    cfg_modify_fn=cfg_modify_fn,
    work_dir=tempfile.TemporaryDirectory().name)
trainer = build_trainer(name=Trainers.ofa, default_args=args)
trainer.train()
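Continuing from the snippet above, you can run the trainer's built-in evaluation on the test split and, if needed, point a pipeline at the exported weights. This is a minimal sketch; the 'output' subdirectory is an assumption about where the ModelScope trainer exports the final model, so check your work_dir if it differs.
import os
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Evaluate on the eval_dataset configured above.
print(trainer.evaluate())

# Assumed export location: <work_dir>/output. Adjust if your trainer version
# saves checkpoints elsewhere.
finetuned_dir = os.path.join(args['work_dir'], 'output')
if os.path.isdir(finetuned_dir):
    ocr_finetuned = pipeline(Tasks.ocr_recognition, model=finetuned_dir)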
The training data has its own limitations, and the model may therefore exhibit some biases; please evaluate it yourself before deciding how to use it.
If you find OFA useful and like our work, please cite:
@article{lin2022ofaocr,
  author  = {Junyang Lin and
             Xuancheng Ren and
             Yichang Zhang and
             Gao Liu and
             Peng Wang and
             An Yang and
             Chang Zhou},
  title   = {Transferring General Multimodal Pretrained Models to Text Recognition},
  journal = {CoRR},
  volume  = {abs/2212.09297},
  year    = {2022}
}

@article{wang2022ofa,
  author  = {Peng Wang and
             An Yang and
             Rui Men and
             Junyang Lin and
             Shuai Bai and
             Zhikang Li and
             Jianxin Ma and
             Chang Zhou and
             Jingren Zhou and
             Hongxia Yang},
  title   = {OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework},
  journal = {CoRR},
  volume  = {abs/2202.03052},
  year    = {2022}
}