Given an input video containing a person, this model performs end-to-end human keypoint detection and outputs the 17-point 3D human keypoint coordinates for every frame of the video.
HDFormer | CanonicalPose3D
This is a monocular-camera 3D human keypoint detection framework: through fast end-to-end inference, it recovers the 3D human keypoint coordinates of the people in a video. The 2D inputs are produced by a 2D human keypoint detection model.
HDFormer is a U-shaped 3D human pose estimation model with three stages: a downsampling stage, an upsampling stage, and a merging stage. It combines joint<->joint, bone<->joint, and hyperbone<->joint feature interactions. Compared with Transformer-based 3D human pose estimation models, HDFormer achieves both higher prediction accuracy and higher inference efficiency.
The HDFormer architecture is built around the High-order Directed Transformer block, which consists of three components: a First-order Attention block, a Hyperbone Representation, and Cross-attention. The First-order Attention block extracts the spatial structure among all joints; the Hyperbone Representation describes human body structure features of different orders; Cross-attention models the relations between joints and high-order bone features.
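For intuition, here is a minimal PyTorch-style sketch of the cross-attention idea, where joint tokens attend to hyperbone tokens built by pooling joint features along chains of the kinematic tree. All shapes, module names, and the pooling scheme are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class JointBoneCrossAttention(nn.Module):
    """Hypothetical sketch: joint tokens attend to high-order bone tokens.

    Assumed shapes: joints (B, J, C) with J=17 joints; bones (B, K, C)
    with K hyperbone tokens sharing the same channel dimension C.
    """

    def __init__(self, dim, num_heads=4):
        super().__init__()
        # Queries come from joints; keys/values come from hyperbones.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, joints, bones):
        # Each joint gathers information from all hyperbone tokens.
        out, _ = self.attn(query=joints, key=bones, value=bones)
        return self.norm(joints + out)  # residual connection

def hyperbone_tokens(joint_feats, chains):
    """Build one token per bone chain by mean-pooling its joint features.

    joint_feats: (B, J, C); chains: list of joint-index lists, e.g. a
    2nd-order hyperbone [hip, knee, ankle]. Mean-pooling is an assumption.
    """
    return torch.stack([joint_feats[:, c].mean(dim=1) for c in chains], dim=1)
```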
Scope of use:
Application scenarios:
With the ModelScope framework, you can run the human keypoint detection task on an input video through a simple Pipeline call. CPU inference is not currently supported.
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

model_id = 'damo/cv_hdformer_body-3d-keypoints_video'

# Build the 3D body keypoint pipeline and run it on a test video.
body_3d_keypoints = pipeline(Tasks.body_3d_keypoints, model=model_id)
output = body_3d_keypoints('https://modelscope.oss-cn-beijing.aliyuncs.com/test/videos/Walking.54138969.mp4')
print(output)
```
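Continuing the call above, a small sketch of consuming the returned dict (the key names follow the output format documented below):

```python
import numpy as np

# 'keypoints' holds, per frame, the 17 predicted 3D joint coordinates
# in the camera coordinate system.
poses = np.asarray(output['keypoints'])  # shape: (num_frames, 17, 3)
print(poses.shape, output['timestamps'][:3])

# 'output_video' carries the rendered result as binary video data; it is
# present only when the model configuration contains a "render" field.
if output.get('output_video'):
    with open('result.mp4', 'wb') as f:
        f.write(output['output_video'])
```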
The number of frames used from the test video is specified by model.INPUT.max_frame in configuration.json; the azimuth and deflection angles of the rendered view are specified in the render field of configuration.json. The output format is as follows:

```json
{
    "keypoints": [       // 3D pose keypoint coordinates in the camera coordinate system
        [[x, y, z]*17],  // each row is the prediction for one frame
        [[x, y, z]*17],
        ...
    ],
    "timestamps": [      // timestamp of each frame of the test video
        "00:00:0.23",
        "00:00:0.56",
        "00:00:0.69",
        ...
    ],
    "output_video": "xxx"  // binary video data rendering the inference results; optional, present only when the "render" field is set in the model configuration file
}
```
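Both settings live in configuration.json. A hypothetical fragment for illustration only: the max_frame value mirrors the T=96 setting from the tables below, and the key names under render are guesses, not the actual schema:

```json
{
    "model": {
        "INPUT": {
            "max_frame": 96
        }
    },
    "render": {
        "azimuth": 70,
        "deflection": 15
    }
}
```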
Preprocessing of the input video is defined by the model.INPUT field in configuration.json (see the sketch above). Training data: Human3.6M.
Results on Human3.6M (MPJPE in mm, lower is better). Columns denote the source of the 2D keypoint input: ground-truth 2D keypoints (2d_gt), CPN detections (cpn), and HRNet detections (hrnet), each with a temporal window of T=96 frames:

MPJPE(mm) | 2d_gt(T=96) | cpn(T=96) | hrnet(T=96) |
---|---|---|---|
HDFormer | 21.6 | 42.6 | 40.3 |
Results on MPI-INF-3DHP:

Method | PCK[↑] | AUC[↑] | MPJPE[↓] |
---|---|---|---|
HDFormer | 98.7% | 72.9% | 37.2mm |
Accuracy vs. efficiency comparison with other methods (MPJPE in mm; latency in ms):

Method | MPJPE[↓] | Params | Latency | Frames |
---|---|---|---|---|
U-CondDGCN[^u_conddgcn] | 22.7 | 3.4M | 0.6ms | 96 |
MixSTE[^mixste] | 25.9 | 33.7M | 2.6ms | 81 |
MixSTE | 21.6 | 33.8M | 8.0ms | 243 |
HDFormer | 21.6 | 3.7M | 1.3ms | 96 |
Latency is measured on a V100 16GB GPU.
[^mixste]: Jinlu Zhang, Zhigang Tu, Jianyu Yang, Yujin Chen, and Junsong Yuan. MixSTE: Seq2seq mixed spatio-temporal encoder for 3D human pose estimation in video. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 13222–13232, 2022.
[^u_conddgcn]: Wenbo Hu, Changgong Zhang, Fangneng Zhan, Lei Zhang, and Tien-Tsin Wong. U-CondDGCN: Conditional Directed Graph Convolution for 3D Human Pose Estimation. arXiv, 2021.
A visualization of the predicted 3D keypoints is shown below:
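Continuing from the pipeline output above, a minimal matplotlib sketch for plotting one frame of the 17 predicted joints. The skeleton edges shown follow the common Human3.6M 17-joint convention, which is an assumption about this model's joint ordering:

```python
import matplotlib.pyplot as plt
import numpy as np

poses = np.asarray(output['keypoints'])  # (num_frames, 17, 3)
frame = poses[0]  # first frame

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(frame[:, 0], frame[:, 1], frame[:, 2])

# Assumed Human3.6M-style kinematic tree over the 17 joints.
edges = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6), (0, 7),
         (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13),
         (8, 14), (14, 15), (15, 16)]
for i, j in edges:
    ax.plot(*zip(frame[i], frame[j]))
plt.show()
```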
If you find this model useful, please cite:

```bibtex
@article{h36m_pami,
  author    = {Ionescu, Catalin and Papava, Dragos and Olaru, Vlad and Sminchisescu, Cristian},
  title     = {Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  publisher = {IEEE Computer Society},
  year      = {2014}
}

@article{chen2023-hdformer,
  title  = {HDFormer: High-order Directed Transformer for 3D Human Pose Estimation},
  author = {Chen, Hanyuan and He, Jun-Yan and Xiang, Wangmeng and Liu, Wei and Cheng, Zhi-Qi and Liu, Hanbing and Luo, Bin and Geng, Yifeng and Xie, Xuansong},
  year   = {2023},
  eprint = {2302.01825},
  doi    = {10.48550/arXiv.2302.01825}
}
```