Text-to-3D Model: Shap-E
OpenAI's Shap-E is a text-to-3D generative model that changes how we create and interact with 3D objects. At its core, Shap-E is a deep neural network that learns the structure and features of three-dimensional shapes from large amounts of paired data and then generates a corresponding 3D model from a text prompt. Unlike Point-E, an explicit point-cloud-based generative model, Shap-E does not produce an explicit point cloud; instead it directly generates the parameters of implicit functions that can be rendered as textured meshes or neural radiance fields.

Clone with HTTP

 git clone https://www.modelscope.cn/Lvcoco/Text.To.3D.Model_shap-e.git

Shap-E

This is the official code and model release for Shap-E: Generating Conditional 3D Implicit Functions.

  • See Usage for guidance on how to use this repository.
  • See Samples for examples of what our text-conditional model can generate.

Samples

Here are some highlighted samples from our text-conditional model. For random samples on selected prompts, see samples.md.

Sample prompts (images omitted): a chair that looks like an avocado · an airplane that looks like a banana · a spaceship · a birthday cupcake · a chair that looks like a tree · a green boot · a penguin · ube ice cream cone · a bowl of vegetables

Usage

Install from the repository root with pip install -e .

To get started with examples, see the following notebooks:

  • sample_text_to_3d.ipynb - sample a 3D model, conditioned on a text prompt.
  • sample_image_to_3d.ipynb - sample a 3D model, conditioned on a synthetic view image. For best results, remove the background from the input image.
  • encode_model.ipynb - loads a 3D model or a trimesh, creates a batch of multiview renders and a point cloud, encodes them into a latent, and renders it back. For this to work, install Blender version 3.3.1 or higher, and set the environment variable BLENDER_PATH to the path of the Blender executable.