You can use TRELLIS in ComfyUI.
TRELLIS: Structured 3D Latents for Scalable and Versatile 3D Generation
previous update
Following TRELLIS' upstream update, a three-view reference image rendering mode (the example image shows the actual effect; this mode is faster) and a Gaussian save button have been added. A new image loading node (with built-in square cropping) has been added for three-view reference images. Three-view mode only takes effect when multi_image is enabled; otherwise the node runs in regular single-image mode (and outputs three results).
If your input image does not have a solid-color background, it is recommended to enable 'preprocess_image' for the best results. (This update also fixes a deformation error that could occur when loading RGBA images.)
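For reference, the built-in square crop and RGBA handling conceptually amount to something like the sketch below. This is a minimal PIL example; the function name, target size, and white-background choice are illustrative assumptions, not the node's actual API.

```python
from PIL import Image

def load_reference_image(path, size=518):
    """Illustrative sketch: center-crop to a square and flatten RGBA onto white.

    The real node may differ in crop strategy, target size, and background handling.
    """
    img = Image.open(path)
    # Flatten RGBA onto a white background to avoid alpha-related distortion
    if img.mode == "RGBA":
        background = Image.new("RGB", img.size, (255, 255, 255))
        background.paste(img, mask=img.split()[3])  # use the alpha channel as the paste mask
        img = background
    else:
        img = img.convert("RGB")
    # Center square crop, then resize
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((size, size), Image.LANCZOS)
```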
Using the method from @planb788 here, I have built a portable package for Python 3.11, torch 2.5.1, cu124. It can be downloaded from Google Drive or 夸克网盘 (Quark Netdisk). Note that even with the portable package, you still need to configure the system PATH variables for VS and Python.
Added a batch rendering function; note that too many images may cause OOM.
In the ./ComfyUI/custom_nodes directory, run the following:
git clone https://github.com/smthemex/ComfyUI_TRELLIS.git
The testing environment for this node is Python 3.11, torch 2.5.1, cu124...
pip install -r requirements.txt
The following must be installed successfully, otherwise the node cannot run!!!
The example below targets torch 2.5.1 and cu124; change it to match the torch and CUDA versions of your current environment (e.g. torch 2.4.0), see issue3.
Only one of xformers and flash-attention needs to be installed.
# the cpXXX tag must match your Python version (this example wheel is cp310 = Python 3.10); pick the matching wheel from the same release page
pip install https://github.com/bdashore3/flash-attention/releases/download/v2.7.1.post1/flash_attn-2.7.1.post1+cu124torch2.5.1cxx11abiFALSE-cp310-cp310-win_amd64.whl
pip install kaolin -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-2.5.1_cu124.html
git clone https://github.com/NVlabs/nvdiffrast.git ./tmp/extensions/nvdiffrast
pip install ./tmp/extensions/nvdiffrast
# if installing nvdiffrast fails, see below for how to fix it
git clone --recurse-submodules https://github.com/JeffreyXiang/diffoctreerast.git ./tmp/extensions/diffoctreerast
pip install ./tmp/extensions/diffoctreerast
git clone https://github.com/autonomousvision/mip-splatting.git ./tmp/extensions/mip-splatting
pip install ./tmp/extensions/mip-splatting/submodules/diff-gaussian-rasterization/
pip install spconv-cu120 # if CUDA >= 12.0
# pip install spconv-cu118 # if CUDA 11.8
# Under the ComfyUI_TRELLIS directory, copy the vox2seq extension to tmp and install it with pip
cp -r ./extensions/vox2seq ./tmp/extensions/vox2seq
pip install ./tmp/extensions/vox2seq
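A quick way to confirm that the compiled extensions installed correctly is to try importing them. The sketch below is a minimal check; the importable module names are my assumption based on the package names above and may differ slightly.

```python
import importlib

# Minimal sanity check: try importing each compiled dependency.
checks = [
    "kaolin",
    "nvdiffrast.torch",
    "diffoctreerast",
    "diff_gaussian_rasterization",  # installed from mip-splatting's submodule
    "spconv",
    "vox2seq",
]

for name in checks:
    try:
        importlib.import_module(name)
        print(f"[OK]   {name}")
    except Exception as e:  # ImportError or a CUDA/compiler runtime error
        print(f"[FAIL] {name}: {e}")
```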
2.1 Other requirements
2.2 Visual Studio & CUDA
Path: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.28610\bin\Hostx64\x64 # or other version
Path: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.28610\bin\Hostx64\x64\cl.exe # or other version
Path: C:\Users\yourname\AppData\Roaming\Python\Python311\Scripts # python
CUDA_PATH: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4 # or other version
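To verify that the paths above are actually picked up by the current environment, a small check like the following can help (a sketch that only looks for cl.exe, nvcc, and CUDA_PATH):

```python
import os
import shutil

# Check that the MSVC compiler and CUDA toolchain are reachable via PATH / CUDA_PATH.
print("cl.exe   :", shutil.which("cl") or "NOT FOUND - check the Visual Studio Path entries")
print("nvcc     :", shutil.which("nvcc") or "NOT FOUND - check the CUDA installation")
print("CUDA_PATH:", os.environ.get("CUDA_PATH", "NOT SET"))
```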
2.3 If using glb2fbx
Requires 'pip install bpy' and an installed 'Blender'.
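For reference, a GLB-to-FBX conversion with bpy boils down to something like the sketch below. This is a minimal standalone example, not the node's actual implementation; the file paths are placeholders.

```python
import bpy

def glb_to_fbx(glb_path: str, fbx_path: str) -> None:
    """Convert a GLB file to FBX using Blender's Python API (bpy)."""
    # Start from an empty scene so only the imported mesh gets exported
    bpy.ops.wm.read_factory_settings(use_empty=True)
    bpy.ops.import_scene.gltf(filepath=glb_path)
    bpy.ops.export_scene.fbx(filepath=fbx_path)

# Placeholder paths for illustration
glb_to_fbx("output/model.glb", "output/model.fbx")
```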
├── anypath/JeffreyXiang/TRELLIS-image-large/
| ├── pipeline.json
| ├── ckpts/
| | ├── slat_dec_gs_swin8_B_64l8gs32_fp16.json
| | ├── slat_dec_gs_swin8_B_64l8gs32_fp16.safetensors
| | ├── slat_dec_mesh_swin8_B_64l8m256c_fp16.json
| | ├── slat_dec_mesh_swin8_B_64l8m256c_fp16.safetensors
| | ├── slat_dec_rf_swin8_B_64l8r16_fp16.json
| | ├── slat_dec_rf_swin8_B_64l8r16_fp16.safetensors
| | ├── slat_enc_swin8_B_64l8_fp16.json
| | ├── slat_enc_swin8_B_64l8_fp16.safetensors
| | ├── slat_flow_img_dit_L_64l8p2_fp16.json
| | ├── slat_flow_img_dit_L_64l8p2_fp16.safetensors
| | ├── ss_dec_conv3d_16l8_fp16.json
| | ├── ss_dec_conv3d_16l8_fp16.safetensors
| | ├── ss_enc_conv3d_16l8_fp16.json
| | ├── ss_enc_conv3d_16l8_fp16.safetensors
| | ├── ss_flow_img_dit_L_16l8_fp16.json
| | ├── ss_flow_img_dit_L_16l8_fp16.safetensors
├── ComfyUI/models/dinov2
| ├── dinov2_vitl14_reg4_pretrain.pth
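If the node reports missing weights, the expected layout can be checked along these lines (a sketch; replace the root with whatever "anypath" you chose, and note that the ckpts/ nesting is taken from the tree above):

```python
from pathlib import Path

# Root of the downloaded TRELLIS-image-large repo; replace with your own "anypath"
root = Path("anypath/JeffreyXiang/TRELLIS-image-large")

required = ["pipeline.json"] + [
    f"ckpts/{name}{ext}"
    for name in (
        "slat_dec_gs_swin8_B_64l8gs32_fp16",
        "slat_dec_mesh_swin8_B_64l8m256c_fp16",
        "slat_dec_rf_swin8_B_64l8r16_fp16",
        "slat_enc_swin8_B_64l8_fp16",
        "slat_flow_img_dit_L_64l8p2_fp16",
        "ss_dec_conv3d_16l8_fp16",
        "ss_enc_conv3d_16l8_fp16",
        "ss_flow_img_dit_L_16l8_fp16",
    )
    for ext in (".json", ".safetensors")
]

missing = [f for f in required if not (root / f).exists()]
print("All model files found." if not missing else f"Missing: {missing}")
```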
microsoft/TRELLIS
@article{xiang2024structured,
title = {Structured 3D Latents for Scalable and Versatile 3D Generation},
author = {Xiang, Jianfeng and Lv, Zelong and Xu, Sicheng and Deng, Yu and Wang, Ruicheng and Zhang, Bowen and Chen, Dong and Tong, Xin and Yang, Jiaolong},
journal = {arXiv preprint arXiv:2412.01506},
year = {2024}
}
facebookresearch/dinov2
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
journal={arXiv:2304.07193},
year={2023}
}
@misc{darcet2023vitneedreg,
title={Vision Transformers Need Registers},
author={Darcet, Timothée and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv:2309.16588},
year={2023}
}