A package designed to enable multi-regional prompting for architectural rendering, integrated with the Rhino Pseudorandom plugin. Pseudocomfy consumes a spatial package, a JSON document with the following structure:
{
    "width": 0,
    "height": 0,
    "pmts_environment": {
        "pmt_scene": "a prompt describing the scene as a whole",
        "pmt_style": "a prompt describing the rendering style",
        "pmt_negative": "a prompt describing what should not be in the render"
    },
    "map_semantic": [
        {
            "pmt_txt": "a text prompt for the object, MAY BE NULL OR EMPTY",
            "pmt_img": "BASE 64 ENCODED BITMAP of a guidance image, MAY BE NULL OR EMPTY",
            "mask": "BASE 64 ENCODED BITMAP",
            "pct": 0.00
        },
        {
            "pmt_txt": "a text prompt for the object, MAY BE NULL OR EMPTY",
            "pmt_img": "BASE 64 ENCODED BITMAP of a guidance image, MAY BE NULL OR EMPTY",
            "mask": "BASE 64 ENCODED BITMAP",
            "pct": 0.00
        },
        ...
    ],
    "img_depth": "BASE 64 ENCODED IMAGE",
    "img_edge": "optional BASE 64 ENCODED IMAGE",
    "pseudorandom_spatial_package_version": "{{schema version that this package adheres to in x.xx format}}"
}
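The sketch below (not part of the package itself) shows how a client such as the Rhino Pseudorandom plugin might assemble and serialize this JSON in Python. Pillow is assumed to be available for the bitmap encoding, and all concrete values (prompts, sizes, the version string) are placeholders:

import base64
import io
import json

from PIL import Image

def bitmap_to_b64(img):
    # Encode a PIL image as a base-64 PNG string, as the schema expects.
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("ascii")

width, height = 1024, 768
mask = Image.new("L", (width, height), 0)     # region mask placeholder
depth = Image.new("L", (width, height), 128)  # depth-map placeholder

package = {
    "width": width,
    "height": height,
    "pmts_environment": {
        "pmt_scene": "a brick courtyard at dusk",
        "pmt_style": "photorealistic architectural rendering",
        "pmt_negative": "people, text, watermark",
    },
    "map_semantic": [
        {
            "pmt_txt": "weathered brick facade",  # may be null or empty
            "pmt_img": None,                      # may be null or empty
            "mask": bitmap_to_b64(mask),
            "pct": 0.35,  # assumed: fraction of the image the region covers
        },
    ],
    "img_depth": bitmap_to_b64(depth),
    "img_edge": None,  # optional
    "pseudorandom_spatial_package_version": "1.00",  # hypothetical version
}

payload = json.dumps(package)  # ready to hand to Pseudocomfy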
IMPORTANT: If you're using the Windows portable version of ComfyUI, which includes an embedded Python environment, you will first need to install the diffusers module.

Installing the diffusers library for ComfyUI (Windows Portable Version):

1. Locate your main ComfyUI_windows_portable directory.
2. Open a Command Prompt: press Win + R, type cmd, and press Enter.
3. Use the cd command to change the current directory to the ComfyUI_windows_portable directory. For example:
cd C:\path\to\your\ComfyUI_windows_portable
4. Install the library using the embedded Python interpreter. From inside the ComfyUI_windows_portable directory, run the following command:
.\python_embeded\python.exe -m pip install diffusers
The prompt should look like this:
\ComfyUI_windows_portable>.\python_embeded\python.exe -m pip install diffusers

If you are NOT using the Windows portable version, install diffusers into the Python environment that runs ComfyUI as usual:
pip install diffusers

These steps will ensure that the diffusers module is installed within the embedded Python environment used by ComfyUI.
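To confirm the module is visible to ComfyUI's interpreter, you can ask it to import diffusers and print its version (a quick sanity check, not an official step; adjust the interpreter path for a non-portable install):

.\python_embeded\python.exe -c "import diffusers; print(diffusers.__version__)"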
There are many places you can download depth ControlNet models, but the one we recommend is diffusion_pytorch_model.safetensors from Hugging Face (https://huggingface.co/lllyasviel/sd-controlnet-depth/tree/main). To do this:

1. Download the model: get the diffusion_pytorch_model.safetensors file from the link above.
2. Move the model to the controlnet folder: place it in the controlnet subfolder inside the models folder of your ComfyUI directory, for example:
C:\path\to\your\ComfyUI_windows_portable\ComfyUI\models\controlnet
Consider renaming the file to something recognizable as a depth model, e.g. a name containing depth.

Alternatively, models can be installed through Manager > Model Manager (Install Models), or by script as sketched below.
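If you prefer to script the download, the sketch below uses the huggingface_hub package (an extra dependency, not required by Pseudocomfy; install it with pip install huggingface_hub) to fetch the same file; adjust local_dir to your actual controlnet folder:

from huggingface_hub import hf_hub_download

# Fetch diffusion_pytorch_model.safetensors from the repo referenced above
# directly into the ComfyUI controlnet models folder.
hf_hub_download(
    repo_id="lllyasviel/sd-controlnet-depth",
    filename="diffusion_pytorch_model.safetensors",
    local_dir=r"C:\path\to\your\ComfyUI_windows_portable\ComfyUI\models\controlnet",
)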
To install the IPAdapter custom nodes and models:

1. Open Manager > Custom Nodes Manager (Install Custom Nodes) and click Install on the one called ComfyUI_IPAdapter_plus by cubiq.
2. Make sure the required CLIP vision models are in the /ComfyUI/models/clip_vision part of your installation, i.e.:
...\models\clip_vision
3. Navigate to C:\path\to\your\ComfyUI_windows_portable\ComfyUI\models and, inside models, create a new folder called ipadapter.
4. Download the .safetensors files of your choice depending on the preferred SD version:
ip-adapter-plus_sd15.safetensors, the "Plus model, very strong" version
ip-adapter-plus_sdxl_vit-h.safetensors, the "SDXL plus model" version
and place them in ...\models\ipadapter. (A scripted download option is sketched below.)
We recommend cloning this repo inside your custom_nodes folder. From
C:\path\to\your\ComfyUI_windows_portable\ComfyUI\custom_nodes
run:
git clone https://github.com/Pseudotools/Pseudocomfy.git
This project incorporates code that has been adapted or directly copied from the following open-source packages:
ComfyUI_densediffusion package by Chenlei Hu ("huchenlei")
Original source: https://github.com/huchenlei/ComfyUI_densediffusion
ComfyUI_IPAdapter_plus package by Matteo Spinelli ("Matt3o/Cubiq")
Original source: https://github.com/cubiq/ComfyUI_IPAdapter_plus
ComfyUI-Impact-Pack package by Dr.Lt.Data ("ltdrdata")
Original source: https://github.com/ltdrdata/ComfyUI-Impact-Pack
The code from these packages is used under the terms of their respective licenses, with modifications made to fit the specific requirements of this project. The contributions of these developers are greatly appreciated, and their work has been instrumental in the development of this project.