Custom ComfyUI nodes to run Microsoft's Phi models. Supported versions:
Phi-3.5-mini-instruct: https://huggingface.co/microsoft/Phi-3.5-mini-instruct
Phi-3.5-vision-instruct: https://huggingface.co/microsoft/Phi-3.5-vision-instruct
Phi-4-multimodal-instruct: https://huggingface.co/microsoft/Phi-4-multimodal-instruct
Download the model files from the links above and place them in their corresponding folders like this:
.\ComfyUI\models\microsoft\Phi-3.5-mini-instruct\
.\ComfyUI\models\microsoft\Phi-3.5-vision-instruct\
.\ComfyUI\models\microsoft\Phi-4-multimodal-instruct\
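If the microsoft subfolder does not exist yet, create it first (a minimal example, run from the parent folder of ComfyUI):
# Create the target folder for the Phi model files
mkdir .\ComfyUI\models\microsoft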
You can download the files with the following commands:
# Go to the Microsoft models folder
cd .\ComfyUI\models\microsoft
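# Hugging Face stores the large model weights in Git LFS; make sure Git LFS is
# enabled before cloning (this assumes git-lfs is installed), otherwise the
# clones will only contain small pointer files instead of the actual weights
git lfs install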
git clone https://huggingface.co/microsoft/Phi-3.5-mini-instruct
git clone https://huggingface.co/microsoft/Phi-3.5-vision-instruct
git clone https://huggingface.co/microsoft/Phi-4-multimodal-instruct
Go to the folder .\ComfyUI\custom_nodes, clone this repository, and install the Python dependencies:
# Clone repo
git clone https://github.com/alexisrolland/ComfyUI-Phi.git
# Install dependencies
..\..\python_embeded\python.exe -s -m pip install -r .\ComfyUI-Phi\requirements.txt
# For Windows users: download the Flash Attention wheel for Python 3.12 from the link below
https://huggingface.co/lldacing/flash-attention-windows-wheel/resolve/main/flash_attn-2.7.4%2Bcu126torch2.6.0cxx11abiFALSE-cp312-cp312-win_amd64.whl
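# Alternatively, if curl.exe is available (it ships with recent versions of
# Windows), the wheel can be downloaded from the command line; note that the
# saved filename uses "+" where the URL shows "%2B"
curl.exe -L -o flash_attn-2.7.4+cu126torch2.6.0cxx11abiFALSE-cp312-cp312-win_amd64.whl "https://huggingface.co/lldacing/flash-attention-windows-wheel/resolve/main/flash_attn-2.7.4%2Bcu126torch2.6.0cxx11abiFALSE-cp312-cp312-win_amd64.whl"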
# Install Flash Attention
..\..\python_embeded\python.exe -s -m pip install flash_attn-2.7.4+cu126torch2.6.0cxx11abiFALSE-cp312-cp312-win_amd64.whl
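As an optional sanity check, you can verify that Flash Attention imports correctly with the embedded Python (a minimal check, assuming the portable ComfyUI layout used above):
# Print the installed flash-attn version
..\..\python_embeded\python.exe -c "import flash_attn; print(flash_attn.__version__)"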
Changelog
4.0.0: Add support for Phi-4-multimodal-instruct.
3.0.0: Enforce manual download of model files for cleaner file organization.
2.0.0: This major version introduces new inputs to provide a pair of image and response examples to the node Run Phi Vision. Drag and drop the image in ComfyUI to reload the workflow.