FaceCanon — Consistent Faces at Any Resolution (ComfyUI)
Normalize → Detail → Inverse Composite.
FaceCanon scales a detected face to a canonical pixel size, lets you run your favorite face detailer at that sweet spot, then maps the result back into the original image with seamless blending. The payoff is consistent face style no matter the input resolution or framing.
🔍 Before vs After Comparison
Here's a side-by-side comparison showing the difference FaceCanon makes at 768×768 resolution when combined with FaceDetailer:
Notice how much the quality and art style of the bottom-left render have deteriorated.
✨ What it does
- Canonical face scale: Locks the face to a target pixel size (e.g. 256 px) before enhancement.
- Works with any detailer: Drop in FaceDetailer, CodeFormer, GPEN, etc.
- Seamless paste-back: Inverse transform + feather or Poisson blending.
- Multi-face aware: Choose largest face or a specific index.
- Resolution-agnostic: Stable results across 1:1, portrait, landscape, SDXL, FLUX, etc.
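The core of the trick is a single per-image scale factor: whatever size the detector finds the face at, the image is resized so the face lands at the canonical size before enhancement. A minimal sketch (hypothetical helper name):

```python
def canonical_scale(detected_face_px, target_face_px=256):
    """Uniform scale factor that brings a detected face to the canonical
    size, regardless of the input image's resolution or framing."""
    return target_face_px / detected_face_px

# A 96 px face in a small render and a 512 px face in a 4K portrait
# both land at 256 px before the detailer runs:
print(canonical_scale(96))   # ≈ 2.67 (upscale)
print(canonical_scale(512))  # 0.5 (downscale)
```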
🎯 When to Use FaceCanon
Most Useful For:
LoRA Model Mixers 🎨 - If you blend multiple art-style LoRA models, FaceCanon is essential. Without it, face enhancement can produce unstable, inconsistent art styles, because different models interpret faces differently at different resolutions.
Still Helpful For:
Single Model Users 📸 - Even with one consistent model, FaceCanon improves face quality consistency across different input resolutions and framings. The improvement is noticeable but less dramatic than with model mixing.
⚙️ Requirements
- ComfyUI
- ComfyUI-Impact-Pack (for Ultralytics/ONNX detectors)
We do not bundle detector weights. See Models below.
📦 Install
```bash
cd ComfyUI/custom_nodes
git clone https://github.com/SiggEye/FaceCanon.git
pip install -r FaceCanon/requirements.txt
# restart ComfyUI
```
🧠 Models (Detectors)
FaceCanon requires a face detection model that you must download yourself.
Download Face Detection Model
Recommended model: `face_yolov8n.pt`

Installation:
- Download the model file from the link above
- Place it in your ComfyUI models directory: `ComfyUI/models/ultralytics/bbox/face_yolov8n.pt`
Setup in ComfyUI
- In your workflow, create an `UltralyticsDetectorProvider` node
- Select `face_yolov8n.pt` as the model
- Connect the `BBOX_DETECTOR` output (green) to FaceCanon's `bbox_detector` input

Note: You only need the `BBOX_DETECTOR` output (green). The `SEGM_DETECTOR` output (red) can be left unused.
🧩 Nodes
FaceScaleNormalize
Detects a face and warps the source image into a canonical canvas.
Inputs:
- `image` - Input image tensor
- `bbox_detector` - Face detector from Impact-Pack (required)
- `base_size` - Canvas size (default: 1024, divisible by 16)
- `target_face_px` - Target face size in pixels (default: 256)
- `face_selection` - `"largest"` or `"index"` (default: `"largest"`)
- `face_index` - Face index when using `"index"` selection (default: 0)
- `center_face` - Center face on canvas (default: `true`)
- `oversize_strategy` - `"downscale_to_fit"` or `"center_crop"`
- `pad_mode` - `"edge"` | `"reflect"` | `"constant"` (+ `pad_color` if constant)
- `pad_color` - RGB color for constant padding (default: `"128,128,128"`)
Outputs:
- `normalized_image` - Image normalized to canonical dimensions
- `transform_meta` - JSON metadata for inverse transform
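Under the hood this amounts to one uniform scale plus a translation. A minimal NumPy sketch of the idea (hypothetical helper; the real node handles pad modes, oversize strategies, and proper resampling, and its transform_meta JSON schema may differ):

```python
import numpy as np

def normalize_face(image, bbox, base_size=1024, target_face_px=256):
    """Scale `image` so the detected face is `target_face_px` tall, then
    center the face on a `base_size` x `base_size` canvas.

    `bbox` is (x0, y0, x1, y1) in source pixels.  Returns the canvas plus
    a metadata dict sufficient to invert the placement later.
    Nearest-neighbor sampling keeps the sketch dependency-free.
    """
    x0, y0, x1, y1 = bbox
    scale = target_face_px / (y1 - y0)            # the canonical face scale
    h, w = image.shape[:2]
    new_h, new_w = round(h * scale), round(w * scale)

    # Nearest-neighbor resize via index sampling
    ys = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    resized = image[ys][:, xs]

    # Offset that puts the scaled face center at the canvas center
    ox = int(base_size / 2 - (x0 + x1) / 2 * scale)
    oy = int(base_size / 2 - (y0 + y1) / 2 * scale)

    # Paste onto a neutral-gray canvas, clipping where it overhangs
    canvas = np.full((base_size, base_size, image.shape[2]), 128, dtype=image.dtype)
    dy0, dx0 = max(0, oy), max(0, ox)
    dy1, dx1 = min(base_size, oy + new_h), min(base_size, ox + new_w)
    sy0, sx0 = dy0 - oy, dx0 - ox
    canvas[dy0:dy1, dx0:dx1] = resized[sy0:sy0 + dy1 - dy0, sx0:sx0 + dx1 - dx0]

    meta = {"scale": scale, "offset": (ox, oy), "orig_size": (w, h)}
    return canvas, meta
```

The composite step applies the inverse of this placement, which is why `transform_meta` must travel alongside the image through your workflow.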
FaceScaleDenormalizeComposite
Maps the edited face back into the original image with seamless blending.
Inputs:
- `original_image` - The original input image
- `edited_normalized` - Enhanced face from your detailer/enhancer
- `transform_meta` - Transform metadata from FaceScaleNormalize
- `mask_expand_px` - Pixels to expand face mask (default: 32)
- `feather_px` - Gaussian blur radius for blending (default: 32)
- `blend_mode` - `"feather"` or `"poisson"` (default: `"feather"`)
- `use_ellipse_mask` - Use elliptical mask for natural blending (default: `true`)
Output:
- `composited_image` - Final result with enhanced face blended back
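The composite inverts the normalize transform and alpha-blends under a feathered mask. A simplified sketch of the `"feather"` path (a box blur stands in for the Gaussian feather, and `face_box_canvas` is a hypothetical extra argument; the real node derives the mask from transform_meta and also offers Poisson blending):

```python
import numpy as np

def feather_composite(original, edited_canvas, meta, face_box_canvas, feather_px=32):
    """Map the edited canvas back into `original` and feather-blend it.

    `meta` is the dict from the normalize sketch; `face_box_canvas` is the
    face bbox in canvas coordinates (a hypothetical extra argument -- the
    real node derives the mask from transform_meta).
    """
    scale, (ox, oy) = meta["scale"], meta["offset"]
    h, w = original.shape[:2]

    # Inverse transform: sample the edited canvas at each original pixel
    ys = np.clip((np.arange(h) * scale + oy).astype(int), 0, edited_canvas.shape[0] - 1)
    xs = np.clip((np.arange(w) * scale + ox).astype(int), 0, edited_canvas.shape[1] - 1)
    mapped = edited_canvas[ys][:, xs]

    # Hard face mask in original coordinates ...
    cx0, cy0, cx1, cy1 = face_box_canvas
    bx0, by0 = int((cx0 - ox) / scale), int((cy0 - oy) / scale)
    bx1, by1 = int((cx1 - ox) / scale), int((cy1 - oy) / scale)
    mask = np.zeros((h, w), dtype=float)
    mask[max(by0, 0):by1, max(bx0, 0):bx1] = 1.0

    # ... then feather it with a cheap separable box blur
    if feather_px > 0:
        k = np.ones(feather_px) / feather_px
        mask = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, mask)
        mask = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, mask)

    mask = mask[..., None]
    return (mask * mapped + (1 - mask) * original).astype(original.dtype)
```

Inside the mask you get pure detailed pixels, outside pure originals, and the blur radius controls how wide the transition band is — the same role `feather_px` plays on the node.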
🔄 Basic Workflow
```
[Image] → UltralyticsDetectorProvider → FaceScaleNormalize
            ├─→ (normalized_image) → FaceDetailer → FaceScaleDenormalizeComposite → [Final]
            └─→ (transform_meta) ─────────────────────────────────────────────────┘
```
Quick Setup:
- Drop `UltralyticsDetectorProvider`, select your face model
- Set `base_size` (e.g., 1024) and `target_face_px` (e.g., 256) on FaceScaleNormalize
- Pipe `normalized_image` into your FaceDetailer (or any enhancer)
- Feed `edited_normalized` + `transform_meta` + `original_image` to FaceScaleDenormalizeComposite
- Tweak `feather_px` or switch to `poisson` if you see seams
📸 Example Workflow
Here's a visual comparison of a ComfyUI FaceDetailer setup without (top) and with (bottom) FaceCanon:
✅ Recommended Defaults
| Parameter | Value | Notes |
|-----------|-------|-------|
| `base_size` | 1024 | Divisible by 16, good for SD models |
| `target_face_px` | 256 | Optimal face size for most detailers |
| `pad_mode` | `"edge"` | Good for uniform backgrounds |
| `blend_mode` | `"feather"` | Fast, usually looks great |
| `feather_px` | 32 | Good balance of sharpness/blending |
| `mask_expand_px` | 32 | Prevents visible edges |
💡 Pro Tip: If your best results happen when faces are ~200 px on a 1024×1024 canvas, try `base_size: 1024`, `target_face_px: 200`
👥 Multi-Face Support
Quick single face: Set `face_selection: "largest"`

Specific faces: Set `face_selection: "index"` and run separate branches with:
- `face_index: 0` (first detected face)
- `face_index: 1` (second detected face)
- etc.
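Conceptually, the two selection modes reduce to a max-by-area or a plain list index over the detector's output. A sketch (hypothetical helper, bboxes as `(x0, y0, x1, y1)` tuples):

```python
def select_face(bboxes, face_selection="largest", face_index=0):
    """Pick one bbox from a detector's output, mirroring the node's options."""
    if not bboxes:
        raise ValueError("no faces detected")
    if face_selection == "largest":
        # Largest face by bounding-box area
        return max(bboxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    return bboxes[face_index]  # "index" mode

faces = [(10, 10, 50, 60), (100, 100, 300, 340)]
print(select_face(faces))              # (100, 100, 300, 340)
print(select_face(faces, "index", 0))  # (10, 10, 50, 60)
```

Note that detector output order is not guaranteed to be stable across runs, which is why `"largest"` is the safer default for single-subject images.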
🧐 FAQ
Q: Isn't this just setting `guide_size` in FaceDetailer?

A: No! Guide/max sizes don't guarantee consistent face pixel sizes across arbitrary inputs.
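A tiny illustration of why (hypothetical numbers): with a guide-size approach, the whole crop is resized, so the face's final pixel size depends on the face-to-crop ratio, which varies with framing.

```python
def face_px_after_guide(crop_px, face_px, guide_size=512):
    """Final face size when the whole crop is resized to guide_size:
    it depends on the face-to-crop ratio, which varies with framing."""
    return face_px * guide_size / crop_px

# Same guide_size, same face, different framing -> different face sizes:
print(face_px_after_guide(crop_px=600, face_px=300))   # 256.0
print(face_px_after_guide(crop_px=1200, face_px=300))  # 128.0
```

FaceCanon instead pins the face's pixel size directly, so the detailer always sees the same face scale.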
🧪 Roadmap
- [ ] Tiny demo workflows for SDXL / FLUX
- [x] Before→after comparison in README
- [ ] Before→after GIFs in README
🏷️ License
MIT — See LICENSE file for details.