Generative AI Domus
2024
Client
ComfyUI
The Challenge
Creating high-quality dome visuals using ComfyUI required balancing artistic control, model consistency, and technical stability across a complex AI-driven pipeline. The challenge was not only generating visually compelling results, but doing so in a controlled, repeatable way suitable for production use.
I worked with multiple Stable Diffusion models combined with carefully selected LoRAs to achieve specific architectural moods, materials, and lighting behaviors. Managing model compatibility, LoRA weighting, and prompt coherence was critical to avoid visual artifacts, style drift, or loss of structural integrity, especially in curved dome geometries, where distortion is common.
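The LoRA weighting mentioned above comes down to simple linear algebra: at inference, a LoRA contributes a low-rank update to a base weight matrix, scaled by a strength factor. The sketch below illustrates that arithmetic with plain-Python matrices; the function name and the tiny 2x2 example are purely illustrative, not part of the actual pipeline.

```python
# Illustrative sketch of how a LoRA modifies a base weight matrix:
# W' = W + scale * (B @ A), where A and B are the low-rank factors
# and `scale` is the user-facing LoRA strength. Plain lists keep the
# example self-contained; real pipelines do the same math on tensors.

def matmul(B, A):
    """Multiply two matrices given as lists of rows."""
    inner, cols = len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(len(B))]

def apply_lora(W, A, B, scale):
    """Return W + scale * (B @ A): the effective weight seen at inference."""
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight
A = [[1.0, 2.0]]               # rank x out  (rank 1)
B = [[0.5], [1.0]]             # in x rank
print(apply_lora(W, A, B, scale=0.8))
```

Because the update is scaled rather than baked in, the same strength knob that blends a LoRA in ComfyUI can be turned down when two LoRAs start fighting each other stylistically.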
A key challenge was maintaining detail and realism while scaling the images beyond their native resolution. This involved testing and combining different upscaling methods within ComfyUI, balancing sharpness, texture fidelity, and noise control. Each upscale pass required fine-tuning to preserve architectural readability without introducing AI-generated artifacts or over-sharpening.
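One common way to structure the multi-pass upscaling described above is to scale by a modest factor per pass while decaying the img2img denoise strength, so early passes invent detail and later passes only refine. The helper below is a hypothetical sketch of such a plan; the function name, factors, and denoise values are illustrative assumptions, not the project's actual settings.

```python
# Hypothetical staged-upscale planner: each pass grows the image by
# `factor` while the denoise strength decays by `decay`, so later
# passes repaint less and preserve the architectural structure.
# All numbers here are illustrative defaults.

def upscale_plan(width, height, target_width, factor=1.5,
                 start_denoise=0.45, decay=0.6):
    """Return a list of (width, height, denoise) tuples, one per pass."""
    passes = []
    w, h, d = width, height, start_denoise
    while w < target_width:
        w, h = round(w * factor), round(h * factor)
        passes.append((w, h, round(d, 3)))
        d *= decay  # later passes refine rather than repaint
    return passes

for step in upscale_plan(1024, 1024, 3000):
    print(step)
```

The decaying denoise value is what keeps curved dome edges readable: a high strength at 3000 px would let the sampler redraw geometry instead of sharpening it.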
The overall complexity lay in orchestrating a modular ComfyUI workflow that allowed fast iteration, precise control over AI outputs, and consistent visual quality, bridging generative AI experimentation with production-ready results suitable for architectural visualization and concept development.
Environment Stills
COMFYUI WORKFLOW
The process was built on a modular ComfyUI pipeline combining Stable Diffusion models with structured prompt conditioning, IP-Adapter, and ControlNet to maintain architectural coherence and dome geometry. Reference images guided composition and lighting, while carefully tuned sampling ensured clean surfaces and consistent materials. Final upscaling stages enhanced detail and resolution, delivering controlled, production-ready results.
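A pipeline like this can be expressed in ComfyUI's API ("prompt") JSON format: a dict of nodes keyed by id, each with a `class_type` and `inputs`, where a value like `["1", 0]` links to output 0 of node "1". The sketch below shows a simplified core chain using stock ComfyUI node classes; the checkpoint name, prompts, and sampler settings are placeholders, and the ControlNet, IP-Adapter, and upscale stages of the real workflow would chain in the same link style.

```python
# Simplified ComfyUI API-format graph (placeholder values).
# CheckpointLoaderSimple outputs: 0 = MODEL, 1 = CLIP, 2 = VAE.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_model.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "domed pavilion, soft daylight"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, warped geometry"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 28, "cfg": 6.5,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
}

def dangling_links(graph):
    """Return (node_id, input_name) pairs that reference a missing node."""
    bad = []
    for nid, node in graph.items():
        for name, value in node["inputs"].items():
            if isinstance(value, list) and value[0] not in graph:
                bad.append((nid, name))
    return bad

print(dangling_links(workflow))
```

Keeping the graph as plain data is what makes the workflow modular: stages can be swapped or re-wired by editing links, and a quick link check like `dangling_links` catches broken references before a render is queued.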


