Tag Archives: methodology

COMFYUI – AI/Archi06 : Inpainting

Task

This tutorial guides you through the provided workflow to integrate new content into an existing image using an inpainting model.


(guess where the AI content is!)

Inpainting is the process of intelligently filling in or replacing a masked area of an image with new content. It’s a powerful technique that goes beyond simple cropping or pasting, as the AI generates new pixels that blend seamlessly with the surrounding unmasked area.

Continue reading COMFYUI – AI/Archi06 : Inpainting
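The excerpt above does not reproduce the ComfyUI graph itself, but the core idea of masked inpainting can be sketched outside ComfyUI as well. The following is a minimal, illustrative Python example using the diffusers library rather than the tutorial's workflow; the model name, file paths and prompt are assumptions, not values from the tutorial.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Example inpainting checkpoint; any diffusers-compatible inpainting model works.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical inputs: the photo to edit and a mask where white marks the
# region to regenerate; everything outside the mask is preserved.
image = Image.open("facade.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a tall arched window with a dark steel frame",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("facade_inpainted.png")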

COMFYUI – AI/Archi05 : Sketch to Render

Task

This tutorial shows how to obtain a photorealistic render from a sketch or a simple drawing. The workflow leverages the Flux.1 Kontext Dev model, which is optimized for in-context image editing, and then covers Flux with ControlNet as well as SD3.5 with ControlNet.

Continue reading COMFYUI – AI/Archi05 : Sketch to Render
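Since the excerpt does not include the graph itself, here is one way a sketch-to-render workflow like this can be queued programmatically once it has been exported from ComfyUI in API format. This is a generic sketch, not part of the tutorial: the filename, node id and prompt text are assumptions that depend on your own exported graph.

import json
import urllib.request

# Workflow exported from ComfyUI via "Save (API Format)" - hypothetical filename.
with open("sketch_to_render_api.json", encoding="utf-8") as f:
    workflow = json.load(f)

# Optionally override an input before queueing, e.g. the text of a
# CLIPTextEncode node. Node id "6" is an assumption; check your own graph.
workflow["6"]["inputs"]["text"] = "photorealistic architectural render, daylight, glass and concrete"

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default address of a local ComfyUI server
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))   # the server answers with a prompt_id for the queued job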

COMFYUI – AI/Archi01 : Render from 3D

Task

ComfyUI Tutorial for Architecture: From Sketch to Realistic Render with ControlNet Canny

If you want to transform a simple sketch or plan into a detailed, realistic architectural render, ComfyUI, with its modular structure, is the perfect tool for the job. In this tutorial, we will explore a simple yet powerful workflow that uses ControlNet Canny to turn a basic drawing into a high-quality image.

This tutorial is designed for beginners with ComfyUI. We will walk through each step, from importing models to achieving the final result.

Continue reading COMFYUI – AI/Archi01 : Render from 3D
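As a rough illustration of what the ControlNet Canny step does, the sketch below runs the same idea with OpenCV and the diffusers library instead of ComfyUI: a Canny edge map is extracted from the drawing and used to condition the generation. Model names, thresholds and the prompt are examples, not values from the tutorial.

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract the edge map that ControlNet Canny conditions on (thresholds are examples).
drawing = cv2.imread("sketch.png")
edges = cv2.Canny(drawing, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Canny ControlNet for SD 1.5; pair it with any SD 1.5 base checkpoint you have locally.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

render = pipe(
    "photorealistic architectural render, modern house, golden hour lighting",
    image=control_image,
    num_inference_steps=30,
).images[0]
render.save("render.png")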