{"id":14464,"date":"2025-08-14T14:21:32","date_gmt":"2025-08-14T13:21:32","guid":{"rendered":"https:\/\/www.keris-studio.fr\/blog\/?p=14464"},"modified":"2025-08-22T10:33:47","modified_gmt":"2025-08-22T09:33:47","slug":"comfyui-ai-for-architecture-case-study-01-render-from-basic-3d","status":"publish","type":"post","link":"https:\/\/www.keris-studio.fr\/blog\/?p=14464","title":{"rendered":"COMFYUI &#8211; AI\/Archi01\u00a0: Render from 3D"},"content":{"rendered":"<h1>Task<\/h1>\n<p><strong>ComfyUI Tutorial for Architecture: From Sketch to Realistic Render with ControlNet Canny<\/strong><\/p>\n<p>If you want to transform a simple sketch or plan into a detailed and realistic architectural render, ComfyUI, with its modular structure, is the perfect tool for this. In this tutorial, we will explore a simple yet powerful workflow using ControlNet Canny to turn a basic drawing into a high-quality image.<\/p>\n<p>This tutorial is designed for beginners with ComfyUI. We will walk through each step, from importing models to achieving the final result.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1245\" height=\"663\" class=\"wp-image-14465\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-1.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-1.png 1245w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-1-300x160.png 300w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-1-1024x545.png 1024w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-1-768x409.png 768w\" sizes=\"auto, (max-width: 1245px) 100vw, 1245px\" \/><!--more--><\/p>\n<h1>The Step-by-Step Workflow<\/h1>\n<p>Here is a general overview of the workflow we will use. 
Each group of nodes represents a logical step in the process.<\/p>\n<h2>Step 1: Load Models<\/h2>\n<p>The first step is to load all the necessary models and tools for the process to work.<\/p>\n<ul>\n<li><strong>CheckpointLoaderSimple<\/strong>: This is the core of our workflow. This node loads a Stable Diffusion model. This model understands our text \u00ab\u00a0prompt\u00a0\u00bb and generates the image. For architecture and renders, models like \u00ab\u00a0<strong>DreamShaper<\/strong>\u00a0\u00bb or \u00ab\u00a0<strong>realisticVision<\/strong>\u00a0\u00bb are excellent choices. This node provides three outputs: the <strong>MODEL<\/strong> (the base model), the <strong>CLIP<\/strong> (the text encoder that interprets our instructions), and the <strong>VAE<\/strong> (the decoder that transforms the latent image into the final image).<\/li>\n<li><strong>VAELoader<\/strong>: Although CheckpointLoaderSimple can load a default VAE, it is often better to load a separate one for higher quality. The VAE (Variational AutoEncoder) is essential for the decoding phase, as it converts the abstract latent space into a readable image.<\/li>\n<li><strong>ControlNetLoader<\/strong>: This node is the key component of our process. It loads a ControlNet model, which will guide the image generation based on an input image. 
In this case, the \u00ab\u00a0Canny\u00a0\u00bb model is ideal for preserving the contours and lines of our architectural drawing.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"440\" height=\"479\" class=\"wp-image-14466\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-2.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-2.png 440w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-2-276x300.png 276w\" sizes=\"auto, (max-width: 440px) 100vw, 440px\" \/><\/p>\n<h2>Step 2: Prepare the Input Image<\/h2>\n<p>This section handles importing your sketch and processing it so it can be understood by ControlNet.<\/p>\n<ul>\n<li><strong>LoadImage<\/strong>: This is where you upload your sketch or plan. Choose your file, such as a facade sketch or an interior drawing.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"516\" height=\"549\" class=\"wp-image-14467\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-3.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-3.png 516w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-3-282x300.png 282w\" sizes=\"auto, (max-width: 516px) 100vw, 516px\" \/><\/p>\n<ul>\n<li><strong>Canny<\/strong>: This node is a \u00ab\u00a0preprocessor\u00a0\u00bb. It takes your input image and extracts its contours and edges. The Canny model is excellent for accurately capturing architectural lines. You can adjust the thresholds (low_threshold and high_threshold) to control the fineness of the detected edges.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"297\" height=\"120\" class=\"wp-image-14468\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-4.png\" \/><\/p>\n<p>The other two nodes connected to it, GetImageSize+ and JWImageResizeByLongerSide, automatically resize and define the image dimensions, ensuring your base image is suitable for the model.<\/p>\n<ul>\n<li><strong>PreviewImage<\/strong>: Although not essential for the final result, this node is very useful for visualizing the output of the Canny preprocessor in real time. This way, you can see what the contours that ControlNet will use as a guide look like.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"449\" height=\"388\" class=\"wp-image-14469\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-5.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-5.png 449w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-5-300x259.png 300w\" sizes=\"auto, (max-width: 449px) 100vw, 449px\" \/><\/p>\n<h2>Step 3: Enter Instructions (Prompt)<\/h2>\n<p>This step is crucial for telling the model what you want to generate.<\/p>\n<ul>\n<li><strong>CLIPTextEncode (positive)<\/strong>: This is your \u00ab\u00a0positive prompt\u00a0\u00bb. Here, you describe in detail the image you want to obtain. Feel free to be very specific about the style (modern, minimalist), materials (wood, concrete), lighting (natural, soft), and atmosphere. Use keywords that evoke well-known references like \u00ab\u00a0architectural digest\u00a0\u00bb or \u00ab\u00a0professional photography\u00a0\u00bb.<\/li>\n<li><strong>CLIPTextEncode (negative)<\/strong>: This is your \u00ab\u00a0negative prompt\u00a0\u00bb. It is used to tell the model what you do not want to see in the result. 
For example, rendering flaws (\u00ab\u00a0blurry\u00a0\u00bb, \u00ab\u00a0low quality\u00a0\u00bb), undesirable colors (\u00ab\u00a0oversaturated colors\u00a0\u00bb), or non-architectural elements (\u00ab\u00a0cartoon\u00a0\u00bb, \u00ab\u00a0anime\u00a0\u00bb).<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"627\" height=\"618\" class=\"wp-image-14470\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-6.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-6.png 627w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-6-300x296.png 300w\" sizes=\"auto, (max-width: 627px) 100vw, 627px\" \/><\/p>\n<h2>Step 4: The Generation Process (KSampler &amp; Final Output)<\/h2>\n<p>This last part assembles all the elements to create the final image.<\/p>\n<ul>\n<li><strong>ControlNetApplyAdvanced<\/strong>: This node receives the positive and negative prompts, as well as the contours of your image (output of the Canny node). It applies the ControlNet model to combine the influence of your prompts with the information from your initial drawing. The outputs of this node are the \u00ab\u00a0conditions\u00a0\u00bb for our process, which tell the KSampler how to behave.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"449\" height=\"310\" class=\"wp-image-14471\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-7.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-7.png 449w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-7-300x207.png 300w\" sizes=\"auto, (max-width: 449px) 100vw, 449px\" \/><\/p>\n<ul>\n<li><strong>EmptyLatentImage<\/strong>: This node creates an \u00ab\u00a0empty\u00a0\u00bb image in the latent space, which will serve as the starting point for generation. 
The dimensions of this image are automatically defined by the GetImageSize+ node to match your input image.<\/li>\n<\/ul>\n<p>In this example, we resize the image according to the input image.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"926\" height=\"293\" class=\"wp-image-14472\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-8.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-8.png 926w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-8-300x95.png 300w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-8-768x243.png 768w\" sizes=\"auto, (max-width: 926px) 100vw, 926px\" \/><\/p>\n<ul>\n<li><strong>KSampler<\/strong>: This is the generation engine. The KSampler takes the model, the conditions (prompts), and the empty latent image to generate a new latent image, following the given instructions. You can adjust key parameters here such as the seed (for reproducibility), the steps (quality), and the cfg (the level of prompt adherence).<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"412\" height=\"625\" class=\"wp-image-14473\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-9.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-9.png 412w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-9-198x300.png 198w\" sizes=\"auto, (max-width: 412px) 100vw, 412px\" \/><\/p>\n<ul>\n<li><strong>VAEDecode<\/strong>: Once the KSampler has generated the latent image, this node uses the VAE (which we loaded at the beginning) to decode this latent image and transform it into a visible color image.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"332\" height=\"134\" class=\"wp-image-14474\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-10.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-10.png 332w, 
https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-10-300x121.png 300w\" sizes=\"auto, (max-width: 332px) 100vw, 332px\" \/><\/p>\n<ul>\n<li><strong>SaveImage<\/strong>: Finally, this node saves the final render to your computer.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"737\" height=\"542\" class=\"wp-image-14475\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-11.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-11.png 737w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-11-300x221.png 300w\" sizes=\"auto, (max-width: 737px) 100vw, 737px\" \/><\/p>\n<h1>Tips for Architecture with ComfyUI<\/h1>\n<ul>\n<li><strong>Be precise in your prompts<\/strong>: The more detailed your description, the more the result will match your vision.<\/li>\n<li><strong>Use quality base images<\/strong>: Even if Canny can extract contours, a clear base drawing without unnecessary elements will yield better results.<\/li>\n<li><strong>Test different models<\/strong>: Feel free to try other Stable Diffusion and ControlNet models to find the one that best suits your architectural style.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"1107\" class=\"wp-image-14476\" src=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-12-scaled.png\" srcset=\"https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-12-scaled.png 2560w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-12-300x130.png 300w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-12-1024x443.png 1024w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-12-768x332.png 768w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-12-1536x664.png 1536w, https:\/\/www.keris-studio.fr\/blog\/wp-content\/word-image-14464-12-2048x886.png 2048w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" 
\/><\/p>\n<h1>French translation<\/h1>\n<p><strong>\u00c9tape 1 : Chargement des Mod\u00e8les (Load Models)<\/strong><\/p>\n<p>La premi\u00e8re \u00e9tape consiste \u00e0 charger tous les mod\u00e8les et outils n\u00e9cessaires pour que le processus fonctionne.<\/p>\n<ul>\n<li><strong>CheckpointLoaderSimple<\/strong> : C&rsquo;est le c\u0153ur de notre workflow. Ce n\u0153ud charge un mod\u00e8le de diffusion stable (Stable Diffusion). C&rsquo;est ce mod\u00e8le qui comprendra notre \u00ab\u00a0prompt\u00a0\u00bb (texte) et g\u00e9n\u00e9rera l&rsquo;image. Pour l&rsquo;architecture et les rendus, les mod\u00e8les comme \u00ab\u00a0DreamShaper\u00a0\u00bb ou \u00ab\u00a0realisticVision\u00a0\u00bb sont d&rsquo;excellents choix. Ce n\u0153ud fournit trois sorties : le <strong>MODEL<\/strong> (le mod\u00e8le de base), le <strong>CLIP<\/strong> (l&rsquo;encodeur de texte qui interpr\u00e8te nos instructions), et le <strong>VAE<\/strong> (le d\u00e9codeur qui transforme l&rsquo;image latente en image finale).<\/li>\n<li><strong>VAELoader<\/strong> : Bien que le CheckpointLoaderSimple puisse charger un VAE par d\u00e9faut, il est souvent pr\u00e9f\u00e9rable d&rsquo;en charger un s\u00e9par\u00e9ment pour une meilleure qualit\u00e9. Le VAE (Variational AutoEncoder) est essentiel pour la phase de d\u00e9codage, car il convertit l&rsquo;espace latent abstrait en une image lisible.<\/li>\n<li><strong>ControlNetLoader<\/strong> : Ce n\u0153ud est l&rsquo;\u00e9l\u00e9ment cl\u00e9 de notre processus. Il charge un mod\u00e8le ControlNet, qui permettra de guider la g\u00e9n\u00e9ration de l&rsquo;image en se basant sur une image d&rsquo;entr\u00e9e. 
Dans ce cas, le mod\u00e8le \u00ab\u00a0Canny\u00a0\u00bb est id\u00e9al pour pr\u00e9server les contours et les lignes de notre dessin architectural.<\/li>\n<\/ul>\n<p><strong>\u00c9tape 2 : Pr\u00e9paration de l&rsquo;Image d&rsquo;Entr\u00e9e (Preprocess the Image)<\/strong><\/p>\n<p>Cette section g\u00e8re l&rsquo;importation de votre esquisse et son traitement pour qu&rsquo;elle puisse \u00eatre comprise par ControlNet.<\/p>\n<ul>\n<li><strong>LoadImage<\/strong> : C&rsquo;est ici que vous t\u00e9l\u00e9chargez votre esquisse ou votre plan. Choisissez votre fichier, comme une esquisse de fa\u00e7ade ou un dessin int\u00e9rieur.<\/li>\n<li><strong>Canny<\/strong> : Ce n\u0153ud est un \u00ab\u00a0pr\u00e9processeur\u00a0\u00bb. Il prend votre image d&rsquo;entr\u00e9e et en extrait les contours et les bords. Le mod\u00e8le Canny est excellent pour capturer les lignes architecturales de mani\u00e8re pr\u00e9cise. Vous pouvez ajuster les seuils (low_threshold et high_threshold) pour contr\u00f4ler la finesse des bords d\u00e9tect\u00e9s. Les deux autres n\u0153uds qui lui sont connect\u00e9s, GetImageSize+ et JWImageResizeByLongerSide, permettent de redimensionner et de d\u00e9finir les dimensions de l&rsquo;image de mani\u00e8re automatique, garantissant que votre image de base est adapt\u00e9e au mod\u00e8le.<\/li>\n<li><strong>PreviewImage<\/strong> : Ce n\u0153ud, bien que non indispensable au r\u00e9sultat final, est tr\u00e8s utile pour visualiser en temps r\u00e9el le r\u00e9sultat du pr\u00e9processeur Canny. Vous verrez ainsi \u00e0 quoi ressemblent les contours que ControlNet utilisera comme guide.<\/li>\n<\/ul>\n<p><strong>\u00c9tape 3 : Saisie des Instructions (Prompt)<\/strong><\/p>\n<p>Cette \u00e9tape est cruciale pour indiquer au mod\u00e8le ce que vous voulez g\u00e9n\u00e9rer.<\/p>\n<ul>\n<li><strong>CLIPTextEncode (positif)<\/strong> : C&rsquo;est votre \u00ab\u00a0prompt positif\u00a0\u00bb. 
C&rsquo;est ici que vous d\u00e9crivez en d\u00e9tail l&rsquo;image que vous souhaitez obtenir. N&rsquo;h\u00e9sitez pas \u00e0 \u00eatre tr\u00e8s pr\u00e9cis sur le style (moderne, minimaliste), les mat\u00e9riaux (bois, b\u00e9ton), l&rsquo;\u00e9clairage (naturel, doux), et l&rsquo;ambiance. Utilisez des mots-cl\u00e9s qui \u00e9voquent des r\u00e9f\u00e9rences connues comme \u00ab\u00a0architectural digest\u00a0\u00bb ou \u00ab\u00a0professional photography\u00a0\u00bb.<\/li>\n<li><strong>CLIPTextEncode (n\u00e9gatif)<\/strong> : C&rsquo;est votre \u00ab\u00a0prompt n\u00e9gatif\u00a0\u00bb. Il sert \u00e0 indiquer au mod\u00e8le ce que vous ne voulez pas voir dans le r\u00e9sultat. Par exemple, des d\u00e9fauts de rendu (\u00ab\u00a0blurry\u00a0\u00bb, \u00ab\u00a0low quality\u00a0\u00bb), des couleurs ind\u00e9sirables (\u00ab\u00a0oversaturated colors\u00a0\u00bb) ou des \u00e9l\u00e9ments non architecturaux (\u00ab\u00a0cartoon\u00a0\u00bb, \u00ab\u00a0anime\u00a0\u00bb).<\/li>\n<\/ul>\n<p><strong>\u00c9tape 4 : Le Processus de G\u00e9n\u00e9ration (KSampler &amp; Final Output)<\/strong><\/p>\n<p>Cette derni\u00e8re partie assemble tous les \u00e9l\u00e9ments pour cr\u00e9er l&rsquo;image finale.<\/p>\n<ul>\n<li><strong>ControlNetApplyAdvanced<\/strong> : Ce n\u0153ud re\u00e7oit les prompts positifs et n\u00e9gatifs, ainsi que les contours de votre image (sortie du n\u0153ud Canny). Il applique le mod\u00e8le ControlNet pour combiner l&rsquo;influence de vos prompts avec les informations de votre dessin initial. Les sorties de ce n\u0153ud sont les \u00ab\u00a0conditions\u00a0\u00bb de notre processus, qui indiquent au KSampler comment se comporter.<\/li>\n<li><strong>EmptyLatentImage<\/strong> : Ce n\u0153ud cr\u00e9e une image \u00ab\u00a0vide\u00a0\u00bb dans l&rsquo;espace latent, qui servira de point de d\u00e9part pour la g\u00e9n\u00e9ration. 
Les dimensions de cette image sont automatiquement d\u00e9finies par le n\u0153ud GetImageSize+ pour correspondre \u00e0 votre image d&rsquo;entr\u00e9e.<\/li>\n<li><strong>KSampler<\/strong> : C&rsquo;est le moteur de la g\u00e9n\u00e9ration. Le KSampler prend le mod\u00e8le, les conditions (prompts), et l&rsquo;image latente vide pour g\u00e9n\u00e9rer une nouvelle image latente, en suivant les instructions donn\u00e9es. Vous pouvez y r\u00e9gler des param\u00e8tres cl\u00e9s comme le seed (pour la reproductibilit\u00e9), les steps (la qualit\u00e9), et le cfg (le niveau de respect du prompt).<\/li>\n<li><strong>VAEDecode<\/strong> : Une fois que le KSampler a g\u00e9n\u00e9r\u00e9 l&rsquo;image latente, ce n\u0153ud utilise le VAE (que nous avons charg\u00e9 au d\u00e9but) pour d\u00e9coder cette image latente et la transformer en une image couleur visible.<\/li>\n<li><strong>SaveImage<\/strong> : Enfin, ce n\u0153ud enregistre le rendu final sur votre ordinateur.<\/li>\n<\/ul>\n<p><strong>Conseils pour l&rsquo;Architecture avec ComfyUI<\/strong><\/p>\n<ul>\n<li><strong>Soyez pr\u00e9cis dans vos prompts :<\/strong> Plus votre description est d\u00e9taill\u00e9e, plus le r\u00e9sultat sera conforme \u00e0 votre vision.<\/li>\n<li><strong>Utilisez des images de base de qualit\u00e9 :<\/strong> M\u00eame si Canny peut extraire les contours, un dessin de base clair et sans \u00e9l\u00e9ments inutiles donnera de meilleurs r\u00e9sultats.<\/li>\n<li><strong>Testez diff\u00e9rents mod\u00e8les :<\/strong> N&rsquo;h\u00e9sitez pas \u00e0 essayer d&rsquo;autres mod\u00e8les de diffusion stable et de ControlNet pour trouver celui qui correspond le mieux \u00e0 votre style architectural.<\/li>\n<\/ul>\n<h1>Mod\u00e8les<\/h1>\n<ul>\n<li><strong>Mod\u00e8le de Checkpoint (mod\u00e8le principal de Stable Diffusion) :<\/strong> realisticVisionV60B1_v51VAE.safetensors<\/li>\n<li><strong>Mod\u00e8le ControlNet 
:<\/strong> control_v11p_sd15_scribble_fp16.safetensors<\/li>\n<li><strong>Mod\u00e8le VAE :<\/strong> vae-ft-mse-840000-ema-pruned.safetensors<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Task ComfyUI Tutorial for Architecture: From Sketch to Realistic Render with ControlNet Canny If you want to transform a simple sketch or plan into a detailed and realistic architectural render, ComfyUI, with its modular structure, is the perfect tool for this. In this tutorial, we will explore a simple yet powerful workflow using ControlNet Canny &hellip; <a href=\"https:\/\/www.keris-studio.fr\/blog\/?p=14464\" class=\"more-link\">Continuer la lecture de <span class=\"screen-reader-text\">COMFYUI &#8211; AI\/Archi01\u00a0: Render from 3D<\/span>  <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":14475,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[190,593,14,25,8],"tags":[57,546,600,386,387],"class_list":["post-14464","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-architecture-2","category-artificial","category-conception","category-etats-de-lart","category-methodologie","tag-architecture","tag-artificial-intelligence","tag-comfyui","tag-methodology","tag-workflow"],"_links":{"self":[{"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=\/wp\/v2\/posts\/14464","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14464"}],"version-history":[{"count":3
,"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=\/wp\/v2\/posts\/14464\/revisions"}],"predecessor-version":[{"id":14636,"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=\/wp\/v2\/posts\/14464\/revisions\/14636"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=\/wp\/v2\/media\/14475"}],"wp:attachment":[{"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14464"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14464"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.keris-studio.fr\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14464"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
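Appendix note (outside the post payload above): the tutorial's Canny node exposes low_threshold and high_threshold, which implement Canny's hysteresis thresholding stage. The sketch below is a toy, NumPy-only illustration of that one stage (it assumes a precomputed gradient-magnitude map and skips the smoothing, gradient, and non-maximum-suppression steps that a real Canny preprocessor performs); the function name `hysteresis_threshold` is ours, not a ComfyUI API.

```python
import numpy as np

def hysteresis_threshold(grad, low, high):
    """Toy version of Canny's low/high threshold stage.

    Pixels with gradient magnitude >= high become strong edges; pixels
    between low and high are kept only if they touch a strong edge
    (4-neighbourhood), which is the hysteresis step.
    """
    strong = grad >= high
    weak = (grad >= low) & ~strong
    # Check each pixel's 4 neighbours for a strong edge, via a padded shift.
    padded = np.pad(strong, 1)
    touches_strong = (padded[:-2, 1:-1] | padded[2:, 1:-1]
                      | padded[1:-1, :-2] | padded[1:-1, 2:])
    return strong | (weak & touches_strong)

# Raising `low` discards faint lines; raising `high` demands stronger edges,
# which is why the node's thresholds control the "fineness" of detected edges.
grad = np.array([[0, 50, 120],
                 [0, 80, 200],
                 [0, 10, 30]])
edges = hysteresis_threshold(grad, low=60, high=150)
print(edges.astype(int))
```

Lowering `low` to 40 would additionally keep the 50-valued pixel once a chain of weak pixels connects it to the strong 200-valued edge; tuning the two values against the PreviewImage output is the practical workflow.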