Task
This tutorial guides you through the provided ComfyUI workflow for extending an image beyond its original borders (outpainting).
Understanding Outpainting
Outpainting, also known as uncropping, is a process that extends an image beyond its original borders by generating new content that seamlessly blends with the existing picture. Unlike inpainting, which fills a masked area within an image, outpainting adds new content to the empty space around an image.
- How it Works: In a ComfyUI workflow, outpainting is achieved by adding a border or "padding" around the original image. A mask is then created for this new empty space. A text prompt guides the AI on what new content to generate in the padded area, such as a different background or a wider landscape. The AI then fills the masked area with new pixels that match the style and content of the original image, effectively extending it (the short sketch after this list illustrates the padding-and-mask idea).
- Why Use It: Outpainting is useful for:
- Creating Panoramas: You can extend a photo to create a wider, more dramatic view.
- Changing Aspect Ratio: You can transform a portrait image into a landscape one by adding content to the sides.
- Expanding a Scene: You can add a new background or context to a subject that was originally shot close up.
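Conceptually, the padding-and-mask step amounts to enlarging the image canvas and marking the new empty area in a mask. The short Python sketch below (plain NumPy, outside of ComfyUI) is only meant to illustrate that idea; in the actual workflow, the ImagePadForOutpaint node described later does this for you.

```python
import numpy as np

def pad_for_outpaint(image, left, top, right, bottom):
    """Illustrative only: enlarge the canvas and mark the new area in a mask.

    image: H x W x 3 float array in [0, 1].
    Returns (padded_image, mask), where mask is 1.0 over the new empty area
    and 0.0 over the original pixels.
    """
    h, w, c = image.shape
    padded = np.zeros((top + h + bottom, left + w + right, c), dtype=image.dtype)
    padded[top:top + h, left:left + w] = image  # original image stays untouched

    mask = np.ones(padded.shape[:2], dtype=np.float32)
    mask[top:top + h, left:left + w] = 0.0      # only the border gets generated
    return padded, mask

# Example: add 256 px of new content on the left and right of a 512x512 image
img = np.random.rand(512, 512, 3).astype(np.float32)
padded, mask = pad_for_outpaint(img, left=256, top=0, right=256, bottom=0)
print(padded.shape, mask.shape)  # (512, 1024, 3) (512, 1024)
```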
Step 0: Create an Image
The visual result depends heavily on the image's proportions: a vertical image produces vertical results, and a horizontal image produces horizontal ones. Sometimes, though, we want a vertical composition inside a horizontal frame. This is one case where outpainting can be useful.
Horizontal frame
Vertical frame
Step 1: Load the Models
You need to load the core models that power the outpainting process.
- UNETLoader: This node loads the diffusion model, flux1-fill-dev.safetensors, which is specifically designed for inpainting and outpainting.
- DualCLIPLoader: This node loads the clip_l.safetensors and t5xxl_fp16.safetensors text encoders.
- VAELoader: This node loads the ae.safetensors VAE model.
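If you drive ComfyUI through its HTTP API rather than the graph editor, these three loaders roughly correspond to entries like the following in the API-format workflow JSON. This is a sketch under assumptions: the node IDs are arbitrary, and input names such as weight_dtype and type may vary between ComfyUI versions, so check them against your installation.

```python
# Sketch of the model-loading nodes in ComfyUI API (prompt) format.
# Node IDs ("1", "2", "3") are arbitrary; filenames come from the tutorial.
loaders = {
    "1": {
        "class_type": "UNETLoader",
        "inputs": {
            "unet_name": "flux1-fill-dev.safetensors",
            "weight_dtype": "default",          # assumption: default dtype
        },
    },
    "2": {
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "clip_l.safetensors",
            "clip_name2": "t5xxl_fp16.safetensors",
            "type": "flux",                      # assumption: flux text-encoder pairing
        },
    },
    "3": {
        "class_type": "VAELoader",
        "inputs": {"vae_name": "ae.safetensors"},
    },
}
```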
Step 2: Upload and Prepare Your Image
This is the most critical part of an outpainting workflow, as it sets up the image and mask for the AI.
- LoadImage: Use this node to upload the image you want to extend.
- ImagePadForOutpaint: This node adds a border, or "padding", around your original image. The left, top, right, and bottom settings specify how much padding to add on each side, and the node automatically creates a mask for the newly added area.
This padded border is the canvas the AI will fill in: the mask covers only the new empty space, while the original image remains unmasked and therefore untouched. A minimal API-format sketch of this step is shown below.
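In the same API format, the image-preparation step might look like the sketch below. The filename, padding amounts, and feathering value are placeholder assumptions, and the node IDs simply continue the loader sketch above.

```python
# Sketch of the image-preparation nodes in ComfyUI API format (continuing
# the numbering from the loader sketch). Values are illustrative placeholders.
image_prep = {
    "4": {
        "class_type": "LoadImage",
        "inputs": {"image": "my_portrait.png"},   # hypothetical filename
    },
    "5": {
        "class_type": "ImagePadForOutpaint",
        "inputs": {
            "image": ["4", 0],    # IMAGE output of LoadImage
            "left": 256,          # placeholder padding amounts in pixels
            "top": 0,
            "right": 256,
            "bottom": 0,
            "feathering": 40,     # placeholder; feathering is discussed next
        },
    },
}
```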
The feathering parameter is crucial for achieving a seamless outpainting result. It controls how smoothly the transition between the original image and the newly generated content is blended. A higher feathering value creates a wider gradient at the border, letting the AI blend the new content more gradually into the original image. This avoids harsh lines or abrupt changes where the new content meets the old, so the outpainted image appears as if it were a single, continuous shot.
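To see why this matters, the small NumPy sketch below illustrates the idea of feathering: instead of a hard 0/1 mask, the mask ramps from 0 to 1 over a band of feathering pixels just inside the original image's edge, so old and new pixels are blended across that band. ComfyUI's exact implementation may differ; this is only an illustration of the gradient.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_mask(mask, feathering):
    """Illustrative only: turn a hard 0/1 outpaint mask into a soft gradient.

    mask: 2D float array, 1.0 where new content will be generated.
    Original-image pixels within `feathering` px of the padded area ramp
    from 0 up toward 1, so generated content blends gradually into the
    original instead of meeting it at a hard edge.
    """
    if feathering <= 0:
        return mask
    # Distance (in px) from each original pixel to the nearest padded pixel.
    dist_to_new = distance_transform_edt(mask == 0.0)
    ramp = np.clip(1.0 - dist_to_new / float(feathering), 0.0, 1.0)
    return np.maximum(mask, ramp)

hard = np.zeros((6, 10), dtype=np.float32)
hard[:, 7:] = 1.0                      # last 3 columns are the new padded area
soft = feather_mask(hard, feathering=3)
print(np.round(soft[0], 2))            # gradient: 0, 0, ..., 0.33, 0.67, 1, 1, 1
```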
Step 3: Define Your Prompts
The prompts tell the AI what content to generate in the padded area.
- CLIPTextEncode (Positive Prompt): This node is where you write the prompt describing the new content. The example prompt is "weavy ocean, cloudy sky, cliff", which would extend the image with a new background of an ocean and sky.
- CLIPTextEncode (Negative Prompt): This optional node specifies any content to be excluded from the generated image. The example workflow has an empty negative prompt.
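In API-format JSON, the two prompt nodes might look like the following sketch. Node "2" refers to the DualCLIPLoader from the loader sketch, and the node IDs remain arbitrary.

```python
# Sketch of the prompt nodes in ComfyUI API format. Node "2" is the
# DualCLIPLoader from the loader sketch; IDs are illustrative.
prompts = {
    "6": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "clip": ["2", 0],
            "text": "weavy ocean, cloudy sky, cliff",  # example positive prompt
        },
    },
    "7": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "clip": ["2", 0],
            "text": "",                                # empty negative prompt
        },
    },
}
```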
Step 4: Sample and Decode
The final steps involve processing the inputs and generating the outpainted image.
- DifferentialDiffusion: This node modifies the model so that the denoising strength can vary per pixel according to the mask, which helps the newly generated border blend smoothly with the original image.
- FluxGuidance: This node takes the conditioning from your positive prompt and applies a guidance value to it. The example uses a guidance of 30.
- InpaintModelConditioning: This node takes the padded image, the outpainting mask, and the conditioned prompts to prepare a new latent image for the sampler.
- KSampler: This is the main diffusion sampler that generates the new outpainted image. The workflow sets the steps to 20 and the cfg to 1; a cfg of 1 is typical for Flux models, which rely on the FluxGuidance value rather than classifier-free guidance, so the negative prompt has essentially no effect.
- VAEDecode: This node decodes the latent image from the KSampler into a final viewable image.
- SaveImage: The final node saves the outpainted image to your computer.
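Putting it all together, the sampling and decoding stage could be sketched in API format as below and queued on a locally running ComfyUI server through its /prompt endpoint. This assumes the loaders, image_prep, and prompts dicts from the earlier sketches are defined in the same script; the seed, sampler_name, and scheduler values are placeholders, and input names should be checked against your ComfyUI version.

```python
import json
from urllib import request

# Sketch of the sampling/decoding nodes, continuing the earlier node IDs:
# 1 UNETLoader, 3 VAELoader, 5 ImagePadForOutpaint, 6/7 CLIPTextEncode.
sampling = {
    "8":  {"class_type": "DifferentialDiffusion",
           "inputs": {"model": ["1", 0]}},
    "9":  {"class_type": "FluxGuidance",
           "inputs": {"conditioning": ["6", 0], "guidance": 30.0}},
    "10": {"class_type": "InpaintModelConditioning",
           "inputs": {"positive": ["9", 0], "negative": ["7", 0],
                      "vae": ["3", 0],
                      "pixels": ["5", 0],   # padded image
                      "mask": ["5", 1]}},   # outpaint mask
    "11": {"class_type": "KSampler",
           "inputs": {"model": ["8", 0],
                      "positive": ["10", 0], "negative": ["10", 1],
                      "latent_image": ["10", 2],
                      "seed": 0, "steps": 20, "cfg": 1.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 1.0}},      # placeholder sampler settings
    "12": {"class_type": "VAEDecode",
           "inputs": {"samples": ["11", 0], "vae": ["3", 0]}},
    "13": {"class_type": "SaveImage",
           "inputs": {"images": ["12", 0], "filename_prefix": "outpaint"}},
}

# Merge all sketches (assumes loaders, image_prep, and prompts are in scope)
# and queue the workflow on a local ComfyUI instance.
workflow = {**loaders, **image_prep, **prompts, **sampling}
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = request.Request("http://127.0.0.1:8188/prompt", data=payload,
                      headers={"Content-Type": "application/json"})
request.urlopen(req)
```

If the queue succeeds, SaveImage writes the result to ComfyUI's output folder using the outpaint filename prefix, just as it would when the workflow is run from the graph editor.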