Seamless image compositing with SDXL and ComfyUI

Introduction

AI-generated images have become an important tool in design, but merging a person naturally into a landscape background so that the picture looks both realistic and artistic remains a challenge. From film special effects to social media content creation, seamless compositing technology is becoming more and more important.

In this exploration, I used Stable Diffusion XL (SDXL) and ComfyUI to blend a selfie seamlessly into a landscape background. The main problems that need to be solved are removing the original background, matching the subject’s lighting and shadows to the new scene, and restoring the details lost during repainting.


In this article, I will share the complete ComfyUI workflow and explore in depth how to use mask processing, inpainting, ControlNet guidance, and other techniques to make the final result more realistic.


Task Overview

Our goal is to use ComfyUI + SDXL to remove the background from a portrait, reposition the subject in a new landscape, and repaint it so that its lighting and shadows match the scene.

The final output should be a natural, realistic composite in which the person blends seamlessly into the landscape.

Here is the ComfyUI workflow: https://drive.google.com/file/d/1NkC0odDj0jZODPvrbhufHp6c7XCjkc-v/view?usp=sharing

Model Preparation

Load RealVisXL V5.0 Checkpoint

Load ControlNet Model (ControlNet-Union SDXL 1.0)

LayerMask: Load BiRefNet Model V2

Load CLIP Vision

Load BiRefNet Ultra V2

Load Depth Anything V2 

Load Florence2 Model

IPAdapter Unified Loader


Introduction to the workflow

Workflow Overview

After you have downloaded all the required models and placed them in the correct paths, let’s move on to the workflow.

Let’s say you have a portrait and you want to place the subject against a completely new background. The first step is to remove the existing background. This workflow is flexible enough to handle both simple and complex backgrounds. Once the background is removed, the subject is repositioned into the new scene.

The real “magic” happens when we use the Lightning version of the SDXL model. The model is able to re-light and re-adjust the subject, ensuring that the light direction, highlights, and shadows match the new environment.

For example, if the original light came from the right, but the new scene is lit from the left, the SDXL model will adjust the subject’s lighting accordingly, making the subject look more natural and blend seamlessly with the shadows and highlights of the new background.

Node Group 1: Load Models

In the first node group, our main task is to load the necessary models and prepare them for image processing. This step is very important because it lays the foundation for the entire workflow. The models to load are the ones listed in the Model Preparation section above.
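
For readers who want to reproduce the loading step outside ComfyUI, here is a minimal, hedged sketch using the diffusers library. The Hugging Face repo id is an assumption; inside ComfyUI you simply point the Load Checkpoint node at the downloaded .safetensors file.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the RealVisXL V5.0 SDXL checkpoint (repo id assumed; in ComfyUI this is
# done by the "Load Checkpoint" node pointing at the local .safetensors file).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V5.0",
    torch_dtype=torch.float16,
).to("cuda")
```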

Node Group 2: Remove Background

In the second node group, our goal is to effectively remove the background of the subject image. This step is crucial for isolating the subject and preparing it to blend into the new background. The BiRefNet models loaded earlier produce a high-quality segmentation mask that is used to cut the subject out cleanly.
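
As an illustration of what this node group does (not the BiRefNet node’s exact implementation), here is a minimal background-removal sketch using the rembg library; file names are placeholders:

```python
from PIL import Image
from rembg import remove

# Remove the background; the result is an RGBA image whose alpha channel
# doubles as the subject mask used later in the workflow.
portrait = Image.open("portrait.png")
subject = remove(portrait)
subject.save("subject_rgba.png")
```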

Node Group 3: Subject Repositioning, Prompt Generation, and Canny Edge Image Creation

In this node group, we will focus on positioning the subject against the new background and refining the overall look of the image. Here are the steps:

Step1. Subject Positioning

Adjust the subject’s position against the background with this node group. This step ensures that the subject is aligned seamlessly with the new scene.
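
A minimal sketch of this positioning step using Pillow (the scale factor and offsets are assumptions; in the workflow they are set interactively):

```python
from PIL import Image

background = Image.open("landscape.png").convert("RGB")
subject = Image.open("subject_rgba.png")  # RGBA cut-out from Node Group 2

# Scale the subject relative to the background height, then paste it at the
# chosen offset using its own alpha channel as the paste mask.
scale = 0.6                                   # assumed relative size
h = int(background.height * scale)
w = int(subject.width * h / subject.height)
subject = subject.resize((w, h))

x, y = 400, background.height - h             # assumed placement
background.paste(subject, (x, y), mask=subject.split()[-1])
background.save("composited.png")
```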

Step2. Blurring the background using depth map
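
The idea is to estimate a depth map with Depth Anything V2 and blur the background more the farther away it is, faking a shallow depth of field. A minimal sketch (the model id and blur radius are assumptions):

```python
import numpy as np
from PIL import Image, ImageFilter
from transformers import pipeline

# Estimate relative depth; in the pipeline's output, brighter means closer.
depth_estimator = pipeline(
    "depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf"
)
image = Image.open("composited.png").convert("RGB")
depth = depth_estimator(image)["depth"]

# Blend a blurred copy back in, weighted by distance from the camera.
blurred = image.filter(ImageFilter.GaussianBlur(radius=8))
weight = 1.0 - np.asarray(depth.resize(image.size), dtype=np.float32) / 255.0
weight = weight[..., None]                    # far pixels get more blur
out = np.asarray(image) * (1 - weight) + np.asarray(blurred) * weight
Image.fromarray(out.astype(np.uint8)).save("blurred_background.png")
```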

Step3. Automatically generate the prompt

Use the “Florence2Run” node to automatically generate a caption. If you need a more detailed caption, you can set the “task” parameter to “more_detailed_caption” to generate a longer, more descriptive caption.
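
Outside ComfyUI, the same captioning step looks roughly like this with the public Florence-2 release (the model id follows the official release; the file name is a placeholder):

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-base", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(
    "microsoft/Florence-2-base", trust_remote_code=True
)

image = Image.open("composited.png").convert("RGB")
task = "<MORE_DETAILED_CAPTION>"   # the "more_detailed_caption" task
inputs = processor(text=task, images=image, return_tensors="pt")
ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
text = processor.batch_decode(ids, skip_special_tokens=False)[0]
caption = processor.post_process_generation(text, task=task, image_size=image.size)
print(caption[task])  # used as the positive prompt for the repaint
```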

Step4. Canny edge image creation

Generate a Canny edge image to help control the outline of the subject. Reduce the threshold parameter until the outline of the subject becomes clear and distinct.
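
This is equivalent to a standard OpenCV Canny pass; the thresholds below are only a starting point to tune down until the outline is distinct:

```python
import cv2

# Canny edge image for ControlNet guidance; lower the thresholds if the
# subject's outline is faint or incomplete.
img = cv2.imread("composited.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=50, threshold2=150)
cv2.imwrite("canny.png", edges)
```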

Node Group 4: Repaint and Shadow Adjustment

In this node group, we focus on redrawing to optimize the image, enhance the lighting and shadow effects, and ensure that the subject can blend naturally into the background. Here are the specific steps:

Step1. Image Redraw

Use this node group to redraw the image generated by the previous group. This step is essential for re-adjusting the lighting effects of the subject and creating shadows that help the subject blend naturally into the background.
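
A rough stand-in for this redraw step, using the diffusers SDXL inpainting pipeline. The actual workflow runs an SDXL Lightning checkpoint with ControlNet (Canny) guidance inside ComfyUI, so treat the repo id and all parameter values here as assumptions:

```python
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16
).to("cuda")

image = Image.open("composited.png").convert("RGB")
mask = Image.open("repaint_mask.png").convert("L")   # white = area to redraw

result = pipe(
    prompt="a person standing in a sunlit landscape",  # e.g. the Florence-2 caption
    image=image,
    mask_image=mask,
    strength=0.5,            # keep the outline, re-light the surface
    num_inference_steps=5,   # Lightning-style models need only a few steps
    guidance_scale=1.5,      # Lightning-style models use a low CFG
).images[0]
result.save("redrawn.png")
```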

Step2. Compare and Adjust

Compare the newly generated image with the image from the previous group and note the changes in highlights and shadows, which make the picture look more harmonious. Although some original details may be lost, they can be restored in the last group as long as the newly generated subject keeps the same outline as the original image.

Step3. Redraw Shadow Area

The floor around the subject needs shadows. The initial redraw area may cover only the subject, so it needs to be expanded to include the floor via a “Preview Bridge” node. This adjustment is critical to creating natural floor shadows.
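
Programmatically, this expansion is just a morphological dilation of the repaint mask; the kernel size below is an assumption to tune:

```python
import cv2
import numpy as np

# Grow the repaint mask outward so it also covers the floor around the
# subject, mimicking painting a larger area in the Preview Bridge node.
mask = cv2.imread("repaint_mask.png", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((25, 25), np.uint8)
expanded = cv2.dilate(mask, kernel, iterations=1)
cv2.imwrite("repaint_mask_expanded.png", expanded)
```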

Step4. Iterate with random seeds

Run the workflow again after expanding the repainted area. Be prepared to change the seed in KSampler and try multiple iterations, as the shadows will be different each time. Thanks to the Lightning version of the SDXL model, this process is very efficient, requiring only five sampling steps.

Step5. Resolve shadow inconsistencies

If the shadows look broken, or simply follow the shadows of the original image, the problem may be that the Canny edges are constraining the shadow shape. To solve this, use the “Image Input Switch” node to select the outline of the subject as the outline image, which will produce a more natural shadow.

Step6. Run the final workflow

Run the workflow again after making your adjustments. Observe how the shadows on the floor are evenly distributed, enhancing the realism of the image.

Recommended sampler and scheduler settings

Use the “dpmpp_sde” sampler and set the scheduler to “exponential” to ensure composition stability during processing.
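
For reference, ComfyUI’s “dpmpp_sde” sampler and “exponential” scheduler correspond to these k-diffusion functions; the sigma range below is an assumption:

```python
from k_diffusion.sampling import get_sigmas_exponential, sample_dpmpp_sde

# The 5-step exponential noise schedule used for the Lightning redraw.
sigmas = get_sigmas_exponential(n=5, sigma_min=0.03, sigma_max=14.6, device="cpu")
print(sigmas)
# sample_dpmpp_sde(model, noise, sigmas) would then perform the sampling itself.
```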

By following the above steps, the subject can be naturally integrated into the new background, and the light and shadow effects can be greatly optimized.

Node Group 5: Creating and modifying shadow masks

In this final node group, we focus on generating and refining the shadow mask to improve the overall image quality. Here is a detailed description of the specific steps:

Step1. Generate the initial shadow masks

Step2. Subtract the masks to produce an accurate mask

Perform a subtraction on the two resulting shadow masks. This operation produces a smaller shadow mask, allowing for more precise shadow adjustments.
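
In mask terms this is a per-pixel clamped subtraction, e.g. with Pillow (file names are placeholders):

```python
from PIL import Image, ImageChops

# Subtract the smaller shadow mask from the larger one; negative values are
# clamped to zero, leaving only the band where shadows should be adjusted.
mask_a = Image.open("shadow_mask_a.png").convert("L")
mask_b = Image.open("shadow_mask_b.png").convert("L")
shadow_mask = ImageChops.subtract(mask_a, mask_b)
shadow_mask.save("shadow_mask.png")
```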

Step3. Modify the shadowmask

Use the Preview Bridge node to modify the shadow mask as needed. For example, if the mask accidentally covers certain areas on the floor, such as flower pots, you can manually paint to exclude those portions.

Step4. Enhanced shadow visibility

Use the Levels node to adjust the brightness of the shadow mask. By increasing the brightness of the mask, the shadows will be more prominent and noticeable in the final image, thus improving the realism of the image.
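
A levels adjustment on a grayscale mask can be approximated with a simple input-range remap; in_white and gamma below are assumed values:

```python
import numpy as np
from PIL import Image

# Brighten the shadow mask by pulling the input white point down, so masked
# shadows become more prominent in the final blend.
mask = np.asarray(Image.open("shadow_mask.png").convert("L"), dtype=np.float32)
in_black, in_white, gamma = 0.0, 180.0, 1.0
out = np.clip((mask - in_black) / (in_white - in_black), 0.0, 1.0) ** (1.0 / gamma)
Image.fromarray((out * 255).astype(np.uint8)).save("shadow_mask_bright.png")
```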

After making the adjustments above, you can create an optimized layer mask to enhance the shadows without affecting other elements of the image.

Node Group 6: Restoring Details and Adjusting Shadows

In this part, we aim to recover the details lost during the redraw process and fix the shadow blending problem. Here is a detailed guide to the steps:

Step1. Restore details via Image Detail Transfer

Use the Image Detail Transfer node to restore most of the details lost during the repaint process. This node requires two images and a mask: the destination image (with correct highlights and shadows) and the source image (with correct details). The subject mask limits the detail transfer to the subject.
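
Conceptually, detail transfer is frequency separation: take the low frequencies (lighting) from the redrawn image and the high frequencies (texture) from the original, restricted to the subject mask. A sketch of that idea (the blur radius and file names are assumptions, not the node’s exact implementation):

```python
import numpy as np
from PIL import Image, ImageFilter

dest = np.asarray(Image.open("redrawn.png").convert("RGB"), dtype=np.float32)
src = np.asarray(Image.open("composited.png").convert("RGB"), dtype=np.float32)
mask = np.asarray(Image.open("subject_mask.png").convert("L"), dtype=np.float32) / 255.0

# Low-frequency (lighting) layers of both images.
blur = ImageFilter.GaussianBlur(radius=10)
dest_low = np.asarray(Image.fromarray(dest.astype(np.uint8)).filter(blur), dtype=np.float32)
src_low = np.asarray(Image.fromarray(src.astype(np.uint8)).filter(blur), dtype=np.float32)

# Destination lighting + source detail, applied only inside the subject mask.
transferred = dest_low + (src - src_low)
out = dest * (1 - mask[..., None]) + transferred * mask[..., None]
Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save("detailed.png")
```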

Step2. Adjust shadow blending

If the shadows on the subject’s face are not blending evenly, adjust the Image Blur node’s blur_sigma or blur_radius values. This will help soften the shadows and make them more natural.

Step3. Solving the Floor Shadow Problem

If the shadows on the floor do not blend evenly due to fragmentation of the original shadows, be aware of the limitations of the Image Detail Transfer node, which cannot directly affect the background. This situation needs to be handled separately, using the following methods:

Adjust the color using Color Blend:

Use the Color Blend node to adjust the image colors to match the original image, ensuring the overall tone is consistent.

Creating shadows via ImageBlend:

Use the ImageBlend node to blend the layer image with the background image. Set the blend mode to “darker” to darken specific areas and create shadows. The layer masks generated by the previous group define these shadow areas.

Step4. Recover highlights

Adjust the blending mode to “lighter” to restore highlights in the shadow areas. This step balances the shadows and highlights, thereby enhancing the depth and realism of the image.
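
Both blend passes can be approximated with Pillow’s per-pixel darker/lighter operations, composited through the shadow mask (file names are placeholders; the workflow does this with ImageBlend nodes):

```python
from PIL import Image, ImageChops

base = Image.open("detailed.png").convert("RGB")
layer = Image.open("redrawn.png").convert("RGB")
shadow_mask = Image.open("shadow_mask_bright.png").convert("L")

darkened = ImageChops.darker(base, layer)    # keeps the darker pixel: shadows
lightened = ImageChops.lighter(base, layer)  # keeps the lighter pixel: highlights

# Apply the darkening only inside the shadow mask; a separate highlight mask
# (assumed) would gate the lighter pass in the same way.
out = Image.composite(darkened, base, shadow_mask)
out.save("shadows_applied.png")
```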

Step5. Fine-tune highlights and shadows

Use the shadow and highlight adjustment nodes to further refine the result. Set shadow_brightness to less than 1 to darken the shadows, and highlight_brightness to more than 1 to enhance the highlights. Modify the range of shadows and highlights to achieve the desired effect.
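
One way to approximate shadow_brightness / highlight_brightness in code is to scale pixels below a shadow threshold and above a highlight threshold; all thresholds and factors below are assumptions:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("shadows_applied.png").convert("RGB"), dtype=np.float32)
luma = img.mean(axis=-1, keepdims=True)

shadow_brightness, highlight_brightness = 0.85, 1.15  # <1 darkens, >1 brightens
shadows = (luma < 80).astype(np.float32)              # assumed shadow range
highlights = (luma > 180).astype(np.float32)          # assumed highlight range

out = img * (1 + shadows * (shadow_brightness - 1)
               + highlights * (highlight_brightness - 1))
Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save("final.png")
```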

By following these steps and techniques, you can significantly improve the shadow quality and overall performance of your images.


Conclusion

This workflow gives you the power to enhance your images, achieving realistic and more visually appealing results by effectively managing shadows and backgrounds. By following the steps and tips above, you can achieve satisfying results. Remember, the key to mastering this workflow is to experiment. Be bold, explore new ideas, and push the boundaries of your creativity. Each attempt will bring you closer to discovering innovative techniques and effects.

About the Author

Intern at Research Graph Foundation | ORCID