Flux Canny Control
In this chapter, we will introduce how to use Flux Canny Control to generate images with similar edge structures.
You can use Flux Canny Control to generate an image with a similar edge structure to the original image, like the image shown below.
I imported a black-and-white sketch of the main character from One Piece, and then used the Flux Canny Control model to generate a color image with a similar edge structure. You can see that apart from a slight change in expression, the overall outline is almost identical to the original image.
There are many scenarios that are suitable for using the Flux Canny Control model. For example:
- Coloring sketch images.
- Redesigning an app logo. For example, turning the TikTok logo into a cartoon-style image.
- Redesigning a product. For example, input an image of a handbag, then let the Flux model generate a handbag in a different color or material but with the same style.
- Redesigning a game character. For example, input an image of a warrior wearing metal armor, then let the Flux model generate a warrior in other materials or colors.
1. Download Flux Canny Model
There are two types of Flux Canny models: one is the official Flux Canny LoRA implementation released by the Flux team, and the other is Flux Canny ControlNet models implemented by the open-source community. The former is based on Control LoRA technology, while the latter is based on ControlNet technology.
- Flux officially provides two Canny model files; you can choose according to your needs:
  - One is a model that bundles Flux Canny LoRA with Flux Dev. You can download this model from here, and then place it in the `models/diffusion_models/` folder. Because it includes the Flux Dev model, it is quite large, about 24GB, but the upside is that you only need to load a single model.
  - If you don’t want to download such a large model, you can download the standalone Flux Canny LoRA model from here, and then place it in the `models/loras/` folder. This model file is only about 1.24GB.
- There are currently two open-source organizations that have implemented the Flux Canny ControlNet model; you can choose according to your needs:
  - The first is the Canny ControlNet model jointly developed by Shakker-Labs and InstantX. Download this model and place it in the `models/controlnet/` folder. Note that you can rename the file when downloading, e.g. to `instantx_flux_canny.safetensors`, for easier use later.
  - The other is the Flux Canny ControlNet v3 model developed by XLabs. Download it and place it in the `models/controlnet/` folder as well.
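Assuming ComfyUI's default directory layout, the file placement described above can be sketched as follows. The download URLs are linked in the text above, and the source filenames in the comments are assumptions:

```shell
# Run from your ComfyUI root directory. Create the target folders if missing:
mkdir -p models/controlnet models/loras models/diffusion_models

# Illustrative moves after downloading (source filenames are assumptions):
# mv ~/Downloads/diffusion_pytorch_model.safetensors models/controlnet/instantx_flux_canny.safetensors
# mv ~/Downloads/flux-canny-controlnet-v3.safetensors models/controlnet/
```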
2. Flux Canny LoRA Workflow
After downloading the model, we will build the Flux Canny ComfyUI workflow. First, load the complete Flux workflow from the Flux ComfyUI Workflow chapter, and then modify it based on that workflow.
If you don’t want to manually connect, you can download this workflow template from Comflowy and import it into your local ComfyUI. (If you don’t know how to download Comflowy templates, you can refer to this tutorial.)
Flux ComfyUI Workflow Template
Click the Remix button in the upper right corner to enter the ComfyUI workflow mode.
The most significant change is the additional Flux Canny node group. This node group converts the image into conditioning, which is then fed, together with the prompt, into the Flux model:
First, you need to add an `InstructPixToPixConditioning` node (Figure ①). This is very similar to the Redux workflow: the node connects to the `FluxGuidance` node that follows the CLIP text encoder. Then connect it to a `Canny` node (Figure ②). Finally, add a `Load Image` node (Figure ③) and load the image you want to process into it.
If you want, you can also add a `Preview Image` node (Figure ④) after the `Canny` node to preview the Canny edge image.
Finally, enter the prompt and click the Generate button to generate the image.
Here’s a small tip: if you want the generated image’s edge structure to match the original more closely, try setting the `Low Threshold` and `High Threshold` of the `Canny` node lower, so more edges are detected. If you only want the general outline, set these values higher; the Canny image will then have fewer edges, and the AI will naturally have more room to improvise.
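The effect of these thresholds can be sketched in a few lines. Canny keeps a pixel as an edge when its gradient magnitude clears the thresholds, so raising them discards weaker edges. This is a simplified pure-Python illustration with made-up numbers, not the actual node's implementation:

```python
# Simplified Canny-style double thresholding. Real Canny additionally
# requires weak edges to connect to a strong edge (hysteresis); this
# sketch skips that step and just counts candidates.
def count_edges(magnitudes, low, high):
    strong = sum(1 for m in magnitudes if m >= high)          # definite edges
    weak = sum(1 for m in magnitudes if low <= m < high)      # candidate edges
    return strong + weak

# Hypothetical gradient magnitudes (0.0-1.0), standing in for one image row.
grads = [0.05, 0.12, 0.30, 0.45, 0.60, 0.75, 0.90]

print(count_edges(grads, low=0.1, high=0.3))  # lower thresholds keep more edges
print(count_edges(grads, low=0.4, high=0.7))  # higher thresholds keep fewer
```

With the lower thresholds more of the row survives as edges, which is why lower values pin the generated image more tightly to the original structure.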
3. Flux Canny Controlnet Workflow
3.1 InstantX Version
First, because running the ControlNet model occupies extra GPU memory, I recommend loading the GGUF version of the Flux workflow from the Flux ComfyUI Workflow chapter, and then modifying it based on that workflow. Alternatively, go to Comflowy to download this workflow and import it.
For this version, you need to:
- Add the `Apply ControlNet with VAE` node (Figure ①), then connect it to the `Load VAE`, `Load ControlNet Model` (Figure ②), and `Canny` (Figure ③) nodes. One small detail to note: when using Canny, it is best to set the strength to 0.7; the results are much better.
- In the `Load ControlNet Model` node, select the downloaded Flux Canny ControlNet model.
- To make the generated image the same size as the input Canny image, I also added a `Get Image Size` node (Figure ④) and connected it to the `EmptySD3LatentImage` node (Figure ⑤). This way, the generated image will match the size of the input Canny image.
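Why a strength below 1.0 helps can be pictured with a small sketch. Conceptually, a ControlNet adds its hint to the base model's intermediate features as residuals, scaled by the strength parameter; the function and numbers below are illustrative, not ComfyUI's actual implementation:

```python
# Conceptual sketch: ControlNet "strength" scales the control residuals
# before they are added to the base model's features, so 0.7 applies the
# Canny hint at 70% intensity rather than letting it fully dominate.
def apply_controlnet(base_features, control_residuals, strength=0.7):
    return [b + strength * c for b, c in zip(base_features, control_residuals)]

# With strength 0.0 the hint is ignored; with 1.0 it is applied in full.
softened = apply_controlnet([1.0, 2.0, 3.0], [0.5, -0.5, 1.0], strength=0.7)
```

Dialing the strength down to 0.7 leaves the Flux model some freedom to deviate from the hint, which in practice tends to produce cleaner images than full strength.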
If you don’t want to use the GGUF version, you can also use the FP8 version provided by ComfyUI. Just remove the `Load VAE` and `DualCLIPLoader` nodes, replace them with a `Load Checkpoint` node, and then select the FP8 version of the Flux model.
In my own testing, the InstantX version of the Flux Canny ControlNet model works better. The original image shows a frowning Luffy, but with Flux’s official Canny model the frowning Luffy turned into a smiling one; its Canny edges do not constrain the result as well as the InstantX version does.
3.2 XLabs Version
Using the XLabs version of the Flux Canny ControlNet model requires the plugin developed by XLabs. This is their GitHub plugin address. You can install this plugin through ComfyUI-Manager; for detailed installation instructions, please refer to the Install ComfyUI Plugin article.
After installing, you can modify the workflow of the Shakker-Labs version. Alternatively, go to Comflowy to download this workflow and import it.
Modifying it is not too difficult:
| Figure | Description |
| --- | --- |
| ① | Replace the `KSampler` node with the `Xlabs Sampler` node. You can see that this node has an additional `controlnet_condition` input. |
| ② | I replaced the `CLIPTextEncode` node with the `CLIPTextEncodeFlux` node. You don’t need to make this change; I just want to show that other Flux CLIP nodes exist. This one can control the clip_l and t5xxl prompts separately. |
| ③ | Replace the `Apply ControlNet with VAE` node with the `Apply Flux ControlNet` node. |
| ④ | Replace the `Load ControlNet Model` node with the `Load Flux ControlNet` node, and select the downloaded Flux Canny ControlNet model. |
| ⑤ | I also used the `Empty Latent Image` node, to show that the most basic latent node works here in addition to the SD3 version. |
Personally, I feel the results from the XLabs version of the Flux Canny ControlNet model are just average.
4. Flux Canny API Workflow
If your computer is not powerful enough to run the Flux Canny model locally, you can try Comflowy’s Canny API node in ComfyUI. Alternatively, you can use the Flux Canny API node directly in Comflowy: the connection is simple, requiring just one node, and it supports both the Flux Pro and Dev versions.
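If you go the API route, a request typically boils down to a prompt plus a base64-encoded control image. The endpoint URL, field names, and model identifier below are assumptions for illustration, not Comflowy's documented API; check the provider's reference before use:

```python
# Hypothetical sketch of preparing a hosted Flux Canny API request.
# All field names and the model id are illustrative assumptions.
import base64

def build_canny_request(prompt, image_path, model="flux-canny-dev", strength=0.7):
    """Package a prompt and a control image into a JSON-ready payload."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,                 # e.g. a Pro or Dev variant
        "prompt": prompt,
        "control_image": image_b64,     # Canny source image, base64-encoded
        "control_strength": strength,
    }

# payload = build_canny_request("a colored Luffy portrait", "sketch.png")
# requests.post("https://example.com/v1/flux-canny", json=payload)  # hypothetical endpoint
```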