In this chapter, we will introduce how to use Flux Canny Control to generate images with similar edge structures.
This image is generated by AI
You can use Flux Canny Control to generate an image with a similar edge structure to the original image, like the image shown below.
I imported a black and white sketch of the main character from One Piece, and then used the Flux Canny Control model to generate a color image with a similar edge structure to the original image. You can see that except for a slight change in expression, the overall outline is almost the same as the original image.
There are many scenarios suited to the Flux Canny Control model, such as coloring a line-art sketch as in the example above, or restyling an image while keeping its overall structure.
There are two types of Flux Canny models: the official implementation from the Flux team, which is based on Control LoRA technology, and the Flux Canny ControlNets implemented by the open-source community (such as InstantX and XLabs), which are based on ControlNet technology.
After downloading, place each model in the right folder:

- Flux official Canny model: place it in the `/models/diffusion_models/` folder. Because this model bundles the Flux Dev model, it is quite large, about 24GB; the upside is that you only need to load this single model in the workflow.
- Flux official Canny LoRA: place it in the `models/loras/` folder. This file is only about 1.24GB.
- InstantX (Shakker-Labs) Flux Canny ControlNet: place it in the `models/controlnet/` folder. Note that you can rename the file when downloading, for example to `instantx_flux_canny.safetensors`, for easier use later.
- XLabs Flux Canny ControlNet: place it in the `models/controlnet/` folder.

Once the models are downloaded, we can build the Flux Canny ComfyUI workflow. First, load the complete Flux workflow from the Flux ComfyUI Workflow chapter, then modify it based on that workflow.
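To double-check where everything lives, here is a sketch of the expected folder layout. Apart from the renamed `instantx_flux_canny.safetensors`, the exact filenames depend on what you downloaded, so treat them as placeholders:

```
ComfyUI/models/
├── diffusion_models/
│   └── flux1-canny-dev.safetensors           # official full model, ~24GB
├── loras/
│   └── flux1-canny-dev-lora.safetensors      # official Canny LoRA, ~1.24GB
└── controlnet/
    ├── instantx_flux_canny.safetensors       # InstantX / Shakker-Labs version
    └── xlabs-flux-canny-controlnet.safetensors  # XLabs version, name varies
```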
If you don’t want to manually connect, you can download this workflow template from Comflowy and import it into your local ComfyUI. (If you don’t know how to download Comflowy templates, you can refer to this tutorial.)
Click the Remix button in the upper right corner to enter the ComfyUI workflow mode.
The most significant change is that there is an additional Flux Canny node group. This node group converts the image into conditioning, which is then fed into the Flux model together with the prompt:
First, you need to add an `InstructPixToPixConditioning` node (Figure ①). As in the Redux workflow, it connects to the FluxGuidance node that follows the CLIP Text Encode node. This node then needs to be connected to a `Canny` node (Figure ②). Finally, add a `Load Image` node (Figure ③) and load the image you want to process into it.
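Under the hood, this node group VAE-encodes the edge image and attaches it to the text conditioning, so the sampler receives the prompt and the edge structure together. Here is a rough Python sketch of the idea, simplified from how ComfyUI represents conditioning (it is not the node's actual implementation):

```python
def attach_canny_conditioning(conditioning, canny_image, vae):
    """Sketch of what InstructPixToPixConditioning does conceptually."""
    # Encode the edge map into the same latent space the model works in.
    concat_latent = vae.encode(canny_image)
    out = []
    # ComfyUI conditioning is a list of [tensor, options] pairs; attaching
    # the encoded image lets the model see text + edges together.
    for cond_tensor, options in conditioning:
        options = dict(options)
        options["concat_latent_image"] = concat_latent
        out.append([cond_tensor, options])
    return out
```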
If you want, you can also add a `Preview Image` node (Figure ④) after the `Canny` node to preview the generated edge map.
Finally, enter your prompt and click the Generate button to generate the image.
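As a side note, if you prefer to trigger generation from a script rather than the UI, ComfyUI exposes an HTTP API. A minimal sketch, assuming you exported the workflow with "Save (API Format)" as `flux_canny_api.json` (a hypothetical file name) and ComfyUI is running locally on its default port:

```python
import json
import urllib.request

# Load the workflow exported in API format (hypothetical file name).
with open("flux_canny_api.json") as f:
    workflow = json.load(f)

# Queue the workflow on a local ComfyUI server (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```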
Here's a small tip: if you want the generated image's edge structure to match the original more closely, try setting the `Low Threshold` and `High Threshold` of the `Canny` node lower, so that more edges are detected. If you only want the general outline, raise these values instead; the Canny map will then contain fewer edges, and the AI naturally has more room to improvise.
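To see the threshold effect outside ComfyUI, you can experiment with OpenCV's Canny function, whose behavior the node mirrors (the file names here are hypothetical):

```python
import cv2

# Load the control image as grayscale (hypothetical file name).
img = cv2.imread("luffy_sketch.png", cv2.IMREAD_GRAYSCALE)

# Lower thresholds keep more edges, so the output follows the original closely.
dense = cv2.Canny(img, threshold1=50, threshold2=150)

# Higher thresholds keep only strong edges, leaving the AI more freedom.
sparse = cv2.Canny(img, threshold1=200, threshold2=400)

cv2.imwrite("canny_dense.png", dense)
cv2.imwrite("canny_sparse.png", sparse)
```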
First, because running the ControlNet model occupies extra GPU memory, I recommend loading the GGUF version of the Flux workflow from the Flux ComfyUI Workflow chapter and modifying it from there. Alternatively, go to Comflowy to download this workflow and import it.
In this version, you need to:

- Add an `Apply ControlNet with VAE` node (Figure ①), then connect it to the `Load VAE` node, the `Load ControlNet Model` node (Figure ②), and a `Canny` node (Figure ③). One small detail to note: when using Canny, it is best to set the strength to 0.7; the results will be much better.
- In the `Load ControlNet Model` node, select the downloaded Flux Canny ControlNet model.
- Add a `Get Image Size` node (Figure ④) and connect it to the `EmptySD3LatentImage` node (Figure ⑤). This way, the generated image's size will match the input Canny image's size (see the sketch below).

If you don't want to use the GGUF version, you can also use the FP8 version provided by ComfyUI. Just remove the `Load VAE` and `DualCLIPLoader` nodes, replace them with a `Load Checkpoint` node, and select the FP8 version of the Flux model.
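To make the size-matching step concrete, here is a tiny stand-in for what the `Get Image Size` → `EmptySD3LatentImage` wiring achieves, with Pillow and PyTorch standing in for the actual nodes (the file name is hypothetical):

```python
from PIL import Image
import torch

# Read the control image's dimensions, like the Get Image Size node does.
width, height = Image.open("canny_input.png").size

# Allocate an empty latent at the matching resolution. Flux/SD3 latents
# are 8x spatially downsampled and have 16 channels.
latent = torch.zeros(1, 16, height // 8, width // 8)
print(latent.shape)
```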
After my personal testing, I feel the InstantX version of the Flux Canny ControlNet model works better. The original image shows a frowning Luffy, but with the Flux official Canny model, the frowning Luffy turned into a smiling Luffy; its Canny edges simply don't constrain the output as faithfully as the InstantX version's do.
Using the XLabs version of the Flux Canny ControlNet model requires the plugin developed by XLabs; this is their GitHub plugin address. You can install this plugin through ComfyUI's ComfyUI-Manager. For detailed installation methods, please refer to the Install ComfyUI Plugin article.
After installing it, you can modify the Shakker-Labs version of the workflow. Alternatively, go to Comflowy to download this workflow and import it.
Modifying it is not too difficult:
| Figure | Description |
| --- | --- |
| ① | Replace the KSampler node with the Xlabs Sampler node. You can see that this node has an additional controlnet_condition input. |
| ② | I replaced the CLIPTextEncode node with the CLIPTextEncodeFlux node. You don't have to make this change; I just want to show you that other Flux CLIP nodes exist. This node lets you control the clip_l and t5xxl prompts separately. |
| ③ | Replace the Apply ControlNet with VAE node with the Apply Flux ControlNet node. |
| ④ | Replace the Load ControlNet Model node with the Load Flux ControlNet node, and select the downloaded Flux Canny ControlNet model. |
| ⑤ | I also swapped in the basic Empty Latent Image node, to show that this workflow can use the basic version as well, not just the SD3 version. |
Personally, I feel the XLabs version of the Flux Canny ControlNet model performs only moderately well.
If your computer isn't powerful enough to run the Flux Canny model, you can also try Comflowy's Canny API node inside ComfyUI. Of course, you can also use the Flux Canny API node directly in Comflowy: the connection is simple, requiring just one node, and it supports both the Flux Pro and Dev versions.