In this chapter, we will introduce how to use Flux Redux to generate images that are similar to a reference image.
*This image is generated by AI.*
Download the Flux Redux model and place it in the /models/style_models directory. Note that it goes in the style_models directory, not the diffusion_models directory.
In addition to this model, you also need to download a model named sigclip_vision_patch14_384, which Flux uses to convert images into Conditioning. Download this model and place it in the /models/clip_vision directory.
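After downloading, the files should sit in the following locations under your ComfyUI install (the Redux model's exact filename depends on the release you downloaded, so it is shown as a placeholder):

```
ComfyUI/
└── models/
    ├── style_models/
    │   └── <flux-redux model>.safetensors
    └── clip_vision/
        └── sigclip_vision_patch14_384.safetensors
```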
First, add the Apply Style Model node (Figure ①). This node outputs Conditioning, so it needs to be connected to the BasicGuider node.
The Apply Style Model node has the following input nodes:

- Load Style Model node (Figure ②): add it to the workflow, connect it to the Apply Style Model node, and select the Flux Redux model we downloaded earlier.
- CLIP Vision Encode node (Figure ③): add it to the workflow, then connect it to the Load CLIP Vision node (Figure ④) and the Load Image node (Figure ⑤).

The CLIP Vision Encode node in Figure ③ has an additional crop parameter. What does this mean? It crops your image and encodes only the cropped region; if you choose center, it crops the central part of the image. Why is cropping necessary? Because the Redux model can only receive square images. Your input image should therefore be square; otherwise, the generated image may not contain all the elements of the input.

The overall process works by first using the sigclip_vision_patch14_384 model to convert the image to Vision data (simply understood as a set of word vectors), then using the Redux model to convert the Vision data to Conditioning (simply understood as a vector that Flux can understand), then inputting it together with the Prompt to the Flux Redux model, and finally inputting the output of the Flux Redux model and the original image to the Flux model to generate the image.
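To make the crop behavior concrete, here is a minimal sketch of a center crop using Pillow. This only illustrates what the node's center option does; it is not ComfyUI's actual implementation:

```python
from PIL import Image

def center_crop_square(img: Image.Image) -> Image.Image:
    """Crop the largest centered square from an image,
    mimicking the CLIP Vision Encode node's `center` crop option."""
    side = min(img.size)                 # the shorter edge becomes the square's side
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    return img.crop((left, top, left + side, top + side))

# A 1024x768 landscape image becomes a 768x768 centered square.
square = center_crop_square(Image.new("RGB", (1024, 768)))
print(square.size)  # (768, 768)
```

Anything outside that central square never reaches the encoder, which is why non-square inputs can lose elements in the generated image.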
You can simply understand this process as using the Flux Redux model as a translator. It translates the various elements in the image into Conditioning data that Flux can understand, and then inputs it into the Flux model to generate the image.
Add a Batch Images node (Figure ①), and connect it to the second Load Image node (Figure ②). Then select the two images you want to fuse. For example, I fused a sofa and a forest image together and got a very interesting result.
Remove the Apply Style Model node and replace it with the Redux Advanced node in Figure ①. You can then tune the downsampling_factor parameter of the Redux Advanced node: the larger the value, the lower the weight of the Image Prompt. A value of 3 is roughly medium strength. Test different values to see what works for your situation.

In addition to the downsampling_factor parameter, you can also adjust the downsampling_function parameter; try switching between its options.

You can also connect the Mask terminal of the Load Image node to the Mask terminal of the Redux Advanced node, and then circle (mask) some areas in the image.
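One way to build intuition for downsampling_factor: the vision encoder's output can be pictured as an n×n grid of image tokens, and average-pooling that grid by the factor leaves fewer image tokens to compete with the text tokens, which lowers the image prompt's influence. The sketch below is a hedged illustration; the actual Redux Advanced node may pool differently:

```python
import numpy as np

def downsample_tokens(tokens: np.ndarray, factor: int) -> np.ndarray:
    """Average-pool an (n, n, dim) grid of image tokens by `factor`.

    Illustrative only: a stand-in for what a downsampling factor does,
    not the Redux Advanced node's real implementation.
    """
    n, _, dim = tokens.shape
    m = n // factor
    trimmed = tokens[: m * factor, : m * factor]          # drop any ragged edge
    return trimmed.reshape(m, factor, m, factor, dim).mean(axis=(1, 3))

# Roughly a 27x27 token grid, as produced by a 384px SigLIP-style encoder.
grid = np.random.rand(27, 27, 1152)
print(downsample_tokens(grid, 3).shape)  # (9, 9, 1152): 9x fewer image tokens
```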
| Figure | Description |
|---|---|
| ① | I added a ConditioningSetTimestepRange node and, for easier distinction, named it Prompt Conditioning. Then connect it to the FluxGuidance node, and set start to 0 and end to 0.1. This means that the Text Prompt guides 10% of the final generation process. If you want the Prompt weight to be higher, set a larger end value. |
| ② | Add another ConditioningSetTimestepRange node and, for easier distinction, name it Image Conditioning. Then connect it to the Apply Style Model node, and set start to 0.1 and end to 1. This means that the Image Conditioning guides the remaining 90% of the generation process. Note that the start value here must not be less than the end value of Prompt Conditioning. |
| ③ | Add a Conditioning Combine node, then connect it to the Prompt Conditioning and Image Conditioning nodes. Note that Prompt Conditioning is the first input. |
| ④ | Then I filled in “running shoes” in the Text Prompt. |
| ⑤ | The generated image is mainly running shoes, but the style is still the Lego style of the input image. |
These are the timestep ranges I used for Prompt Conditioning and Image Conditioning. You can adjust them according to your needs.
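The timestep-range trick from the table can be sketched as follows: start and end are fractions of the denoising schedule, each conditioning is active only inside its own range, and Conditioning Combine merges the two. The names and helper below are illustrative, not ComfyUI's API:

```python
def active_conditionings(t: float) -> list[str]:
    """Which conditioning guides the sampler at normalized timestep t (0 = start)."""
    ranges = {
        "prompt_conditioning": (0.0, 0.1),  # text prompt guides the first 10% of steps
        "image_conditioning": (0.1, 1.0),   # image (style) guides the remaining 90%
    }
    return [name for name, (start, end) in ranges.items() if start <= t < end]

print(active_conditionings(0.05))  # ['prompt_conditioning']
print(active_conditionings(0.5))   # ['image_conditioning']
```

Widening the prompt range (a larger end for Prompt Conditioning and a matching start for Image Conditioning) shifts more of the generation toward the text prompt.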