
ControlNet’s Reference Only: Simplifying Image Generation with a Single Reference Image


Reference Only is a new function integrated into ControlNet: with only one reference image, you can control the style of the generated image, which simplifies the training workflow for small models such as LoRA.

Follow the style of the reference image

According to the latest information on the ControlNet GitHub page, Reference Only links directly to Stable Diffusion's feature layers, so any image can be used as a reference to control the style of the generated image.

The biggest feature of Reference Only is that it does not rely on any additional AI model: it operates directly on a single input reference photo, which makes it quite easy to use.

The author first shows the effects of Reference Only below. The subject of every generated image is Dr. Takemi from the game "Persona", but by switching reference images, you can control the style of the characters in the results.

▲ First, we borrow a photo taken by 生ごミカン as the reference, and the doctor in the generated photo has the same pronounced smoky makeup as the original.

▲ Then, switching to a reference photo with a distinctly different background style, the background of the generated photo changes accordingly.

▲ Next, let's use a picture from the smartphone app "Stones Gate Clock" as the reference; the characters' expressions and the overall tone of the output are also affected.

▲ Let's see what happens when "Sonic Boy" is used as the reference; here the effect is not very pronounced.

▲ How about Shizuka from Doraemon? Mmm, it works!

▲ Finally, try Feili from the "Magic Bubble" series. Although the style is somewhat similar to the picture above, you can still see the difference if you look closely.

Installing and using Reference Only

ControlNet added Reference Only in the version 1.1.153 update, so readers installing ControlNet from scratch will get this version automatically. If ControlNet is already installed, go to the Extensions page of the Stable Diffusion WebUI, click "Check for updates" under the Installed tab to check for plugin updates, then click "Apply and restart UI" to install the update and restart.
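If you want to verify the version requirement programmatically, the check amounts to a simple tuple comparison against the 1.1.153 minimum. The helper below is purely illustrative (it is not part of ControlNet or the WebUI):

```python
# Hypothetical helper (not part of ControlNet): check whether an installed
# ControlNet version string meets the 1.1.153 minimum for Reference Only.
def supports_reference_only(version: str, minimum: str = "1.1.153") -> bool:
    parse = lambda v: tuple(int(p) for p in v.lstrip("v").split("."))
    return parse(version) >= parse(minimum)

print(supports_reference_only("v1.1.153"))  # True
print(supports_reference_only("1.1.100"))   # False
```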

After restarting the Stable Diffusion WebUI, click the triangle on the right side of the ControlNet panel to expand it, click "Click to upload" and select the reference image you want to import, then tick the "Enable" box and select reference_only as the Preprocessor.

Then set the Control Mode: "Balanced" balances the weight of ControlNet and the prompt, "My prompt is more important" favors the prompt, and "ControlNet is more important" favors ControlNet. Image generation can then proceed as normal.

It should be noted that, in the author's testing experience, Reference Only often needs a higher weight for the effect to be noticeable. If the style control is weaker than expected, increase the Control Weight in ControlNet (the upper limit is 2) and select "ControlNet is more important" as the Control Mode, so that Reference Only exerts stronger control over the style of the image.
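The same settings can also be driven over the WebUI's HTTP API (available when the WebUI is launched with the --api flag). The sketch below builds a txt2img request with one Reference Only ControlNet unit; field names follow the sd-webui-controlnet API and may differ slightly between versions, so treat it as a starting point rather than a definitive recipe:

```python
def build_reference_only_payload(ref_image_b64: str, prompt: str) -> dict:
    """txt2img payload with a single Reference Only ControlNet unit."""
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,              # the "Enable" box in the UI
                    "module": "reference_only",   # the Preprocessor selected in the UI
                    "image": ref_image_b64,       # base64-encoded reference image
                    "weight": 2.0,                # Control Weight (upper limit is 2)
                    "control_mode": 2,            # 0 = Balanced, 1 = My prompt is more
                                                  # important, 2 = ControlNet is more important
                }]
            }
        },
    }

# Usage (assumes a local WebUI started with --api):
# import base64, requests
# with open("reference.png", "rb") as f:
#     b64 = base64.b64encode(f.read()).decode()
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
#               json=build_reference_only_payload(b64, "Dr. Takemi, portrait"))
```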

▲ To update ControlNet, go to the Stable Diffusion WebUI Extensions page, click "Check for updates" under the Installed tab to check for plugin updates, then click "Apply and restart UI" to install the update and restart.

▲ When using Reference Only, click the triangle on the right side of the ControlNet panel to expand it, then click "Click to upload" to select the reference picture you want to import.

▲ Then tick the "Enable" box and select reference_only as the Preprocessor. After that, you can adjust the Control Weight and Control Mode as needed.

▲ Once everything is set, operate in the usual way and click the "Generate" button to start generating images.

Reference Only provides a convenient way to control the style of generated images through a single image, without having to train small models such as LoRA yourself, which simplifies the workflow for generating images in a specific style.

