
ControlNet + StableDiffusion Rendering Study (WIP)

Pilot

Stable Diffusion is a powerful AI tool that generates images from text prompts, but precise control over the final output can be hard to achieve. This is where ControlNet comes in. ControlNet lets users supply additional guiding inputs, such as sketches, depth maps, and material IDs, that shape how the model renders an image, making the result more customizable and predictable.
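
The samples on this page were produced in the Stable Diffusion WebUI (shown below), but the same ControlNet-guided workflow can also be scripted. The rough sketch below uses the Hugging Face diffusers library; the checkpoint names, file names, and prompt are illustrative assumptions, not the exact settings used for this study.

```python
# Minimal sketch: Stable Diffusion guided by a single ControlNet (line art).
# Checkpoint names and file names below are illustrative assumptions.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Load a line-art ControlNet and attach it to a Stable Diffusion 1.5 pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The guiding sketch: a line drawing of the scene (hypothetical file).
sketch = load_image("elevation_lineart.png")

image = pipe(
    "architectural elevation, housing block, soft overcast light",  # example prompt
    image=sketch,                        # ControlNet conditioning input
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,   # how strongly the sketch constrains the result
).images[0]
image.save("render.png")
```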

 

One of the most useful ControlNet features is Line Art Mode, which lets users input a sketch to define the image’s structure. This ensures that Stable Diffusion follows the original composition while still adding AI-generated detail. Another key input is the Normal Map, which encodes surface orientation, helping the model produce more realistic lighting and shading. Finally, Material ID (by Color) assigns a distinct color to each part of the image to indicate materials like glass, metal, or fabric, allowing for better material definition in the final output.
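
For reference, the sketch below shows one way these three conditioning inputs can be prepared outside the WebUI. It assumes the controlnet_aux annotator package; file names are placeholders, and in practice the WebUI's built-in preprocessors do the same job.

```python
# Sketch: preparing the three conditioning inputs described above.
# Uses the controlnet_aux annotator package (an assumption); file names are placeholders.
from controlnet_aux import LineartDetector, NormalBaeDetector
from diffusers.utils import load_image

base = load_image("elevation_photo.png")   # hypothetical reference image

# 1) Line art: extracts clean outlines that lock in the composition.
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")(base)
lineart.save("elevation_lineart.png")

# 2) Normal map: estimates surface orientation for lighting and shading cues.
normal_map = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")(base)
normal_map.save("elevation_normal.png")

# 3) Material ID (by color): no detector needed; it is simply a hand-painted image
#    with one flat color per material region (e.g. blue = glass, grey = metal),
#    exported from a 3D or drawing tool.
material_id = load_image("elevation_material_id.png")
```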

 

By combining these tools, users gain far more control over their AI-generated images. A sketch locks in the composition, a normal map improves depth and lighting, and material IDs define the different surfaces. For example, a designer creating a futuristic building can provide a line drawing for structure, a normal map for realistic shading, and color-coded material IDs to make windows look glassy and walls appear metallic. Stable Diffusion will then generate a detailed image while preserving these elements, as sketched in the script below.
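
As a rough illustration of that combined workflow, the script below stacks three ControlNets (line art, normal, and segmentation used for material colors) in a single diffusers pipeline. Model names, conditioning weights, and file names are assumptions; note also that the stock segmentation ControlNet expects its own color palette, so a custom material-ID coloring may need a matching model or a lower weight.

```python
# Sketch: combining line art, normal map, and color-coded material IDs
# by stacking three ControlNets in one diffusers pipeline.
# Model names, weights, and file names are illustrative assumptions.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_normalbae", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_seg", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

# Conditioning images (hypothetical files, e.g. produced as in the previous sketch).
lineart = load_image("building_lineart.png")
normal_map = load_image("building_normal.png")
material_id = load_image("building_material_id.png")

image = pipe(
    "futuristic building, glass curtain wall, metallic panels, dusk lighting",
    image=[lineart, normal_map, material_id],        # one image per ControlNet
    controlnet_conditioning_scale=[1.0, 0.6, 0.8],   # per-input strength
    num_inference_steps=30,
).images[0]
image.save("building_render.png")
```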

 

With ControlNet, AI-generated images become more predictable, customizable, and visually refined, making it a game-changer for artists, designers, and creators.


StableDiffusion WebUI


Sample 1

 

Elevation of the immigrant village housing project

Drag the arrow to compare before and after images


Sample 2

 

Interior of the post-pandemic housing living unit bedroom

Drag the arrow to compare before and after images


Sample 3

 

Exterior of the Six Nation Seedbank

Drag the arrow to compare before and after images


Sample 4

 

Open Space of the Six Nation Seedbank

Drag the arrow to compare before and after images


Sample 5

 

Aerial of the immigrant village housing project

Drag the arrow to compare before and after images

More samples to come...


Side note: By isolating elements in a scene with distinct color zones, the separation feature allows Stable Diffusion to interpret line art more accurately. When paired with ControlNet, this color-coded input helps the model better differentiate objects, apply targeted textures, and maintain spatial coherence—resulting in more precise and semantically consistent renderings.
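
A minimal sketch of how such a color-separation map could be assembled from per-object masks is shown below; the mask file names and color assignments are hypothetical.

```python
# Sketch: building a color-separation map from per-object masks, so that each
# element occupies one flat, unambiguous color zone. Mask file names and color
# assignments are hypothetical; all masks are assumed to share the same size.
import numpy as np
from PIL import Image

zones = {
    "glazing":  ("glazing_mask.png",  (0, 0, 255)),     # blue
    "walls":    ("walls_mask.png",    (128, 128, 128)), # grey
    "planting": ("planting_mask.png", (0, 200, 0)),     # green
}

separation = None
for name, (path, color) in zones.items():
    mask = np.array(Image.open(path).convert("L")) > 127      # binary object mask
    if separation is None:
        separation = np.zeros((*mask.shape, 3), dtype=np.uint8)
    separation[mask] = color                                   # paint the zone flat

Image.fromarray(separation).save("separation_map.png")
```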

To be continued...

Zikun An ©2019-2025