King of the Hill Stable Diffusion Style

I drove ControlNet with old Midjourney images, using the ControlNet depth and canny models in Stable Diffusion. Next, I took the outputs I liked into img2img with the Loopback Scaler script, with denoising set to 0.2.
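
Below is a rough sketch of that pipeline in diffusers. The original run was done in the AUTOMATIC1111 webui with the Loopback Scaler extension, so the model checkpoints, file names, step counts, and loop count here are assumptions, not the exact setup; only the overall structure (depth + canny ControlNet, then low-denoise img2img passes) mirrors the workflow described above.

```python
# Approximation of the workflow: multi-ControlNet (depth + canny) generation
# from an old Midjourney render, followed by repeated low-strength img2img.
import cv2
import numpy as np
import torch
from PIL import Image
from controlnet_aux import MidasDetector
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    StableDiffusionImg2ImgPipeline,
)

device = "cuda"

# Load both ControlNet models so depth and canny guide the same generation.
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to(device)

# Preprocess the old Midjourney render into the two conditioning maps.
source = Image.open("midjourney_hank.png").convert("RGB").resize((512, 512))
edges = cv2.Canny(np.array(source), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
depth_image = MidasDetector.from_pretrained("lllyasviel/Annotators")(source)

prompt = "hank hill, photorealistic portrait, 35mm photo"  # hypothetical, simplified prompt
image = pipe(
    prompt,
    image=[depth_image, canny_image],
    num_inference_steps=30,
).images[0]

# Loopback stage: feed the result back through img2img a few times at a low
# denoising strength (0.2) so each pass only refines detail.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
for _ in range(4):
    image = img2img(prompt, image=image, strength=0.2).images[0]

image.save("hank_final.png")
```
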

Prompt-wise, I described each character in detail in Midjourney. For Stable Diffusion with ControlNet, I used a shorter, simplified version of the prompt and added more explicit descriptions a little at a time.
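
As a hypothetical illustration of that difference (these are not the actual prompts from the run), the Midjourney prompt carried all the character detail up front, while the ControlNet prompt started minimal and picked up explicit details over successive passes:

```python
# Hypothetical example prompts, for illustration only.
midjourney_prompt = (
    "hank hill as a real person, middle-aged texan man, glasses, short brown hair, "
    "white polo shirt tucked into blue jeans, standing in a suburban front yard, "
    "golden hour lighting, 35mm film photo, highly detailed"
)

# Start simple for the ControlNet pass, then add explicit details a run at a time.
controlnet_prompts = [
    "hank hill, photorealistic portrait",
    "hank hill, photorealistic portrait, glasses, white polo shirt",
    "hank hill, photorealistic portrait, glasses, white polo shirt, suburban yard, 35mm photo",
]
```
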