
How I Used Stable Diffusion and Dreambooth to Create a Painted Portrait of My Dog

2023-04-16 13:29:47

Introduction

When I first started playing with Stable Diffusion text-to-image technology, in August 2022, my immediate reaction was, "ZOMG! I have to make art prints for my art wall!". Only to then immediately face-plant, because vanilla Stable Diffusion is quite difficult to tame. If you're trying to reproduce a specific subject, you have to use additional tools and techniques, none of which existed at the time.

In the months that followed, a number of new community projects emerged that aim to give the AI artist full creative control over the visual outputs they're trying to bring to life. One such technique is LoRA (Low-Rank Adaptation). I explored using LoRA in my posts about Making Self Portraits with Stable Diffusion and Blending Custom Artist Styles.

An even more popular technique is Dreambooth, and that's what we'll focus on for the remainder of this blog post. I'll walk through my entire workflow/process for bringing Stable Diffusion output to life as a high-quality framed art print. We'll touch on making art with Dreambooth, Stable Diffusion, Outpainting, Inpainting, Upscaling, preparing for print with Photoshop, and finally printing on fine-art paper with an Epson XP-15000 printer.

So without further ado, let's dive in!

What is Dreambooth?

Dreambooth is a fine-tuning technique for text-to-image diffusion AI models. Basically, that just means you can "fine-tune" the already capable open-source Stable Diffusion model to produce reliable and consistent images of subjects and styles that you define.

Diagram of how Dreambooth works from a high level.

If you're into this sort of thing, I highly recommend reading through the Dreambooth paper, which you can find here: https://arxiv.org/abs/2208.12242. While there are some technical sections, the authors also include many image examples, which helps you build an intuition about what is possible. I found the Dreambooth paper to be massively inspirational, and it actually led me to creating this art project and writing this blog post; perhaps largely because there are a ton of dog photo examples. I'm including a few images from their paper below.

It’s like a photo booth, but once the subject is captured, it can be synthesized wherever your dreams take you… - https://dreambooth.github.io/
5 dog photos in, endless generated images out. The dataset for these lives here: https://github.com/google/dreambooth/tree/main/dataset/dog5

How to Train Your Own Dreambooth Model with Replicate

For this project and post, we're going to train a Dreambooth model on photos of my best friend, Queso.

Queso is a very photogenic and cuddly English Cream Golden Retriever, and he's the goodest boy that's ever existed, which makes him the perfect subject for training a custom Dreambooth model!

Quesito Bonito, you are my favrito.

Building an Image Training Set

The first thing you need when training a custom Dreambooth model is a "high quality" image training set. I put high quality in quotes because I've seen pretty good results with less-than-ideal photos in the past. Still, the common practice is to select a number of photos of your subject in a variety of poses, environments, and lighting conditions. The more variety (in poses, environments, and lighting) you have of your subject, the more generalized and versatile your fine-tuned Dreambooth model will be.

In the paper, they use 3-5 photos for training Dreambooth models, but in the community it's common to use more. So in my case, I gathered 40 photos of Queso in various poses, lighting, and environments.

I chose to cut out the backgrounds of my photos, since some of them were taken in very similar environments, and in early tests I found that those background elements started to show up in my generated images. This is entirely optional, and I don't recommend it unless you run into issues. I was able to do it fairly quickly in Photoshop with the Object Selection tool: quickly selecting Queso, inverting the selection, and deleting the background.
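If you'd rather script this step than click through Photoshop, a rough equivalent is possible with the rembg Python package. This is a hedged sketch, not what I actually used, and the folder names are illustrative:

from pathlib import Path

from PIL import Image
from rembg import remove

src_dir = Path("queso-photos")
out_dir = Path("queso-photos-transparent")
out_dir.mkdir(exist_ok=True)

for path in src_dir.glob("*.jpg"):
    img = Image.open(path)
    cutout = remove(img)  # returns an RGBA image with the background removed
    cutout.save(out_dir / f"{path.stem}.png")  # PNG keeps the transparency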

Once I had all my photos, I created a .zip file and uploaded it to S3, where I could reference it by URL. This is important, because we'll pass this zip file URL into the Dreambooth training job in the next step.
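If you want to script the packaging step too, something like the following works. This is a sketch assuming boto3 and a placeholder bucket name; the object needs to be publicly readable (or served via a presigned URL) so Replicate can fetch it:

import shutil

import boto3

# Zip up the folder of training photos.
archive = shutil.make_archive("queso-2023-transparent-all", "zip", "queso-photos-transparent")

# Upload to S3 and print the URL to pass as instance_data.
s3 = boto3.client("s3")
bucket, key = "your-bucket-name", "queso-2023-transparent-all.zip"
s3.upload_file(archive, bucket, key, ExtraArgs={"ACL": "public-read"})
print(f"https://{bucket}.s3.amazonaws.com/{key}")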

Below is an image grid of my Queso training set. Isn't he the best boy you've ever seen?

40 photos of Queso, with the backgrounds removed.

Running the Dreambooth Training on Replicate

For our Dreambooth training adventures, I chose to use Replicate (as I did in my last few posts). Replicate is nice for projects like this because it minimizes the pain of fumbling with cloud GPUs and manually getting everything installed and set up. You just send an HTTP request, without having to think about GPUs or remembering to terminate instances when you're done. Replicate has a semi-documented Dreambooth training API, which is described in this blog post.

If you're adventurous and just want to dive into the deep end, I'd suggest trying out the fast-stable-diffusion Google Colab notebooks by @TheLastBen: https://github.com/TheLastBen/fast-stable-diffusion. They've got a notebook for training Dreambooth models and quickly spinning up the Automatic1111 Stable Diffusion web interface.

Following the Replicate Dreambooth documentation blog post, I made a quick one-off bash script with some hard-coded inputs.

Below I've included my queso-1.5.sh bash script, which is just copy-pasted from the Replicate blog. Below that, I've included a breakdown of the various parameters I'm using. If you're interested in more advanced training parameters, you can find the detailed per-parameter documentation here: https://replicate.com/replicate/dreambooth/api

This script takes ~30-40 minutes to run from start to finish, so you might want to take a break and go for a walk with a furry friend. Unfortunately, more training steps mean more time training, and 4000 steps is a lot.

You'll notice there's a model field in the JSON request body. Once the training job is complete, a private Replicate model will be created at a URL like https://replicate.com/jakedahn/queso-1-5 (replace jakedahn with your own username; I left mine private, so it will return a 404). Once this model is created, you'll be able to generate images via the Replicate web UI, or via the Replicate API.

#!/bin/bash

curl -X POST \
    -H "Authorization: Token $REPLICATE_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{
            "input": {
                "instance_prompt": "a photo of a qdg dog",
                "class_prompt": "photo of a golden retriever dog, 4k hd, high detail photo, sharp lens, realistic, highly detailed, fur",
                "instance_data": "https://shruggyface.s3-us-west-2.amazonaws.com/queso-2023-transparent-all.zip",
                "max_train_steps": 4000
            },
            "model": "jakedahn/queso-1-5",
            "trainer_version": "cd3f925f7ab21afaef7d45224790eedbb837eeac40d22e8fefe015489ab644aa",
            "webhook_completed": "https://abc123.m.pipedream.net/queso-1-5"
        }' \
    https://dreambooth-api-experimental.replicate.com/v1/trainings

Then I ran it like this:

REPLICATE_API_TOKEN=your-token-here ./queso-1-5.sh

Breaking down the inputs

The inputs in this script define how the Dreambooth model is trained, and they're important.

  • instance_prompt: The instance prompt is kind of like an example prompt that you'd use if you wanted to get an image of your model's subject. The suggested format is "a [identifier] [class noun]". Interestingly, you want the identifier to be a unique "token", meaning it should be 3-4 letters and shouldn't be a real word. I've heard some folks have specific tokens that work better for them; I chose qdg.
  • class_prompt: When training Dreambooth models, you have to provide additional "regularization images," which help prevent extreme overfitting. Without these images, every generated image would just try to recreate the exact images in your training set. By giving the training set additional "similar" images (in our case, more photos of golden retrievers), the output model will be more versatile and give better results in more scenarios. By default, Replicate will generate 50 images using your class prompt; I suggest experimenting with more.
  • instance_data: This is a zip file containing all of your training images. Replicate has an API for uploading this file to their servers, but it's kind of complicated/involved, so I just self-hosted my file on S3, where I can easily reuse it for future projects.
  • max_train_steps: This is the number of training steps. Higher is better, sort of. I've heard a lot of conflicting things about this value, but the most consistent guidance seems to be "100 steps for every training image". Since I have 40 photos, I used 4000 steps. In earlier training runs, I was getting great results with 40 photos and 3000 steps, so this is something you'll want to experiment with on your own.
  • trainer_version: The trainer version is important! There are a handful of options that you may want to experiment with.
    • If you want to use Stable Diffusion v1.5, use cd3f925f7ab21afaef7d45224790eedbb837eeac40d22e8fefe015489ab644aa
    • If you want to use Stable Diffusion v2.1, use d5e058608f43886b9620a8fbb1501853b8cbae4f45c857a014011c86ee614ffb
  • webhook_completed: Training a Dreambooth model takes a while, and getting a notification when it's complete is nice. I use a requestbin from https://pipedream.com for this webhook URL, which provides a simple UI for exploring the data that's sent to the webhook endpoint:
Screenshot of the Pipedream.com requestbin
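If you'd rather receive the webhook yourself instead of using a requestbin, a tiny self-hosted receiver is enough. This is a sketch assuming Flask (I just used Pipedream); the payload fields are whatever Replicate actually sends, so inspect request.json rather than relying on any particular field name:

from flask import Flask, request

app = Flask(__name__)

@app.route("/queso-1-5", methods=["POST"])
def training_complete():
    payload = request.json or {}
    # Log whatever Replicate sent; no specific fields are assumed here.
    print("Dreambooth training webhook received:", payload)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)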

Generating Images

Great! If you've been following along so far, you should have your very own custom Dreambooth model! Next is the fun part: generating a ridiculous number of images of your furry friend.

First, we need to write a handful of prompts, and then we can generate hundreds or thousands of images.

Seeing as I'm quite possibly the world's worst prompt engineer, I went the easy route and took an hour-long stroll down the infinite scroll of Lexica. Lexica is a huge collection of AI-generated imagery, all shared with their prompts. After a while, I picked ten images I thought were cool from the search term dog portrait, and copied their prompts.

I'm quite possibly the world's worst prompt engineer.

I collected the following prompts and replaced the dog breeds with my token qdg:

PROMPTS = [
    "Adorably cute qdg dog portrait, artstation winner by Victo Ngai, Kilian Eng and by Jake Parker, vibrant colors, winning-award masterpiece, fantastically gaudy, aesthetic octane render, 8K HD Resolution",
    "Incredibly cute golden retriever qdg dog portrait, artstation winner by Victo Ngai, Kilian Eng and by Jake Parker, vibrant colors, winning-award masterpiece, fantastically gaudy, aesthetic octane render, 8K HD Resolution",
    "a high quality painting of a very cute golden retriever qdg dog puppy, friendly, curious expression. painting by artgerm and greg rutkowski and alphonse mucha ",
    "magnificent qdg dog portrait masterpiece work of art. oil on canvas. Digitally painted. Realistic. 3D. 8k. UHD.",
    "intricate five star qdg dog facial portrait by casey weldon, oil on canvas, hdr, high detail, photo realistic, hyperrealism, matte finish, high contrast, 3 d depth, centered, masterpiece, vivid and vibrant colors, enhanced light effect, enhanced eye detail, artstationhd ",
    "a portrait of a qdg dog in a scenic environment by mary beale and rembrandt, royal, noble, baroque art, trending on artstation ",
    "a painted portrait of a qdg dog with brown fur, no white fur, wearing a sea captain's uniform and hat, sea in background, oil painting by thomas gainsborough, elegant, highly detailed, anthro, anthropomorphic dog, epic fantasy art, trending on artstation, photorealistic, photoshop, behance winner ",
    "qdg dog guarding her home, dramatic sunset lighting, mat painting, highly detailed, ",
    "qdg dog, realistic shaded lighting poster by ilya kuvshinov katsuhiro otomo, magali villeneuve, artgerm, jeremy lipkin and michael garmash and rob rey ",
    "a painting of a qdg dog dog, greg rutkowski, cinematic lighting, hyper realistic painting",
]

Then I wrote a super quick/dirty Python script that iterates through each of these prompts ten times, generating a total of 100 images. I did this many times… I never get sick of AI-generated dog art.

import os
import random
import urllib.request

import replicate

USERNAME = 'jakedahn'
MODEL_NAME = 'queso-1.5'
MODEL_SLUG = f'{USERNAME}/{MODEL_NAME}'

NEGATIVE_PROMPT = "cartoon, blurry, deformed, watermark, dark lighting, image caption, caption, text, cropped, low quality, low resolution, malformed, messy, blurry, watermark"

# Grab the latest version of the fine-tuned Dreambooth model.
model = replicate.models.get(MODEL_SLUG)
version = model.versions.list()[0]


def download_prompt(prompt, negative_prompt=NEGATIVE_PROMPT, num_outputs=1):
    print("=====================================================================")
    print("prompt:", prompt)
    print("negative_prompt:", negative_prompt)
    print("num_outputs:", num_outputs)
    print("=====================================================================")
    image_urls = version.predict(
        prompt=prompt,
        width=512,
        height=512,
        negative_prompt=negative_prompt,
        num_outputs=num_outputs,
    )
    for url in image_urls:
        # Build a filename from a short chunk of the image URL plus a slugified prompt.
        img_id = url.split("/")[4][:6]
        slug = prompt.replace(" ", "-").replace(",", "").replace(".", "-")
        out_file = f"data/{MODEL_NAME}/{img_id}--{slug}"[:200]
        out_file = out_file + ".jpg"

        # Make sure the output directory exists before downloading into it.
        os.makedirs(os.path.dirname(out_file), exist_ok=True)

        print("Downloading to", out_file)
        urllib.request.urlretrieve(url, out_file)
    print("=====================================================================")


PROMPTS = [
    "Adorably cute qdg dog portrait, artstation winner by Victo Ngai, Kilian Eng and by Jake Parker, vibrant colors, winning-award masterpiece, fantastically gaudy, aesthetic octane render, 8K HD Resolution",
    "Incredibly cute golden retriever qdg dog portrait, artstation winner by Victo Ngai, Kilian Eng and by Jake Parker, vibrant colors, winning-award masterpiece, fantastically gaudy, aesthetic octane render, 8K HD Resolution",
    "a high quality painting of a very cute golden retriever qdg dog puppy, friendly, curious expression. painting by artgerm and greg rutkowski and alphonse mucha ",
    "magnificent qdg dog portrait masterpiece work of art. oil on canvas. Digitally painted. Realistic. 3D. 8k. UHD.",
    "intricate five star qdg dog facial portrait by casey weldon, oil on canvas, hdr, high detail, photo realistic, hyperrealism, matte finish, high contrast, 3 d depth, centered, masterpiece, vivid and vibrant colors, enhanced light effect, enhanced eye detail, artstationhd ",
    "a portrait of a qdg dog in a scenic environment by mary beale and rembrandt, royal, noble, baroque art, trending on artstation ",
    "a painted portrait of a qdg dog with brown fur, no white fur, wearing a sea captain's uniform and hat, sea in background, oil painting by thomas gainsborough, elegant, highly detailed, anthro, anthropomorphic dog, epic fantasy art, trending on artstation, photorealistic, photoshop, behance winner ",
    "qdg dog guarding her home, dramatic sunset lighting, mat painting, highly detailed, ",
    "qdg dog, realistic shaded lighting poster by ilya kuvshinov katsuhiro otomo, magali villeneuve, artgerm, jeremy lipkin and michael garmash and rob rey ",
    "a painting of a qdg dog dog, greg rutkowski, cinematic lighting, hyper realistic painting",
]


random.shuffle(PROMPTS)


# 10 passes over the 10 prompts = 100 images per run.
for i in range(10):
    for prompt in PROMPTS:
        download_prompt(prompt)

After running this script many times, I generated at least 1000 images. I'd say ~20% were nonsense, and about 80% were cute, funny, or accurate. These are some of my favorites:

Cherry-picked favorite images

Finding THE ONE

Eventually, after generating hundreds of synthetic Quesos, I landed on this one. I love the color palette, which is vibrant and contrasty. I love the texture and all of the fine lines and details. It also captures Queso's eyes quite well, which is ultimately what sold me. Every time I look at it, I think, "dang, that's Queso!"

This. This is the one.

Now, the end goal for this art project was to end up with a high-quality art print that I could frame and put on my art wall. While cool, this image wouldn't make for a great art print; the awkward crop at the top and bottom limits its potential. Also, if I were to print 512x512px at 300dpi, this image would only be about 1.7x1.7" on paper. I'm targeting 11x17".
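The print-size arithmetic here is just pixels divided by dpi; as a quick sanity check:

# pixels / dpi = inches on paper
dpi = 300
print(512 / dpi, "x", 512 / dpi, "inches")       # ~1.71 x 1.71" for the raw 512px image
print(11 * dpi, "x", 17 * dpi, "pixels needed")  # 3300 x 5100 px for an 11x17" print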

So the next step in this project was to fix the awkward cropping. Boy, it's a real bummer that we can't just add new pixels to the top and bottom.

Just kidding! We can! That's where Outpainting comes in.

Outpainting

Outpainting is a technique where you generate new pixels that seamlessly extend an image's existing bounds. This means we can simply generate new pixels at the top and bottom of our image to get a complete artistic rendition of Queso. As far as I understand, and I may be wrong, outpainting for diffusion models was first implemented by OpenAI.

They have a good example in their announcement blog post, which I've included here.

Original: Girl with a Pearl Earring by Johannes Vermeer. Outpainting: August Kamp × DALL·E, taken from https://openai.com/blog/dall-e-introducing-outpainting
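Before getting into the tools I tried, it's worth noting that you can also script outpainting: the general recipe is to pad the canvas, mask the newly added area, and run an inpainting model over it. Below is a minimal sketch using the Hugging Face diffusers library; it isn't what I used for this print, and the model ID, prompt, and file names are illustrative.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

src = Image.open("queso-512.png").convert("RGB")

# Pad the canvas from 512x512 to 512x768 by adding 128px above and below.
canvas = Image.new("RGB", (512, 768), (0, 0, 0))
canvas.paste(src, (0, 128))

# In this pipeline, white mask pixels are regenerated and black pixels are kept.
mask = Image.new("L", (512, 768), 255)
mask.paste(Image.new("L", (512, 512), 0), (0, 128))

out = pipe(
    prompt="a painted portrait of a golden retriever dog, oil on canvas, highly detailed",
    image=canvas,
    mask_image=mask,
    width=512,
    height=768,
).images[0]
out.save("queso-outpainted.png")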

I haven't used DALL·E 2 much, so I wanted to give it a try and see how it did with outpainting. In my opinion, the OpenAI outpainting user experience and interface is the best I've tried, but I wasn't a huge fan of the generated pixel results. Here I've got a little video snippet where I added pixels to the top of my image, but all of the results were a bit too cartoony for my liking; they also made it look like Queso was wearing a tiara headband.

❌ Strike 1

Then I tried using the Automatic1111 Stable Diffusion WebUI from the notebooks I mentioned above: https://github.com/TheLastBen/fast-stable-diffusion. The Automatic1111 UI is the most full-featured and extensible UI in the community, so I figured outpainting would Just Work™️. I was wrong. It seemed to take the top-most and bottom-most rows of pixels and stretch them out, extending the image from 512px tall to 1344px tall.

❌ Strike 2!

Screenshot of broken outpainting functionality in the AUTOMATIC1111 UI.

Finally, I tried using the Draw Things Mac app. I actually really like the Draw Things app. It does much of what Automatic1111 does, but it has a nicer UI and runs locally, for free, on an M1/M2 MacBook Pro. However, I couldn't get the outpainting UI to actually work. So I ultimately settled on using img2img to go from my 512x512px image to a 768x1152px image.

You may notice that the starting 512x512px image here is slightly different from the ones above. That's because I got excited and started playing with inpainting (which I'll cover in more detail next) before expanding the image. Don't worry 'bout it!

Screenshot of DrawThings.app img2img resize.
DrawThings.app img2img resize results. 768x1152px.

This worked pretty well! I was kind of surprised, because I assumed img2img would start generating weirdness in the unfilled space, but it did the right thing here.

You'll notice that the hair detail on the forehead and the circle of the neck medallion are a bit wobbly and lacking detail. I'm also not a huge fan of how the bottom looks like flowers. How on earth are we going to change that?

Inpainting, of course!

Inpainting

Inpainting is a technique where you mask out (paint over) a specific selection of your image and replace it with newly generated pixels. It's similar and related to outpainting, but it's used for fixing issues and adding details.

The inpainting step is where I spent most of my time on this project: slowly and iteratively selecting small chunks, generating new imagery for those chunks, and inching toward something better. This is the life of an artist, materializing a vision from an idea. When I paint in real life (IRL), I follow this same practice with acrylic paints on canvas, slowly bringing sections of the painting to life.


I tried the Draw Things app again for inpainting, but it wasn't working correctly. So I returned to using the Automatic1111 UI for inpainting, which worked very well.

Below is a screenshot of what the inpainting process looks like. On the left, you paint over all of the places you want to modify or change, and then you generate many new iterations at once. I love doing this because it lets you test tens of directions in a few seconds, instead of painstakingly testing tens of directions over the course of hours.

Example of inpainting in the AUTOMATIC1111 UI. The left shows the options used; the right shows the result.
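I did this step entirely in the AUTOMATIC1111 UI, but the same "generate many candidates at once" idea can be scripted. Here's a hedged sketch with the diffusers library; the model ID, prompt, mask, and file names are all illustrative.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("queso-768x1152.png").convert("RGB")
mask = Image.open("ear-mask.png").convert("L")  # white = repaint, black = keep

# One call, several candidates; scroll through the outputs and keep the best one.
candidates = pipe(
    prompt="detailed golden retriever fur, fine brush strokes, oil painting",
    image=image,
    mask_image=mask,
    width=768,
    height=1152,
    num_images_per_prompt=4,  # bump this up if your GPU has the VRAM for it
).images

for i, img in enumerate(candidates):
    img.save(f"inpaint-variant-{i}.png")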

I recorded a little video of me scrolling back and forth through inpainting iterations for Queso's ears. I can't help but feel like this "generate 20 things, pick 1" flow is the new foundational workflow for artists in this AI-capable world.

Final Result

After a few hours of playing with inpainting, I got to a place I was happy to call finished.

I made a neat video showing the before/after visualization of this AI-based post-processing journey. I love how the cartoony pieces of the original image were replaced with more fur-like elements, while keeping a consistent coloring scheme. I love it!

But this image is only 640x1280px; it needs to be much larger. EMBIGGEN!

That's where Upscaling comes in.

Upscaling

Upscaling is just a fancy way to "blow up" an image: taking a small image and making it much larger. This is super useful for high-quality prints, where you print at a 300dpi resolution; that's a lot of pixels! Sometimes it feels like I'm clicking the "ENHANCE" button.

I like to use the Real-ESRGAN 4x+ upscaling model. In the screenshot below, you can see I 6x'd the image. To do this, you'll need a GPU with a lot of VRAM; I ran this in Google Colab on an Nvidia A100 with 40GB of VRAM. When I'm not working in a Colab notebook, I'll usually use this Replicate model: https://replicate.com/nightmareai/real-esrgan
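For the Replicate route, the call looks roughly like the sketch below, reusing the same client style as the generation script above. The scale and face_enhance input names are my reading of the nightmareai/real-esrgan model's inputs, so treat them as assumptions and double-check the model page; the file names are illustrative.

import urllib.request

import replicate

model = replicate.models.get("nightmareai/real-esrgan")
version = model.versions.list()[0]

# Upscale the 640x1280 inpainted portrait.
output = version.predict(
    image=open("queso-inpainted-640x1280.png", "rb"),
    scale=4,
    face_enhance=False,
)

# The model returns a URL to the upscaled image (or a single-item list).
if isinstance(output, list):
    output = output[0]
urllib.request.urlretrieve(output, "queso-upscaled.png")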

Example of upscaling. The left shows the settings I used; the right shows the result.

I tried a handful of settings, but the settings in the image above are what I ended up with. Interestingly, not all settings are created equal! In the following image, I show three upscaled images side by side. Each of them has varying degrees of compression and loss of detail, which is especially noticeable around the eyes, nose, and fur.

Side-by-side comparison of 3 upscaler results, with varied settings.

After upscaling, the Queso portrait was 2888x3835px, which is 9.63x12.78" at 300dpi. However, I'm targeting a visible space of 11x17" and printing on 13x19" paper.

So the final step is to prepare everything for print.

Preparing for Print

In the images above, you may have noticed weird compression artifacts and textures in the green gradient background. I wanted to simply delete all of that and place Queso on a solid gradient.

First, I used the Object Selection tool to quickly select Queso, then I inverted the selection and deleted the background.

Using the `Object Selection` tool to remove the messy green background behind Queso.

Then I could resize and position Queso until he was perfectly placed for the framing I had in mind.
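This is another step you could script if you don't have Photoshop handy. Here's a hedged Pillow sketch of the "place the upscaled portrait on a 13x19" canvas at 300dpi" idea; the file names and background color are illustrative.

from PIL import Image

DPI = 300
# 13x19" at 300dpi = 3900x5700px
canvas = Image.new("RGB", (13 * DPI, 19 * DPI), (20, 60, 35))
queso = Image.open("queso-upscaled-2888x3835.png").convert("RGBA")

# Center horizontally; nudge vertically to taste for the 11x17" visible window.
x = (canvas.width - queso.width) // 2
y = (canvas.height - queso.height) // 2
canvas.paste(queso, (x, y), queso)

canvas.save("queso-print-13x19.png", dpi=(DPI, DPI))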

I sent this final image to the printer, but I've scaled it down to ~50% size for this blog post. At full resolution, it's 13x19" @ 300dpi, which is 3900x5700px.

Final Image, which I printed.

Printing

To print, I used an Epson XP-15000, the best home printer I've ever used. It's ~$400 on Amazon, and the ink is expensive, but the results are professional grade.

Another critical aspect of printing a high-quality art print that's meant to be framed is the paper. Epson makes really nice Velvet Fine Art Paper, which the printer is calibrated to print on. The results are gorgeous.

Print-in-progress interior view of the Epson XP-15000, making the Queso magic happen.
Nearly complete print-in-progress. Can you feel the excitement?

Final Thoughts

This project is something I've wanted to do since the day I first toyed with Stable Diffusion. The pace at which the community is developing new tools and techniques is astounding.

I still feel like there is a lot of missing creative control when making art with Stable Diffusion, but the future is bright! Stable Diffusion hasn't even been out for a full year yet, and it is becoming increasingly obvious to me, through projects like this, that it's going to be a game-changer for artists of all backgrounds.

The more I play with it, the more excited I get about it. I can't wait to see what the future holds for Stable Diffusion and the world of text-to-image generative art.

Below I've included some photos of my art wall, along with my bestest pal Queso. The photos don't do it justice; the actual detail in print is better than I would have guessed possible.

Close-up shot of the Queso print in a 11x17” Ikea frame.
Wider view of where Queso fits in with the rest of the art wall.
