carefree0910 / carefree-creator

AI magics meet Infinite draw board.

Home Page: https://creator.nolibox.com/guest


How to use anime model with API

XplosiON1232 opened this issue

Hello!

I'm glad you got the Colab to work again with sd.base. However, I noticed that the results when using, for example:

{
    "text": "anime girl, high quality",
    "is_anime": true
}

are worse than using the same prompt on https://creator.nolibox.com/guest with the Anime model.

So I guess it's not the same model? How can I use the Anime model with the API? Maybe this is too much for the Colab again and we will run out of RAM, or do you think it's possible? Thank you!

I tried using --focus sd.anime as described in the docs (wiki) here

However, it gave this error:

Error: Invalid value for '--focus': 'sd.anime' is not one of 'all', 'cv', 'sd', 'sd.base', 'sd.inpainting', 'sync', 'control', 'pipeline'.

By the way, I also want to be able to upscale the anime pictures I generate (using the API). How can I do this? When upscaling on the nolibox website, there are options for normal upscaling and anime upscaling, and I want to use anime upscaling through the API. Thank you! (And sorry for spamming your inbox!!)

Haha, never mind! Let me answer your questions one by one.

  1. In fact, what you've done is correct: simply adding "is_anime": true does the trick. The reason the creations from the website are better is that we use <easynegative> as the default negative_prompt there. You can inspect this behavior by selecting the generated image and then expanding the AI Parameters tab at the bottom right corner, as shown below:
[screenshot: the AI Parameters tab of a selected image, showing <easynegative> as the negative prompt]

In order to re-create the same behavior, here's an example:

{
    "text": "anime girl, high quality",
    "negative_prompt": "<easynegative>",
    "seed": 1001,
    "guidance_scale": 7.5,
    "sampler": "k_euler",
    "is_anime": true,
    "custom_embeddings": {
        "<easynegative>": x.tolist()
    }
}

Here, x is the textual inversion embedding vector of <easynegative>. For your convenience, you can directly use the 2-dimensional list contained in this JSON file:

easynegative.json

If everything works fine, the image generated by the API with these parameters should be exactly the same as the selected image shown above!
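If you are calling the API from Python, the whole payload can be assembled like this. This is a minimal sketch: the embedding values below are dummy placeholders standing in for the contents of easynegative.json, and the server URL in the comment is an assumption to be replaced with your own Colab/ngrok address.

```python
import json

# Placeholder embedding: in practice, load the 2-D list from the
# easynegative.json file linked above, e.g.:
#   with open("easynegative.json") as f:
#       easynegative = json.load(f)
easynegative = [[0.0] * 768]  # dummy (num_tokens, embedding_dim) values

payload = {
    "text": "anime girl, high quality",
    "negative_prompt": "<easynegative>",
    "seed": 1001,
    "guidance_scale": 7.5,
    "sampler": "k_euler",
    "is_anime": True,
    "custom_embeddings": {"<easynegative>": easynegative},
}

# The JSON request body; it will be long because it inlines the embedding.
body = json.dumps(payload)

# To send it (hypothetical URL -- point this at your own server):
#   import requests
#   response = requests.post("http://localhost:8123/txt2img/sd", json=payload)
```

In other words, "x.tolist()" in the snippet above is just a stand-in for the embedding list itself, so pasting the JSON file's contents in its place is exactly right.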

  2. Oops, it seems that the wiki is outdated; please ignore it and use the approach I've described above!

  3. The upscaling endpoint is available in the --focus sync mode, and unfortunately, the Colab RAM cannot afford to load the models from both the sync mode and the sd.base mode. 😣

You can work around this by launching two Colab notebooks at the same time: one for the sync mode and the other for the sd.base mode. Then you can use the /img2img/sr endpoint from the sync mode to upscale the images generated by the /txt2img/sd endpoint from the sd.base mode.
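Concretely, the two-notebook workflow could be wired up like this. This is a sketch under stated assumptions: both URLs are placeholders for your own notebook addresses, and the request/response fields passed to /img2img/sr are assumptions that should be checked against your server's actual schema.

```python
# Each Colab notebook exposes its own public URL (placeholders below).
SD_BASE_URL = "https://<sd-base-notebook>.example"  # launched with --focus sd.base
SYNC_URL = "https://<sync-notebook>.example"        # launched with --focus sync

TXT2IMG_ENDPOINT = f"{SD_BASE_URL}/txt2img/sd"
SR_ENDPOINT = f"{SYNC_URL}/img2img/sr"


def generate_then_upscale(payload: dict) -> dict:
    """Generate on the sd.base server, then upscale on the sync server.

    The upscale payload fields below are assumptions -- inspect your own
    server's docs to confirm the actual schema before relying on this.
    """
    import requests  # pip install requests

    # 1) Generate the image on the sd.base notebook.
    generated = requests.post(TXT2IMG_ENDPOINT, json=payload)
    generated.raise_for_status()

    # 2) Forward the result to the sync notebook for (anime) upscaling.
    upscale_request = dict(generated.json())  # assumed response shape
    upscale_request["is_anime"] = payload.get("is_anime", False)
    upscaled = requests.post(SR_ENDPOINT, json=upscale_request)
    upscaled.raise_for_status()
    return upscaled.json()
```

The design point is simply that each notebook only loads one model family, so neither exceeds the Colab RAM limit on its own.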

Hey! I haven't tried it yet, but I'm not quite sure what I should do with the easynegative.json file. Should I download it and put it in the Colab somehow? Rename it to x, or how does it know what x is? I'm sorry if this is a dumb question, and thank you very much :)

Oh, I made it work by just copying the entire contents of the JSON file and replacing "x.tolist()" with that. I guess this was what you meant? Well, it works and I'm happy. Thank you very much. (The body for posting is pretty long tho haha, but it works great!)

Haha yes, this is the correct way to do it!

Yeah, the body is quite long, but since it enables you to use textual inversion on the fly, I think it's worth it! 😆