- USING STABLE DIFFUSION
- ✧ AI components installation guide and links ✧ https://www.techspot.com/guides/2590-install-stable-diffusion/
- Download link for anything-v3.0: https://huggingface.co/Linaqruf/anything-v3.0/tree/main
- Model for outpainting/inpainting: https://huggingface.co/runwayml/stable-diffusion-inpainting - creates a background around a small picture, or fills in regions inside it
- Create new pictures based on your character: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion - train your own mini model (embedding) that you can reference in a prompt
- ✧ Understanding AI settings ✧
- General settings: https://youtu.be/Z3IHmdqUar0 what each setting does (for txt2img/img2img pages)
- Sampling methods in the UI: https://youtu.be/Oq_YUIBFewg - what the differences are (DDIM/Euler etc.)
- ✧ Prompts/Tags for image creation ✧
- Tag Groups Wiki | Danbooru: https://danbooru.donmai.us/wiki_pages/tag_groups - also useful: click any image or search by tag to see how tags are used
- Tag Effects | Pastebin.com: https://pastebin.com/GurXf9a4 - explains some prompts/tags and how heavily they can change what the AI generates
- NovelAI Anime Girl Prompt Guide: https://lunarmimi.net/freebies/novelai-anime-girl-prompt-guide also includes poses
- More prompt examples: https://youtu.be/lFI8JQvPfu8
- VAE: https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors - helps make pictures more vibrant and less washed-out, and also fixes weird spot issues
- Note on prompts: the AI prompt cannot take unlimited arguments. You might think so, since there is no character limit, but arguments past a certain length are simply ignored.
- The prompt can contain up to 75 tokens. You can calculate how many tokens your chain of arguments uses on the following website: https://beta.openai.com/tokenizer
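As a rough sanity check before pasting into the tokenizer site, a word/punctuation split gives a ballpark figure (this is only an approximation I'm sketching here — the real tokenizer uses sub-word BPE pieces, so the exact count can differ):

```python
import re

def rough_token_count(prompt: str) -> int:
    # Rough estimate: count each word, digit, and punctuation mark
    # as one token. The real CLIP tokenizer splits words into sub-word
    # pieces, so use the tokenizer website above for an exact count.
    pieces = re.findall(r"[A-Za-z]+|\d|[^\sA-Za-z\d]", prompt)
    return len(pieces)

prompt = "1girl, solo, long hair, looking at viewer, masterpiece, best quality"
count = rough_token_count(prompt)
print(count, "<= 75:", count <= 75)
```

If the estimate is anywhere near 75, check the exact count — anything past the limit is silently dropped.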
- In case you don't have a good graphics card, comment by @Azu: if you or others ever want to experiment but don't have a great graphics card, you can hire a GPU from a cloud service very cheaply: https://www.runpod.io/blog/stable-diffusion-ui-on-runpod - 50 cents/hr for a 3090. There are also 'distilled' models releasing soon that will run much faster and might be better for older GPUs.
- ✧ Useful to know/think about: ✧
- Art jargon and terms for your prompts (as example): https://youtu.be/46uVD97bkHQ (photorealistic / steampunk / intricate / render / high contrast)
- Fundamental Art Styles: https://youtu.be/aBYrgvE6fc4 - some hints on what outcome to expect from art-style prompts (Indian / Renaissance / Baroque / Rococo etc.)
- Historical artists: https://youtu.be/AGZFUcsehJQ for understanding art styles
- ✧ Setting notes while creating an image ✧
- - Check the image dimension settings (Width/Height) or the image will be squished
- - How'd you get that nice mature face instead of the anime waifu face? - I'm using DDIM because of that: if you generate an image with the same seed on DDIM and Euler, Euler gives a less mature,
- more kid-like image. DPM++ SDE also generates nice mature faces, but it takes a lot longer to run, so I'm rocking DDIM and adding {{teen}}, {{kid}} to the negative prompt
- Note from @kcnkcn: for people seeing this through pins, I suggest you scroll up to see another take on settings.
- There is no single right solution, and other settings may work better for you,
- so here's my take on your settings:
- - I recommend Euler (not A) as a beginner because it's pretty consistent with its outputs. A has a lot of variation, so as a beginner it might be difficult to understand where you went wrong
- - IIRC, Euler doesn't change too much if you increase sampling steps. Unless you really want very detailed results, 30-50 is good because of generation speed. Fun fact: NovelAI uses Euler at 28 steps
- - For denoising strength, I recommend you start high (maybe 0.4-0.6). If you like an image, drag it back into img2img with a lower denoising strength for fine-tuning
- - Personal preference, but I prefer a random seed (click the dice icon) so you get a little more variation in your results and don't get almost literally the same image every time
- - Increase batch count and batch size to increase the number of results per generation attempt. It'll let you see different results, at the cost of generation time
- - AnythingV3 is strongly trained on Danbooru tags, so visit the Danbooru website, ignore all the r34, and figure out what tags are used. If you want a helping hand, click the "Interrogate DeepBooru" button and it'll return some tags that it thinks your input image is using.
- Type the tags in the format: tag1, tag2, tag3
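If you'd rather script generation than click through the UI, the settings above map onto a request payload. A minimal sketch, assuming the AUTOMATIC1111 web UI is running with its --api flag — the /sdapi/v1/txt2img endpoint and exact field names are my assumptions here, so check the /docs page on your own install:

```python
# Assemble a txt2img request combining the settings discussed above:
# DDIM sampler, 30 steps, random seed, batches, and comma-separated tags.
import json

tags = ["1girl", "solo", "long hair", "masterpiece", "best quality"]
negative_tags = ["teen", "kid", "lowres", "bad anatomy"]

payload = {
    "prompt": ", ".join(tags),              # tag1, tag2, tag3 format
    "negative_prompt": ", ".join(negative_tags),
    "sampler_name": "DDIM",                 # per the sampler notes above
    "steps": 30,                            # 30-50 is a good range
    "width": 512,                           # set dimensions explicitly
    "height": 512,                          # or the image gets squished
    "seed": -1,                             # -1 = random seed (the dice)
    "batch_size": 2,                        # images per batch
    "n_iter": 2,                            # batch count -> 4 images total
    "cfg_scale": 7,
}
print(json.dumps(payload, indent=2))
# Then send it with e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

The POST line is commented out so the sketch runs without a live server; swap in your own host/port when trying it for real.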