USING STABLE DIFFUSION

✧ AI components installation guide and links ✧ https://www.techspot.com/guides/2590-install-stable-diffusion/
Download link for anything-v3.0 (hosted on Hugging Face): https://huggingface.co/Linaqruf/anything-v3.0/tree/main
Model for outpainting/inpainting: https://huggingface.co/runwayml/stable-diffusion-inpainting - creates a background around a small picture or fills in areas inside it (a code sketch follows after this list)
Textual Inversion, for new pictures based on your own character: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion - create your own mini model that you can reference in a prompt

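Not part of the original paste, but as a rough illustration of what the inpainting model does outside the webui, here is a minimal sketch using the Hugging Face diffusers library. The file names character.png and mask.png are hypothetical placeholders; the webui's Inpaint tab does the same thing through the UI.

# Minimal inpainting sketch with the diffusers library (assumes a CUDA GPU).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",   # the model linked above
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("character.png").convert("RGB").resize((512, 512))  # hypothetical input
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))       # white = area to repaint

result = pipe(
    prompt="1girl, standing in a detailed forest, soft lighting",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
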
✧ Understanding AI settings ✧
General settings: https://youtu.be/Z3IHmdqUar0 - what each setting does (on the txt2img/img2img pages)
UI sampling methods: https://youtu.be/Oq_YUIBFewg - what the differences are (DDIM, Euler, etc.); a code sketch of swapping samplers follows after this list

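To connect the sampler names from that video to code (an addition, not from the original guide): in the diffusers library each sampling method is a "scheduler" you can swap on a pipeline. A minimal sketch, assuming the anything-v3.0 model linked above:

# Swapping sampling methods ("schedulers" in diffusers) on the same pipeline.
import torch
from diffusers import (
    StableDiffusionPipeline,
    DDIMScheduler,
    EulerDiscreteScheduler,
    EulerAncestralDiscreteScheduler,
)

pipe = StableDiffusionPipeline.from_pretrained(
    "Linaqruf/anything-v3.0", torch_dtype=torch.float16
).to("cuda")

# Equivalent of picking "Euler" in the webui's sampling-method dropdown:
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# Or "Euler a" (ancestral) / "DDIM":
# pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe("1girl, silver hair, masterpiece", num_inference_steps=30).images[0]
image.save("euler_sample.png")
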
✧ Prompts/Tags for image creation ✧
Tag Groups Wiki | Danbooru: https://danbooru.donmai.us/wiki_pages/tag_groups - also useful to click on any image or search by tag to see how tags are used
Tag Effects | Pastebin.com: https://pastebin.com/GurXf9a4 - explains some prompts/tags and how strongly they can change what the AI generates
NovelAI Anime Girl Prompt Guide: https://lunarmimi.net/freebies/novelai-anime-girl-prompt-guide - also includes poses
More prompt examples: https://youtu.be/lFI8JQvPfu8
VAE: https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors - helps make pictures more vibrant and less washed out, and also fixes weird spot artifacts (a loading sketch follows after this list)

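Addition, not in the original paste: in the webui you drop that .safetensors file into the models/VAE folder and select it in the settings; if you use the diffusers library instead, the same swap looks roughly like this. The repo stabilityai/sd-vae-ft-mse is assumed to be the diffusers-format release of the same ft-mse weights linked above.

# Attaching the ft-mse VAE to a pipeline (diffusers sketch, not the webui workflow).
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse",       # diffusers-format VAE (assumption: same weights as the link above)
    torch_dtype=torch.float16,
)
pipe = StableDiffusionPipeline.from_pretrained(
    "Linaqruf/anything-v3.0",
    vae=vae,                           # override the model's built-in VAE
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, colorful city street at night", num_inference_steps=30).images[0]
image.save("with_ft_mse_vae.png")
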
Note on prompts: the prompt cannot take an unlimited number of arguments. You might think so, since there is no character limit, but arguments past a certain length are simply ignored.
The prompt can contain up to 75 tokens. You can calculate roughly how many tokens make up your chain of arguments on the following website: https://beta.openai.com/tokenizer (or count them exactly with the sketch below)

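A small addition: the OpenAI page counts GPT tokens rather than the CLIP tokens Stable Diffusion actually uses, so treat its number as an estimate. If you want the exact count, a minimal sketch using the CLIP tokenizer (SD 1.x uses the ViT-L/14 tokenizer) looks like this:

# Counting CLIP tokens for a prompt.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "1girl, silver hair, detailed eyes, masterpiece, best quality"
ids = tokenizer(prompt).input_ids
# Subtract the start-of-text and end-of-text tokens added automatically.
print(len(ids) - 2, "of 75 tokens used")
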
In case you don't have a good graphics card - comment by @Azu: if you or others ever want to experiment but don't have a great graphics card, you can rent a GPU from a cloud service very cheaply: https://www.runpod.io/blog/stable-diffusion-ui-on-runpod - around 50 cents/hr for a 3090. There are also 'distilled' models releasing soon that will run much faster and might be better for older GPUs.

✧ Useful to know/think about ✧
Art jargon and terms for your prompts (examples): https://youtu.be/46uVD97bkHQ (photorealistic / steampunk / intricate / render / high contrast)
Fundamental art styles: https://youtu.be/aBYrgvE6fc4 - hints on what outcome to expect from art-style prompts (Indian / Renaissance / Baroque / Rococo, etc.)
Historical artists: https://youtu.be/AGZFUcsehJQ - for understanding art styles

✧ Setting notes while creating an image ✧

- Check the image dimension settings (Width/Height) or the image will come out squished.
- How did you get that nice mature face instead of the usual anime waifu face? I'm using DDIM for that: if you generate an image with the same seed on DDIM and Euler, Euler gives a less mature, more kid-like image. DPM++ SDE also generates nice mature faces, but it takes a lot longer to run, so I'm sticking with DDIM and adding {{teen}}, {{kid}} to the negative prompt (a code sketch of these settings follows below).

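Illustrative addition: roughly what those settings map to in code, assuming the diffusers library rather than the webui itself. The {{ }} braces are UI prompt-weighting syntax, so plain words go in the negative prompt here.

# Sketch: DDIM sampler, explicit dimensions, and a negative prompt.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "Linaqruf/anything-v3.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="1girl, mature female, detailed face, masterpiece",
    negative_prompt="teen, kid, lowres, bad anatomy",
    width=512,                 # match these to your intended aspect ratio,
    height=768,                # or the result comes out squished/stretched
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("ddim_mature_face.png")
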
Note from @kcnkcn: for people seeing this through pins, I suggest you scroll up to see another take on settings.
There is no single right solution and other settings may work better for you,
so here's my take on your settings:
- I recommend Euler (not Euler a) as a beginner because it's pretty consistent in its outputs. Euler a has a lot of variation, so as a beginner it might be difficult to understand where you went wrong.
- IIRC, Euler doesn't change too much if you increase the sampling steps. Unless you really want very detailed results, 30-50 is good for generation speed. Fun fact: NovelAI uses Euler at 28.
- For denoising strength, I recommend you start high (maybe 0.4-0.6). If you like an image, drag it back into img2img with a lower denoising strength for fine-tuning (see the sketch after this list).
- Personal preference, but I prefer a random seed (click the dice button) so you get a little more variation in your results and don't get almost literally the same image every time.
- Increase batch count and batch size to get more results per generation attempt. It lets you see different results, at the cost of generation time.
- anything-v3 is strongly trained on Danbooru tags, so visit the Danbooru website, ignore all the r34, and figure out what tags are used. If you want a helping hand, click the "Interrogate DeepBooru" button and it'll return some tags that it thinks your input image is using.

Type the tags in the format: tag1, tag2, tag3, ...
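
Illustrative addition to @kcnkcn's notes, again assuming the diffusers library rather than the webui: denoising strength is the strength parameter of img2img, a fixed seed is a Generator, and batch size maps to num_images_per_prompt. The file draft.png is a hypothetical starting image.

# img2img fine-tuning loop: start with a higher denoising strength, then feed a
# result back in with a lower strength.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, EulerDiscreteScheduler

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "Linaqruf/anything-v3.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)  # plain Euler

init = Image.open("draft.png").convert("RGB")        # hypothetical starting image
seed = torch.Generator("cuda").manual_seed(1234)     # fixed seed; omit generator= for random results

images = pipe(
    prompt="1girl, silver hair, school uniform, masterpiece",  # danbooru-style tags: tag1, tag2, ...
    image=init,
    strength=0.5,               # "denoising strength": 0.4-0.6 first, lower on later passes
    num_inference_steps=40,
    num_images_per_prompt=4,    # batch size: more variations per attempt, slower generation
    generator=seed,
).images

for i, im in enumerate(images):
    im.save(f"pass1_{i}.png")   # pick a favorite and run it again with strength ~0.3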