a guest
Jan 9th, 2022
Original install information is inside the other README file and still relevant. This is my mf blog, and some extra information. If you need help, then I'll explain in my own words.

You need to install Python, and get the Chrome webdriver that matches your Chrome version (get Chrome; you can check the version under Settings > About Chrome). Put the webdriver into the root folder where main.py is, then hit install.bat. Then open a command prompt and type (all without quotes) the drive letter you need, for example "e:", then hit enter. Then type "cd " with a space, paste the location of the folder with main.py in it, and hit enter again. Finally type "python main.py" and hit enter. You obviously need at least one url in the url.txt file for it to actually do anything. Also open the run.bat file (right click > Edit) and enter the name of the appropriate folder where instructed; it should be the folder where your desktop and personal files are. It'll make your life easier instead of constantly opening cmd.

Just a heads up: if you get a warning about some bluetooth thing not working in the terminal, just ignore it. Above that, it should tell you to press enter after you've logged in. Don't close the browser window until you've pressed enter twice, I think, using the window that appears with FAKKU to generate your one and only cookies.ntr file. Get cucked, nerd. Also, if something isn't working it's probably an out-of-date chromedriver.exe, so update that shit. Or it's just broke. What can you do?
One-line version of what this can do that the other can't:

-Optimised images (saves up to 3-4 MB per page at times)
-Automatically creates metadata files (ComicInfo and json)
-Automatically packages everything into cbz archives, sorted into magazines, collections, singles or tags
-Protection against botched downloads
-Collections, magazine issues, whole-ass magazine collections and tags (kogal, glasses etc.). That includes the entire Ultimate category.
-Metadata-only mode (non-FAKKU-Ultimate doujins, which you can't download, will fetch metadata automatically)

And that's it. The rest is just a bit more in-depth info and some bug stuff.

One last thing before the meat. Before you download anything, look inside the downloader.py file first. I left a bunch of settings for you to configure at the very top where it says GENERAL USER OPTIONS all fancy and shit. I wanted to include more but it was already getting fucking tedious as it is now. My code is like a box of Christmas tree lights, so yeah.
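To give a flavour of what a settings block like that might look like, here's a hypothetical sketch. Every name below is my guess, not the script's actual variables; check the top of downloader.py for the real ones.

```python
# Hypothetical sketch of a GENERAL USER OPTIONS block -- illustrative names
# only, the real options live at the top of downloader.py.

WAIT = 2                 # seconds to wait before screenshotting a page
BLOCK_REPEATS = True     # skip anything already listed in done.txt
MAKE_COMICINFO = True    # write a ComicInfo.xml for each doujin
MAKE_JSON = True         # also write a .json metadata file
METADATA_ONLY = False    # fetch metadata without downloading pages
KOMGA_MODE = False       # alternative magazine metadata mode; leave OFF
```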

*******
ABOUT
*******
Sup anons. This is an independently modified version of some downloader for FAKKU Ultimate! doujins I found on github. The original version was kind of lacking imo, so I spent some time using this to learn how to code in Python instead of *REDACTED*. If you're wondering why I'm not branching a fork or something, it's because I don't know how to or care to. In the end, I made this for myself to use and to prove to myself that I could.

The original version took massively bloated 32-bit color screenshots (like your mom), whether it was appropriate or not, and you never need that much information in a doujin page. It wasn't so bad with color pages, but it is complete and utter overkill for black and white pages. My version converts and optimizes the images as they are being collected. With color images you can save somewhere in the ballpark of 1 MB per image, but with many black and white pages you can save up to around 3-4 MB per page. Holy *REDACTED*. Those savings do depend on the page landing inside PNG8 territory, though.
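The conversion idea is roughly this: if a page fits in 256 colors, quantize it down to a palette PNG (PNG8); otherwise at least drop the alpha channel. A minimal sketch using Pillow; `optimize_page` is a name I made up, and the real script's logic is its own thing.

```python
from PIL import Image

def optimize_page(path: str) -> None:
    """Illustrative sketch: re-save a 32-bit screenshot as something smaller."""
    img = Image.open(path).convert("RGB")  # drop the useless alpha channel
    # getcolors() returns None if there are more than `maxcolors` colors.
    if img.getcolors(maxcolors=256) is not None:
        # Few enough colors (typical of B&W pages): collapse to PNG8.
        img = img.quantize(colors=256)
    img.save(path, optimize=True)
```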

My version can also automatically download metadata into a ComicInfo.xml file, a .json with the name of your choosing, both, or neither! By default I've left it to create both, but there are two parameters near the top of the downloader.py file for you to choose from. I included some nice general metadata features too. If you enter a url for a non-Unlimited doujin, it will automatically download the metadata for it and place it into a metadata folder in the root. There's a metadata-only mode if you just want a load of metadata for some reason. And it might be a bit niche, but I created an alternative metadata mode for magazines with Komga. I'm warning you right here: turn it OFF if you don't need it. It's indiscriminate; it'll do that with any link you feed it until you turn it off.
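For the curious, writing those two files is about this much work. A sketch only: `write_metadata` is not the script's actual function, and the element names are a minimal guess at a ComicInfo layout, not its full schema.

```python
import json
import xml.etree.ElementTree as ET

def write_metadata(folder: str, meta: dict,
                   comicinfo: bool = True, as_json: bool = True) -> None:
    """Sketch of the two metadata outputs described above."""
    if comicinfo:
        root = ET.Element("ComicInfo")
        # Map a few guessed metadata keys onto ComicInfo elements.
        for tag, key in (("Title", "title"), ("Writer", "artist"),
                         ("Series", "magazine")):
            ET.SubElement(root, tag).text = meta.get(key, "")
        ET.ElementTree(root).write(f"{folder}/ComicInfo.xml",
                                   encoding="utf-8", xml_declaration=True)
    if as_json:
        with open(f"{folder}/metadata.json", "w", encoding="utf-8") as f:
            json.dump(meta, f, ensure_ascii=False, indent=2)
```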

This version also automatically packs the loose png files and metadata up into a cbz archive. I debated whether to give an option for different file extensions or folders, but if you don't use cbz then *REDACTED*. All cbz files are automatically named with the same naming scheme as the big collections on nyaa: [ARTIST] TITLE (MAGAZINE ISSUE). There are some options though: folders get [ARTIST] TITLE, which becomes [ARTIST] SERIES for collections.
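The packing itself is just zipping with the right name. A sketch of the naming scheme above (`pack_cbz` is an illustrative name, not the script's helper):

```python
import os
import zipfile

def pack_cbz(folder: str, artist: str, title: str, issue: str = "") -> str:
    """Zip a folder of pages into a cbz named [ARTIST] TITLE (ISSUE)."""
    name = f"[{artist}] {title}" + (f" ({issue})" if issue else "")
    cbz_path = os.path.join(os.path.dirname(folder), name + ".cbz")
    with zipfile.ZipFile(cbz_path, "w", zipfile.ZIP_STORED) as zf:
        for fname in sorted(os.listdir(folder)):
            zf.write(os.path.join(folder, fname), arcname=fname)
    return cbz_path
```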

I'm pretty sure the original script had a way to download collections. I don't know how, and I still haven't tried to find out. Maybe I'm just an idiot, but it was probably some *REDACTED*. Anyway, my version will fetch doujins within context up to the magazine level: you can download FAKKU's entire magazine collection just by feeding it the magazine landing page url. You can download an entire tag's section, like dark skin or *REDACTED*. You can even enter the Unlimited tag url and download the entire Unlimited library with one link, and keep it up to date just by not moving anything out of the library folder. You can download individual collections and single doujin urls too.
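Conceptually, "feed it one landing page url" just means expanding a listing page into its individual doujin links before downloading each one. A very rough sketch of that expansion step; the `/hentai/` path and the regex are assumptions about FAKKU's listing markup, not the script's real selector logic.

```python
import re

def extract_doujin_urls(page_html: str) -> list[str]:
    """Sketch: pull doujin links out of a listing page's HTML."""
    hrefs = re.findall(r'href="(/hentai/[^"]+)"', page_html)
    # De-duplicate while keeping page order, then make the links absolute.
    seen = []
    for h in hrefs:
        if h not in seen:
            seen.append(h)
    return ["https://www.fakku.net" + h for h in seen]
```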

*******
NOTE
*******
Magazine issues, collections/series and individual doujin urls are all treated as separate entities. I added a lot of conveniences and safety nets into this thing so that you could just leave it running, but this wasn't one of them. As long as you have block repeat downloads enabled and keep hold of the done.txt file, the same doujin won't be downloaded again.
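That safety net is about as simple as it sounds. A sketch, assuming done.txt stores one url per line (which is a guess; the function names are mine):

```python
import os

DONE_FILE = "done.txt"  # filename taken from the note above

def already_done(url: str) -> bool:
    """Sketch of the block-repeat-downloads check."""
    if not os.path.exists(DONE_FILE):
        return False
    with open(DONE_FILE, encoding="utf-8") as f:
        return url.strip() in (line.strip() for line in f)

def mark_done(url: str) -> None:
    """Record a finished url so it won't be downloaded again."""
    with open(DONE_FILE, "a", encoding="utf-8") as f:
        f.write(url.strip() + "\n")
```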

*******
BUGS
*******
There are still some functions within this script that I have no idea what they do, but I can say with 100% certainty that if you break this, it will almost definitely be because you were trying to. It is in a better place now than it was before. Just don't look too hard at the code; if it doesn't make you laugh, you'll cry. Anyway, here are some bugs. One is legacy and two I think I introduced.

~~~~~~~
THE WAIT PARAMETER
~~~~~~~
In the downloader.py file there's a parameter named "WAIT". This tells the script how long to wait before taking the screenshot, after the doujin page url has been passed to the function that needs it. By default it's set to 2 seconds, and I think this is the sweet spot. Start going any faster than that and you run the risk of getting incomplete pages and ruined files.
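In other words, the capture step boils down to something like this. A sketch assuming a Selenium webdriver; `grab_page` is my name for it, and the real capture logic is more involved.

```python
import time

WAIT = 2  # seconds; the default mentioned above

def grab_page(driver, url: str, out_path: str) -> None:
    """Load a page, give it time to finish rendering, then screenshot it."""
    driver.get(url)
    time.sleep(WAIT)  # go lower and you risk half-rendered pages
    driver.save_screenshot(out_path)
```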

As a side note on incomplete pages: previously, if the downloader stopped mid-progress, there was a chance the current page would be botched and you'd have to go and delete it yourself. That's fixed. It will automatically delete the last page it was on and start again from there.
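The recovery idea: treat the highest-numbered page as suspect, delete it, and resume from that number. A sketch, assuming pages are saved as 001.png, 002.png, ... (an assumption; `resume_point` is not the script's function):

```python
import os

def resume_point(folder: str) -> int:
    """Delete the possibly-botched last page and return its number."""
    pages = sorted(f for f in os.listdir(folder) if f.endswith(".png"))
    if not pages:
        return 1  # nothing downloaded yet, start from page 1
    last = pages[-1]
    os.remove(os.path.join(folder, last))  # may be botched; redo it
    return int(last.split(".")[0])
```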

~~~~~~~
THE DONE.TXT FILE
~~~~~~~
If you have block repeat downloads enabled and clear temp files disabled, don't delete the done.txt and switch out the first url from before the script last finished. That first doujin likely WILL be messed up. So don't do that. Okay?

~~~~~~~
DUPLICATE URLS
~~~~~~~
If you paste the same url into the url.txt file twice, as far as I know it will download the pages again. Hell if I know; I'm getting tired of this *REDACTED* by now, so just don't do it, Timmy.

~~~~~~~
CHAPTER NUMBERS
~~~~~~~
One extremely minor and unavoidable... thing? Because FAKKU is a *REDACTED* and constantly removing chapters and doujins, the chapter numbers in the ComicInfo file can be off if there are any missing chapters. It's less a bug and more an unfortunate result of the way FAKKU's collection system works. Another annoyance to be aware of: if a doujin gets a sequel later down the line, the previous chapter won't be updated to reflect that.

~~~~~~~
CAN'T TEST EVERYTHING
~~~~~~~
One of the last things I tried before finishing up this little project was to grab some random garbage western comic ("Hej Hej Monika" if you care), and yeah, the images were broken both before and after I made my changes. I have the smoothest brain of anyone you've ever met, so I have no idea how to fix the issue. I don't know if it's the only one this happens to, but it's the only one it's happened to for me.

Anyway, minus some of the snark, that's pretty much everything I've got to say on this. I hope whoever uses this finds it helpful and of a high enough quality.

I wish you all swift downloads and even swifter dominant hands. Peace.