'''
[(Latin - Italian) | Website archiver]
Website: 'http://www.latin.it/'
Functional on: 2018-09-12

Copyright (c) 2018 <Iseefloatingstufftoo>

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
'''

import urllib2
from bs4 import BeautifulSoup
import os
'''
The function crawlPage checks whether the current url contains '.lat', which is only the
case for pages that carry a Latin text and its Italian translation(s). On such a page it
parses the Latin and the Italian and saves them to separate files.
On every other page the crawler visits, it collects all links containing 'autore/', because
every page we are interested in (authors, works, chapters, and the texts themselves) has
'autore/' in its link. It then follows every link that extends the current url, i.e. that
lies deeper in the folder structure, which also keeps us from revisiting pages, until a
maximum search depth of 8 is reached.
'''
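
# A small illustration (added, not part of the original script) of the link rule described
# above: from a page at URL we only follow hrefs that contain 'autore/', that contain the
# current URL, and that are longer than it, i.e. hrefs pointing deeper into the folder
# structure. The paths below are hypothetical examples, not actual latin.it links.
def isFollowable(URL, href):
    return 'autore/' in href and URL in href and len(href) > (len(URL) + 1)
# isFollowable('autore/cesare/', 'autore/cesare/de_bello_gallico/')  -> True  (one level deeper)
# isFollowable('autore/cesare/', 'autore/')                          -> False (not deeper)
# isFollowable('autore/cesare/', 'index.php')                        -> False (no 'autore/')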
def crawlPage(URL, COUNT):
    # If we have already followed 8 links to end up on this page, we stop. This is a safety
    # measure so that we don't end up in a long link chain leading nowhere.
    if COUNT > 8:
        return
    else:
        # Some urls contain spaces for some reason. These are faulty and would crash the
        # program, so we only continue if the current url contains no spaces.
        if ' ' not in URL:
            # Display the current page on the console and load the website data.
            print URL
            response = urllib2.urlopen('http://www.latin.it/' + URL)
            html = response.read()
            soup = BeautifulSoup(html, 'lxml')
            # Check whether this is a page with Italian/Latin text on it.
            if '.lat' in URL:
                name = URL[1:]
                store = URL[1:URL.rfind('/')]
                # Create the directory structure the url indicates so that the texts stay sorted.
                if not os.path.exists(store):
                    os.makedirs(store)
                # Find the body of text, which contains both the Italian and the Latin.
                body = soup.find('div', {'onselectstart': 'return(false);', 'style': 'user-select:none;-moz-user-select:none;-khtml-user-select:none;'})
                # Filter out the Latin...
                lat = body.find('div', {'style': 'text-align:justify;'}).text
                # ...and write it to a file.
                outfile = open(name + '.txt', 'w+')
                outfile.write(lat.encode('utf-8', 'ignore'))
                outfile.close()
                itCounter = 1
                # Find each Italian translation and write it to its own file.
                for it in body.find_all('div', {'class': 'tdbox', 'style': 'text-align:justify; color:#004ea2;'}):
                    outfile = open(name + '.it_' + str(itCounter) + '.txt', 'w+')
                    outfile.write(it.get_text().encode('utf-8', 'ignore'))
                    outfile.close()
                    itCounter += 1
                return
            # Find all links containing 'autore/'; for every one that goes deeper than the
            # current page, run this entire function on that page again.
            for link in soup.find_all('a'):
                t = link.get('href')
                if t is not None:
                    if 'autore/' in t:
                        if URL in t and len(t) > (len(URL) + 1):
                            crawlPage(t, COUNT + 1)

# Here we start the crawl at the root of the site.
crawlPage('', 0)
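
# --- Example usage (added, not part of the original script) ---
# A minimal sketch of reading back one scraped page, assuming the layout the crawler
# creates: the Latin text is stored as '<page>.txt' and each Italian translation as
# '<page>.it_<n>.txt', inside directories mirroring the url. The path in the comment
# below is a hypothetical example; real paths depend on which pages the crawl reached.
import io

def readPair(base, n=1):
    # base is a saved page path without extension, e.g. 'autore/cesare/some_text.lat'
    with io.open(base + '.txt', encoding='utf-8') as f:
        latin = f.read()
    with io.open(base + '.it_' + str(n) + '.txt', encoding='utf-8') as f:
        italian = f.read()
    return latin, italian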