Mitrezzz

SNZ Lab 5: Most Similar Document (new)

Jan 3rd, 2019

#!/usr/bin/python
# -*- coding: utf-8 -*-

import re
import math


train_data = [
("""What Are We Searching for on Mars?
Martians terrified me growing up. I remember watching the 1996 movie Mars Attacks! and fearing that the Red Planet harbored hostile alien neighbors. Though I was only 6 at the time, I was convinced life on Mars meant little green men wielding vaporizer guns. There was a time, not so long ago, when such an assumption about Mars wouldn’t have seemed so far-fetched.
Like a child watching a scary movie, people freaked out after listening to “The War of the Worlds,” the now-infamous 1938 radio drama that many listeners believed was a real report about an invading Martian army. Before humans left Earth, humanity’s sense of what—or who—might be in our galactic neighborhood was, by today’s standards, remarkably optimistic.
""",
"science"),
("""Mountains of Ice are Melting, But Don't Panic (Op-Ed)
If the planet lost the entire West Antarctic ice sheet, global sea level would rise 11 feet, threatening nearly 13 million people worldwide and affecting more than $2 trillion worth of property.
Ice loss from West Antarctica has been increasing nearly three times faster in the past decade than during the previous one — and much more quickly than scientists predicted.
This unprecedented ice loss is occurring because warm ocean water is rising from below and melting the base of the glaciers, dumping huge volumes of additional water — the equivalent of a Mt. Everest every two years — into the ocean.
""",
"science"),
("""Some scientists think we'll find signs of aliens within our lifetimes. Here's how.
Finding extraterrestrial life is the essence of science fiction. But it's not so far-fetched to predict that we might find evidence of life on a distant planet within a generation.
"With new telescopes coming online within the next five or ten years, we'll really have a chance to figure out whether we're alone in the universe," says Lisa Kaltenegger, an astronomer and director of Cornell's new Institute for Pale Blue Dots, which will search for habitable planets. "For the first time in human history, we might have the capability to do this."
""",
"science"),
("""'Magic' Mushrooms in Royal Garden: What Is Fly Agaric?
Hallucinogenic mushrooms are perhaps the last thing you'd expect to find growing in the Queen of England's garden.
Yet a type of mushroom called Amanita muscaria — commonly known as fly agaric, or fly amanita — was found growing in the gardens of Buckingham Palace by the producers of a television show, the Associated Press reported on Friday (Dec. 12).
A. muscaria is a bright red-and-white mushroom, and the fungus is psychoactive when consumed.
""",
"science"),
("""Upcoming Parks: 'Lost Corner' Finds New Life in Sandy Springs
At the corner of Brandon Mill Road, where Johnson Ferry Road turns into Dalrymple Road, tucked among 24 forested acres, sits an early 20th Century farmhouse. A vestige of Sandy Springs' past, the old home has found new life as the centerpiece of Lost Forest Preserve. While the preserve isn't slated to officially debut until some time next year, the city has opened the hiking trails to the public until construction begins on the permanent parking lot (at the moment the parking lot is a mulched area). The new park space includes community garden plots, a 4,000-foot-long hiking trail and an ADA-accessible trail through the densely wooded site. For Atlantans seeking an alternate escape to serenity (or those who dig local history), it's certainly worth a visit.
""",
"science"),
("""Stargazers across the world got a treat this weekend when the Geminids meteor shower gave the best holiday displays a run for their money.
The meteor shower is called the "Geminids" because they appear as though they are shooting out of the constellation of Gemini. The meteors are thought to be small pieces of an extinct comet called 3200 Phaethon, a dust cloud revolving around the sun. Phaethon is thought to have lost all of its gas and to be slowly breaking apart into small particles.
Earth runs into a stream of debris from 3200 Phaethon every year in mid-December, causing a shower of meteors, which hit its peak over the weekend.
""",
"science"),
("""Envisioning a River of Air
By the classification rules of the world of physics, we all know that the Earth's atmosphere is made of gas (rather than liquid, solid, or plasma). But in the world of flying it's often useful to think
""",
"science"),
("""Following Sunday's 17-7 loss to the Seattle Seahawks, the San Francisco 49ers were officially eliminated from playoff contention, and they have referee Ed Hochuli to blame. OK, so they have a lot of folks to point the finger at for their 7-7 record, but Hochuli's incorrect call is the latest and easiest scapegoat.
""",
"sport"),
("""Kobe Bryant and his teammates have an odd relationship. That makes sense: Kobe Bryant is an odd guy, and the Los Angeles Lakers are an odd team.
They’re also, for the first time this season, the proud owners of a three-game winning streak. On top of that, you may have heard, Kobe Bryant passed Michael Jordan on Sunday evening to move into third place on the NBA’s all-time scoring list.
""",
"sport"),
("""The Patriots continued their divisional dominance and are close to clinching home-field advantage throughout the AFC playoffs. Meanwhile, both the Colts and Broncos again won their division titles with head-to-head wins. The Bills' upset of the Packers delivered a big blow to Green Bay's shot at clinching home-field advantage throughout the NFC playoffs. Detroit seized on the opportunity and now leads the NFC North.
""",
"sport"),
("""If you thought the Washington Redskins secondary was humbled by another scintillating performance from New York Giants rookie wide receiver sensation Odell Beckham Jr., think again. In what is becoming a weekly occurrence, Beckham led NFL highlight reels on Sunday, collecting 12 catches for 143 yards and three touchdowns in Sunday's 24-13 victory against an NFC East rival.
""",
"sport"),
("""That was two touchdowns and 110 total yards for the three running backs. We break down the fantasy implications. The New England Patriots' rushing game has always been tough to handicap. Sunday, all three of the team's primary running backs put up numbers, and all in different ways, but it worked for the team, as the Patriots beat the Miami Dolphins, 41-13.
""",
"sport"),
("""General Santos (Philippines) (AFP) - Philippine boxing legend Manny Pacquiao vowed to chase Floyd Mayweather into ring submission after his US rival offered to fight him next year in a blockbuster world title face-off. "He (Mayweather) has reached a dead end. He has nowhere to run but to fight me," Pacquiao told AFP late Saturday, hours after the undefeated Mayweather issued the May 2 challenge on US television. The two were long-time rivals as the "best pound-for-pound" boxers of their generation, but the dream fight has never materialised to the disappointment of the boxing world.
""",
"sport"),
("""When St. John's landed Rysheed Jordan, the consensus was that he would be an excellent starter.
So far, that's half true.
Jordan came off the bench Sunday and tied a career high by scoring 24 points to lead No. 24 St. John's to a 74-53 rout of Fordham in the ECAC Holiday Festival.
''I thought Rysheed played with poise,'' Red Storm coach Steve Lavin said. ''Played with the right pace. Near perfect game.''
""",
"sport"),
("""Five-time world player of the year Marta scored three goals to lead Brazil to a 3-2 come-from-behind win over the U.S. women's soccer team in the International Tournament of Brasilia on Sunday. Carli Lloyd and Megan Rapinoe scored a goal each in the first 10 minutes to give the U.S. an early lead, but Marta netted in the 19th, 55th and 66th minutes to guarantee the hosts a spot in the final of the four-team competition.
""",
"sport")
]


# In[31]:


p_words = re.compile(r'\w+')


# Alternative tokenizer patterns:
# p_words = re.compile(r'[^\n\r\t\.\:\,\d\!\?; ]+')
# p_words = re.compile(r'[\w\']+')


def parse_line(line):
    # Tokenize a line and lower-case every token.
    words = p_words.findall(line)
    return [w.lower() for w in words]

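# Quick sanity check (added for illustration): punctuation is stripped and
# tokens are lower-cased.
assert parse_line("Mars Attacks! (1996)") == ['mars', 'attacks', '1996']
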

# ![TF-IDF](4.SNZ_tf-idf.png)

# # Calculate Document Frequency (DF) and Inverse Document Frequency (IDF)

# In[32]:


df = {}
vocab = set()
documents = []
for doc_text, label in train_data:
    words = parse_line(doc_text)
    documents.append(words)
    words_set = set(words)
    vocab.update(words_set)  # all the words in the corpus (from all documents)
    for word in words_set:
        # in how many documents this word appears
        df.setdefault(word, 0)
        df[word] += 1
idf = {}
N = float(len(train_data))
for word, cdf in df.items():
    idf[word] = math.log(N / cdf)

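# Worked example (added for illustration): a word that occurs in every
# document is weightless, idf = log(N/N) = 0, while a word that occurs in a
# single document gets the maximum weight, log(N/1).
assert math.log(N / N) == 0.0
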

# # Calculate TF, normalized TF, and TF * IDF

# ## This is the training process

# In[33]:


def calc_vector(cur_tf_idf, vocabular):
    # Arrange the tf-idf weights into a fixed-order vector over the vocabulary.
    vec = []
    for word in vocabular:
        tf_idf = cur_tf_idf.get(word, 0)
        vec.append(tf_idf)
    return vec


# In[34]:


def process_document(doc, idf, vocabular):
    if isinstance(doc, str):
        words = parse_line(doc)
    else:
        words = doc
    f = {}  # how many times each word occurs in this document
    for word in words:
        f.setdefault(word, 0)
        f[word] += 1
    max_f = max(f.values())  # count of the most frequent word in this document
    tf_idf = {}
    for word, cnt in f.items():
        ctf = cnt * 1.0 / max_f  # normalized term frequency
        tf_idf[word] = ctf * idf.get(word, 0)
    vec = calc_vector(tf_idf, vocabular)
    return vec

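# Shape check (added for illustration): any document or query string maps to
# one weight per vocabulary word, so all vectors share the same length.
assert len(process_document("mars attacks again", idf, vocab)) == len(vocab)
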

# In[35]:


doc_vectors = []
for words in documents:
    vec = process_document(words, idf, vocab)
    doc_vectors.append(vec)


# In[37]:


def cosine_similarity(v1, v2):
    """Compute cosine similarity of v1 to v2: (v1 . v2) / (||v1|| * ||v2||)."""
    sumxx, sumxy, sumyy = 0, 0, 0
    for i in range(len(v1)):
        x = v1[i]
        y = v2[i]
        sumxx += x * x
        sumyy += y * y
        sumxy += x * y
    return sumxy / math.sqrt(sumxx * sumyy)

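# Sanity checks (added for illustration): parallel vectors score 1.0,
# orthogonal vectors score 0.0.
assert abs(cosine_similarity([1, 2], [2, 4]) - 1.0) < 1e-9
assert abs(cosine_similarity([1, 0], [0, 1])) < 1e-9
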
# # Calculate the (cosine) similarities between document pairs

# In[38]:


distances = {}
for i in range(len(train_data) - 1):
    for j in range(i + 1, len(train_data)):
        v1 = doc_vectors[i]
        v2 = doc_vectors[j]
        dist = cosine_similarity(v1, v2)
        distances[(i, j)] = dist


# # Print the most similar document pairs
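
# A minimal sketch (added for illustration): the five most alike document
# pairs, highest cosine similarity first.
top_pairs = sorted(distances.items(), key=lambda kv: kv[1], reverse=True)
for (i, j), sim in top_pairs[:5]:
    print(i, j, train_data[i][1], train_data[j][1], round(sim, 3))
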
# In[42]:


def rank_documents(doc, idf, vocabular, doc_vectors):
    # Rank every training document by cosine similarity to the query and
    # return the (similarity, index) pairs sorted best-first.
    query_vec = process_document(doc, idf, vocabular)
    similarities = []
    for i, doc_vec in enumerate(doc_vectors):
        dist = cosine_similarity(query_vec, doc_vec)
        similarities.append((dist, i))
    similarities.sort(reverse=True)
    return similarities

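# Example (added for illustration): the best-matching training document for a
# free-text query; the query must share at least one word with the vocabulary.
_sims = rank_documents("meteor shower over the weekend", idf, vocab, doc_vectors)
assert 0 <= _sims[0][1] < len(train_data)
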

# In[43]:


def classify_knn(query, idf, vocab, doc_vectors, topN=5):
    # Vote among the topN most similar training documents.
    # (Named classify_knn so the decision-tree classify below cannot shadow it.)
    res = rank_documents(query, idf, vocab, doc_vectors)
    labels = {}
    for dist, i in res[:topN]:
        label = train_data[i][1]
        labels.setdefault(label, 0)
        labels[label] += 1
    results = sorted(labels.items(), key=lambda x: x[1], reverse=True)
    final_label = results[0][0]
    prob = results[0][1] / topN
    return final_label, prob

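# Example (added for illustration): vote among the 5 nearest documents.
_label, _prob = classify_knn("Pacquiao vowed to fight Mayweather", idf, vocab, doc_vectors)
assert _label in ('science', 'sport')
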


dataset = []
for i, doc_vec in enumerate(doc_vectors):
    # Copy the vector before appending the label, so the original document
    # vectors stay purely numeric.
    dataset.append(doc_vec + [train_data[i][1]])


def uniquecounts(rows):
    # Count how many rows carry each label (the label is the last column).
    d = {}
    for r in rows:
        d.setdefault(r[-1], 0)
        d[r[-1]] += 1
    return d


class decisionnode(object):
    def __init__(self, col=-1, value=None, results=None, tb=None, fb=None):
        self.col = col          # index of the column tested at this node
        self.value = value      # threshold (numbers) or exact value (strings)
        self.results = results  # label counts; set only on leaf nodes
        self.tb = tb            # branch taken when the comparison is true
        self.fb = fb            # branch taken when the comparison is false


def sporedi_broj(value1, value2):
    # Numeric comparison: does value1 meet the threshold value2?
    return value1 >= value2


def sporedi_string(value1, value2):
    # String comparison: exact match.
    return value1 == value2


def get_compare_func(value):
    if isinstance(value, int) or isinstance(value, float):
        comparer = sporedi_broj
    else:
        comparer = sporedi_string
    return comparer


def compare_values(v1, v2):
    sporedi = get_compare_func(v1)
    return sporedi(v1, v2)


def divideset(rows, column, value):
    # Divide the rows into two sets, depending on whether the value in the
    # given column passes the comparison, and return them.
    sporedi = get_compare_func(value)
    set_false = []
    set_true = []
    for row in rows:
        uslov = sporedi(row[column], value)  # uslov = "condition"
        if uslov:
            set_true.append(row)
        else:
            set_false.append(row)
    return (set_true, set_false)

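# Illustration (added): numeric columns split by >=, string columns by ==.
_t, _f = divideset([[1, 'x'], [3, 'y'], [5, 'x']], 0, 3)
assert _t == [[3, 'y'], [5, 'x']] and _f == [[1, 'x']]

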
def entropy(rows):
    # Shannon entropy (in bits) of the label distribution over the rows.
    from math import log
    log2 = lambda x: log(x) / log(2)
    results = uniquecounts(rows)
    ent = 0.0
    for r in results.keys():
        p = float(results[r]) / len(rows)
        ent = ent - p * log2(p)
    return ent

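# Sanity checks (added for illustration): a 50/50 split of two labels has
# entropy 1 bit; a pure set has entropy 0.
assert abs(entropy([['a'], ['a'], ['b'], ['b']]) - 1.0) < 1e-9
assert entropy([['a'], ['a']]) == 0.0

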
def info_gain(current_score, sets, scoref=entropy):
    # Information gain: the parent's score minus the size-weighted average
    # score of the child sets.
    m = sum([len(s) for s in sets])
    gain = current_score
    for s in sets:
        n = len(s)
        p = 1. * n / m
        gain -= p * scoref(s)
    return gain

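# Check (added for illustration): a perfect split of a 50/50 parent
# (entropy 1 bit) into two pure children recovers the full bit.
assert info_gain(1.0, [[['a'], ['a']], [['b'], ['b']]]) == 1.0

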
def buildtree(rows, scoref=entropy):
    if len(rows) == 0:
        return decisionnode()
    current_score = scoref(rows)

    # Set up some variables to track the best criteria
    best_gain = 0.0
    best_column = -1
    best_value = None
    best_subsetf = None
    best_subsett = None

    column_count = len(rows[0]) - 1
    for col in range(column_count):
        # Generate the list of different values in this column
        column_values = set([row[col] for row in rows])
        # Now try dividing the rows up for each value in this column,
        # keeping the split with the highest information gain as long as
        # both sides are non-empty.
        for value in column_values:
            sets = divideset(rows, col, value)
            gain = info_gain(current_score, sets, scoref)
            if gain > best_gain and len(sets) > 0 and len(sets[0]) > 0 and len(sets[1]) > 0:
                best_gain = gain
                best_column = col
                best_value = value
                best_subsett = sets[0]
                best_subsetf = sets[1]
    # Create the subbranches
    if best_gain > 0:
        trueBranch = buildtree(best_subsett, scoref)
        falseBranch = buildtree(best_subsetf, scoref)
        return decisionnode(col=best_column, value=best_value,
                            tb=trueBranch, fb=falseBranch)
    else:
        return decisionnode(results=uniquecounts(rows))


def printtree(tree, indent=''):
    # Is this a leaf node?
    if tree.results is not None:
        print(indent + str(sorted(tree.results.items())))
    else:
        # Print the criteria
        print(indent + str(tree.col) + ':' + str(tree.value) + '? ')
        # Print the branches
        print(indent + 'T->')
        printtree(tree.tb, indent + '  ')
        print(indent + 'F->')
        printtree(tree.fb, indent + '  ')

def classify(observation, tree):
    # Walk the tree until a leaf, then return (label, confidence).
    if tree.results is not None:
        if len(tree.results) == 1:
            return list(tree.results.keys())[0], 1.0
        else:
            # Mixed leaf: return the majority label and its share.
            inv = sorted([(cnt, label) for label, cnt in tree.results.items()], reverse=True)
            label = inv[0][1]
            cnt = inv[0][0]
            total_count = sum(tree.results.values())
            return label, cnt / total_count
    else:
        vrednost = observation[tree.col]  # vrednost = "value"
        if compare_values(vrednost, tree.value):
            branch = tree.tb
        else:
            branch = tree.fb
        return classify(observation, branch)

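# Tiny end-to-end check (added for illustration): one numeric column splits
# the toy rows perfectly, and classification follows the >= branch.
_toy_tree = buildtree([[1, 'a'], [1, 'a'], [5, 'b'], [5, 'b']])
assert classify([7], _toy_tree) == ('b', 1.0)
# printtree(_toy_tree)  # uncomment to inspect the splits
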

def classify_text(doc, tree):
    # Vectorize a raw text with the global idf/vocab, then classify it
    # with the decision tree.
    query_vec = process_document(doc, idf, vocab)
    return classify(query_vec, tree)


tree = buildtree(dataset)
text = input()
# Index of the training document most similar to the entered text.
index = rank_documents(text, idf, vocab, doc_vectors)[0][1]

print(train_data[index][0])