# Most similar document

# For the document given as input, find the most similar document from the training set using cosine distance. Print that document.

#!/usr/bin/python
# -*- coding: utf-8 -*-

import re
import math

train_data = [
("""What Are We Searching for on Mars?
Martians terrified me growing up. I remember watching the 1996 movie Mars Attacks! and fearing that the Red Planet harbored hostile alien neighbors. Though I was only 6 at the time, I was convinced life on Mars meant little green men wielding vaporizer guns. There was a time, not so long ago, when such an assumption about Mars wouldn’t have seemed so far-fetched.
Like a child watching a scary movie, people freaked out after listening to “The War of the Worlds,” the now-infamous 1938 radio drama that many listeners believed was a real report about an invading Martian army. Before humans left Earth, humanity’s sense of what—or who—might be in our galactic neighborhood was, by today’s standards, remarkably optimistic.
""",
"science"),
("""Mountains of Ice are Melting, But Don't Panic (Op-Ed)
If the planet lost the entire West Antarctic ice sheet, global sea level would rise 11 feet, threatening nearly 13 million people worldwide and affecting more than $2 trillion worth of property.
Ice loss from West Antarctica has been increasing nearly three times faster in the past decade than during the previous one — and much more quickly than scientists predicted.
This unprecedented ice loss is occurring because warm ocean water is rising from below and melting the base of the glaciers, dumping huge volumes of additional water — the equivalent of a Mt. Everest every two years — into the ocean.
""",
"science"),
("""Some scientists think we'll find signs of aliens within our lifetimes. Here's how.
Finding extraterrestrial life is the essence of science fiction. But it's not so far-fetched to predict that we might find evidence of life on a distant planet within a generation.
"With new telescopes coming online within the next five or ten years, we'll really have a chance to figure out whether we're alone in the universe," says Lisa Kaltenegger, an astronomer and director of Cornell's new Institute for Pale Blue Dots, which will search for habitable planets. "For the first time in human history, we might have the capability to do this."
""",
"science"),
("""'Magic' Mushrooms in Royal Garden: What Is Fly Agaric?
Hallucinogenic mushrooms are perhaps the last thing you'd expect to find growing in the Queen of England's garden.
Yet a type of mushroom called Amanita muscaria — commonly known as fly agaric, or fly amanita — was found growing in the gardens of Buckingham Palace by the producers of a television show, the Associated Press reported on Friday (Dec. 12).
A. muscaria is a bright red-and-white mushroom, and the fungus is psychoactive when consumed.
""",
"science"),
("""Upcoming Parks: 'Lost Corner' Finds New Life in Sandy Springs
At the corner of Brandon Mill Road, where Johnson Ferry Road turns into Dalrymple Road, tucked among 24 forested acres, sits an early 20th Century farmhouse. A vestige of Sandy Springs' past, the old home has found new life as the centerpiece of Lost Forest Preserve. While the preserve isn't slated to officially debut until some time next year, the city has opened the hiking trails to the public until construction begins on the permanent parking lot (at the moment the parking lot is a mulched area). The new park space includes community garden plots, a 4,000-foot-long hiking trail and an ADA-accessible trail through the densely wooded site. For Atlantans seeking an alternate escape to serenity (or those who dig local history), it's certainly worth a visit.
""",
"science"),
("""Stargazers across the world got a treat this weekend when the Geminids meteor shower gave the best holiday displays a run for their money.
The meteor shower is called the "Geminids" because they appear as though they are shooting out of the constellation of Gemini. The meteors are thought to be small pieces of an extinct comet called 3200 Phaethon, a dust cloud revolving around the sun. Phaethon is thought to have lost all of its gas and to be slowly breaking apart into small particles.
Earth runs into a stream of debris from 3200 Phaethon every year in mid-December, causing a shower of meteors, which hit its peak over the weekend.
""",
"science"),
("""Envisioning a River of Air
By the classification rules of the world of physics, we all know that the Earth's atmosphere is made of gas (rather than liquid, solid, or plasma). But in the world of flying it's often useful to think
""",
"science"),
("""Following Sunday's 17-7 loss to the Seattle Seahawks, the San Francisco 49ers were officially eliminated from playoff contention, and they have referee Ed Hochuli to blame. OK, so they have a lot of folks to point the finger at for their 7-7 record, but Hochuli's incorrect call is the latest and easiest scapegoat.
""",
"sport"),
("""Kobe Bryant and his teammates have an odd relationship. That makes sense: Kobe Bryant is an odd guy, and the Los Angeles Lakers are an odd team.
They’re also, for the first time this season, the proud owners of a three-game winning streak. On top of that, you may have heard, Kobe Bryant passed Michael Jordan on Sunday evening to move into third place on the NBA’s all-time scoring list.
""",
"sport"),
("""The Patriots continued their divisional dominance and are close to clinching home-field advantage throughout the AFC playoffs. Meanwhile, both the Colts and Broncos again won their division titles with head-to-head wins. The Bills' upset of the Packers delivered a big blow to Green Bay's shot at clinching home-field advantage throughout the NFC playoffs. Detroit seized on the opportunity and now leads the NFC North.
""",
"sport"),
  58. ("""If you thought the Washington Redskins secondary was humbled by another scintillating performance from New Yorks Giants rookie wide receiver sensation Odell Beckham Jr., think again.In what is becoming a weekly occurrence, Beckham led NFL highlight reels on Sunday, collecting 12 catches for 143 yards and three touchdowns in Sunday's 24-13 victory against an NFC East rival.
  59. """
  60. ,"sport")
  61. ,("""That was two touchdowns and 110 total yards for the three running backs. We break down the fantasy implications.The New England Patriots' rushing game has always been tough to handicap. Sunday, all three of the team's primary running backs put up numbers, and all in different ways, but it worked for the team, as the Patriots beat the Miami Dolphins, 41-13.
  62. """
  63. ,"sport"),
  64. ("""General Santos (Philippines) (AFP) - Philippine boxing legend Manny Pacquiao vowed to chase Floyd Mayweather into ring submission after his US rival offered to fight him next year in a blockbuster world title face-off. "He (Mayweather) has reached a dead end. He has nowhere to run but to fight me," Pacquiao told AFP late Saturday, hours after the undefeated Mayweather issued the May 2 challenge on US television. The two were long-time rivals as the "best pound-for-pound" boxers of their generation, but the dream fight has never materialised to the disappointment of the boxing world.
  65. """
  66. ,"sport"),
  67. ("""When St. John's landed Rysheed Jordan, the consensus was that he would be an excellent starter.
  68. So far, that's half true.
  69. Jordan came off the bench Sunday and tied a career high by scoring 24 points to lead No. 24 St. John's to a 74-53 rout of Fordham in the ECAC Holiday Festival.
  70. ''I thought Rysheed played with poise,'' Red Storm coach Steve Lavin said. ''Played with the right pace. Near perfect game.''
  71. """
  72. ,"sport"),
  73. ("""Five-time world player of the year Marta scored three goals to lead Brazil to a 3-2 come-from-behind win over the U.S. women's soccer team in the International Tournament of Brasilia on Sunday. Carli Lloyd and Megan Rapinoe scored a goal each in the first 10 minutes to give the U.S. an early lead, but Marta netted in the 19th, 55th and 66th minutes to guarantee the hosts a spot in the final of the four-team competition.
  74. """
  75. ,"sport")
  76. ]


# In[30]:


import re


# In[31]:

p_words = re.compile(r'\w+')


# p_words = re.compile(r'[^\n\r\t\.\:\,\d\!\?; ]+')
# p_words = re.compile(r'[\w\']+')


def parse_line(line):
    words = p_words.findall(line)
    return [w.lower() for w in words]
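

# A quick sanity check of the tokenizer (an illustrative addition): \w+ keeps
# digits, parse_line lowercases, and apostrophes act as separators, so "Don't"
# becomes two tokens.
#
#   parse_line("Don't Panic! Mars Attacks, 1996.")
#   -> ['don', 't', 'panic', 'mars', 'attacks', '1996']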


# ![TF-IDF](4.SNZ_tf-idf.png)

# # Calculate Document Frequency (DF) and Inverse Document Frequency (IDF)

# In[32]:

import pprint
import math

df = {}
vocab = set()
documents = []
for doc_text, label in train_data:
    words = parse_line(doc_text)
    documents.append(words)
    words_set = set(words)
    vocab.update(words_set)  # all the words in the corpus (from all documents)
    for word in words_set:
        # in how many documents does this word appear
        df.setdefault(word, 0)
        df[word] += 1
# pprint.pprint(df)
idf = {}
N = float(len(train_data))
for word, cdf in df.items():
    idf[word] = math.log(N / cdf)
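

# To see what the IDF weights mean, a hand-worked toy case (an illustrative
# addition): with N = 3 documents, compare a word that appears in 1 document
# against a word that appears in all 3:
#
#   math.log(3 / 1.0)  # ~1.099: rare word, high weight
#   math.log(3 / 3.0)  # 0.0: appears everywhere, contributes nothing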


# # Calculate TF, normalized TF, and TF * IDF

# ## This is the training process

# In[33]:

def calc_vector(cur_tf_idf, vocabular):
    # Lay the tf-idf weights out as a dense vector, one slot per vocabulary
    # word. Iterating the same set object yields the same order every time,
    # so all document vectors stay aligned.
    vec = []
    for word in vocabular:
        tf_idf = cur_tf_idf.get(word, 0)
        vec.append(tf_idf)
    return vec


# In[34]:

def process_document(doc, idf, vocabular):
    if isinstance(doc, str):
        words = parse_line(doc)
    else:
        words = doc
    f = {}  # how many times each word occurs in this document
    for word in words:
        f.setdefault(word, 0)
        f[word] += 1
    max_f = max(f.values())  # count of the most frequent word in this document
    tf_idf = {}
    for word, cnt in f.items():
        ctf = cnt * 1.0 / max_f  # term frequency, normalized by the top count
        tf_idf[word] = ctf * idf.get(word, 0)
    vec = calc_vector(tf_idf, vocabular)
    return vec
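

# Each document thus becomes a dense vector with one weight per vocabulary
# word (an illustrative note, not part of the original paste):
#
#   vec = process_document("Ice on Mars is melting", idf, vocab)
#   len(vec) == len(vocab)  # True; words outside the vocabulary get weight 0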


# In[35]:

doc_vectors = []
for words in documents:
    vec = process_document(words, idf, vocab)
    doc_vectors.append(vec)


# In[37]:

import math


def cosine_similarity(v1, v2):
    """Compute cosine similarity of v1 to v2: (v1 dot v2) / (||v1|| * ||v2||)."""
    sumxx, sumxy, sumyy = 0, 0, 0
    for i in range(len(v1)):
        x = v1[i]
        y = v2[i]
        sumxx += x * x
        sumyy += y * y
        sumxy += x * y
    if sumxx == 0 or sumyy == 0:
        return 0.0  # a zero vector has no direction; treat it as dissimilar
    return sumxy / math.sqrt(sumxx * sumyy)

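# A minimal check of the similarity function (an illustrative addition):
# parallel vectors score 1.0, orthogonal vectors score 0.0.
#
#   cosine_similarity([1, 2, 0], [2, 4, 0])  # -> 1.0
#   cosine_similarity([1, 0, 0], [0, 1, 0])  # -> 0.0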

# # Calculate the distances between documents

# In[38]:

distances = {}  # despite the name, these are cosine similarities (higher = more similar)
for i in range(len(train_data) - 1):
    for j in range(i + 1, len(train_data)):
        v1 = doc_vectors[i]
        v2 = doc_vectors[j]
        dist = cosine_similarity(v1, v2)
        distances[(i, j)] = dist

# # Rank the documents and find the most similar one

# In[42]:

def rank_documents(doc, idf, vocabular, doc_vectors):
    query_vec = process_document(doc, idf, vocabular)
    similarities = []
    for i, doc_vec in enumerate(doc_vectors):
        dist = cosine_similarity(query_vec, doc_vec)
        similarities.append((dist, i))
    similarities.sort(reverse=True)  # most similar document first
    return similarities
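
# Example usage (an illustrative addition; the query string is made up):
#
#   ranked = rank_documents("meteor shower over the weekend", idf, vocab, doc_vectors)
#   ranked[0]  # -> (similarity, index) of the best-matching training document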

# In[43]:

def classify_knn(query, idf, vocab, doc_vectors, topN=5):
    # Majority vote over the topN most similar training documents
    # (a k-nearest-neighbours classifier, distinct from the decision-tree
    # classify defined below).
    res = rank_documents(query, idf, vocab, doc_vectors)
    labels = {}
    for dist, i in res[:topN]:
        label = train_data[i][1]
        labels.setdefault(label, 0)
        labels[label] += 1
    results = sorted(labels.items(), key=lambda x: x[1], reverse=True)
    final_label = results[0][0]
    prob = float(results[0][1]) / topN
    return final_label, prob


dataset = []
for i, doc_vec in enumerate(doc_vectors):
    # Copy each vector before appending the class label, so the vectors in
    # doc_vectors themselves stay purely numeric.
    dataset.append(doc_vec + [train_data[i][1]])


def uniquecounts(rows):
    # Count the rows of each class; the class label is in the last column.
    d = {}
    for r in rows:
        d.setdefault(r[-1], 0)
        d[r[-1]] += 1
    return d

class decisionnode(object):
    def __init__(self, col=-1, value=None, results=None, tb=None, fb=None):
        self.col = col          # index of the column tested at this node
        self.value = value      # value the column is compared against
        self.results = results  # class counts; set only for leaf nodes
        self.tb = tb            # branch taken when the test is true
        self.fb = fb            # branch taken when the test is false


def sporedi_broj(value1, value2):
    # numeric comparison ("sporedi broj" = "compare number")
    return value1 >= value2


def sporedi_string(value1, value2):
    # string comparison ("sporedi string" = "compare string")
    return value1 == value2


def get_compare_func(value):
    if isinstance(value, (int, float)):
        comparer = sporedi_broj
    else:
        comparer = sporedi_string
    return comparer


def compare_values(v1, v2):
    sporedi = get_compare_func(v1)
    return sporedi(v1, v2)

def divideset(rows, column, value):
    # Split rows into two sets by comparing row[column] against value:
    # numeric values split on >=, everything else on ==.
    sporedi = get_compare_func(value)
    set_false = []
    set_true = []
    for row in rows:
        uslov = sporedi(row[column], value)  # "uslov" = "condition"
        if uslov:
            set_true.append(row)
        else:
            set_false.append(row)
    return (set_true, set_false)

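# A small worked example of the splitter (an illustrative addition): the
# class label simply rides along in the last column.
#
#   rows = [[0.5, 'science'], [0.0, 'sport'], [0.9, 'science']]
#   divideset(rows, 0, 0.5)
#   -> ([[0.5, 'science'], [0.9, 'science']], [[0.0, 'sport']])
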
def entropy(rows):
    # Shannon entropy (in bits) of the class distribution over rows.
    from math import log
    log2 = lambda x: log(x) / log(2)
    results = uniquecounts(rows)
    ent = 0.0
    for r in results.keys():
        p = float(results[r]) / len(rows)
        ent = ent - p * log2(p)

    return ent

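# Hand-worked check (an illustrative addition): a 50/50 split between two
# classes gives exactly 1 bit of entropy, while a pure set gives 0.
#
#   entropy([[1, 'science'], [2, 'sport']])    # -> 1.0
#   entropy([[1, 'science'], [2, 'science']])  # -> 0.0
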
def info_gain(current_score, sets, scoref=entropy):
    # How much the impurity drops after splitting into the given sets,
    # weighting each subset by its share of the rows.
    m = sum([len(s) for s in sets])
    gain = current_score
    for s in sets:
        n = len(s)
        p = 1. * n / m
        gain -= p * scoref(s)
    return gain

def buildtree(rows, scoref=entropy):
    if len(rows) == 0:
        return decisionnode()
    current_score = scoref(rows)

    # Set up some variables to track the best criteria
    best_gain = 0.0
    best_column = -1
    best_value = None
    best_subsetf = None
    best_subsett = None

    column_count = len(rows[0]) - 1  # the last column is the class label
    for col in range(column_count):
        # Generate the list of different values in this column
        column_values = set([row[col] for row in rows])
        # Now try dividing the rows up for each value in this column
        for value in column_values:
            sets = divideset(rows, col, value)
            gain = info_gain(current_score, sets, scoref)
            # Keep the split only if it improves the gain and both sides are non-empty
            if gain > best_gain and len(sets[0]) > 0 and len(sets[1]) > 0:
                best_gain = gain
                best_column = col
                best_value = value
                best_subsett = sets[0]
                best_subsetf = sets[1]

    # Create the subbranches
    if best_gain > 0:
        trueBranch = buildtree(best_subsett, scoref)
        falseBranch = buildtree(best_subsetf, scoref)
        return decisionnode(col=best_column, value=best_value,
                            tb=trueBranch, fb=falseBranch)
    else:
        # No useful split found: make a leaf holding the class counts
        return decisionnode(results=uniquecounts(rows))

def printtree(tree, indent=''):
    # Is this a leaf node?
    if tree.results is not None:
        print(indent + str(sorted(tree.results.items())))
    else:
        # Print the criteria
        print(indent + str(tree.col) + ':' + str(tree.value) + '? ')
        # Print the branches
        print(indent + 'T->')
        printtree(tree.tb, indent + '  ')
        print(indent + 'F->')
        printtree(tree.fb, indent + '  ')

def classify(observation, tree):
    if tree.results is not None:
        if len(tree.results) == 1:
            # Pure leaf: a single class with full confidence
            return list(tree.results.keys())[0], 1.0
        else:
            # Mixed leaf: return the majority class and its share of the leaf
            inv = sorted([(cnt, label) for label, cnt in tree.results.items()], reverse=True)
            label = inv[0][1]
            cnt = inv[0][0]
            total_count = sum(tree.results.values())
            return label, float(cnt) / total_count
    else:
        vrednost = observation[tree.col]  # "vrednost" = "value"
        if compare_values(vrednost, tree.value):
            branch = tree.tb
        else:
            branch = tree.fb
        return classify(observation, branch)


def classify_text(doc, tree):
    query_vec = process_document(doc, idf, vocab)
    return classify(query_vec, tree)

tree = buildtree(dataset)
text = input()
similarities = rank_documents(text, idf, vocab, doc_vectors)
index = similarities[0][1]

print(train_data[index][0])
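

# The decision tree can label the input text as well (an illustrative usage
# of classify_text; the printed label depends on the learned tree):
#
#   label, prob = classify_text(text, tree)
#   print(label, prob)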