a guest, Apr 24th, 2018
from sklearn.metrics import classification_report

correct = [['*','*'], ['*','PER','*','GPE','ORG'], ['GPE','*','*','*','ORG']]
predicted = [['PER','*'], ['*','ORG','*','GPE','ORG'], ['PER','*','*','*','MISC']]
target_names = ['PER', 'ORG', 'MISC', 'LOC', 'GPE', '*']  # keep '*' last

# Flatten the per-sentence tag lists into flat sequences of tags.
correct_flat = [tag for sentence in correct for tag in sentence]
predicted_flat = [tag for sentence in predicted for tag in sentence]

# Caveat: without labels=, classification_report sorts the labels it finds in
# the data and pairs target_names with that sorted order, so the names in the
# report below land on the wrong rows (newer scikit-learn versions raise a
# ValueError for the length mismatch instead of zipping silently).
print(classification_report(correct_flat, predicted_flat, target_names=target_names))

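As an aside, the nested list comprehensions used for flattening above can also be written with itertools.chain.from_iterable, which some readers find easier to scan; a minimal sketch:

```python
from itertools import chain

correct = [['*','*'], ['*','PER','*','GPE','ORG'], ['GPE','*','*','*','ORG']]

# chain.from_iterable concatenates the inner per-sentence lists lazily;
# list() materializes the flat sequence of 12 tags.
correct_flat = list(chain.from_iterable(correct))
print(correct_flat)
```

Both spellings produce the same flat list, so the choice is purely stylistic.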
             precision    recall  f1-score   support

        PER       1.00      0.86      0.92         7
        ORG       1.00      0.50      0.67         2
       MISC       0.00      0.00      0.00         0
        LOC       0.50      0.50      0.50         2
        GPE       0.00      0.00      0.00         1

avg / total       0.83      0.67      0.73        12

UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples.

The warning is expected here: MISC appears in predicted_flat but never in correct_flat, so its recall is undefined and scikit-learn reports it as 0.0. Note also that the rows above are mislabeled: because target_names was zipped against the alphabetically sorted labels found in the data, the line printed as PER actually describes '*' (hence support 7), and the line printed as GPE describes PER.
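One way to get each row labeled correctly is to pass labels= explicitly, so that target_names lines up one-to-one with the label order; a minimal sketch, assuming scikit-learn 0.22+ for the zero_division parameter:

```python
from sklearn.metrics import classification_report

correct = [['*','*'], ['*','PER','*','GPE','ORG'], ['GPE','*','*','*','ORG']]
predicted = [['PER','*'], ['*','ORG','*','GPE','ORG'], ['PER','*','*','*','MISC']]
labels = ['PER', 'ORG', 'MISC', 'LOC', 'GPE', '*']  # keep '*' last

correct_flat = [tag for sentence in correct for tag in sentence]
predicted_flat = [tag for sentence in predicted for tag in sentence]

# labels= pins the row order, so target_names maps one-to-one onto it;
# zero_division=0 silences the warning for labels with no true samples.
report = classification_report(
    correct_flat,
    predicted_flat,
    labels=labels,
    target_names=labels,
    zero_division=0,
)
print(report)
```

With the rows aligned, the '*' line carries the support of 7, GPE shows precision 1.00 / recall 0.50 on its 2 true samples, and PER (1 true sample, 2 predictions, none correct) scores 0.00 across the board; the weighted averages are unchanged.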