from sklearn.metrics import classification_report

correct = [['*','*'],['*','PER','*','GPE','ORG'],['GPE','*','*','*','ORG']]
predicted = [['PER','*'],['*','ORG','*','GPE','ORG'],['PER','*','*','*','MISC']]
target_names = ['PER','ORG','MISC','LOC','GPE','*']  # keep '*' last

# Flatten the per-sentence tag lists into single sequences
correct_flat = [item for sublist in correct for item in sublist]
predicted_flat = [item for sublist in predicted for item in sublist]

print(classification_report(correct_flat, predicted_flat, target_names=target_names))
             precision    recall  f1-score   support

        PER       1.00      0.86      0.92         7
        ORG       1.00      0.50      0.67         2
       MISC       0.00      0.00      0.00         0
        LOC       0.50      0.50      0.50         2
        GPE       0.00      0.00      0.00         1

avg / total       0.83      0.67      0.73        12

UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples.
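
Note that the report above is mislabeled: when `labels` is omitted, classification_report pairs `target_names` with the alphabetically sorted unique labels, and here six names were zipped onto only five labels ('LOC' never occurs in the data), shifting every row. The 'PER' row with support 7 is actually the non-entity '*' tag (7 is the count of '*' in correct_flat); recent scikit-learn versions raise a ValueError for this mismatch instead. A minimal sketch of a fix, assuming the goal is per-entity scores with '*' excluded (`entity_labels` is a name introduced here, not from the original snippet):

# Pass `labels` explicitly so names and rows line up, and drop the
# non-entity '*' tag from scoring entirely.
entity_labels = ['PER', 'ORG', 'MISC', 'LOC', 'GPE']
print(classification_report(correct_flat, predicted_flat, labels=entity_labels))

'MISC' and 'LOC' still have no true samples in this toy data, so the UndefinedMetricWarning persists; recent scikit-learn versions accept a `zero_division` argument to control how those cases are reported.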