// Load the toxicity model (from @tensorflow-models/toxicity).
const toxicity = require('@tensorflow-models/toxicity');

// The minimum prediction confidence.
const threshold = 0.9;

// Which toxicity labels to return.
const labelsToInclude = ['identity_attack', 'insult', 'threat'];

toxicity.load(threshold, labelsToInclude).then(model => {
  // Now you can use the `model` object to label sentences.
  model.classify(['you suck']).then(predictions => {...});
});