Scores

class ucca.evaluation.Scores(evaluator_results, name=None, evaluation_format=None)[source]

Bases: object

Methods Summary

aggregate(scores) Aggregate multiple Scores instances.
average_f1([mode]) Calculate the average F1 score across primary and remote edges.
field_titles([constructions, eval_type, counts])
fields([eval_type, counts])
print([eval_type])
print_confusion_matrix(*args[, eval_type])
titles([eval_type, counts])

Methods Documentation

static aggregate(scores)[source]

Aggregate multiple Scores instances.

Parameters: scores – iterable of Scores
Returns: new Scores with aggregated scores
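To make the semantics of aggregation concrete, here is a hedged, self-contained sketch. It is not the real Scores class: the MiniScores name, its three count fields, and the numbers in the usage line are invented for illustration. The point it demonstrates is that aggregating evaluation results means summing the underlying match counts across instances and recomputing F1 from the totals, rather than averaging per-instance F1 values.

```python
from dataclasses import dataclass

@dataclass
class MiniScores:
    """Illustrative stand-in for Scores, reduced to raw match counts."""
    num_matches: int   # edges matched between guessed and reference
    num_guessed: int   # edges in the guessed annotation
    num_ref: int       # edges in the reference annotation

    @property
    def f1(self):
        # Precision and recall from raw counts, guarded against empty sets
        p = self.num_matches / self.num_guessed if self.num_guessed else 0.0
        r = self.num_matches / self.num_ref if self.num_ref else 0.0
        return 2 * p * r / (p + r) if (p + r) else 0.0

    @staticmethod
    def aggregate(scores):
        """Sum counts over all instances; F1 then falls out of the totals."""
        scores = list(scores)
        return MiniScores(
            num_matches=sum(s.num_matches for s in scores),
            num_guessed=sum(s.num_guessed for s in scores),
            num_ref=sum(s.num_ref for s in scores),
        )

combined = MiniScores.aggregate([MiniScores(3, 4, 5), MiniScores(1, 2, 3)])
```

Summing counts yields a micro-average, so passages with more edges weigh proportionally more in the combined score.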

average_f1(mode='labeled')[source]

Calculate the average F1 score across primary and remote edges.

Parameters: mode – LABELED, UNLABELED or WEAK_LABELED
Returns: a single number, the average F1
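A minimal sketch of the computation this method describes: the unweighted mean of the primary-edge and remote-edge F1 for the chosen mode. The nested-dict layout and the numeric values below are invented for illustration; the real method reads these values from its stored evaluator results.

```python
# Mode names mirror the LABELED/UNLABELED/WEAK_LABELED constants;
# string values here are assumptions for this sketch.
LABELED, UNLABELED, WEAK_LABELED = "labeled", "unlabeled", "weak_labeled"

def average_f1(results, mode=LABELED):
    """Return a single number: the mean of primary and remote F1 for `mode`."""
    edge_f1s = results[mode]
    return (edge_f1s["primary"] + edge_f1s["remote"]) / 2

# Hypothetical scores for a labeled evaluation
results = {LABELED: {"primary": 0.76, "remote": 0.42}}
```

Because remote edges are far rarer than primary edges, an unweighted mean like this gives remote-edge performance equal weight in the summary number.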

static field_titles(constructions={'implicit': <ucca.constructions.Construction object>, 'primary': <ucca.constructions.Construction object>, 'remote': <ucca.constructions.Construction object>}, eval_type='labeled', counts=False)[source]
fields(eval_type='labeled', counts=False)[source]
print(eval_type=None, **kwargs)[source]
print_confusion_matrix(*args, eval_type=None, **kwargs)[source]
titles(eval_type='labeled', counts=False)[source]
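The title and value methods above are meant to be used in parallel: field_titles()/titles() yield column names and fields() yields the matching values, so a tab-separated results table can be assembled row by row. A hedged sketch of that pairing, with invented column names and numbers (the real lists come from a Scores instance):

```python
def report(titles, fields):
    """Pair a list of column titles with a list of string values
    into a two-line tab-separated report: header row, then values."""
    assert len(titles) == len(fields), "titles and fields must align"
    return "\t".join(titles) + "\n" + "\t".join(fields)

# Hypothetical columns and scores for illustration
header_and_row = report(
    ["labeled_primary_f1", "labeled_remote_f1"],
    ["0.76", "0.42"],
)
```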