# Custom Metrics
This is a tutorial on how to add a new metric to MMF.
MMF is agnostic to the kind of metrics that can be added to it.
Adding a metric requires two steps: implementing a metric class and listing the new metric in your config YAML.
For example, the ConcatBERT model uses the `binary_f1` metric when evaluating on the Hateful Memes dataset.
The metric class is `BinaryF1`, defined in `mmf/modules/metrics.py`.
The metric key `binary_f1` is added to the list of metrics in the config YAML at `mmf/projects/hateful_memes/configs/concat_bert/defaults.yaml`.
## Metric Class
Add your metric class to `mmf/modules/metrics.py`. It should be a subclass of `BaseMetric`.
Metrics should implement a `calculate` method with the signature `calculate(self, sample_list, model_output, *args, **kwargs)`,
where `sample_list` (`SampleList`) is the current batch and `model_output` is the dict returned by your model for the current `sample_list`.
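
As a minimal sketch, a hypothetical accuracy-style metric could look like the following. The `my_accuracy` key, the `MyAccuracy` class name, and the `score_key` parameter are illustrative assumptions, not part of MMF:

```python
from mmf.common.registry import registry
from mmf.modules.metrics import BaseMetric


@registry.register_metric("my_accuracy")
class MyAccuracy(BaseMetric):
    """Hypothetical metric: fraction of correct predictions in the batch."""

    def __init__(self, score_key="scores"):
        super().__init__("my_accuracy")
        self.score_key = score_key

    def calculate(self, sample_list, model_output, *args, **kwargs):
        # "scores" and "targets" are common MMF keys; check the dicts your
        # own model and dataset actually produce.
        predictions = model_output[self.score_key].argmax(dim=-1)
        targets = sample_list["targets"]
        correct = (predictions == targets).sum().float()
        return correct / max(targets.numel(), 1)
```

Registering the class with `registry.register_metric` makes the key available to your config, as described next.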
## Metrics Config
Add the key of your new metric to the `evaluation.metrics` list in your config. Multiple metrics can be specified using a YAML array.
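
For example, a config that evaluates both the built-in `accuracy` metric and the hypothetical metric sketched above might contain:

```yaml
evaluation:
  metrics:
    - accuracy
    - my_accuracy  # hypothetical metric registered in the sketch above
```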
For metrics that take parameters, your YAML config can specify `params`. You can also specify a custom `key` to be assigned to the metric. For example:
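
Here is a sketch using the hypothetical `my_accuracy` metric, assuming its constructor accepts a `score_key` argument as above:

```yaml
evaluation:
  metrics:
    - type: my_accuracy     # registered metric key
      key: logit_accuracy   # custom name under which results are reported
      params:
        score_key: logits   # forwarded to the metric's __init__
```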
If your model uses early stopping, make sure that the `early_stop.criteria` is also added as an evaluation metric. For example, the VizWiz config:
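
The relevant portion looks roughly like the following; the `dataset/metric` format of the criteria and the exact keys may differ across MMF versions, so verify against the project config in the repository:

```yaml
evaluation:
  metrics:
    - vqa_accuracy

training:
  early_stop:
    criteria: vizwiz/vqa_accuracy
    minimize: false
```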
## Multi-Metric Classes
If a metric class is responsible for calculating multiple metrics, for example due to shared calculations, it can return a dictionary of tensors.
For an example, take a look at the `BinaryF1PrecisionRecall` class in `mmf/modules/metrics.py`.
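
As a hedged sketch of the pattern (the names and computations below are illustrative, not MMF's actual `BinaryF1PrecisionRecall` implementation), a metric that shares one pass over the predictions might look like:

```python
from mmf.common.registry import registry
from mmf.modules.metrics import BaseMetric


@registry.register_metric("my_precision_recall")
class MyPrecisionRecall(BaseMetric):
    """Hypothetical multi-metric: precision and recall from shared counts."""

    def __init__(self):
        super().__init__("my_precision_recall")

    def calculate(self, sample_list, model_output, *args, **kwargs):
        predictions = model_output["scores"].argmax(dim=-1)
        targets = sample_list["targets"]
        # Confusion counts are computed once and shared by both metrics.
        true_positive = ((predictions == 1) & (targets == 1)).sum().float()
        predicted_positive = (predictions == 1).sum().float().clamp(min=1)
        actual_positive = (targets == 1).sum().float().clamp(min=1)
        # Each key in the returned dict is reported as a separate metric.
        return {
            "precision": true_positive / predicted_positive,
            "recall": true_positive / actual_positive,
        }
```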