
Question about the standard_devs in CalibrationAUC #2

@fate1997


In CalibrationAUC, standard_devs is defined as:
standard_devs = [np.abs(set_['error'])/set_['confidence'] for set_ in data['sets_by_confidence']]

I am confused by set_['confidence'] here: the confidence is calculated as 1. / ((alphas - 1) * lambdas) in "predict.py", but this value is not the square root of the variance of the predicted mean, betas / ((alphas - 1) * lambdas).
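
For concreteness, here is a minimal NumPy sketch of what I mean (my own code, not from either repository; the parameter values are made up) contrasting the two quantities, using the array names alphas, betas, and lambdas from predict.py:

import numpy as np

# Hypothetical evidential parameters for three predictions
# (alphas > 1 and lambdas > 0 in the NIG parameterization).
alphas = np.array([1.5, 2.0, 3.0])
betas = np.array([0.4, 0.9, 1.2])
lambdas = np.array([0.8, 1.1, 0.5])

# Quantity used as 'confidence' in predict.py:
confidence = 1.0 / ((alphas - 1.0) * lambdas)

# Standard deviation of the predicted mean under the evidential model,
# sqrt(Var[mu]) = sqrt(beta / ((alpha - 1) * lambda)); this is what a
# z-score of the form |error| / std would normally divide by.
std_of_mean = np.sqrt(betas / ((alphas - 1.0) * lambdas))

print(confidence)
print(std_of_mean)

Note that std_of_mean equals np.sqrt(betas * confidence), so the two quantities are not proportional unless betas is constant.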

By the way, I also noticed that the confidence calculated here differs from the uncertainty defined in Figure 2B of your paper. May I ask why this metric (1. / ((alphas - 1) * lambdas)) is used to evaluate confidence (or uncertainty)?

In your "evidential-deep-learning" repository, I found that the calibration plot is drawn with the standard deviation betas / ((alphas - 1) * lambdas), and that confidence is also measured by this value. I wonder why this changed between the two repositories.
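
For reference, this is a rough sketch of the kind of calibration curve I mean (again my own code under my assumptions, with hypothetical arrays mu and y for predicted means and targets), using sigma = sqrt(betas / ((alphas - 1) * lambdas)) as the predictive standard deviation and Gaussian intervals:

import numpy as np
from scipy import stats

# Hypothetical predicted means, targets, and evidential parameters.
mu = np.array([0.1, -0.3, 0.7, 1.2])
y = np.array([0.0, -0.5, 0.9, 1.0])
alphas = np.array([2.0, 1.8, 2.5, 3.0])
betas = np.array([0.6, 0.4, 1.0, 0.8])
lambdas = np.array([1.0, 0.7, 1.3, 0.9])

# Standard deviation of the predicted mean, as I understand it is
# used for the calibration plot in evidential-deep-learning.
sigma = np.sqrt(betas / ((alphas - 1.0) * lambdas))

# For each expected confidence level, the fraction of targets that
# fall inside the corresponding central Gaussian interval.
expected = np.linspace(0.01, 0.99, 20)
z = stats.norm.ppf(0.5 + expected / 2.0)  # interval half-width in sigmas
observed = np.array([(np.abs(y - mu) <= zi * sigma).mean() for zi in z])
# A well-calibrated model gives observed approximately equal to expected.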
