Neural networks are often overconfident about their predictions, which undermines their reliability and trustworthiness.
In this presentation, I will describe our work on Error-Driven Uncertainty Aware Training (EUAT), which aims to improve the ability of neural classifiers to estimate their uncertainty correctly: a model should be highly uncertain when its predictions are inaccurate and exhibit low uncertainty when they are accurate.
The EUAT approach operates during the model’s training phase by selectively applying one of two loss functions, depending on whether the model predicts each training example correctly or incorrectly. Next, I will discuss the current limitations of this method and outline the strategies I am pursuing in my ongoing research to overcome them, with particular emphasis on improving the efficiency and generalizability of EUAT.
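To make the selective-loss idea concrete, here is a minimal numpy sketch of one plausible instantiation. The specific loss pair is an assumption for illustration, not necessarily the exact EUAT objective: correctly predicted examples receive standard cross-entropy (pushing uncertainty down), while mispredicted examples receive negative predictive entropy (pushing uncertainty up).

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
    return shifted / shifted.sum(axis=1, keepdims=True)

def selective_uncertainty_loss(logits, labels):
    """Illustrative EUAT-style selective loss (hypothetical instantiation).

    Correct predictions  -> per-example cross-entropy (minimize uncertainty).
    Wrong predictions    -> negative predictive entropy (maximize uncertainty).
    """
    eps = 1e-12
    probs = softmax(logits)
    preds = probs.argmax(axis=1)
    correct = preds == labels
    # Per-example cross-entropy of the true class.
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)
    # Per-example predictive entropy over all classes.
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    # Select the loss term per example based on prediction correctness.
    per_example = np.where(correct, ce, -entropy)
    return per_example.mean()

# A confidently wrong prediction incurs a higher loss than a maximally
# uncertain wrong one, so minimizing the loss raises uncertainty on errors.
wrong_label = np.array([1])
confident_wrong = selective_uncertainty_loss(np.array([[5.0, 0.0]]), wrong_label)
uncertain_wrong = selective_uncertainty_loss(np.array([[0.0, 0.0]]), wrong_label)
print(confident_wrong > uncertain_wrong)
```

In a training loop, the same split would be applied per mini-batch with the framework's differentiable ops so gradients flow through whichever branch each example falls into.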