# DeepLUQ: Python Library for Deep Learning Uncertainty Quantification

This library provides methods and metrics for capturing and quantifying uncertainty in deep learning models. It currently includes Monte Carlo Dropout (MC-Dropout) as the UQ method for capturing uncertainty in model predictions. The library integrates various UQ metrics to quantify uncertainty in two common deep learning tasks: regression and classification.
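To illustrate the idea behind MC-Dropout, the sketch below keeps dropout active at inference time and runs several stochastic forward passes; the spread of the predictions approximates epistemic uncertainty (Gal and Ghahramani, 2016). This is a minimal NumPy illustration with hypothetical toy weights, not DeepLUQ's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, b1, W2, b2, p=0.5, T=100):
    """Run T stochastic forward passes with dropout kept ON at
    inference time; return the predictive mean and variance.
    (Hypothetical 2-layer regression MLP, not DeepLUQ's API.)"""
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
        mask = rng.random(h.shape) >= p    # Bernoulli dropout mask
        h = h * mask / (1.0 - p)           # inverted-dropout scaling
        preds.append(h @ W2 + b2)
    preds = np.stack(preds)                # shape (T, n_samples, 1)
    # Mean is the point prediction; variance across passes is the
    # uncertainty estimate for the regression task.
    return preds.mean(axis=0), preds.var(axis=0)

# Toy weights: 3 inputs -> 16 hidden units -> 1 output
W1 = rng.normal(size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)
x = rng.normal(size=(5, 3))

mean, var = mc_dropout_predict(x, W1, b1, W2, b2)
print(mean.shape, var.shape)   # (5, 1) (5, 1)
```

For classification, the same T softmax outputs would instead be summarized with metrics such as predictive entropy over the averaged class probabilities.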

## Case Study Applications

### Assessing the Uncertainty and Robustness of the Laptop Refurbishing Software

GitHub repository: chengjie-lu/sticker-detection-uncertainty-quantification

Paper: Lu, Chengjie, Jiahui Wu, Shaukat Ali, and Mikkel Labori Olsen. "Assessing the Uncertainty and Robustness of the Laptop Refurbishing Software." In 18th IEEE International Conference on Software Testing, Verification and Validation (ICST), 2025. Preprint

### Evaluating Uncertainty and Quality of Visual Language Action-enabled Robots

GitHub repository: pablovalle/VLA_UQ

Paper: Valle, Pablo, Chengjie Lu, Shaukat Ali, and Aitor Arrieta. “Evaluating Uncertainty and Quality of Visual Language Action-enabled Robots.” arXiv preprint arXiv:2507.17049 (2025). Preprint

## References

Gal, Yarin, and Zoubin Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning." International Conference on Machine Learning. PMLR, 2016.

Gal, Yarin. "Uncertainty in deep learning." PhD thesis, University of Cambridge, 2016.

Catak, Ferhat Ozgur, Tao Yue, and Shaukat Ali. “Prediction surface uncertainty quantification in object detection models for autonomous driving.” 2021 IEEE International Conference on Artificial Intelligence Testing (AITest). IEEE, 2021.