Deng-Bao Wang

I'm a final-year Ph.D. student (supervised by Prof. Min-Ling Zhang) at the School of Computer Science and Engineering, Southeast University, and a member of the PALM group.

My research interests mainly include artificial intelligence, machine learning, and data mining. I'm currently working on weakly supervised learning and uncertainty calibration of deep models. I'm also interested in gaining a deeper understanding of deep models through insightful experiments.

Email  /  Address  /  Google Scholar  /  Github   


Selected Publications  († denotes equal contribution)

Calibration Bottleneck: Over-compressed Representations are Less Calibratable
Deng-Bao Wang, Min-Ling Zhang
International Conference on Machine Learning (ICML), 2024   Code

We observe a U-shaped pattern in the calibratability of intermediate features, spanning from the lower to the upper layers.

Distilling Reliable Knowledge for Instance-Dependent Partial Label Learning
Dong-Dong Wu, Deng-Bao Wang, Min-Ling Zhang
AAAI Conference on Artificial Intelligence (AAAI), 2024   Code  Appendix

We introduce a novel distillation method that leverages a rectification process to DIstill Reliable Knowledge for instance-dependent partial label learning (IDPLL).

On the Pitfall of Mixup for Uncertainty Calibration
Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Min-Ling Zhang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023   Code  Appendix

We observe the pitfall of Mixup on calibration and propose a general strategy named Mixup Inference in Training (MIT).

Adaptive Graph Guided Disambiguation for Partial Label Learning
Deng-Bao Wang, Min-Ling Zhang, Li Li
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022   Code  Appendix

The proposed method is shown to be effective in partial label disambiguation and considered as a strong baseline in PLL.

Revisiting Consistency Regularization for Deep Partial Label Learning
Dong-Dong Wu, Deng-Bao Wang, Min-Ling Zhang
International Conference on Machine Learning (ICML), 2022   Code  

The proposed method performs supervised learning on non-candidate labels and employs regularization on candidate labels.

Rethinking Calibration of Deep Neural Networks: Don't Be Afraid of Overconfidence
Deng-Bao Wang, Lei Feng, Min-Ling Zhang
Advances in Neural Information Processing Systems (NeurIPS), 2021   Code  Appendix

We point out that although regularized models are better calibrated out of the box, they are less calibratable by post-hoc calibration methods.

Learning from Complementary Labels via Partial-Output Consistency Regularization
Deng-Bao Wang, Lei Feng, Min-Ling Zhang
International Joint Conference on Artificial Intelligence (IJCAI), 2021   Code  

We make the first attempt to leverage consistency regularization for complementary label learning and achieve a new SOTA.

Learning from Noisy Labels with Complementary Loss Functions
Deng-Bao Wang, Yong Wen, Lujia Pan, Min-Ling Zhang
AAAI Conference on Artificial Intelligence (AAAI), 2021   Code  Appendix

We find that robust and non-robust loss can play complementary roles in a joint objective to better learn from noisy labels.

Adaptive Graph Guided Disambiguation for Partial Label Learning
Deng-Bao Wang, Li Li, Min-Ling Zhang
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2019   Code  

The proposed method is shown to be effective in partial label disambiguation and considered as a strong baseline in PLL.

Honors

DAAD AInet Fellow (2023)
National Scholarship (2022, 2018)
Tencent Rhino-Bird Elite Training Program (2022)
Special Freshman Scholarship for PhD Students (2019)

Academic Services

Conference program committee member for ICLR (2024), ICML (2022, 2023, 2024), NeurIPS (2023), AAAI (2021, 2022, 2024), IJCAI (2022, 2023), KDD (2024).
Journal reviewer for IEEE TPAMI, SCIENCE CHINA Information Sciences, ACM TIST, ACM TKDD, IEEE TMM, JCST, Neurocomputing.