Hanxun Huang


I am a Postdoctoral Research Fellow at the University of Melbourne, where I completed both my Ph.D. and Master’s degrees. Prior to that, I earned my Bachelor’s degree from Purdue University. My research focuses on Trustworthy AI and AI Safety, with an emphasis on developing secure and robust models for reliable, real-world AI applications.

🔭 My research mainly focuses on:

  • Trustworthy ML
  • AI Safety
  • Generative AI


News

Jan 27, 2026 Our work on Universal and Transferable Jailbreak on VLMs is accepted by ICLR’26. Congrats to all the authors!
Sep 21, 2025 Our work on Benchmark for Backdoor Attacks and Defenses on LLMs is accepted by NeurIPS’25.
Sep 16, 2025 Our survey paper Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety has been published in Foundations and Trends® in Privacy and Security.
Aug 29, 2025 Our work on multi-trigger backdoor attacks has been accepted to TDSC.
Aug 03, 2025 Our work on Text-to-Unlearnable Example (T2UE) is accepted by ACM Multimedia 2025.
May 30, 2025 Our BackdoorLLM Benchmark was awarded first prize in the SafeBench Competition, organized by the Center for AI Safety. Congratulations to all the authors on this outstanding achievement!
May 15, 2025 Our work on Super Transferable Adversarial Attack on CLIP is accepted by ICML’25.
Feb 27, 2025 Our work on million-scale adversarial robustness evaluation is accepted by CVPR’25.

Professional Activities

  • Program Committee Member
    • ICML (2021-2026), NeurIPS (2021-2025), ICLR (2022-2026), CVPR (2024-2026), ACL (2025), ACM MM (2025), DIG-BUG at ICML (2025), AJCAI (2024), AROW at ECCV (2022).
  • Journal Reviewer
    • TPAMI, TMLR, TNNLS, TIP, TCSVT, TIFS, TDSC, TSMC, TIST, KAIS, JAIR, TOPS, Neural Networks.