Haw-Shiuan Chang

張浩軒


I am a postdoc currently on the faculty job market. I am a postdoctoral research associate at the UMass Amherst Center for Intelligent Information Retrieval (CIIR), advised by Professor Hamed Zamani. I am broadly interested in developing intelligent systems that can assist humans in their creative processes. Currently, I focus on fundamentally narrowing the gap between large language models (LLMs) and human intelligence without relying on scaling up model size or training data. Toward that end, I am particularly interested in
(1) enhancing the factuality, diversity, and novelty of LLM generation by encouraging more causal inference and less guessing,
(2) identifying the limitations of current LLM architectures, decoding algorithms, evaluation, and training data, and
(3) overcoming these limitations with techniques inspired by human behavior, machine learning (ML), and information retrieval (IR).

Previously, I was a postdoctoral scientist at Amazon AGI Foundations, where I worked with Professor Violet Peng, Professor Mohit Bansal, and Dr. Tagyoung Chung. I received my PhD from the University of Massachusetts Amherst, advised by Professor Andrew McCallum. Prior to my PhD, I worked with Professor Yu-Chiang Frank Wang and Dr. Kuan-Ta Chen at Academia Sinica, Taiwan. I received my BS from the EECS Undergraduate Honors Program at National Yang Ming Chiao Tung University (NYCU), Taiwan.

Selected Publications

  1. ArXiv
    REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy
    Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, and Tagyoung Chung
    arXiv preprint arXiv:2406.07735, 2024
  2. WSDM
    To Copy, or not to Copy; That is a Critical Issue of the Output Softmax Layer in Neural Sequential Recommenders
    Haw-Shiuan Chang, Nikhil Agarwal, and Andrew McCallum
In Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM), 2024
  3. ACL Findings
    Revisiting the Architectures like Pointer Networks to Efficiently Improve the Next Word Distribution, Summarization Factuality, and Beyond
    Haw-Shiuan Chang*, Zonghai Yao*, Alolika Gon, Hong Yu, and Andrew McCallum
In Findings of the Association for Computational Linguistics: ACL 2023
  4. ACL
    Multi-CLS BERT: An Efficient Alternative to Traditional Ensembling
    Haw-Shiuan Chang*, Ruei-Yao Sun*, Kathryn Ricci*, and Andrew McCallum
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2023
  5. ACL
    Softmax Bottleneck Makes Language Models Unable to Represent Multi-mode Word Distributions
Haw-Shiuan Chang and Andrew McCallum
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2022
  6. ML
    Using Error Decay Prediction to Overcome Practical Issues of Deep Active Learning for Named Entity Recognition
    Haw-Shiuan Chang, Shankar Vembu, Sunil Mohan, Rheeya Uppaal, and Andrew McCallum
    Machine Learning, 2020
  7. EDM Short
    Modeling Exercise Relationships in E-Learning: A Unified Approach
    Haw-Shiuan Chang, Hwai-Jung Hsu, and Kuan-Ta Chen
In Proceedings of the International Conference on Educational Data Mining (EDM), 2015