Haw-Shiuan Chang

張浩軒


I am a postdoc currently on the faculty job market. I am a postdoctoral research associate at the UMass Amherst Center for Intelligent Information Retrieval (CIIR), advised by Professor Hamed Zamani. I am broadly interested in developing intelligent systems that can assist humans in their creative process. Currently, I focus on fundamentally narrowing the gap between large language models (LLMs) and human intelligence without relying on scaling, such as increasing model size or training data. Toward that end, I am particularly interested in
(1) enhancing the factuality, diversity, and novelty of LLM generation by encouraging more causal inference and less guessing,
(2) identifying the limitations of current LLM architectures, decoding algorithms, evaluation, and training data, and
(3) overcoming these limitations with techniques inspired by human behavior, machine learning (ML), and information retrieval (IR).

Previously, I was a postdoctoral scientist at Amazon AGI Foundations, where I worked with Professor Violet Peng, Professor Mohit Bansal, and Dr. Tagyoung Chung. I received my PhD from the University of Massachusetts Amherst, advised by Professor Andrew McCallum. Prior to my PhD, I worked with Professor Yu-Chiang Frank Wang and Dr. Kuan-Ta Chen at Academia Sinica, Taiwan. I received my BS from the EECS Undergraduate Honors Program at National Yang Ming Chiao Tung University (NYCU), Taiwan.

Selected Publications

  1. EMNLP Oral
    Explaining and Improving Contrastive Decoding by Extrapolating the Probabilities of a Huge and Hypothetical LM
    Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, and Tagyoung Chung
    In Conference on Empirical Methods in Natural Language Processing (EMNLP) (Oral) (🏆 Best Paper Nomination from a Reviewer), 2024
  2. EMNLP WS
    CS4: Measuring the Creativity of Large Language Models Automatically by Controlling the Number of Story-Writing Constraints
    Anirudh Atmakuru*, Jatin Nainani*, Rohith Siddhartha Reddy Bheemreddy*, Anirudh Lakkaraju*, Zonghai Yao, Hamed Zamani, and Haw-Shiuan Chang*
    In 6th Workshop on Narrative Understanding (WNU), 2024
  3. ArXiv
    REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy
    Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, and Tagyoung Chung
    arXiv preprint arXiv:2406.07735, 2024
  4. WSDM
    To Copy, or not to Copy; That is a Critical Issue of the Output Softmax Layer in Neural Sequential Recommenders
    Haw-Shiuan Chang, Nikhil Agarwal, and Andrew McCallum
    In Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM), 2024
  5. ACL Findings
    Revisiting the Architectures like Pointer Networks to Efficiently Improve the Next Word Distribution, Summarization Factuality, and Beyond
    Haw-Shiuan Chang*, Zonghai Yao*, Alolika Gon, Hong Yu, and Andrew McCallum
    In Findings of the Association for Computational Linguistics (Findings of ACL), 2023
  6. ACL
    Multi-CLS BERT: An Efficient Alternative to Traditional Ensembling
    Haw-Shiuan Chang*, Ruei-Yao Sun*, Kathryn Ricci*, and Andrew McCallum
    In Annual Meeting of the Association for Computational Linguistics (ACL), 2023
  7. ACL
    Softmax Bottleneck Makes Language Models Unable to Represent Multi-mode Word Distributions
    Haw-Shiuan Chang and Andrew McCallum
    In Annual Meeting of the Association for Computational Linguistics (ACL), 2022
  8. ML
    Using Error Decay Prediction to Overcome Practical Issues of Deep Active Learning for Named Entity Recognition
    Haw-Shiuan Chang, Shankar Vembu, Sunil Mohan, Rheeya Uppaal, and Andrew McCallum
    Machine Learning, 2020