Hongyin Luo
Research Scientist - Computational
Hongyin completed his Ph.D. at MIT EECS in May 2022, working on self-training for natural language processing. He is now a postdoctoral associate in the Spoken Language Systems (SLS) group at MIT CSAIL. His research focuses on natural language processing; specifically, he is interested in semantic representation models that help computers better understand and generate natural language. He has worked on interpretable word representation learning, deep neural networks, coreference resolution, and other NLP applications.
Related Links
Last updated Jul 16 '24
Publications
Luo, Hongyin and Li, Shang-Wen and Yu, Seunghak and Glass, James
Cooperative Learning of Zero-Shot Machine Reading Comprehension
arXiv preprint arXiv:2103.07449, 2021
Luo, Hongyin and Jiang, Lan and Belinkov, Yonatan and Glass, James
Improving Neural Language Models by Segmenting, Attending, and Predicting the Future
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL) 2019
Luo, Hongyin and Glass, James
Learning Word Representations with Cross-Sentence Dependency for End-to-End Coreference Resolution
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Fu, Jie and Luo, Hongyin and Feng, Jiashi and Low, Kian Hsiang and Chua, Tat-Seng
DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks
arXiv preprint arXiv:1601.00917, 2016
Luo, Hongyin and Liu, Zhiyuan and Luan, Huanbo and Sun, Maosong
Online Learning of Interpretable Word Embeddings
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
Luo, Hongyin and Li, Shang-Wen and Glass, James
Prototypical Q Networks for Automatic Conversational Diagnosis and Few-Shot New Disease Adaption
Proc. Interspeech 2020, 2020
Luo, Hongyin and Glass, James and Lalwani, Garima and Zhang, Yi and Li, Shang-Wen
Joint Retrieval-Extraction Training for Evidence-Aware Dialog Response Selection
Proc. Interspeech 2021, 2021