Yichen Jiang (姜翌辰)

I'm a 3rd-year PhD student in the Department of Computer Science at the University of North Carolina at Chapel Hill, advised by Prof. Mohit Bansal as part of the UNC-NLP Research Group. My research focuses on analyzing and improving neural networks' ability to understand the compositional structures underlying natural language, as well as the interpretability and adversarial robustness of multi-hop reasoning. Previously, I completed my BS and MS at UNC Chapel Hill and interned at Microsoft Research and Facebook AI. My work is supported by an Apple Scholars in AI/ML PhD fellowship.

Email  /  CV  /  Github  /  Google Scholar  /  LinkedIn

Research

My research focuses on analyzing and improving neural networks' ability to understand the compositional structures underlying natural language sentences. In past work, I showed how existing models lack compositionality and exploit reasoning shortcuts. I then designed interpretable, modular models that answer complex multi-hop questions more robustly, and collected HoVer, a multi-hop fact verification dataset, to motivate future work. I have also incorporated Tensor-Product Representations into a Transformer for better abstractive summarization. My ultimate goal is to build AI systems that can compositionally recombine structures and contents to understand natural language and the world.

Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality
Yichen Jiang*, Xiang Zhou*, and Mohit Bansal
Proceedings of EMNLP 2022
Inducing Transformer's Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks
Yichen Jiang and Mohit Bansal
Proceedings of EMNLP 2021
arxiv / code / bibtex
Learning and Analyzing Generation Order for Undirected Sequence Models
Yichen Jiang and Mohit Bansal
Findings of EMNLP 2021
arxiv / code / bibtex
Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages
Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, R. Thomas McCoy, Yichen Jiang, Coleman Haley, Roland Fernandez, Hamid Palangi, Jianfeng Gao, Paul Smolensky
Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021)
paper / bibtex
Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization
Yichen Jiang, Asli Celikyilmaz, Paul Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Mohit Bansal, Jianfeng Gao
Proceedings of NAACL-HLT 2021
arxiv / code / bibtex
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
Yichen Jiang*, Shikha Bordia*, Zheng Zhong, Charles Dognin, Maneesh Singh, Mohit Bansal
Findings of EMNLP 2020
arxiv / data+code / bibtex
Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning
Yichen Jiang, Mohit Bansal
Proceedings of EMNLP 2019, Hong Kong, China
arxiv / code / bibtex
Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA
Yichen Jiang, Mohit Bansal
Proceedings of ACL 2019, Florence, Italy
arxiv / code / slides / bibtex
Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension
Yichen Jiang*, Nitish Joshi*, Yen-chun Chen, and Mohit Bansal
Proceedings of ACL 2019, Florence, Italy
arxiv / code / slides / bibtex
Closed-book Training to Improve Summarization Encoder Memory
Yichen Jiang, Mohit Bansal
Proceedings of EMNLP 2018, Brussels, Belgium
arxiv / poster / bibtex
Work/Intern Experience
Facebook AI, May - August 2021. Supervised by Dr. Barlas Oguz, Dr. Scott Yih, and Dr. Yashar Mehdad.
Microsoft Research, Redmond, June - August 2020. Supervised by Dr. Asli Celikyilmaz and Prof. Paul Smolensky.
Verisk Analytics, May - August 2019. Supervised by Dr. Maneesh Singh.

Thanks to the original author for providing the template source code for this website.