DialogSum Challenge: Results of the Dialogue Summarization Shared Task
Naihao Deng, Yulong Chen, Yang Liu, Yue Zhang
GenChal - Thursday 07/21 10:30 EST
Abstract:
We report the results of the DialogSum Challenge, the shared task on summarizing real-life scenario dialogues at INLG 2022. Four teams participate in this shared task, and three submit system reports, exploring different methods to improve the performance of dialogue summarization. Although these methods achieve substantial improvements over the baseline models on automatic evaluation metrics such as ROUGE scores, human evaluation from multiple aspects reveals a salient gap between model-generated outputs and human-annotated summaries. These findings demonstrate the difficulty of dialogue summarization and suggest that more fine-grained evaluation metrics are needed.
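As context for the automatic evaluation mentioned above, here is a minimal sketch of how ROUGE scores are typically computed for dialogue summarization, using the open-source `rouge-score` package. The example dialogue, reference, and candidate strings are illustrative, and this is not necessarily the exact toolkit or configuration used by the shared task.

```python
# Minimal ROUGE evaluation sketch (pip install rouge-score).
# Illustrative data only; not the shared task's actual evaluation pipeline.
from rouge_score import rouge_scorer

# A human-annotated reference summary and a model-generated candidate (both hypothetical).
reference = "Person1 invites Person2 to dinner on Friday, and Person2 accepts."
candidate = "Person1 asks Person2 to have dinner together and Person2 agrees."

# Score ROUGE-1, ROUGE-2, and ROUGE-L with stemming, a common setting.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for metric, result in scores.items():
    # Each result carries precision, recall, and F-measure.
    print(f"{metric}: P={result.precision:.3f} R={result.recall:.3f} F1={result.fmeasure:.3f}")
```

As the abstract notes, high scores from such n-gram overlap metrics can coexist with summaries that human judges find clearly inferior to human-annotated ones, which motivates the call for more fine-grained evaluation.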