Prompt and instruction-based tuning for response generation in conversational question answering
Journal article, Peer reviewed
Published version
Date
2023
Original version
Lecture Notes in Computer Science (LNCS). 2023, 13913, 156-169. DOI: 10.1007/978-3-031-35320-8_11
Abstract
In recent years, prompt-based tuning and instruction-based tuning have emerged as popular approaches in natural language processing. In this paper, we investigate the application of prompt- and instruction-based tuning to response generation in conversational question answering. We approach this task from both extractive and generative angles, adopting prompt-based tuning for the extractive angle and instruction-based tuning for the generative angle. Additionally, we use multi-task learning to integrate the two angles. To evaluate the proposed approaches, we conduct experiments on the GPT-2 model. The results show that the approaches improve the F1 score over the baseline. We share our code and data for reproducibility (https://github.com/yujie-xing/Multi-Turn_QA_Prompt).
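To make the two input styles described in the abstract concrete, the sketch below illustrates one plausible way the extractive (prompt-based) and generative (instruction-based) inputs could be formatted before being passed to a GPT-2 style model. The function names, field labels, and templates are illustrative assumptions, not the paper's actual implementation.

```python
def build_prompt(history, question, passage):
    """Prompt-based (extractive) input: a hypothetical template that
    concatenates the passage and dialogue history, ending with an
    answer cue for the model to complete."""
    turns = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    return f"Passage: {passage}\n{turns}\nQ: {question}\nA:"


def build_instruction(history, question, passage):
    """Instruction-based (generative) input: a hypothetical template that
    prefixes an explicit natural-language task description."""
    turns = " ".join(f"{q} {a}" for q, a in history)
    return (
        "Answer the question based on the passage and the conversation so far.\n"
        f"Passage: {passage}\n"
        f"Conversation: {turns}\n"
        f"Question: {question}\nAnswer:"
    )
```

In a multi-task setup along these lines, both input formats could be tokenized and fed to the same GPT-2 model, with the losses from the extractive and generative objectives combined during training.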