This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
QIN Libo, LI Zhouyang, LOU Jieming, YU Qiying, CHE Wanxiang. Review of research progress on natural language generation in task-based dialogue systems [J]. Journal of Chinese Information Processing, 2022.
ZHANG Xiaoyu, LI Dongdong, REN Pengjie, CHEN Zhumin, MA Jun, REN Zhaochun. Knowledge-aware medical dialogue generation based on memory network [J]. Journal of Computer Research and Development, 2022.
Wen T H, Gasic M, Kim D, et al. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking[C]//Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue. 2015.
Wen T H, Gasic M, Mrksic N, et al. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems [J]. arXiv preprint arXiv:1508.01745, 2015.
Dušek O, Jurčíček F. Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2016.
Dušek O, Jurčíček F. A context-aware natural language generator for dialogue systems[C]//Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. 2016.
Tran V K, Nguyen L M. Neural-based natural language generation in dialogue using RNN encoder-decoder with semantic aggregation[C]//Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue. 2017.
Wei Z, Liu Q, Peng B, et al. Task-oriented dialogue system for automatic diagnosis[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2018: 201–207.
Su S Y, Huang C W, Chen Y N. Dual supervised learning for natural language understanding and generation[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019.
Peng B, Zhu C, Li C, et al. Few-shot natural language generation for task-oriented dialog[C]//Findings of the Association for Computational Linguistics: EMNLP 2020. 2020.
Li Y, Yao K. Interpretable NLG for task-oriented dialogue systems with heterogeneous rendering machines[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(15): 13306–13314.
Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks [J]. Advances in Neural Information Processing Systems, 2014, 27.
Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training [R]. OpenAI, 2018.
Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners [J]. OpenAI Blog, 2019, 1(8).