Introduction
In recent years, the field of natural language processing has benefited considerably from larger-scale models, better training strategies, and greater availability of data, as exemplified by GPT-3 and ChatGPT. These pre-trained language models have been shown to effectively capture linguistic patterns in text and generate high-quality context-aware representations. However, because they are trained only on input-output pairs of text, these models struggle to capture external world knowledge such as named entities and their relations, common sense, and domain-specific content. Knowledge is therefore important for language representation and should be incorporated into the training and inference of language models. It is also an indispensable component for reaching higher levels of intelligence, which cannot be attained through statistical learning over input text patterns alone.
Important Dates
Paper Submission Deadline |
Notification of Acceptance |
Camera-ready Deadline |
Workshop Date | February 13, 2023