Introduction

In recent years, the field of natural language processing has benefited considerably from larger-scale models, better training strategies, and greater availability of data, exemplified by models such as GPT-3. These pre-trained language models have been shown to effectively characterize linguistic patterns in text and generate high-quality context-aware representations. However, they are trained in a way that leverages only input-output pairs. As a result, these models struggle to capture external world knowledge, such as named entities and their relations, common sense, and domain-specific content. Knowledge is therefore important for language representation and should be incorporated into the training and inference of language models. Knowledge is also an indispensable component for enabling higher levels of intelligence, which cannot be attained from statistical learning on input text patterns alone.

Important Dates

All deadlines are 11:59pm UTC -12h ("anywhere on Earth")

Paper Submission Deadline

Nov 4, 2022

Notification of Acceptance

Nov 18, 2022

Camera-Ready Deadline

Dec 2, 2022

Workshop Date

TBD: Feb. 13 or Feb. 14, 2023