About this article
Published Online: Mar 30, 2018
Page range: 139 - 151
Received: Jul 01, 2017
Accepted: Oct 23, 2017
DOI: https://doi.org/10.2478/cait-2018-0012
© 2018 Alexander Popov, published by De Gruyter Open
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
The following article presents an overview of the use of artificial neural networks for the task of Word Sense Disambiguation (WSD). More specifically, it surveys recent advances in neural language models that have produced methods for the effective distributed representation of linguistic units. Such representations – word embeddings, context embeddings, sense embeddings – can be applied effectively to WSD, as they encode rich semantic information, especially in conjunction with recurrent neural networks, which can capture long-distance relations encoded in word order, syntax, and information structure.
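To make the idea of embedding-based WSD concrete, the following is a minimal sketch (not the article's own method): an ambiguous word is disambiguated by comparing a context embedding against candidate sense embeddings via cosine similarity. All vectors, the sense inventory, and the function names here are illustrative assumptions; in practice the vectors would come from a trained neural language model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical sense embeddings for the ambiguous word "bank".
sense_embeddings = {
    "bank%financial": [0.9, 0.1, 0.0],
    "bank%river":     [0.1, 0.8, 0.3],
}

# A context embedding, e.g. produced by a recurrent network reading the
# surrounding sentence (here just a hand-picked vector leaning "financial").
context_embedding = [0.8, 0.2, 0.1]

def disambiguate(context_vec, senses):
    """Return the sense whose embedding is closest to the context vector."""
    return max(senses, key=lambda s: cosine(context_vec, senses[s]))

print(disambiguate(context_embedding, sense_embeddings))
```

In a real system the context vector would be the hidden state of a recurrent network over the sentence, which is what lets the model exploit word order and long-distance syntactic relations rather than a simple bag of words.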