One representation per word - does it make sense for composition?

Kober, Thomas, Weeds, Julie, Wilkie, John, Reffin, Jeremy and Weir, David (2017) One representation per word - does it make sense for composition? Published in: Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications, Valencia, Spain, 3-7 April 2017. Association for Computational Linguistics, pp. 79-90.


Abstract

In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf single-vector and multi-sense vector models on a benchmark phrase similarity task and a novel task for word-sense discrimination. We find that single-sense vector models perform as well as or better than multi-sense vector models, despite arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition are able to recover sense-specific information from a single-sense vector model remarkably well.
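The composition function highlighted in the abstract, pointwise addition of word vectors, can be illustrated with a minimal sketch. The toy vectors and word choices below are hypothetical placeholders, not data from the paper; in the study the elementary representations come from off-the-shelf single-vector and multi-sense models, and composed phrase vectors are compared on a phrase similarity benchmark.

```python
import numpy as np

# Hypothetical toy vectors standing in for an off-the-shelf
# single-sense vector model (e.g. one vector per word type).
vectors = {
    "bright":  np.array([0.9, 0.1, 0.3]),
    "student": np.array([0.2, 0.8, 0.4]),
    "light":   np.array([0.7, 0.2, 0.9]),
}

def compose_add(words, vecs):
    """Compose a phrase vector by pointwise addition of its word vectors."""
    return np.sum([vecs[w] for w in words], axis=0)

def cosine(u, v):
    """Cosine similarity between two composed phrase vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Compare two composed phrases; the context word shifts the phrase
# vector toward the relevant sense of the shared word "bright".
phrase_a = compose_add(["bright", "student"], vectors)
phrase_b = compose_add(["bright", "light"], vectors)
print(cosine(phrase_a, phrase_b))
```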

Item Type: Conference Proceedings
Schools and Departments: School of Engineering and Informatics > Informatics
Research Centres and Groups: Data Science Research Group
Subjects: Q Science > Q Science (General)
Q Science > QA Mathematics > QA0075 Electronic computers. Computer science
Depositing User: Thomas Kober
Date Deposited: 19 Apr 2017 11:51
Last Modified: 19 Apr 2017 11:51
URI: http://sro.sussex.ac.uk/id/eprint/67435
