University of Sussex

File(s) under permanent embargo

One representation per word - does it make sense for composition?

Conference contribution
posted on 2023-06-09, 05:50 authored by Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir
In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf single-vector and multi-sense vector models on a benchmark phrase similarity task and a novel task for word-sense discrimination. We find that single-sense vector models perform as well as or better than multi-sense vector models despite arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition are able to recover sense-specific information from a single-sense vector model remarkably well.
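
As an informal illustration of the idea described in the abstract, the short Python sketch below composes single-sense word vectors by pointwise addition and compares the resulting phrase vectors with cosine similarity. The vocabulary, the random placeholder vectors and the function names are assumptions made purely for this sketch; they are not the models or evaluation code used in the paper.

import numpy as np

def compose(vectors):
    # Pointwise addition, the simple composition function mentioned above.
    return np.sum(vectors, axis=0)

def cosine(u, v):
    # Cosine similarity between two composed (phrase) vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder single-sense vectors; in practice these would come from a
# pre-trained distributional model rather than a random generator.
rng = np.random.default_rng(0)
vocab = ["bank", "river", "money", "deposit", "shore"]
vec = {w: rng.normal(size=50) for w in vocab}

# Composing the ambiguous word "bank" with different context words gives
# distinct phrase vectors; with real embeddings, comparing such phrase
# vectors by cosine similarity is one way to discriminate between senses.
financial = compose([vec["bank"], vec["money"], vec["deposit"]])
geographic = compose([vec["bank"], vec["river"], vec["shore"]])
print(cosine(financial, geographic))

With vectors from a genuine single-sense model, the two composed phrases would be expected to separate the financial and riverside senses of "bank"; measuring how well addition recovers this sense-specific information is the effect the paper evaluates.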

History

Publication status

  • Published

File Version

  • Published version

Journal

Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications [Valencia, Spain, 3rd-7th April 2017]

Publisher

Association for Computational Linguistics

Page range

79-90

Event type

workshop

Department affiliated with

  • Informatics Publications

Research groups affiliated with

  • Data Science Research Group Publications

Full text available

  • No

Peer reviewed?

  • Yes

Legacy Posted Date

2017-04-19

First Compliant Deposit (FCD) Date

2017-04-19
