Uncertainty in Neural Network Word Embedding: Exploration of Threshold for Similarity

Authors: 
Navid Rekabsaz
Mihai Lupu
Allan Hanbury
Type: 
Talk without proceedings
Proceedings: 
Publisher: 
Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval, Pisa
Pages: 
ISBN: 
Year: 
2016
Abstract: 
Word embedding, especially with its recent developments, promises a quantification of the similarity between terms. However, it is not clear to what extent this similarity value can be genuinely meaningful and useful for subsequent tasks. We explore how far the similarity score obtained from the models is really indicative of term relatedness. We first observe and quantify the uncertainty factor of the word embedding models with respect to the similarity value. Based on this factor, we introduce a general threshold on various dimensions which effectively filters the highly related terms. Our evaluation on four information retrieval collections supports the effectiveness of our approach: the results obtained with the introduced threshold are significantly better than the baseline while being equal to or statistically indistinguishable from the optimal results.
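As an illustration of the thresholding idea described in the abstract, the following is a minimal sketch in plain NumPy: it keeps only those vocabulary terms whose cosine similarity to a query term's vector meets a fixed cut-off. The function name, the example threshold value of 0.7, and the use of raw vector arrays are assumptions for illustration only; the paper derives its threshold from the measured uncertainty of the embedding models, which is not reproduced here.

import numpy as np

def filter_related_terms(query_vec, vocab_vecs, vocab_words, threshold=0.7):
    # Normalize vectors so that dot products equal cosine similarities.
    q = query_vec / np.linalg.norm(query_vec)
    V = vocab_vecs / np.linalg.norm(vocab_vecs, axis=1, keepdims=True)
    sims = V @ q
    # Keep only terms whose similarity to the query meets the threshold,
    # sorted from most to least similar. NOTE: 0.7 is a placeholder; the
    # paper's threshold is derived from the models' uncertainty.
    return sorted(
        ((w, float(s)) for w, s in zip(vocab_words, sims) if s >= threshold),
        key=lambda pair: -pair[1],
    )

# Example usage, with random vectors standing in for pre-trained embeddings:
# vocab_words = ["car", "vehicle", "banana"]
# vocab_vecs = np.random.rand(3, 300)
# filter_related_terms(vocab_vecs[0], vocab_vecs, vocab_words)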
TU Focus: 
Computational Science and Engineering
Reference: 

N. Rekabsaz, M. Lupu, A. Hanbury:
"Uncertainty in Neural Network Word Embedding Exploration of Threshold for Similarity";
Vortrag: Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval, Pisa; 21.07.2016.

Additional Information

Last changed: 
13.01.2017 11:02:15
TU Id: 
257516
Accepted: 
Accepted
Invited: 
Department Focus: 
Business Informatics
Abstract German: 
Author List: 
N. Rekabsaz, M. Lupu, A. Hanbury