We present Query2Prod2Vec, a model that grounds lexical representations for product search in product embeddings: in our model, meaning is a mapping between words and a latent space of products in a digital shop. We leverage shopping sessions to learn the underlying space and use merchandising annotations to build lexical analogies for evaluation: our experiments show that our model is more accurate than known techniques from the NLP and IR literature. Finally, we stress the importance of data efficiency for product search outside of retail giants, and highlight how Query2Prod2Vec fits with practical constraints faced by most practitioners.
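The grounding idea above can be sketched in miniature: learn prod2vec-style product embeddings from co-viewed products in shopping sessions (skip-gram with negative sampling), then ground a query as the average embedding of the products users engaged with after issuing it. This is an illustrative toy, not the paper's exact trainer; the session data, click mapping, and hyperparameters are all hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical toy shopping sessions (sequences of product IDs), standing
# in for the behavioural logs the model learns from.
sessions = [
    ["p1", "p2", "p3"],
    ["p2", "p3", "p4"],
    ["p1", "p3", "p4"],
]

DIM = 8
products = sorted({p for s in sessions for p in s})
emb = {p: [random.uniform(-0.5, 0.5) for _ in range(DIM)] for p in products}
ctx = {p: [0.0] * DIM for p in products}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Skip-gram with negative sampling over co-occurring products in a session.
LR, WINDOW, NEG, EPOCHS = 0.05, 2, 3, 200
for _ in range(EPOCHS):
    for sess in sessions:
        for i, target in enumerate(sess):
            for j in range(max(0, i - WINDOW), min(len(sess), i + WINDOW + 1)):
                if i == j:
                    continue
                # One positive (co-viewed) pair plus NEG random negatives.
                pairs = [(sess[j], 1.0)] + [
                    (random.choice(products), 0.0) for _ in range(NEG)
                ]
                for c, label in pairs:
                    g = (sigmoid(dot(emb[target], ctx[c])) - label) * LR
                    for d in range(DIM):
                        e, cv = emb[target][d], ctx[c][d]
                        emb[target][d] -= g * cv
                        ctx[c][d] -= g * e

# Grounding a query: its representation is the average of the embeddings of
# products clicked after the query (the click mapping here is made up).
query_clicks = {"sneakers": ["p3", "p4"]}

def query_vector(q):
    vecs = [emb[p] for p in query_clicks[q]]
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(DIM)]

qv = query_vector("sneakers")
```

The key design point is data efficiency: nothing here needs more than raw session logs and query-click associations, which even small shops have.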