
User Reviews and Language: How Language Influences Ratings

Published by: Scott A. Hale
Publication date: 2016
Research field: Informatics Engineering
Paper language: English
Author: Scott A. Hale





The number of user reviews of tourist attractions, restaurants, mobile apps, etc. is increasing for all languages; yet, research is lacking on how reviews in multiple languages should be aggregated and displayed. Speakers of different languages may have consistently different experiences, e.g., different information available in different languages at tourist attractions or different user experiences with software due to internationalization/localization choices. This paper assesses the similarity in the ratings given by speakers of different languages to London tourist attractions on TripAdvisor. The correlations between different languages are generally high, but some language pairs are more correlated than others. The results question the common practice of computing average ratings from reviews in many languages.
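To make the aggregation question concrete, the following is a minimal sketch (not taken from the paper) of the kind of analysis the abstract describes: computing per-language mean ratings for each attraction, the pairwise correlations between languages, and the naive pooled average the paper calls into question. The column names and sample data are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical review data: one row per review (placeholder values).
reviews = pd.DataFrame({
    "attraction": ["Tower of London", "Tower of London",
                   "British Museum", "British Museum"],
    "language":   ["en", "fr", "en", "fr"],
    "rating":     [5, 4, 4, 5],
})

# Mean rating per attraction for each review language.
mean_ratings = (
    reviews.groupby(["attraction", "language"])["rating"]
    .mean()
    .unstack("language")  # rows: attractions, columns: languages
)

# Pairwise Pearson correlations between languages across attractions.
language_correlations = mean_ratings.corr(method="pearson")
print(language_correlations)

# A pooled average over all languages -- the common practice the paper
# questions, since speakers of different languages may rate systematically
# differently.
pooled_average = reviews.groupby("attraction")["rating"].mean()
print(pooled_average)
```

With real TripAdvisor-scale data, inspecting the correlation matrix before pooling would show which language pairs diverge enough that a single averaged score hides meaningful differences.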


Read also

Scott A. Hale, Irene Eleta (2017)
The number and quality of user reviews greatly affect consumer purchasing decisions. While reviews in all languages are increasing, it is still often the case (especially for non-English speakers) that there are only a few reviews in a person's first language. Using an online experiment, we examine the value that potential purchasers receive from interfaces showing additional reviews in a second language. The results paint a complicated picture with both positive and negative reactions to the inclusion of foreign-language reviews. Roughly 26-28% of subjects clicked to see translations of the foreign-language content when given the opportunity, and those who did so were more likely to select the product with foreign-language reviews than those who did not.
In the attention economy, video apps employ design mechanisms like autoplay that exploit psychological vulnerabilities to maximize watch time. Consequently, many people feel a lack of agency over their app use, which is linked to negative life effects such as loss of sleep. Prior design research has innovated external mechanisms that police multiple apps, such as lockout timers. In this work, we shift the focus to how the internal mechanisms of an app can support user agency, taking the popular YouTube mobile app as a test case. From a survey of 120 U.S. users, we find that autoplay and recommendations primarily undermine sense of agency, while search and playlists support it. From 13 co-design sessions, we find that when users have a specific intention for how they want to use YouTube they prefer interfaces that support greater agency. We discuss implications for how designers can help users reclaim a sense of agency over their media use.
Programming language design requires making many usability-related design decisions. However, existing HCI methods can be impractical to apply to programming languages: they have high iteration costs, programmers require significant learning time, and user performance has high variance. To address these problems, we adapted both formative and summative HCI methods to make them more suitable for programming language design. We integrated these methods into a new process, PLIERS, for designing programming languages in a user-centered way. We evaluated PLIERS by using it to design two new programming languages. Glacier extends Java to enable programmers to express immutability properties effectively and easily. Obsidian is a language for blockchains that includes verification of critical safety properties. Summative usability studies showed that programmers were able to program effectively in both languages after short training periods.
Limited linguistic coverage for Intelligent Personal Assistants (IPAs) means that many interact in a non-native language. Yet we know little about how IPAs currently support or hinder these users. Through native (L1) and non-native (L2) English speakers interacting with Google Assistant on a smartphone and smart speaker, we aim to understand this more deeply. Interviews revealed that L2 speakers prioritised utterance planning around perceived linguistic limitations, as opposed to L1 speakers prioritising succinctness because of system limitations. L2 speakers see IPAs as insensitive to linguistic needs, resulting in failed interaction. L2 speakers clearly preferred using smartphones, as visual feedback supported diagnoses of communication breakdowns whilst allowing time to process query results. Conversely, L1 speakers preferred smart speakers, with audio feedback being seen as sufficient. We discuss the need to tailor the IPA experience for L2 users, emphasising visual feedback whilst reducing the burden of language production.
Shouldn't language and vision features be treated equally in vision-language (VL) tasks? Many VL approaches treat the language component as an afterthought, using simple language models that are either built upon fixed word embeddings trained on text-only data or are learned from scratch. We believe that language features deserve more attention, and conduct experiments which compare different word embeddings, language models, and embedding augmentation steps on five common VL tasks: image-sentence retrieval, image captioning, visual question answering, phrase grounding, and text-to-clip retrieval. Our experiments provide some striking results: an average embedding language model outperforms an LSTM on retrieval-style tasks, and state-of-the-art representations such as BERT perform relatively poorly on vision-language tasks. From this comprehensive set of experiments we propose a set of best practices for incorporating the language component of VL tasks. To further elevate language features, we also show that knowledge in vision-language problems can be transferred across tasks to gain performance with multi-task training. This multi-task training is applied to a new Graph Oriented Vision-Language Embedding (GrOVLE), which we adapt from Word2Vec using WordNet and an original visual-language graph built from Visual Genome, providing a ready-to-use vision-language embedding: http://ai.bu.edu/grovle.