
On building an automated responding system for app reviews: What are the characteristics of reviews and their responses?

Posted by: Phong Minh Vu
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Recent studies showed that the dialogs between app developers and app users on app stores are important for increasing user satisfaction and apps' overall ratings. However, the large volume of reviews and the limited resources available discourage app developers from engaging with customers through this channel. One solution to this problem is to develop an Automated Responding System that helps developers respond to app reviews in a manner that is most similar to a human response. Toward designing such a system, we have conducted an empirical study of the characteristics of mobile app reviews and their human-written responses. We found that an app review can contain multiple sentence-level fragments with different topics and intentions. Similarly, a response can also be divided into multiple fragments, each with a distinct intention that answers a particular part of the review (e.g., complaints, requests, or information seeking). We also identified several characteristics of a review (rating, topics, intentions, and quantitative text features) that can be used to rank reviews by how urgently they need a response. In addition, we found that the degree to which past responses can be reused depends on their context (a single app, apps of the same category, or their common features). Last but not least, a response can be reused for another review if some parts of it can be replaced by a placeholder that is either a named entity or a hyperlink. Based on these findings, we discuss the implications for developing an Automated Responding System that helps mobile app developers write responses to user reviews more effectively.
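
The abstract only summarizes the findings, so the following is a minimal Python sketch, with entirely made-up field names, weights, and example reviews, of how two of the ideas might be operationalized: scoring a review's need for a response from its rating, intention, and a quantitative text feature, and reusing a past response by filling named-entity or hyperlink placeholders. It illustrates the concepts and is not the authors' implementation.

```python
import re
from dataclasses import dataclass

@dataclass
class Review:
    rating: int     # 1-5 star rating
    intention: str  # e.g. "complaint", "request", "information seeking"
    text: str

def priority_score(review: Review) -> float:
    """Toy priority score: low ratings, complaints, and longer reviews are
    assumed to need a response more urgently (weights are illustrative)."""
    intention_weight = {"complaint": 1.0, "request": 0.7, "information seeking": 0.5}
    length_feature = min(len(review.text.split()) / 100.0, 1.0)  # quantitative text feature
    return (5 - review.rating) * 0.5 + intention_weight.get(review.intention, 0.3) + length_feature

def reuse_response(template: str, placeholders: dict) -> str:
    """Fill placeholders such as {APP_NAME} (named entity) or {FAQ_URL}
    (hyperlink) so a past response can be reused for a new review."""
    return re.sub(r"\{(\w+)\}", lambda m: placeholders.get(m.group(1), m.group(0)), template)

reviews = [
    Review(1, "complaint", "The app crashes every time I open the camera."),
    Review(4, "request", "Please add a dark mode."),
]
for r in sorted(reviews, key=priority_score, reverse=True):
    print(round(priority_score(r), 2), r.text)

print(reuse_response(
    "Thanks for reporting this. The {APP_NAME} team is on it; see {FAQ_URL} for a workaround.",
    {"APP_NAME": "ExampleApp", "FAQ_URL": "https://example.com/faq"},
))
```
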


Read also

App reviews deliver user opinions and emerging issues (e.g., new bugs) about app releases. Due to the dynamic nature of app reviews, the topics and sentiment of the reviews would change along with app releases.
Reader reviews of literary fiction on social media, especially those in persistent, dedicated forums, create and are in turn driven by underlying narrative frameworks. In their comments about a novel, readers generally include only a subset of characters and their relationships, thus offering a limited perspective on that work. Yet in aggregate, these reviews capture an underlying narrative framework comprised of different actants (people, places, things), their roles, and interactions that we label the consensus narrative framework. We represent this framework in the form of an actant-relationship story graph. Extracting this graph is a challenging computational problem, which we pose as a latent graphical model estimation problem. Posts and reviews are viewed as samples of subgraphs/networks of the hidden narrative framework. Inspired by the qualitative narrative theory of Greimas, we formulate a graphical generative Machine Learning (ML) model where nodes represent actants, and multi-edges and self-loops among nodes capture context-specific relationships. We develop a pipeline of interlocking automated methods to extract key actants and their relationships, and apply it to thousands of reviews and comments posted on Goodreads.com. We manually derive the ground truth narrative framework from SparkNotes, and then use word embedding tools to compare relationships in ground truth networks with our extracted networks. We find that our automated methodology generates highly accurate consensus narrative frameworks: for our four target novels, with approximately 2900 reviews per novel, we report average coverage/recall of important relationships of >80% and an average edge detection rate of >89%. These extracted narrative frameworks can generate insight into how people (or classes of people) read and how they recount what they have read to others.
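
As a rough illustration of the actant-relationship story graph described above (not the authors' pipeline), the sketch below aggregates hypothetical (actant, relation, actant) triples extracted from individual reviews into a consensus multigraph, keeping only edges with enough support. It assumes the networkx library, and the triples and threshold are invented.

```python
from collections import Counter
import networkx as nx

# Toy (subject, relation, object) triples that an extraction pipeline might
# produce from individual reader reviews; names and relations are made up.
review_triples = [
    [("Ahab", "hunts", "the whale"), ("Ishmael", "narrates", "the voyage")],
    [("Ahab", "hunts", "the whale"), ("Ahab", "commands", "the Pequod")],
    [("Ahab", "hunts", "the whale")],
]

# Each review contributes a subgraph; frequent edges form the consensus framework.
edge_counts = Counter(t for review in review_triples for t in review)
consensus = nx.MultiDiGraph()
for (subj, rel, obj), count in edge_counts.items():
    if count >= 2:  # illustrative support threshold
        consensus.add_edge(subj, obj, relation=rel, support=count)

for u, v, data in consensus.edges(data=True):
    print(f"{u} -[{data['relation']} x{data['support']}]-> {v}")
```
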
User reviews of mobile apps often contain complaints or suggestions which are valuable for app developers to improve user experience and satisfaction. However, due to the large volume and noisy nature of those reviews, manually analyzing them for useful opinions is inherently challenging. To address this problem, we propose MARK, a keyword-based framework for semi-automated review analysis. MARK allows an analyst to describe his or her interests in one or more mobile apps by a set of keywords. It then finds and lists the reviews most relevant to those keywords for further analysis. It can also draw the trends over time of those keywords and detect their sudden changes, which might indicate the occurrence of serious issues. To help analysts describe their interests more effectively, MARK can automatically extract keywords from raw reviews and rank them by their associations with negative reviews. In addition, based on a vector-based semantic representation of keywords, MARK can divide a large set of keywords into more cohesive subsets, or suggest keywords similar to the selected ones.
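
A minimal sketch of one MARK-style step, ranking keywords by their association with negative reviews, is shown below. The smoothing, the toy reviews, and the rating threshold are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter

# Toy labeled reviews: (text, rating); ratings of 1-2 are treated as negative.
reviews = [
    ("app crashes after the latest update", 1),
    ("crashes when I upload a photo", 2),
    ("love the new design, great update", 5),
    ("battery drain is terrible since the update", 1),
    ("great app, works fine", 4),
]

neg_counts, all_counts = Counter(), Counter()
for text, rating in reviews:
    for word in set(text.lower().split()):
        all_counts[word] += 1
        if rating <= 2:
            neg_counts[word] += 1

def neg_association(word: str) -> float:
    """Smoothed fraction of reviews containing the word that are negative."""
    return (neg_counts[word] + 1) / (all_counts[word] + 2)

keywords = sorted(all_counts, key=neg_association, reverse=True)
for w in keywords[:5]:
    print(f"{w}: {neg_association(w):.2f} ({neg_counts[w]}/{all_counts[w]} negative)")
```
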
Eclipse, an open source software project, acknowledges its donors by presenting donation badges in its issue tracking system, Bugzilla. However, the rewarding effect of this strategy was previously unknown. We applied a framework of causal inference to investigate the relative promptness of developer responses to bug reports with donation badges compared with bug reports without them, and estimated that donation badges decrease developer response time by a median of about two hours. The appearance of donation badges is appealing for both donors and organizers because of its practical, rewarding, and yet inexpensive effect.
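
The paper applies a causal inference framework; the sketch below only shows the naive descriptive comparison behind the headline number, the difference in median response times between badged and unbadged bug reports, using entirely made-up response times.

```python
from statistics import median

# Hypothetical developer response times (hours) for bug reports with and
# without a donation badge; numbers are invented for illustration only.
with_badge = [3.0, 5.5, 8.0, 12.0, 24.0]
without_badge = [4.5, 7.0, 10.5, 14.0, 30.0]

diff = median(without_badge) - median(with_badge)
print(f"Median response time with badge:    {median(with_badge):.1f} h")
print(f"Median response time without badge: {median(without_badge):.1f} h")
print(f"Naive median difference:            {diff:.1f} h")
```
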
Di Weng, Jichang Zhao (2020)
Negative reviews, the poor ratings in post-purchase evaluation, play an indispensable role in e-commerce, especially in shaping future sales and firm equities. However, extant studies seldom examine their potential value for sellers and producers in enhancing their capability to provide better services and products. Prior work that exploited the helpfulness of reviews from the perspective of e-commerce operators developed ranking approaches for customers instead. To fill this gap, by combining description texts and emotion polarities, the ranking method in this study aims to provide the most helpful negative reviews under a certain product attribute for online sellers and producers. Using a more reasonable evaluation procedure, experts with relevant backgrounds were hired to vote on the ranking approaches. Our ranking method turns out to be more reliable for ranking negative reviews for sellers and producers, outperforming baselines such as BM25 by about 8%. In this paper, we also enrich previous understandings of the role of emotions in valuing reviews. Specifically, we find, surprisingly, that positive emotions are more helpful than negative emotions in ranking negative reviews. This unexpected strength of positive emotions in ranking suggests that less polarized reviews of negative experiences in fact offer more rational feedback and thus more helpfulness to sellers and producers. The presented ranking method could provide e-commerce practitioners with an efficient and effective way to leverage negative reviews from online consumers.
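
As a toy illustration of the idea (not the authors' method), the sketch below ranks negative reviews for a given product attribute by combining a simple lexical relevance score with the review's emotion polarity, giving less polarized or mildly positive wording a small bonus in line with the paper's observation. All scores, weights, and reviews are invented.

```python
def relevance(review: str, attribute_query: str) -> float:
    """Toy lexical relevance: fraction of query terms appearing in the review."""
    terms = attribute_query.lower().split()
    words = set(review.lower().split())
    return sum(t in words for t in terms) / len(terms)

def helpfulness(review: str, polarity: float, attribute_query: str, alpha: float = 0.7) -> float:
    """Combine attribute relevance with emotion polarity in [-1, 1]; less
    polarized / more positive wording earns a small bonus (alpha is arbitrary)."""
    return alpha * relevance(review, attribute_query) + (1 - alpha) * (polarity + 1) / 2

negative_reviews = [  # (text, precomputed emotion polarity)
    ("Battery dies in two hours, but the screen is nice.", 0.1),
    ("WORST BATTERY EVER!!! Total garbage.", -0.9),
    ("Battery life is shorter than advertised; otherwise works as expected.", 0.2),
]

for text, pol in sorted(negative_reviews,
                        key=lambda r: helpfulness(r[0], r[1], "battery life"),
                        reverse=True):
    print(f"{helpfulness(text, pol, 'battery life'):.2f}  {text}")
```
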