
A Method to Support Difficult Re-finding Tasks

Published by: Gangli Liu
Publication date: 2016
Research field: Information Engineering
Paper language: English





Re-finding electronic documents on a personal computer is a frequent need for users. In a simple re-finding task, people can use many methods to retrieve a document, such as navigating directly to the document's folder, searching with a desktop search engine, or checking the Recent Files list. In a difficult re-finding task, however, people usually cannot remember the attributes that conventional re-finding methods rely on, such as the file path, file name, or keywords, and the re-finding fails. We propose a new method to support difficult re-finding tasks. While a user is reading a document, we collect all kinds of possible memory pieces the user may retain about it, such as the number of pages, number of images, number of math formulas, cumulative reading time, reading frequency, and printing experiences. If the user later wants to re-find the document, we use these collected attributes to filter out the target document. To alleviate the user's cognitive burden, we use a question-and-answer wizard interface and recommend answers to the user; the recommendations are generated by analyzing the collected attributes of each document and the user's experiences with them.
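As a rough illustration of the method sketched in this abstract, the following Python snippet records a few of the mentioned attributes while a document is read, filters candidates against ranges a user could supply through a wizard-style question, and derives answer recommendations from the distribution of the collected attributes. All names are hypothetical; this is an assumption-based reading of the abstract, not the authors' implementation.

```python
# Minimal sketch of attribute-based re-finding (hypothetical names).
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class DocProfile:
    path: str
    attributes: Dict[str, float]  # e.g. pages, images, formulas, reading_time, read_count


def record_reading(profiles: Dict[str, DocProfile], path: str,
                   pages: int, images: int, formulas: int, seconds: float) -> None:
    """Collect 'memory pieces' about a document while the user reads it."""
    prof = profiles.setdefault(
        path, DocProfile(path, {"read_count": 0.0, "reading_time": 0.0}))
    prof.attributes.update({"pages": pages, "images": images, "formulas": formulas})
    prof.attributes["reading_time"] += seconds
    prof.attributes["read_count"] += 1


def filter_candidates(profiles: Dict[str, DocProfile],
                      answers: Dict[str, Tuple[float, float]]) -> List[DocProfile]:
    """Keep documents whose attributes fall inside the (low, high) range the
    user gave for each wizard question; unanswered attributes filter nothing."""
    result = []
    for prof in profiles.values():
        ok = all(low <= prof.attributes.get(attr, 0) <= high
                 for attr, (low, high) in answers.items())
        if ok:
            result.append(prof)
    return result


def recommend_answers(profiles: Dict[str, DocProfile], attr: str, bins: int = 3) -> List[float]:
    """Suggest plausible answer values by sampling the distribution of the
    collected attribute across all profiled documents."""
    values = sorted(p.attributes.get(attr, 0) for p in profiles.values())
    if not values:
        return []
    step = max(1, len(values) // bins)
    return values[::step]
```

Usage would follow the two phases described in the abstract: call record_reading whenever a document is opened, and call filter_candidates / recommend_answers when the user walks through the re-finding wizard.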




Read also

Search conducted in a work context is an everyday activity that has been around since long before the Web was invented, yet we still seem to understand little about its general characteristics. With this paper we aim to contribute to a better understanding of this large but rather multi-faceted area of 'professional search'. Unlike task-based studies that aim at measuring the effectiveness of search methods, we chose to take a step back by conducting a survey among professional searchers to understand their typical search tasks. By doing so we offer complementary insights into the subject area. We asked our respondents to provide actual search tasks they have worked on, information about how these were conducted and details on how successful they eventually were. We then manually coded the collection of 56 search tasks with task characteristics and relevance criteria, and used the coded dataset for exploration purposes. Despite the relatively small scale of this study, our data provides enough evidence that professional search is indeed very different from Web search in many key respects and that this is a field that offers many avenues for future research.
In Interactive Information Retrieval (IIR) experiments, the user's gaze motion on web pages is often recorded with eye tracking. The data is used to analyze gaze behavior or to identify Areas of Interest (AOIs) the user has looked at. So far, tools for analyzing eye tracking data have certain limitations in supporting the analysis of gaze behavior in IIR experiments. Experiments often consist of a huge number of different visited web pages. In existing analysis tools the data can only be analyzed in videos or images, and AOIs for every single web page have to be specified by hand, in a very time-consuming process. In this work, we propose the reading protocol software, which breaks eye tracking data down to the textual level by considering the HTML structure of the web pages. This has many advantages for the analyst. First and foremost, it can easily be identified on a large scale what has actually been viewed and read on the stimulus pages by the subjects. Second, the web page structure can be used to filter down to AOIs. Third, gaze data of multiple users can be presented on the same page, and fourth, fixation times on text can be exported and further processed in other tools. We present the software, its validation, and example use cases with data from three existing IIR experiments.
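The following is not the reading protocol software itself, only a generic, assumption-based Python sketch of the underlying idea: map fixation coordinates to the rendered bounding boxes of HTML text elements, so gaze data can be aggregated at the textual level rather than per pixel. All names (TextElement, Fixation, reading_time_per_text) are hypothetical.

```python
# Generic sketch: attribute gaze fixations to the text they landed on.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class TextElement:
    """One HTML text node with its rendered bounding box (page coordinates)."""
    text: str
    x: float
    y: float
    width: float
    height: float


@dataclass
class Fixation:
    """One eye-tracking fixation in the same page coordinate system."""
    x: float
    y: float
    duration_ms: float


def element_at(elements: List[TextElement], fx: Fixation) -> Optional[TextElement]:
    """Return the text element whose bounding box contains the fixation, if any."""
    for el in elements:
        if el.x <= fx.x <= el.x + el.width and el.y <= fx.y <= el.y + el.height:
            return el
    return None


def reading_time_per_text(elements: List[TextElement],
                          fixations: List[Fixation]) -> Dict[str, float]:
    """Accumulate fixation duration per text element, so 'what was read'
    can be reported at the textual level."""
    totals: Dict[str, float] = {}
    for fx in fixations:
        el = element_at(elements, fx)
        if el is not None:
            totals[el.text] = totals.get(el.text, 0.0) + fx.duration_ms
    return totals
```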
Grounding natural language instructions on the web to perform previously unseen tasks enables accessibility and automation. We introduce a task and dataset to train AI agents from open-domain, step-by-step instructions originally written for people. We build RUSS (Rapid Universal Support Service) to tackle this problem. RUSS consists of two models: First, a BERT-LSTM with pointers parses instructions to ThingTalk, a domain-specific language we design for grounding natural language on the web. Then, a grounding model retrieves the unique IDs of any webpage elements requested in ThingTalk. RUSS may interact with the user through a dialogue (e.g. ask for an address) or execute a web operation (e.g. click a button) inside the web runtime. To augment training, we synthesize natural language instructions mapped to ThingTalk. Our dataset consists of 80 different customer service problems from help websites, with a total of 741 step-by-step instructions and their corresponding actions. RUSS achieves 76.7% end-to-end accuracy predicting agent actions from single instructions. It outperforms state-of-the-art models that directly map instructions to actions without ThingTalk. Our user study shows that RUSS is preferred by actual users over web navigation.
Recommender Systems are especially challenging for marketplaces since they must maximize user satisfaction while maintaining the healthiness and fairness of such ecosystems. In this context, we observed a lack of resources to design, train, and evaluate agents that learn by interacting within these environments. For this matter, we propose MARS-Gym, an open-source framework to empower researchers and engineers to quickly build and evaluate Reinforcement Learning agents for recommendations in marketplaces. MARS-Gym addresses the whole development pipeline: data processing, model design and optimization, and multi-sided evaluation. We also provide the implementation of a diverse set of baseline agents, with a metrics-driven analysis of them in the Trivago marketplace dataset, to illustrate how to conduct a holistic assessment using the available metrics of recommendation, off-policy estimation, and fairness. With MARS-Gym, we expect to bridge the gap between academic research and production systems, as well as to facilitate the design of new algorithms and applications.
Due to their promise of superior predictive power relative to human assessment, machine learning models are increasingly being used to support high-stakes decisions. However, the nature of the labels available for training these models often hampers the usefulness of predictive models for decision support. In this paper, we explore the use of historical expert decisions as a rich, yet imperfect, source of information, and we show that it can be leveraged to mitigate some of the limitations of learning from observed labels alone. We consider the problem of estimating expert consistency indirectly when each case in the data is assessed by a single expert, and propose an influence function-based methodology as a solution to this problem. We then incorporate the estimated expert consistency into the predictive model meant for decision support through an approach we term label amalgamation. This allows the machine learning models to learn from experts in instances where there is expert consistency, and learn from the observed labels elsewhere. We show how the proposed approach can help mitigate common challenges of learning from observed labels alone, reducing the gap between the construct that the algorithm optimizes for and the construct of interest to experts. After providing intuition and theoretical results, we present empirical results in the context of child maltreatment hotline screenings. Here, we find that (1) there are high-risk cases whose risk is considered by the experts but not wholly captured in the target labels used to train a deployed model, and (2) the proposed approach improves recall for these cases.
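A toy sketch of the label amalgamation rule as described in this abstract, under the assumption that a per-case expert-consistency estimate is already available; the threshold and function names are hypothetical, and the influence-function-based consistency estimation itself is not shown.

```python
import numpy as np


def amalgamate_labels(observed: np.ndarray,
                      expert: np.ndarray,
                      consistency: np.ndarray,
                      threshold: float = 0.8) -> np.ndarray:
    """Toy label amalgamation: use the historical expert decision where the
    estimated expert consistency exceeds a (hypothetical) threshold, and keep
    the observed label elsewhere."""
    return np.where(consistency >= threshold, expert, observed)


# Example: three cases; experts are estimated to be consistent on the first two only.
observed = np.array([0, 1, 0])
expert = np.array([1, 1, 0])
consistency = np.array([0.9, 0.95, 0.4])
print(amalgamate_labels(observed, expert, consistency))  # -> [1 1 0]
```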