
In the field of tutoring systems, investigations have shown that many tutoring systems are tied to a single domain and, because of their static architecture, cannot be adapted to other domains. As a consequence, neither methods nor knowledge can usually be reused. In addition, the knowledge engineer must have programming skills in order to enhance and evaluate the system. One particular challenge is to tackle these problems by developing a generic tutoring system. AnITA was developed and implemented as a stand-alone application particularly for this purpose. However, in the testing phase, we discovered that this architecture did not fully match users' intuitive understanding of how a learning tool is used. Therefore, AnITA was redesigned to work exclusively as a client/server application and renamed AnITA2. This paper discusses the evolution of the AnITA tutoring system, the goal of which is to apply generic principles so that the system can be reused in any domain. Two experiments were conducted, and their results are presented in this paper.
Social media has enabled a direct path from producer to consumer of content, changing the way users get informed, debate, and shape their worldviews. This disintermediation has weakened consensus on socially relevant issues in favor of rumors, mistrust, and conspiracy thinking -- e.g., chemtrails inducing global warming, the link between vaccines and autism, or the New World Order conspiracy. In this work, we study, through a thorough quantitative analysis, how different conspiracy topics are consumed on the Italian Facebook. By means of a semi-automatic topic extraction strategy, we show that the most discussed content refers semantically to four specific categories: environment, diet, health, and geopolitics. We find similar patterns when comparing users' activity (likes and comments) on posts belonging to different semantic categories. However, if we focus on the lifetime -- i.e., the distance in time between a user's first and last comment -- we notice a remarkable difference across narratives -- e.g., users polarized on geopolitics are the most persistent in commenting, whereas the least persistent are those focused on diet-related topics. Finally, we model users' mobility across topics and find that the more active a user is, the more likely he is to join all topics: once inside a conspiracy narrative, users tend to embrace the overall corpus.
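To make the lifetime measure concrete, here is a minimal Python sketch that computes, for each user, the distance in time between their first and last comment. The record layout and values are illustrative assumptions, not the paper's actual dataset schema.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical comment records: (user_id, timestamp of the comment).
comments = [
    ("u1", datetime(2013, 1, 5)),
    ("u1", datetime(2014, 6, 20)),
    ("u2", datetime(2013, 3, 1)),
]

def user_lifetimes(records):
    """Return, per user, the time between their first and last comment."""
    spans = defaultdict(list)
    for user, ts in records:
        spans[user].append(ts)
    return {user: max(ts) - min(ts) for user, ts in spans.items()}

print(user_lifetimes(comments))
# {'u1': datetime.timedelta(days=531), 'u2': datetime.timedelta(0)}
```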
The P300 is an electrical signal emitted by the brain about 300 milliseconds after a rare but user-relevant event. One application of this signal is sentence spelling, which enables subjects who have lost control of their motor pathways to communicate by selecting characters in a matrix containing all the alphabet symbols. Although this technology has made considerable progress in recent years, it still suffers from both a low communication rate and a high error rate. This article presents a P300 speller, named PolyMorph, that introduces two major novelties in the field: selection matrix polymorphism, which reduces the size of the selection matrix itself by removing useless symbols, and sentence-based predictions, which exploit all the characters spelt so far in a sentence to determine the probability of a word. In order to measure the effectiveness of the presented speller, we describe two sets of tests: the first in vivo and the second in silico. The results of these experiments suggest that using PolyMorph in place of a naive character-by-character speller both increases the number of characters spelt per time unit and reduces the error rate.
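As a rough illustration of the two ideas, the Python sketch below shows how prefix-based word probabilities and shrinking the selection matrix to the still-useful symbols could work. The vocabulary, frequencies, and functions are assumptions for the example, not PolyMorph's actual implementation.

```python
# Toy vocabulary with relative frequencies (illustrative values only).
VOCAB = {"hello": 0.4, "help": 0.3, "held": 0.2, "hero": 0.1}

def word_probabilities(prefix):
    """Probability of each vocabulary word given the characters spelt so far."""
    matches = {w: f for w, f in VOCAB.items() if w.startswith(prefix)}
    total = sum(matches.values())
    return {w: f / total for w, f in matches.items()} if total else {}

def useful_symbols(prefix):
    """Symbols that can still extend the prefix into a vocabulary word.
    A polymorphic selection matrix would keep only these cells."""
    return sorted({w[len(prefix)] for w in VOCAB
                   if w.startswith(prefix) and len(w) > len(prefix)})

print(word_probabilities("hel"))  # {'hello': ~0.44, 'help': ~0.33, 'held': ~0.22}
print(useful_symbols("hel"))      # ['d', 'l', 'p']
```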
Word clouds are among the most popular and widely used text visualizations today. Despite their attractiveness and the simplicity of producing them, word clouds do not provide a thorough visualization of the distribution of the underlying data. It is therefore useful to redesign word clouds, both to improve their design choices and to enable further statistical analysis of the data. In this paper we propose a fully automatic redesigning algorithm for word cloud visualizations. Our method decodes an input word cloud visualization and recovers the raw data as a list of (word, value) pairs. To the best of our knowledge, our work is the first attempt to extract raw data from a word cloud visualization. We have tested the proposed method both qualitatively and quantitatively. The results of our experiments show that our algorithm extracts the words and their weights effectively, with a considerably low error rate.
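The abstract does not detail the decoding pipeline; purely as an illustration, the sketch below assumes the words and their rendered bounding boxes have already been detected (e.g., by an OCR step) and recovers relative weights from text height, which is one plausible proxy for font size rather than the paper's actual method.

```python
# Hypothetical OCR output: word and the pixel height of its bounding box.
detected = [("data", 64), ("cloud", 48), ("word", 32), ("value", 16)]

def decode_word_cloud(boxes):
    """Map rendered text heights to relative weights in [0, 1].
    Assumes weight is proportional to font size, which box height approximates."""
    max_h = max(h for _, h in boxes)
    return [(word, round(h / max_h, 3)) for word, h in boxes]

print(decode_word_cloud(detected))
# [('data', 1.0), ('cloud', 0.75), ('word', 0.5), ('value', 0.25)]
```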
Han Yu, Zhiqi Shen, Qiong Wu (2014)
Virtual companions that interact with users in a socially complex environment require a wide range of social skills. Displaying curiosity both improves a companion's believability and unobtrusively influences the user's activities over time. Curiosity represents a drive to know new things and is a major driving force for engaging learners in active learning. Existing research pays little attention to curiosity. In this paper, we enrich the social skills of a virtual companion by infusing curiosity into its mental model. We propose a curious companion that resides in a Virtual Learning Environment (VLE) to stimulate users' curiosity. The curious companion model is developed based on multidisciplinary considerations, and its effectiveness is demonstrated by a preliminary field study.
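The abstract does not specify how the curiosity drive is computed; purely as an illustration, a minimal novelty-based curiosity score (an assumption on our part, not the paper's model) might look like this:

```python
class CuriousCompanion:
    """Toy mental model in which curiosity grows with the novelty of observed topics."""

    def __init__(self):
        self.seen = {}  # topic -> number of times observed

    def observe(self, topic):
        self.seen[topic] = self.seen.get(topic, 0) + 1

    def curiosity(self, topic):
        # Novel topics score 1.0; repeatedly seen topics decay toward 0.
        return 1.0 / (1 + self.seen.get(topic, 0))

companion = CuriousCompanion()
companion.observe("fractions")
print(companion.curiosity("fractions"))  # 0.5 - already seen once
print(companion.curiosity("geometry"))   # 1.0 - novel, worth exploring
```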
In information-rich environments, the competition for users' attention leads to a flood of content in which people often find it hard to sort out the most relevant and useful pieces. Using Twitter as a case study, we applied an attention-economy solution to generate the most informative tweets for its users. Taking the novelty and popularity of tweets as objective measures of their relevance and utility, we used the Huberman-Wu algorithm to automatically select the ones that will receive the most attention in the next time interval. Their predicted popularity was confirmed using Twitter data collected over a period of two months.
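As a rough illustration of ranking by novelty and popularity (a simplified stand-in, not the actual Huberman-Wu algorithm), one could score each tweet by its recent popularity discounted by an age-based novelty decay:

```python
import math

# Hypothetical tweets: (text, retweets in the last interval, age in hours).
tweets = [
    ("breaking story", 120, 1.0),
    ("old meme", 300, 48.0),
    ("niche update", 15, 0.5),
]

def attention_score(popularity, age_hours, half_life=6.0):
    """Popularity discounted by an exponential novelty decay (illustrative only)."""
    return popularity * math.exp(-math.log(2) * age_hours / half_life)

ranked = sorted(tweets, key=lambda t: attention_score(t[1], t[2]), reverse=True)
for text, pop, age in ranked:
    print(f"{text}: {attention_score(pop, age):.1f}")
```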
For large volumes of text data collected over time, a key knowledge discovery task is identifying and tracking clusters. These clusters may correspond to emerging themes, popular topics, or breaking news stories in a corpus. There has therefore been increasing interest recently in the problem of clustering dynamic data. However, there is little support for interactively exploring the output of these analysis techniques, particularly when researchers wish to simultaneously explore both the change in cluster structure over time and the change in the textual content associated with clusters. In this paper, we propose a model for tracking dynamic clusters characterized by the evolutionary events of each cluster. Motivated by this model, the TextLuas system provides an implementation for tracking these dynamic clusters and visualizing their evolution using a metro map metaphor. To provide overviews of cluster content, we adapt the tag cloud representation to the dynamic clustering scenario. We demonstrate the TextLuas system on two different text corpora, where it is shown to elucidate the evolution of key themes. We also describe how TextLuas was applied to a problem in bibliographic network research.
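To give a flavour of cluster tracking, the sketch below links clusters in consecutive time windows by membership overlap. The Jaccard-based matching rule and threshold are common conventions assumed for illustration, not necessarily TextLuas's actual event model.

```python
def jaccard(a, b):
    """Overlap between two clusters, each given as a set of document ids."""
    return len(a & b) / len(a | b) if a | b else 0.0

def track_clusters(prev, curr, threshold=0.3):
    """Match each current cluster to the previous cluster it overlaps most,
    labelling unmatched clusters as 'birth' events."""
    events = []
    for name, docs in curr.items():
        best = max(prev.items(), key=lambda kv: jaccard(kv[1], docs), default=None)
        if best and jaccard(best[1], docs) >= threshold:
            events.append((name, "continues", best[0]))
        else:
            events.append((name, "birth", None))
    return events

prev = {"c1": {1, 2, 3, 4}, "c2": {5, 6}}
curr = {"d1": {2, 3, 4, 7}, "d2": {8, 9}}
print(track_clusters(prev, curr))
# [('d1', 'continues', 'c1'), ('d2', 'birth', None)]
```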
Sequent calculus is widely used for formalizing proofs. However, due to the proliferation of data, understanding the proofs of even simple mathematical arguments soon becomes impossible. Graphical user interfaces help in this matter, but since they normally use Gentzen's original notation, some of the problems persist. In this paper, we introduce a number of criteria for proof visualization which we have found to be crucial for analyzing proofs. We then evaluate recent developments in tree visualization with regard to these criteria and propose the Sunburst Tree layout as a complement to the traditional tree structure. This layout arranges inferences as concentric circular arcs around the root inference, allowing the user to focus on the proof's structural content. Finally, we describe its integration into ProofTool and explain how it interacts with the Gentzen layout.
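A minimal sketch of the geometric idea behind a sunburst layout, assuming each node is given an angular span proportional to its leaf count (one standard convention, not necessarily the paper's exact rule):

```python
def leaf_count(tree):
    """Number of leaves (axioms) below a node; a leaf counts as 1."""
    label, children = tree
    return 1 if not children else sum(leaf_count(c) for c in children)

def sunburst_spans(tree, start=0.0, sweep=360.0, depth=0, out=None):
    """Assign each inference an (angle_start, angle_end, ring_depth) triple.
    A tree is a (label, children) pair, children being a list of subtrees."""
    if out is None:
        out = {}
    label, children = tree
    out[label] = (start, start + sweep, depth)
    total = leaf_count(tree)
    angle = start
    for child in children:
        share = sweep * leaf_count(child) / total
        sunburst_spans(child, angle, share, depth + 1, out)
        angle += share
    return out

# Root inference with two premises, one of which has two premises of its own.
proof = ("root", [("p1", [("p1a", []), ("p1b", [])]), ("p2", [])])
print(sunburst_spans(proof))
# root spans the full circle; p1 gets 240 degrees (2 of 3 leaves), p2 gets 120.
```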
Effective data visualization is a key part of the discovery process in the era of big data. It is the bridge between the quantitative content of the data and human intuition, and thus an essential component of the scientific path from data to knowledge and understanding. Visualization is also essential in the data mining process, directing the choice of applicable algorithms and helping to identify and remove bad data from the analysis. However, the high complexity or high dimensionality of modern data sets represents a critical obstacle: how do we visualize interesting structures and patterns that may exist in hyper-dimensional data spaces? A better understanding of how we can perceive and interact with multi-dimensional information poses some deep questions in the fields of cognition technology and human-computer interaction. To this effect, we are exploring the use of immersive virtual reality platforms for scientific data visualization, both as software and as inexpensive commodity hardware. These potentially powerful and innovative tools for multi-dimensional data visualization can also provide an easy and natural path to collaborative data visualization and exploration, where scientists can interact with their data and their colleagues in the same visual space. Immersion provides benefits beyond traditional desktop visualization tools: it leads to a demonstrably better perception of datascape geometry, more intuitive data understanding, and better retention of the perceived relationships in the data.
The increasing generation and collection of personal data has created a complex ecosystem, often collaborative but sometimes combative, around the companies and individuals engaged in the use of these data. We propose that the interactions between these agents warrant a new topic of study: Human-Data Interaction (HDI). In this paper we discuss how HDI sits at the intersection of various disciplines, including computer science, statistics, sociology, psychology and behavioural economics. We expose the challenges that HDI raises, organised into the three core themes of legibility, agency and negotiability, and we present the HDI agenda to open up a dialogue amongst interested parties in the personal and big data ecosystems.