
Web Item Reviewing Made Easy By Leveraging Available User Feedback

 Added by Azade Nazi
 Publication date 2016
Research language: English





The widespread use of online review sites over the past decade has motivated businesses of all types to build an expansive body of user feedback that establishes their reputation. Although a significant proportion of purchasing decisions are driven by the average rating, detailed reviews are critical for purchases such as an expensive digital SLR camera. Since writing a detailed review for an item is usually time-consuming, the number of reviews available on the Web remains limited. Given a user and an item, our goal is to identify the top-$k$ meaningful phrases/tags that help the user review the item easily. We propose a general constrained-optimization framework based on three measures: relevance (how well the result set of tags describes an item), coverage (how well the result set of tags covers the different aspects of an item), and polarity (how well sentiment is attached to the result set of tags). By adopting different definitions of coverage, we identify two concrete problem instances that capture a wide range of real-world scenarios. We develop practical algorithms with theoretical bounds to solve these problems efficiently. We conduct experiments on synthetic data and on real data crawled from the web to validate the effectiveness of our solutions.
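
To make the kind of selection concrete, here is a minimal Python sketch of a greedy top-$k$ tag picker that trades off relevance, coverage, and polarity. The scoring functions, weights, and data structures are illustrative assumptions, not the paper's actual formulation or its algorithms with theoretical bounds.

# Hypothetical greedy selection of k tags balancing relevance, coverage,
# and polarity. All inputs, weights, and scores are illustrative assumptions,
# not the paper's actual framework.
def select_tags(candidates, k, relevance, aspects, polarity,
                w_rel=1.0, w_cov=1.0, w_pol=1.0):
    # candidates: list of tag strings
    # relevance[t]: how well tag t describes the item (0..1)
    # aspects[t]: set of item aspects that tag t touches
    # polarity[t]: strength of the sentiment attached to tag t (0..1)
    selected, covered = [], set()
    while len(selected) < k:
        remaining = [t for t in candidates if t not in selected]
        if not remaining:
            break
        def gain(t):
            # Marginal gain: relevance, plus newly covered aspects, plus polarity.
            return (w_rel * relevance[t]
                    + w_cov * len(aspects[t] - covered)
                    + w_pol * polarity[t])
        best = max(remaining, key=gain)
        selected.append(best)
        covered |= aspects[best]
    return selected

For example, with candidate tags such as "sharp autofocus" or "heavy body", each carrying a relevance score, a set of covered aspects (e.g. lens, ergonomics), and a sentiment strength, the sketch returns k tags a user could start her review from.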


Related research

Recommender systems often use latent features to explain the behaviors of users and capture the properties of items. As users interact with different items over time, user and item features can influence each other, evolve, and co-evolve. The compatibility of user and item features in turn influences future interactions between users and items. Recently, point-process-based models have been proposed in the literature to capture the temporally evolving nature of these latent features. However, these models often make strong parametric assumptions about the evolution of the user and item latent features, which may not reflect reality and which limit their power to express the complex, nonlinear dynamics underlying these processes. To address these limitations, we propose a novel deep coevolutionary network model (DeepCoevolve) for learning user and item features based on their interaction graph. DeepCoevolve uses a recurrent neural network (RNN) over the evolving network to define the intensity function of a point process, which allows the model to capture the complex mutual influence between users and items and the evolution of features over time. We also develop an efficient procedure for training the model parameters, and show that the learned models lead to significant improvements in recommendation and activity prediction compared to previous state-of-the-art parametric models.
The frequency of a web search keyword generally reflects the degree of public interest in a particular subject matter. Search logs are therefore useful resources for trend analysis. However, access to search logs is typically restricted to search engine providers. In this paper, we investigate whether search frequency can be estimated from a different, openly available resource such as Wikipedia page views. We found that frequently searched keywords have remarkably high correlations with Wikipedia page views. This suggests that Wikipedia page views can be an effective tool for determining popular global web search trends.
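
As a toy illustration of that comparison (with invented numbers, not data from the study), the Pearson correlation between weekly page views and search counts for a single keyword can be computed as follows in Python:

import numpy as np

# Invented weekly counts for one keyword; placeholders, not the study's data.
page_views  = np.array([1200, 1350,  900, 4100, 3900, 1500, 1100], dtype=float)
search_freq = np.array([ 300,  340,  250,  980,  940,  380,  290], dtype=float)

r = np.corrcoef(page_views, search_freq)[0, 1]  # Pearson correlation coefficient
print(f"Pearson r = {r:.2f}")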
We develop taggers for multi-pronged jets that are simple functions of jet substructure (so-called `subjettiness') variables. These taggers can be approximately decorrelated from the jet mass in a quite simple way. Specifically, we use a Logistic Regression Design (LoRD) which, despite being one of the simplest machine learning classifiers, shows a performance that surpasses that of the simple variables used by the ATLAS and CMS Collaborations and is not far from more complex models based on neural networks. Contrary to the latter, our method allows for an easy implementation of tagging tasks by providing a simple and interpretable analytical formula with already optimised parameters.
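
A rough Python sketch of that kind of classifier (not the authors' LoRD configuration; the features and data below are placeholders) is a scikit-learn logistic regression over a couple of substructure-style variables, whose fitted weights give exactly the sort of interpretable analytical formula mentioned above:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Stand-in "subjettiness"-like features; hypothetical values, not jet data.
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
# The tagger reduces to sigmoid(w . x + b) with the fitted parameters below.
print("weights:", clf.coef_[0], "bias:", clf.intercept_[0])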
We use a modified version of the Peak Patch excursion set formalism to compute the mass and size distribution of QCD axion miniclusters from a fully non-Gaussian initial density field obtained from numerical simulations of axion string decay. We find strong agreement with N-body simulations at a significantly lower computational cost. We employ a spherical collapse model and provide fitting functions for the modified barrier in the radiation era. The halo mass function at $z=629$ has a power-law distribution $M^{-0.6}$ for masses within the range $10^{-15} \lesssim M \lesssim 10^{-10}\,M_{\odot}$, with all masses scaling as $(m_a/50\,\mu\mathrm{eV})^{-0.5}$. We construct merger trees to estimate the collapse redshift and the concentration-mass relation, $C(M)$, which is well described using analytical results from the initial power spectrum and linear growth. Using the calibrated analytic results to extrapolate to $z=0$, our method predicts a mean concentration $C \sim \mathcal{O}(\mathrm{few}) \times 10^4$. The low computational cost of our method makes future investigation of the statistics of rare, dense miniclusters easy to achieve.
Visual imitation learning provides a framework for learning complex manipulation behaviors by leveraging human demonstrations. However, current interfaces for imitation, such as kinesthetic teaching or teleoperation, prohibitively restrict our ability to efficiently collect large-scale data in the wild. Obtaining such diverse demonstration data is paramount for the generalization of learned skills to novel scenarios. In this work, we present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots. We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector. To extract action information from these visual demonstrations, we use off-the-shelf Structure from Motion (SfM) techniques in addition to training a finger detection network. We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task. For both tasks, we use standard behavior cloning to learn executable policies from the previously collected offline demonstrations. To improve learning performance, we employ a variety of data augmentations and provide an extensive analysis of their effects. Finally, we demonstrate the utility of our interface by evaluating on real robotic scenarios with previously unseen objects, achieving an 87% success rate on pushing and a 62% success rate on stacking. Robot videos are available at https://dhiraj100892.github.io/Visual-Imitation-Made-Easy.