Recommender systems represent a class of systems designed to help individuals cope with information overload or incomplete information. Such systems assist users by providing recommendations through various personalization techniques. Collaborative filtering is a widely used technique for rating prediction in recommender systems. This paper presents a method that uses preference relations instead of absolute ratings for similarity calculation. The results indicate that the proposed method outperforms alternative measures such as Somers' coefficient.
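The core idea of comparing users by preference relations rather than absolute ratings can be sketched as follows. This is an illustrative concordance-style similarity (in the spirit of rank-correlation measures such as Somers' coefficient), not the paper's exact method; the function name and the toy ratings are hypothetical.

```python
from itertools import combinations

def preference_similarity(ratings_u, ratings_v):
    """Similarity between two users based on preference relations
    (which of two items a user prefers) rather than absolute ratings.

    ratings_u, ratings_v: dicts mapping item id -> rating.
    Returns a value in [-1, 1]; 1 means identical preference orderings.
    """
    common = sorted(set(ratings_u) & set(ratings_v))
    concordant = discordant = 0
    for i, j in combinations(common, 2):
        du = ratings_u[i] - ratings_u[j]   # u's preference between i and j
        dv = ratings_v[i] - ratings_v[j]   # v's preference between i and j
        if du * dv > 0:
            concordant += 1                # same preference direction
        elif du * dv < 0:
            discordant += 1                # opposite preference direction
        # ties (du == 0 or dv == 0) are ignored in this simple sketch
    total = concordant + discordant
    return 0.0 if total == 0 else (concordant - discordant) / total

# Toy example: two users who rate on different scales but rank items alike.
u = {"a": 5, "b": 3, "c": 1}
v = {"a": 3, "b": 2, "c": 1}
print(preference_similarity(u, v))  # 1.0: identical orderings despite different ratings
```

A rating-based measure such as cosine similarity would penalize the scale difference between u and v; the preference-relation view sees them as perfectly agreeing.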
We aimed to distinguish information extraction systems from related research areas such as information retrieval and data mining. We tried to determine the general structure of such systems, which form part of larger systems whose mission is to answer user queries based on the extracted information. We reviewed the different types of these systems and the techniques used with them, and tried to define the current and future challenges and the consequent research problems. Finally, we discussed the details of various implementations of these systems by examining two platforms, GATE and OpenCalais, comparing their information extraction components and discussing the results.
Recommendation systems help users select suitable items from a large collection based on their tastes and interests. Such systems have become among the most powerful tools in electronic commerce and social websites. Nonetheless, using these systems in e-commerce websites faces several drawbacks, such as cold start, scalability, and sparsity. In this paper, we present a solution to the cold-start problem and compare several association rule algorithms to select the most suitable one for addressing the scalability and sparsity problems.
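The association-rule approach behind this comparison can be sketched with a minimal miner for single-item rules. This is a toy illustration of support and confidence, not any of the specific algorithms compared in the paper; a real system would use a full Apriori or FP-Growth implementation over larger itemsets, and the basket data below is invented.

```python
from itertools import combinations
from collections import Counter

def mine_pair_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Mine simple {A} -> {B} association rules from transaction data.
    Returns tuples (antecedent, consequent, support, confidence)."""
    n = len(transactions)
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        items = set(t)
        item_count.update(items)
        pair_count.update(frozenset(p) for p in combinations(sorted(items), 2))
    rules = []
    for pair, cnt in pair_count.items():
        if cnt / n < min_support:      # prune infrequent pairs first
            continue
        a, b = sorted(pair)
        for x, y in ((a, b), (b, a)):  # try the rule in both directions
            conf = cnt / item_count[x]  # confidence of {x} -> {y}
            if conf >= min_confidence:
                rules.append((x, y, cnt / n, conf))
    return rules

# Hypothetical purchase baskets.
baskets = [["bread", "milk"], ["bread", "milk", "eggs"],
           ["milk", "eggs"], ["bread", "milk"]]
for x, y, sup, conf in mine_pair_rules(baskets):
    print(f"{{{x}}} -> {{{y}}}  support={sup:.2f} confidence={conf:.2f}")
```

Rules mined this way can recommend items to a new user from a single observed purchase, which is one reason association rules are attractive for cold-start scenarios.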
In this research, I propose a comparison between the direct (eager) and indirect (lazy) classification approaches through a comparative study of the RIPPER and nearest-neighbor algorithms, with the aim of determining which is appropriate for each case of users whose behavior changes over time.
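The lazy side of this comparison can be sketched briefly: a nearest-neighbor classifier builds no model up front, so freshly observed user data influences predictions immediately, which is the property at stake for users who change over time. This is a generic k-NN sketch with invented data, not the study's actual experimental setup; RIPPER's rule induction is too involved for a short illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Lazy (instance-based) classification: all work happens at query
    time, so newly added training examples are used immediately.
    train: list of (feature_vector, label); query: feature_vector."""
    neighbors = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]   # majority label among k nearest

# Hypothetical two-class training set.
train = [((1, 1), "A"), ((1, 2), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(knn_predict(train, (1.5, 1.5)))  # "A"
```

An eager learner such as RIPPER would instead compile the training set into a compact rule set once; queries are then cheap, but adapting to drifting users requires re-learning the rules.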
The main goal of the data mining process is to extract information and discover knowledge from huge databases, and clustering is one of the most important functionalities in this area. There are many clustering algorithms and methods, but determining or estimating the number of clusters that should be extracted from a dataset is one of the most important issues most of these methods encounter. This research focuses on the problem of estimating the number of clusters in the case of agglomerative hierarchical clustering. We present an evaluation of three of the most common methods used to estimate the number of clusters.
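One simple criterion of the kind such evaluations consider can be sketched as follows: run agglomerative (here single-linkage) clustering, record the distance at which each merge happens, and cut the hierarchy just before the largest jump in merge distance. This max-gap heuristic is only an illustration under that assumption; it is not claimed to be one of the three methods the research evaluates, and the point set is invented.

```python
import math

def agglomerative_merge_distances(points):
    """Naive single-linkage agglomerative clustering; returns the
    distance at which each successive merge happens."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: closest pair of points across the clusters
                d = min(math.dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        clusters[i] += clusters.pop(j)
        merges.append(d)
    return merges

def estimate_k(points):
    """Estimate the number of clusters by cutting the dendrogram
    just before the largest gap between successive merge distances."""
    merges = agglomerative_merge_distances(points)
    if len(merges) < 2:
        return 1
    gaps = [merges[i + 1] - merges[i] for i in range(len(merges) - 1)]
    cut = max(range(len(gaps)), key=gaps.__getitem__)
    return len(merges) - cut   # clusters remaining just before the big jump

# Two well-separated groups of three points each.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(estimate_k(pts))  # 2
```

The O(n^3) loop is deliberately naive for readability; library implementations (e.g. SciPy's `linkage`) compute the same merge heights far more efficiently.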
In this research, we define the concept of visual saliency in biology, describe how it is modeled in computer science using saliency maps, and show how these maps can be used to detect salient objects in digital images. We also conduct experiments using several salient-object detection algorithms and describe how to quantify the quality of the results using clear and well-defined standards.
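The quality measures alluded to can be illustrated with the precision, recall, and weighted F-measure commonly used in the salient-object-detection literature (with beta^2 = 0.3 to emphasize precision). This is a generic sketch on an invented pair of binary masks, not necessarily the exact evaluation protocol of the research.

```python
def saliency_f_measure(pred_mask, gt_mask, beta2=0.3):
    """Compare a predicted binary salient-object mask against ground
    truth. Masks are flat sequences of 0/1 pixel labels.
    Returns (precision, recall, weighted F-measure)."""
    tp = sum(1 for p, g in zip(pred_mask, gt_mask) if p and g)
    fp = sum(1 for p, g in zip(pred_mask, gt_mask) if p and not g)
    fn = sum(1 for p, g in zip(pred_mask, gt_mask) if g and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    # beta2 < 1 weights precision more heavily than recall
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall)
    return precision, recall, f

# Toy 6-pixel masks: prediction overlaps ground truth on 2 of 3 pixels.
pred = [1, 1, 1, 0, 0, 0]
gt   = [1, 1, 0, 0, 1, 0]
p, r, f = saliency_f_measure(pred, gt)
print(f"precision={p:.3f} recall={r:.3f} F={f:.3f}")
```

In practice the saliency map is thresholded at many levels and these scores are aggregated (e.g. a precision-recall curve or the maximum F-measure), giving a threshold-independent comparison between algorithms.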