
Sequenced Route Query with Semantic Hierarchy

Published by: Yuya Sasaki
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





The trip planning query searches for preferred routes starting from a given point and passing through multiple Points of Interest (PoIs) that match user requirements. Although previous studies have investigated trip planning queries, they lack flexibility in finding routes because all of them output only routes that strictly match the user requirements. We study trip planning queries that output multiple routes in a flexible manner. We propose a new type of query called the skyline sequenced route (SkySR) query, which searches for all sequenced routes preferred by users by extending the shortest-route search with the semantic similarity of the PoIs on the route. Flexibility is achieved through the semantic hierarchy of the PoI categories. We propose an efficient algorithm for the SkySR query, the bulk SkySR algorithm, which simultaneously searches for sequenced routes and effectively prunes unnecessary routes. Experimental evaluations show that the proposed approach significantly outperforms existing approaches in terms of response time (by up to four orders of magnitude). Moreover, we develop a prototype service that uses the SkySR query and conduct a user test to evaluate its usefulness.
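The abstract does not spell out the algorithm, so the following is only a minimal sketch of the two ingredients a SkySR-style query combines: a similarity score derived from a PoI category hierarchy, and skyline dominance over (route length, semantic similarity). The category names, the similarity measure, and the candidate routes are illustrative assumptions, not the paper's definitions.

```python
# Toy PoI category hierarchy and skyline filter (illustrative, not the paper's method).

CATEGORY_PARENT = {                      # child category -> parent category
    "italian_restaurant": "restaurant",
    "sushi_restaurant": "restaurant",
    "restaurant": "food",
    "cafe": "food",
    "food": None,
}

def depth(cat):
    d = 0
    while CATEGORY_PARENT.get(cat) is not None:
        cat = CATEGORY_PARENT[cat]
        d += 1
    return d

def ancestors(cat):
    out = []
    while cat is not None:
        out.append(cat)
        cat = CATEGORY_PARENT.get(cat)
    return out

def semantic_similarity(requested, actual):
    """Hierarchy-based similarity in [0, 1]: 1 for an exact category match,
    smaller as the lowest common ancestor sits higher in the hierarchy."""
    common = set(ancestors(requested)) & set(ancestors(actual))
    if not common:
        return 0.0
    lca_depth = max(depth(c) for c in common)
    return lca_depth / max(depth(requested), depth(actual), 1)

def dominates(a, b):
    """Route a = (length, similarity) dominates b if it is no longer and no less
    similar, and strictly better in at least one of the two criteria."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

print(semantic_similarity("italian_restaurant", "sushi_restaurant"))  # 0.5

candidates = [(4.2, 1.0), (3.1, 0.5), (5.0, 0.5)]   # (length in km, similarity)
skyline = [r for r in candidates
           if not any(dominates(o, r) for o in candidates if o != r)]
print(skyline)   # (5.0, 0.5) is dominated by (3.1, 0.5) and drops out
```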




Read also

Traditional relational data interfaces require precise structured queries over potentially complex schemas. These rigid data retrieval mechanisms pose hurdles for non-expert users, who typically lack language expertise and are unfamiliar with the details of the schema. Query by Example (QBE) methods offer an alternative mechanism: users provide examples of their intended query output and the QBE system needs to infer the intended query. However, these approaches focus on the structural similarity of the examples and ignore the richer context present in the data. As a result, they typically produce queries that are too general, and fail to capture the users' intent effectively. In this paper, we present SQuID, a system that performs semantic similarity-aware query intent discovery. Our work makes the following contributions: (1) We design an end-to-end system that automatically formulates select-project-join queries in an open-world setting, with optional group-by aggregation and intersection operators; a much larger class than prior QBE techniques. (2) We express the problem of query intent discovery using a probabilistic abduction model that infers a query as the most likely explanation of the provided examples. (3) We introduce the notion of an abduction-ready database, which precomputes semantic properties and related statistics, allowing SQuID to achieve real-time performance. (4) We present an extensive empirical evaluation on three real-world datasets, including user-intent case studies, demonstrating that SQuID is efficient and effective, and outperforms machine learning methods, as well as the state-of-the-art in the related query reverse engineering problem.
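As a rough illustration of the query-by-example setting SQuID addresses (not SQuID's actual abduction model), the sketch below scores a few hand-written candidate predicates by whether they cover the user's example tuples and prefers the most specific valid explanation; the table, examples, and predicates are all hypothetical.

```python
# Toy query-by-example intent discovery: prefer the most specific predicate
# that still explains every example tuple (hypothetical data and predicates).

researchers = [
    {"name": "Alice", "field": "databases", "country": "US"},
    {"name": "Bob",   "field": "databases", "country": "DE"},
    {"name": "Carol", "field": "ML",        "country": "US"},
    {"name": "Dave",  "field": "databases", "country": "US"},
]
examples = ["Alice", "Dave"]     # user-provided examples of the intended output

candidate_predicates = {
    "all rows":                lambda r: True,
    "field = 'databases'":     lambda r: r["field"] == "databases",
    "field = 'databases' AND country = 'US'":
        lambda r: r["field"] == "databases" and r["country"] == "US",
}

def explains(pred):
    """A predicate is a valid explanation only if it covers every example."""
    covered = {r["name"] for r in researchers if pred(r)}
    return set(examples) <= covered, covered

# Among valid explanations, pick the one with the fewest extra tuples: a crude
# stand-in for ranking explanations by likelihood.
valid = {name: covered for name, (ok, covered) in
         ((n, explains(p)) for n, p in candidate_predicates.items()) if ok}
best = min(valid, key=lambda n: len(valid[n]))
print(best)   # field = 'databases' AND country = 'US'
```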
Graph-based data models allow for flexible data representation. In particular, semantic data based on RDF and OWL fuels use cases ranging from general knowledge graphs to domain specific knowledge in various technological or scientific domains. The flexibility of such approaches, however, makes programming with semantic data tedious and error-prone. In particular the logics-based data descriptions employed by OWL are problematic for existing error-detecting techniques, such as type systems. In this paper, we present DOTSpa, an advanced integration of semantic data into programming. We embed description logics, the logical foundations of OWL, into the type checking process of a statically typed programming language and provide typed data access through an embedding of the query language SPARQL. In addition, we demonstrate a concrete implementation of the approach, by extending the Scala programming language. We qualitatively compare programs using our approach to equivalent programs using a state-of-the-art library, in terms of how both frameworks aid users in the handling of typical failure scenarios.
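DOTSpa itself extends Scala's type checker; as a plain-Python counterpoint, the sketch below shows the kind of untyped SPARQL access such an embedding is meant to improve on, where any mismatch between the query and the data schema only surfaces at run time. It assumes the rdflib package is available and uses a tiny illustrative Turtle snippet.

```python
# Untyped SPARQL access over a small RDF graph (illustrative; assumes rdflib).
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/> .
ex:alice a ex:Person ; ex:name "Alice" .
ex:acme  a ex:Company ; ex:name "ACME" .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# Nothing checks statically that ?p is really a Person or that ex:name exists;
# a typo in the query or schema would only show up as an empty result at run time.
rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?name WHERE { ?p a ex:Person ; ex:name ?name . }
""")
for row in rows:
    print(str(row.name))   # "Alice"
```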
The broadening adoption of machine learning in the enterprise is increasing the pressure for strict governance and cost-effective performance, in particular for the common and consequential steps of model storage and inference. The RDBMS provides a natural starting point, given its mature infrastructure for fast data access and processing, along with support for enterprise features (e.g., encryption, auditing, high-availability). To take advantage of all of the above, we need to address a key concern: Can in-RDBMS scoring of ML models match (outperform?) the performance of dedicated frameworks? We answer the above positively by building Raven, a system that leverages native integration of ML runtimes (i.e., ONNX Runtime) deep within SQL Server, and a unified intermediate representation (IR) to enable advanced cross-optimizations between ML and DB operators. In this optimization space, we discover the most exciting research opportunities that combine DB/Compiler/ML thinking. Our initial evaluation on real data demonstrates performance gains of up to 5.5x from the native integration of ML in SQL Server, and up to 24x from cross-optimizations--we will demonstrate Raven live during the conference talk.
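As a reference point for the building block Raven integrates inside SQL Server, the sketch below scores a batch of feature rows with ONNX Runtime's Python API. The model file name, input layout, and single-output assumption are illustrative; none of Raven's SQL Server integration or ML/DB cross-optimizations are reproduced here.

```python
# Batched model scoring with ONNX Runtime (illustrative stand-in for
# in-database inference; "model.onnx" is a hypothetical model file).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

def score_batch(rows):
    """Score a batch of feature rows in one call, mirroring the batched
    inference an in-RDBMS scoring operator would issue per chunk of tuples."""
    features = np.asarray(rows, dtype=np.float32)
    outputs = session.run(None, {input_name: features})
    return outputs[0]          # assumes the model exposes a single output

# e.g. predictions for two rows produced by a SQL scan operator
print(score_batch([[0.1, 2.0, 3.5], [1.2, 0.0, 4.1]]))
```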
Arising user-centric graph applications such as route planning and personalized social network analysis have initiated a shift of paradigms in modern graph processing systems towards multi-query analysis, i.e., processing multiple graph queries in parallel on a shared graph. These applications generate a dynamic number of localized queries around query hotspots such as popular urban areas. However, existing graph processing systems are not yet tailored towards these properties: The employed methods for graph partitioning and synchronization management disregard query locality and dynamism, which leads to high query latency. To this end, we propose the system Q-Graph for multi-query graph analysis that considers query locality on three levels. (i) The query-aware graph partitioning algorithm Q-cut maximizes query locality to reduce communication overhead. (ii) The method for synchronization management, called hybrid barrier synchronization, allows for full exploitation of local queries spanning only a subset of partitions. (iii) Both methods adapt at runtime to changing query workloads in order to maintain and exploit locality. Our experiments show that Q-cut reduces average query latency by up to 57 percent compared to static query-agnostic partitioning algorithms.
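The sketch below is a toy, hypothetical greedy assignment that captures only the intuition behind query-aware partitioning (keep the vertices touched by one localized query on as few partitions as possible, under a capacity bound); it is not the Q-cut algorithm, and the workload shown is made up.

```python
# Toy query-aware vertex assignment (hypothetical; illustrates query locality only).
from collections import defaultdict

def query_aware_assign(queries, num_partitions, capacity):
    """queries: list of vertex sets, one set per localized query."""
    assignment = {}                    # vertex -> partition
    load = defaultdict(int)            # partition -> number of assigned vertices
    for touched in queries:
        # Prefer the partition that already hosts most of this query's vertices,
        # breaking ties by current load.
        counts = defaultdict(int)
        for v in touched:
            if v in assignment:
                counts[assignment[v]] += 1
        ranked = sorted(range(num_partitions),
                        key=lambda p: (-counts[p], load[p]))
        for v in touched:
            if v in assignment:
                continue
            for p in ranked:
                if load[p] < capacity:
                    assignment[v], load[p] = p, load[p] + 1
                    break
    return assignment

# Two hotspot queries around vertex groups {a, b, c} and {c, d, e}
print(query_aware_assign([{"a", "b", "c"}, {"c", "d", "e"}], 2, 4))
```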
Tianyu Liu, Chi Wang, 2020
We study the hardness of Approximate Query Processing (AQP) of various types of queries involving joins over multiple tables of possibly different sizes. In the case where the query result is a single value (e.g., COUNT, SUM, and COUNT(DISTINCT)), we prove worst-case information-theoretic lower bounds for AQP problems that are given parameters $\epsilon$ and $\delta$, and return estimated results within a factor of $1+\epsilon$ of the true results with error probability at most $\delta$. In particular, the lower bounds for cardinality estimation over joins under various settings are contained in our results. Informally, our results show that for various database queries with joins, unless restricted to the set of queries whose results are always guaranteed to be above a very large threshold, the amount of information an AQP algorithm needs for returning an accurate approximation is at least linear in the number of rows in the largest table. Similar lower bounds even hold for some special cases where additional information such as top-K heavy hitters and all frequency vectors are available. In the case of GROUP-BY where the query result is not a single number, we study the lower bound for the amount of information used by any approximation algorithm that does not report any non-existing group and does not miss groups of large total size. Our work extends the work of Alon, Gibbons, Matias, and Szegedy [AGMS99]. We compare our lower bounds with the amount of information required by Bernoulli sampling to give an accurate approximation. For COUNT queries with joins over multiple tables of the same size, the upper bound matches the lower bound, unless the problem setting is restricted to the set of queries whose results are always guaranteed to be above a very large threshold.
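To make the Bernoulli-sampling baseline mentioned above concrete, the sketch below estimates a two-table join COUNT by sampling each table independently with rate p and scaling the sampled join count by 1/p^2; the tables and sampling rate are illustrative, and the sketch says nothing about the paper's lower bounds.

```python
# Bernoulli-sampling estimate of a two-table join COUNT (illustrative data).
import random
from collections import Counter

random.seed(0)
R = [(i, i % 100) for i in range(50_000)]   # (id, join key): each key appears 500 times
S = [(i, i % 100) for i in range(50_000)]
p = 0.05                                    # Bernoulli sampling rate per table

r_sample = [t for t in R if random.random() < p]
s_sample = [t for t in S if random.random() < p]

# Join the two samples on the key attribute, then scale by 1/p^2, which is the
# unbiased estimator for the full join size under independent Bernoulli sampling.
s_keys = Counter(k for _, k in s_sample)
sample_join_count = sum(s_keys[k] for _, k in r_sample)
estimate = sample_join_count / (p * p)

true_count = 100 * 500 * 500                # 100 keys, 500 matches per table per key
print(round(estimate), true_count)
```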