
A New Approach to Speed up Combinatorial Search Strategies Using Stack and Hash Table

Published by: Dr. Bestoun Ahmed
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Owing to the significance of combinatorial search strategies for both academia and industry, the introduction of new techniques is a fast-growing research field. These strategies take many forms, ranging from simple to complex, in order to solve all kinds of combinatorial problems. Regardless of the problem they solve, however, such approaches are prone to heavy computation as the number of combinations and the dimensions of the search space grow. This paper presents a new approach that speeds up the generation and search processes using a combination of stack and hash table data structures. The approach can be applied to combinatorial strategies to accelerate both the generation of combinations and the search through the search space. Furthermore, the new approach demonstrated better performance than other known strategies at several stages.
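
Only the abstract is reproduced here, so the following is a minimal sketch of the general idea rather than the authors' algorithm: an explicit stack drives the enumeration of candidate tests while a hash table (Python's set) gives average constant-time membership checks on what has already been covered. The function names, the t-way coverage setting, and the greedy selection rule are illustrative assumptions.

```python
from itertools import combinations, product

def pairs_covered(test, t=2):
    """All t-way (parameter-index, value) combinations covered by one test."""
    return set(combinations(enumerate(test), t))

def greedy_suite(candidates, t=2):
    """Select tests using an explicit stack of candidates and a hash set of
    already-covered combinations, so each coverage check is an average O(1)
    hash lookup rather than a linear scan over the suite."""
    covered = set()            # hash table of covered t-way combinations
    stack = list(candidates)   # explicit stack drives the search (LIFO)
    suite = []
    while stack:
        test = stack.pop()
        gain = pairs_covered(test, t) - covered
        if gain:               # keep the test only if it adds new coverage
            suite.append(test)
            covered |= gain
    return suite

# Toy usage: 3 parameters with 2 values each; the full factorial is the
# candidate pool, and the hash set prunes tests that add no new coverage.
print(greedy_suite(list(product([0, 1], repeat=3))))
```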




Read also

Microservice architecture refers to the use of numerous small-scale and independently deployed services, instead of encapsulating all functions into one monolith. It has been a challenge in software engineering to decompose a monolithic system into smaller parts. In this paper, we propose the Feature Table approach, a structured approach to service decomposition based on the correlation between functional features and microservices: (1) we defined the concept of Feature Cards and 12 instances of such cards; (2) we formulated Decomposition Rules to decompose monolithic applications; (3) we designed the Feature Table Analysis Tool to provide semi-automatic analysis for the identification of microservices; and (4) we formulated Mapping Rules to help developers implement microservice candidates. We performed a case study on the Cargo Tracking System to validate our microservice-oriented decomposition approach. Cargo Tracking System is a typical case that has been decomposed by other related methods (the dataflow-driven approach, Service Cutter, and API Analysis). Through comparison with these methods in terms of specific coupling and cohesion metrics, the results show that the proposed Feature Table approach delivers more reasonable microservice candidates, which are feasible to implement with semi-automatic support.
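
As a rough illustration of the decomposition idea only (not the paper's actual Feature Cards, its 12 card instances, or its rules), the sketch below merges features into one microservice candidate whenever they share a domain entity. The card contents are hypothetical, loosely modeled on the Cargo Tracking domain.

```python
# Hypothetical feature cards: each feature lists the domain entities it touches.
feature_cards = {
    "book_cargo":   {"Cargo", "Itinerary"},
    "track_cargo":  {"Cargo", "HandlingEvent"},
    "log_handling": {"HandlingEvent", "Voyage"},
    "notify_user":  {"User", "Notification"},
}

def decompose(cards):
    """Toy decomposition rule: features that share an entity merge
    (transitively) into one microservice candidate."""
    services = []  # list of (feature_set, entity_set) candidates
    for feature, entities in cards.items():
        merged_feats, merged_ents = {feature}, set(entities)
        remaining = []
        for feats, ents in services:
            if ents & merged_ents:          # shared entity -> same service
                merged_feats |= feats
                merged_ents |= ents
            else:
                remaining.append((feats, ents))
        services = remaining + [(merged_feats, merged_ents)]
    return services

# Yields two candidates: the cargo/handling cluster and the notification cluster.
print(decompose(feature_cards))
```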
The hash table is a fundamental data structure for quick search and retrieval of data, and a key component in complex graph analytics and AI/ML applications. State-of-the-art parallel hash table implementations either make simplifying assumptions, such as supporting only a subset of hash table operations, or employ optimizations whose performance is highly data-dependent and, in the worst case, can degrade to that of a sequential implementation. In contrast, in this work we develop a dynamic hash table that supports all hash table queries (search, insert, delete, and update) while sustaining p parallel queries (p > 1) per clock cycle via p processing engines (PEs) even in the worst case; that is, the performance is data-agnostic. We achieve this by implementing novel XOR-based multi-ported block memories on FPGAs. Additionally, we develop a technique to optimize the memory requirement of the hash table when the ratio of search to insert/update/delete queries is known beforehand. We implement our design on state-of-the-art FPGA devices. Our design scales to 16 PEs and supports throughput of up to 5926 MOPS. It matches the throughput of the state-of-the-art hash table design FASTHash, which supports only search and insert operations. Compared with the best FPGA design that supports the same set of operations, our hash table achieves up to a 12.3x speedup.
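
XOR-based multi-porting is a hardware technique for building multi-ported memories out of simpler block RAMs; the paper's FPGA design is not reproducible in prose, but the underlying XOR identity can be shown in a loose software analogue. Assumptions here: two write ports, one logical memory, and no same-cycle writes to the same address.

```python
class XorTwoPortMemory:
    """Software analogue of an XOR-based multi-ported memory. Each of two
    write ports owns a bank; a write stores data XOR the other bank's
    current word, so read(addr) = bank0[addr] ^ bank1[addr] always decodes
    to the most recently written value."""
    def __init__(self, size):
        self.banks = [[0] * size, [0] * size]

    def write(self, port, addr, data):
        # Encode against the other bank so the XOR of both banks yields data.
        self.banks[port][addr] = data ^ self.banks[1 - port][addr]

    def read(self, addr):
        return self.banks[0][addr] ^ self.banks[1][addr]

mem = XorTwoPortMemory(8)
mem.write(0, 3, 42)   # port 0 writes
mem.write(1, 5, 7)    # port 1 writes (same "cycle", different address)
assert mem.read(3) == 42 and mem.read(5) == 7
```

In hardware this lets several writes land per cycle using only single-ported banks; in Python it merely demonstrates that the encode/decode arithmetic is consistent.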
Latent Dirichlet allocation (LDA) is a widely used probabilistic topic modeling paradigm that has recently found many applications in computer vision and computational biology. In this paper, we propose a fast and accurate batch algorithm, active belief propagation (ABP), for training LDA. Batch LDA algorithms usually require repeated scanning of the entire corpus and searching of the complete topic space; for massive corpora with a large number of topics, each training iteration is therefore inefficient and time-consuming. To accelerate training, ABP actively scans a subset of the corpus and searches a subset of the topic space, saving enormous training time in each iteration. To ensure accuracy, ABP selects only those documents and topics that contribute the largest residuals within the residual belief propagation (RBP) framework. On four real-world corpora, ABP runs around 10 to 100 times faster than state-of-the-art batch LDA algorithms with comparable topic modeling accuracy.
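
The scheduling idea at the core of ABP/RBP, updating only the items with the largest residuals each sweep, can be sketched independently of the LDA message equations. The `update_message` callback and the active fraction below are illustrative assumptions, not the paper's parameters.

```python
import heapq

def active_sweep(residuals, update_message, top_frac=0.1):
    """One residual-driven sweep: refresh only the top fraction of items by
    residual, skipping the rest to save per-iteration work."""
    k = max(1, int(len(residuals) * top_frac))
    # Indices of the k largest residuals (the "active" documents/topics).
    active = heapq.nlargest(k, range(len(residuals)), key=residuals.__getitem__)
    for i in active:
        residuals[i] = update_message(i)   # new residual after the update
    return residuals

# Toy usage: a residual halves whenever its item is updated.
res = [0.9, 0.1, 0.5, 0.8]
print(active_sweep(res, lambda i: res[i] * 0.5, top_frac=0.5))
```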
The joint task of bug localization and program repair is an integral part of the software development process. In this work we present DeepDebug, an approach to automated debugging using large, pretrained transformers. We begin by training a bug-creation model on reversed commit data for the purpose of generating synthetic bugs. We apply these synthetic bugs toward two ends. First, we directly train a backtranslation model on all functions from 200K repositories. Next, we focus on 10K repositories for which we can execute tests, and create bug…
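
The abstract above is cut off in the source, but the reversed-commit idea it opens with is simple to sketch: a bug-fix commit maps buggy code to fixed code, so swapping the two sides yields training pairs for a bug-creation (backtranslation) model. The `fix_commits` input below is a hypothetical list of (before, after) snippets, not DeepDebug's data format.

```python
def synthetic_bug_pairs(fix_commits):
    """Reverse each (buggy_before, fixed_after) commit into a
    (fixed, buggy) pair for training a bug-creation model."""
    return [(after, before) for before, after in fix_commits]

pairs = synthetic_bug_pairs([
    ("if x > 0:", "if x >= 0:"),   # the fix added the '='
])
print(pairs)  # [('if x >= 0:', 'if x > 0:')] -> model learns to re-break code
```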
Pretrained contextualized language models such as BERT have achieved impressive results on various natural language processing benchmarks. Benefiting from multiple pretraining tasks and large-scale training corpora, pretrained models can capture complex syntactic word relations. In this paper, we use the deep contextualized language model BERT for the task of ad hoc table retrieval. We investigate how to encode table content given the table structure and the input length limit of BERT. We also propose an approach that incorporates features from prior literature on table retrieval and jointly trains them with BERT. In experiments on public datasets, we show that our best approach outperforms the previous state-of-the-art method and BERT baselines by a large margin under different evaluation metrics.
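
Fitting a table into BERT's input limit is the crux of the encoding question raised above. As a minimal sketch (the separator scheme is an assumption, not the paper's exact encoding, and any tokenizer exposing a `tokenize` method, such as Hugging Face's `BertTokenizer`, would do), one can flatten the caption, headers, and rows and then truncate:

```python
def linearize_table(caption, headers, rows, tokenizer, max_tokens=512):
    """Flatten a table into one token sequence for BERT: caption, then the
    header row, then data rows, with [SEP] between segments and '|'
    between cells, truncated to the model's input limit."""
    parts = [caption, " [SEP] " + " | ".join(headers)]
    for row in rows:
        parts.append(" [SEP] " + " | ".join(str(c) for c in row))
    return tokenizer.tokenize("".join(parts))[:max_tokens]

# Toy usage with a whitespace "tokenizer" standing in for BertTokenizer.
class WhitespaceTok:
    def tokenize(self, text):
        return text.split()

toks = linearize_table("City populations", ["city", "pop"],
                       [["Oslo", 700000]], WhitespaceTok(), max_tokens=16)
print(toks)
```

Truncating row by row rather than mid-cell, or selecting the most query-relevant rows first, would be natural refinements of the same scheme.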