
Sparse p-Adic Data Coding for Computationally Efficient and Effective Big Data Analytics

Posted by: Fionn Murtagh
Publication date: 2016
Research field: Informatics Engineering
Paper language: English
Author: Fionn Murtagh





We develop the theory and practical implementation of p-adic sparse coding of data. Rather than the standard sparsifying criterion that uses the $L_0$ pseudo-norm, we use the p-adic norm. We require that the hierarchy or tree be node-ranked, as is standard practice in agglomerative and other hierarchical clustering, but not necessarily with decision trees. To structure the data, all computational processing operations are direct readings of the data, or are bounded by a constant number of direct readings, implying linear computational time. Through p-adic sparse data coding, efficient storage results, and for data stored with bounded p-adic norm, search and retrieval are constant-time operations. Examples show the effectiveness of this new approach to content-driven encoding and display of data.
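To make the role of the p-adic norm concrete, here is a minimal Python sketch: it computes the p-adic valuation and norm of an integer, encodes a hypothetical node-ranked dendrogram path as a p-adic expansion, and sparsifies that code by dropping terms whose norm falls below a bound. The digit encoding, function names, and norm-bound rule are illustrative assumptions, not the implementation described in the paper.

```python
from fractions import Fraction

def padic_valuation(n, p):
    """Largest v such that p**v divides n; the valuation of 0 is +infinity."""
    if n == 0:
        return float("inf")
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_norm(n, p):
    """p-adic norm |n|_p = p**(-v_p(n)); by convention |0|_p = 0."""
    return Fraction(0) if n == 0 else Fraction(1, p ** padic_valuation(n, p))

def encode_path(digits, p):
    """Encode a root-to-leaf path of digits (each in 0..p-1) as
    sum_i digits[i] * p**i; low powers correspond to ranks near the leaf."""
    return sum(d * p ** i for i, d in enumerate(digits))

def sparsify(digits, p, norm_bound):
    """Keep only digits whose term d * p**i has p-adic norm >= norm_bound,
    i.e. truncate the expansion at the rank where p**(-i) drops below it."""
    return [d if Fraction(1, p ** i) >= norm_bound else 0
            for i, d in enumerate(digits)]

if __name__ == "__main__":
    p = 3
    path = [2, 0, 1, 1, 2]                    # hypothetical path digits
    x = encode_path(path, p)                  # 200 for this path
    print(x, padic_norm(x, p))                # norm set by lowest non-zero digit
    print(sparsify(path, p, Fraction(1, 9)))  # -> [2, 0, 1, 0, 0]
```

Under this reading, tightening the norm bound simply truncates the expansion earlier, which is what makes storage of the sparsified codes compact and lookups by bounded norm cheap.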




Read also

Next Generation Sequencing (NGS) technology has resulted in massive amounts of proteomics and genomics data. This data is of no use if it is not properly analyzed. ETL (Extraction, Transformation, Loading) is an important step in designing data analytics applications. ETL requires a proper understanding of the features of the data. Data format plays a key role in the understanding of data, representation of data, the space required to store data, data I/O during processing, intermediate results of processing, in-memory analysis of data, and the overall time required to process data. Different data mining and machine learning algorithms require input data in specific types and formats. This paper explores the data formats used by different tools and algorithms and also presents modern data formats that are used on Big Data platforms. It will help researchers and developers in choosing an appropriate data format for a particular tool or algorithm.
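As a small illustration of the storage and I/O trade-off discussed in the abstract above, the following Python sketch writes the same numeric table once as text (CSV) and once as a binary NumPy array and compares the on-disk sizes. The array shape and file names are arbitrary choices for illustration, not anything prescribed by that paper.

```python
import os
import tempfile
import numpy as np

# The same 100,000 x 4 table of floats, stored as text (CSV) and as a binary
# .npy file; the on-disk footprint and re-load cost differ, which is the kind
# of trade-off that drives format choice in an analytics pipeline.
data = np.random.default_rng(1).standard_normal((100_000, 4))
with tempfile.TemporaryDirectory() as d:
    csv_path = os.path.join(d, "table.csv")
    npy_path = os.path.join(d, "table.npy")
    np.savetxt(csv_path, data, delimiter=",")
    np.save(npy_path, data)
    print("csv bytes:", os.path.getsize(csv_path))
    print("npy bytes:", os.path.getsize(npy_path))
```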
The relevance and importance of contextualizing data analytics are described. Qualitative characteristics may form the context of quantitative analysis. Topics at issue include: contrast, baselining, secondary data sources, supplementary data sources, and dynamic and heterogeneous data. In geometric data analysis, especially on the Correspondence Analysis platform, various case studies are both experimented with and reviewed. In aspects such as the paradigms followed and the technical implementation, implicitly and explicitly, an important point made is the major relevance of such work both for burgeoning analytical needs and for new analytical areas, including Big Data analytics. For the general reader, the aim is first to display and describe the analytical outcomes that are subject to analysis here, and then to detail the more quantitative outcomes that fully support the analytics carried out.
In this era of Big Data, proficient use of data mining is the key to capture useful information from any dataset. As numerous data mining techniques make use of information theory concepts, in this paper, we discuss how Fisher information (FI) can be applied to analyze patterns in Big Data. The main advantage of FI is its ability to combine multiple variables together to inform us on the overall trends and stability of a system. It can therefore detect whether a system is losing dynamic order and stability, which may serve as a signal of an impending regime shift. In this work, we first provide a brief overview of Fisher information theory, followed by a simple step-by-step numerical example on how to compute FI. Finally, as a numerical demonstration, we calculate the evolution of FI for GDP per capita (current US Dollar) and total population of the USA from 1960 to 2013.
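The abstract above hinges on computing FI numerically over time. Below is a minimal one-variable Python sketch of one commonly used discrete, amplitude-based form, FI ≈ 4 Σ_i (√p_i − √p_{i+1})², evaluated over sliding windows. The window length, bin count, and the synthetic series standing in for an indicator such as GDP per capita are assumptions for illustration, not that paper's data or exact procedure.

```python
import numpy as np

def fisher_information(window, n_bins=10):
    """Discrete Fisher information of one window of observations, using the
    amplitude form FI = 4 * sum_i (sqrt(p_i) - sqrt(p_{i+1}))**2, where the
    p_i are bin probabilities of the values in the window."""
    counts, _ = np.histogram(window, bins=n_bins)
    p = counts / counts.sum()
    q = np.sqrt(p)                      # probability amplitudes
    return 4.0 * np.sum(np.diff(q) ** 2)

def sliding_fi(series, window=8, n_bins=10):
    """Evolution of FI over a series, one value per overlapping window."""
    series = np.asarray(series, dtype=float)
    return np.array([fisher_information(series[i:i + window], n_bins)
                     for i in range(len(series) - window + 1)])

if __name__ == "__main__":
    # Synthetic stand-in for an annual indicator (e.g. 1960-2013 -> 54 points):
    # a steady growth trend whose variability increases towards the end.
    rng = np.random.default_rng(0)
    t = np.arange(54)
    series = 1000 * 1.03 ** t * (1 + 0.01 * rng.standard_normal(54) * (1 + t / 20))
    print(sliding_fi(series, window=8).round(3))
```

A sustained drop in the windowed FI values is what such analyses typically read as loss of dynamic order, i.e. a possible precursor of a regime shift.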
Decreasing transistor sizes and lower voltage swings cause two distinct problems for communication in integrated circuits. First, decreasing inter-wire spacing increases interline capacitive coupling, which adversely affects transmission energy and delay. Second, lower voltage swings render the transmission susceptible to various noise sources. Coding can be used to address both these problems. So-called crosstalk-avoidance codes mitigate capacitive coupling, and traditional error-correction codes introduce resilience against channel errors. Unfortunately, crosstalk-avoidance and error-correction codes cannot be combined in a straightforward manner. On the one hand, crosstalk-avoidance encoding followed by error-correction encoding destroys the crosstalk-avoidance property. On the other hand, error-correction encoding followed by crosstalk-avoidance encoding causes the crosstalk-avoidance decoder to fail in the presence of errors. Existing approaches circumvent this difficulty by using additional bus wires to protect the parities generated from the output of the error-correction encoder, and are therefore inefficient. In this work we propose a novel joint crosstalk-avoidance and error-correction coding and decoding scheme that provides higher bus transmission rates compared to existing approaches. Our joint approach carefully embeds the parities such that the crosstalk-avoidance property is preserved. We analyze the rate and minimum distance of the proposed scheme. We also provide a density evolution analysis and predict iterative decoding thresholds for reliable communication under random bus erasures. This density evolution analysis is nonstandard, since the crosstalk-avoidance constraints are inherently nonlinear.
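The crosstalk-avoidance constraint referred to above is often formulated as forbidding adjacent bus wires from switching in opposite directions in the same cycle, since that pattern produces the worst-case coupling. The Python sketch below checks a single bus transition against that generic forbidden-transition condition; the formulation and function name are illustrative assumptions, not the joint coding scheme proposed in that paper.

```python
def violates_crosstalk(prev_word, next_word, width):
    """Return True if the bus transition prev_word -> next_word contains a
    forbidden transition: two adjacent wires switching in opposite directions
    in the same cycle (the worst-case capacitive-coupling pattern)."""
    for i in range(width - 1):
        a_prev, a_next = (prev_word >> i) & 1, (next_word >> i) & 1
        b_prev, b_next = (prev_word >> (i + 1)) & 1, (next_word >> (i + 1)) & 1
        # Both wires toggle, and they end up in opposite states, so one went
        # 0 -> 1 while its neighbour went 1 -> 0.
        if a_prev != a_next and b_prev != b_next and a_next != b_next:
            return True
    return False

if __name__ == "__main__":
    # 4-wire bus: 0b0101 -> 0b1010 toggles every adjacent pair in opposite
    # directions, while 0b0101 -> 0b0111 toggles only a single wire.
    print(violates_crosstalk(0b0101, 0b1010, 4))   # True
    print(violates_crosstalk(0b0101, 0b0111, 4))   # False
```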
Delivering effective data analytics is of crucial importance to the interpretation of the multitude of biological datasets currently generated by an ever-increasing number of high-throughput techniques. Logic programming has much to offer in this area. Here, we detail advances that highlight two of the strengths of logical formalisms in developing data analytic solutions in biological settings: access to large relational databases and building analytical pipelines collecting graph information from multiple sources. We present significant advances on the bio_db package, which serves biological databases as Prolog facts that can be provided either by in-memory loading or via database backends. These advances include modularising the underlying architecture and the incorporation of datasets from a second organism (mouse). In addition, we introduce a number of data analytics tools that operate on these datasets and are bundled in the analysis package bio_analytics. Emphasis in both packages is on ease of installation and use. We highlight the general architecture of our component-based approach. An experimental graphical user interface via SWISH for local installation is also available. Finally, we advocate that biological data analytics is a fertile area which can drive further innovation in applied logic programming.