
An Approach to Data Prefetching Using 2-Dimensional Selection Criteria

Added by Josef Spjut
Publication date: 2015
Language: English





We propose an approach to data memory prefetching that augments the standard prefetch buffer with selection criteria based on the performance and usage pattern of a given instruction. The approach is built on top of a pattern-matching-based prefetcher, specifically one that can choose between a stream, a stride, or a stream followed by a stride. We track the most recently called instructions to decide how much data to prefetch next, basing the decision on how frequently those instructions are called and on the prefetcher's hit/miss rate. We separate the amount of data to prefetch into three categories: a high degree, a standard degree, and a low degree. Testing different values for the high, standard, and low prefetch degrees showed that the best combination was 1, 4, and 8 lines, respectively. The 2-dimensional selection criteria improved the performance of the prefetcher by up to 9.5% over the first Data Prefetching Championship winner. Performance also fell by as much as 14% on some benchmarks, but remained similar on average across all of the benchmarks we tested.
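As a concrete illustration, the degree-selection logic described above can be sketched in Python. Only the three degree values (1, 4, and 8 lines, listed for the high, standard, and low settings as in the abstract) come from the paper; the window size, thresholds, and the exact rule for combining call frequency with hit rate are assumptions made for this example.

```python
from collections import Counter, deque

class TwoDimDegreeSelector:
    """Pick a prefetch degree from two criteria: instruction call
    frequency and prefetcher hit rate.

    Degree values follow the abstract's reported best combination;
    the thresholds and selection rule are illustrative assumptions,
    not the paper's actual policy.
    """
    HIGH_DEGREE, STANDARD_DEGREE, LOW_DEGREE = 1, 4, 8

    def __init__(self, window=64):
        self.recent = deque(maxlen=window)  # most recently seen instruction PCs
        self.freq = Counter()               # occurrence counts within the window
        self.hits = 0
        self.issued = 0

    def record_instruction(self, pc):
        # Evict the oldest PC's count before the deque drops it.
        if len(self.recent) == self.recent.maxlen:
            self.freq[self.recent[0]] -= 1
        self.recent.append(pc)
        self.freq[pc] += 1

    def record_prefetch_outcome(self, hit):
        self.issued += 1
        self.hits += int(hit)

    def degree_for(self, pc):
        hit_rate = self.hits / self.issued if self.issued else 0.0
        freq = self.freq[pc] / max(len(self.recent), 1)
        # Hot instruction + accurate prefetcher -> high setting;
        # cold instruction or inaccurate prefetcher -> low setting.
        if freq > 0.25 and hit_rate > 0.5:
            return self.HIGH_DEGREE
        if freq < 0.05 or hit_rate < 0.2:
            return self.LOW_DEGREE
        return self.STANDARD_DEGREE
```

A driver would call `record_instruction` on every memory instruction, `record_prefetch_outcome` on every prefetch result, and `degree_for` when issuing the next prefetch.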




Read More

Prior work has observed that fetch-directed prefetching (FDIP) is highly effective at covering instruction cache misses. The key to FDIP's effectiveness is having a sufficiently large BTB to accommodate the application's branch working set. In this work, we introduce several optimizations that significantly extend the reach of the BTB within the available storage budget. Our optimizations target nearly every source of storage overhead in each BTB entry, namely the tag, target address, and size fields. We observe that while most dynamic branch instances have short offsets, a large number of branches have longer offsets or require the use of full target addresses. Based on this insight, we break up the BTB into multiple smaller BTBs, each storing offsets of a different length. This enables a dramatic reduction in storage for target addresses. We further compress tags to 16 bits and avoid the basic-block-oriented BTB advocated in prior FDIP variants; the latter optimization eliminates the need to store the basic block size in each BTB entry. Our final design, called FDIP-X, uses an ensemble of 4 BTBs and always outperforms conventional FDIP with a unified basic-block-oriented BTB for equal storage budgets.
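The offset-bucketing idea can be sketched as follows: each branch is routed to one of several BTB banks by the bit-width of its target offset, so short-offset branches never pay for full-address storage. The bank widths, the direct-mapped indexing, and the 16-bit tag scheme here are illustrative assumptions, not FDIP-X's actual configuration.

```python
def offset_bits(pc, target):
    """Signed bit-width needed to encode the offset target - pc."""
    off = target - pc
    return max(off.bit_length() + 1, 1)  # +1 for the sign bit

class MultiBTB:
    # (max offset bits, label) per bank; the last bank stores full targets.
    BANKS = [(8, "short"), (16, "medium"), (24, "long"), (None, "full")]

    def __init__(self):
        self.banks = [dict() for _ in self.BANKS]

    def _bank_index(self, pc, target):
        bits = offset_bits(pc, target)
        for i, (width, _) in enumerate(self.BANKS):
            if width is None or bits <= width:
                return i
        return len(self.BANKS) - 1

    def insert(self, pc, target):
        i = self._bank_index(pc, target)
        tag = (pc >> 2) & 0xFFFF  # compressed 16-bit tag (illustrative)
        # Narrow banks store the offset; the full bank stores the target.
        self.banks[i][tag] = target - pc if self.BANKS[i][0] else target

    def lookup(self, pc):
        tag = (pc >> 2) & 0xFFFF
        for (width, _), bank in zip(self.BANKS, self.banks):
            if tag in bank:
                val = bank[tag]
                return pc + val if width else val
        return None
```

A real design would also handle tag aliasing across banks and replacement; the sketch only shows the storage split.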
Liya Fu, Jiaqi Li, You-Gan Wang (2020)
This paper proposes a new robust smooth-threshold estimating equation to select important variables and automatically estimate parameters for high dimensional longitudinal data. A novel working correlation matrix is proposed to capture correlations within the same subject. The proposed procedure works well when the number of covariates p increases as the number of subjects n increases. The proposed estimates are competitive with the estimates obtained with the true correlation structure, especially when the data are contaminated. Moreover, the proposed method is robust against outliers in the response variables and/or covariates. Furthermore, the oracle properties for robust smooth-threshold estimating equations under large n, diverging p are established under some regularity conditions. Extensive simulation studies and a yeast cell cycle data set are used to evaluate the performance of the proposed method, and results show that our proposed method is competitive with existing robust variable selection procedures.
The increasing volumes of astronomical data require practical methods for data exploration, access and visualisation. The Hierarchical Progressive Survey (HiPS) is a HEALPix-based scheme that enables a multi-resolution approach to astronomy data, from the individual pixels up to the whole sky. We highlight the decisions and approaches that have been taken to make this scheme a practical solution for managing large volumes of heterogeneous data. Early implementors of this system have formed a network of HiPS nodes, with some 250 diverse data sets currently available and multiple mirror implementations for important data sets. This hierarchical approach can be adapted to expose Big Data in different ways. We describe how the ease of implementation and local customisation of the Aladin Lite embeddable HiPS visualiser have been key to promoting collaboration on HiPS.
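The hierarchical pixelization that HiPS builds on can be illustrated with the HEALPix NESTED numbering, in which each pixel at one order splits into four at the next, letting a client refine progressively from the whole sky down to fine tiles. The two relations below are standard HEALPix properties; this is only a minimal illustration of them, not a HiPS client.

```python
def npix(order):
    """Number of HEALPix pixels over the full sky at a given order
    (order 0 is the 12 base pixels; each level quadruples the count)."""
    return 12 * 4 ** order

def children(pixel):
    """NESTED-scheme child pixels one order deeper."""
    return [4 * pixel + i for i in range(4)]

def parent(pixel):
    """NESTED-scheme parent pixel one order shallower."""
    return pixel // 4
```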
In this paper, we present a case study demonstrating how dynamic and uncertain criteria can be incorporated into a multicriteria analysis with the help of discrete event simulation. The simulation-guided multicriteria analysis can include both monetary and non-monetary criteria that are static or dynamic, whereas standard multicriteria analysis only deals with static criteria and cost-benefit analysis only deals with static monetary criteria. The dynamic and uncertain criteria are incorporated by using simulation to explore how the decision options perform; the results of the simulation are then fed into the multicriteria analysis. By enabling the incorporation of dynamic and uncertain criteria, the dynamic multicriteria analysis was able to take a unique perspective on the problem. The highest-ranked option returned by the dynamic multicriteria analysis differed from the other decision aid techniques.
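A minimal sketch of the idea: a stochastic simulation supplies a dynamic criterion (here, average queuing delay), which is then combined with a static monetary criterion in a weighted-sum multicriteria score. The queue model, the options, their costs, and the weights are all invented for illustration and are not taken from the case study.

```python
import random

def simulate_avg_delay(service_time, n_jobs=1000, rng=None):
    """Tiny single-server queue: jobs arrive one time unit apart,
    service times are randomly scaled around service_time."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    clock, total_delay = 0.0, 0.0
    for arrival in range(n_jobs):
        start = max(clock, arrival)        # wait if the server is busy
        total_delay += start - arrival
        clock = start + rng.uniform(0.5, 1.5) * service_time
    return total_delay / n_jobs

def mca_score(option, weights):
    # Both criteria are "lower is better", so negate the weighted sum.
    delay = simulate_avg_delay(option["service_time"])
    return -(weights["cost"] * option["cost"] + weights["delay"] * delay)

# Hypothetical decision options: A is costlier but faster, B cheaper but slower.
options = {
    "A": {"cost": 10.0, "service_time": 0.9},
    "B": {"cost": 6.0, "service_time": 1.1},
}
weights = {"cost": 0.5, "delay": 0.5}
best = max(options, key=lambda name: mca_score(options[name], weights))
```

Because option B's mean service time exceeds the arrival spacing, its simulated delay grows large enough to outweigh its cost advantage, which is exactly the kind of dynamic effect a static analysis would miss.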
Recommender systems are being employed across an increasingly diverse set of domains that can potentially make a significant social and individual impact. For this reason, considering fairness is a critical step in the design and evaluation of such systems. In this paper, we introduce HyperFair, a general framework for enforcing soft fairness constraints in a hybrid recommender system. HyperFair models integrate variations of fairness metrics as a regularization of a joint inference objective function. We implement our approach using probabilistic soft logic and show that it is particularly well-suited for this task, as it is expressive and structural constraints can be added to the system in a concise and interpretable manner. We propose two ways to employ the methods we introduce: first, as an extension of a probabilistic soft logic recommender system template; second, as a fair retrofitting technique that can be used to improve the fairness of predictions from a black-box model. We empirically validate our approach by implementing multiple HyperFair hybrid recommenders and comparing them to a state-of-the-art fair recommender. We also run experiments showing the effectiveness of our methods for the task of retrofitting a black-box model and the trade-off between the amount of fairness enforced and the prediction performance.
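To make fairness-as-regularization concrete, here is a toy Python sketch in which a demographic-parity-style gap between two user groups' mean predictions is penalized alongside squared error, with the regularization weight trading fit against fairness. The data, the parity metric, and the per-user bias model are invented for illustration; HyperFair's actual probabilistic-soft-logic formulation is far richer than this.

```python
def fair_fit(ratings, groups, lam=1.0, lr=0.05, steps=500):
    """Minimize sum((pred - rating)^2) + lam * (group mean gap)^2
    by gradient descent. ratings: {user: observed}, groups: {user: 0 or 1}."""
    pred = {u: 0.0 for u in ratings}
    g0 = [u for u in ratings if groups[u] == 0]
    g1 = [u for u in ratings if groups[u] == 1]
    for _ in range(steps):
        mean0 = sum(pred[u] for u in g0) / len(g0)
        mean1 = sum(pred[u] for u in g1) / len(g1)
        gap = mean0 - mean1  # demographic-parity-style gap
        for u in pred:
            grad = 2 * (pred[u] - ratings[u])  # squared-error term
            sign = 1 if groups[u] == 0 else -1
            # gradient of lam * gap^2 through this user's group mean
            grad += lam * 2 * gap * sign / (len(g0) if sign == 1 else len(g1))
            pred[u] -= lr * grad
    return pred

# Toy split: the two groups have very different observed ratings.
ratings = {"a": 5.0, "b": 5.0, "c": 1.0, "d": 1.0}
groups = {"a": 0, "b": 0, "c": 1, "d": 1}
fair_pred = fair_fit(ratings, groups, lam=10.0)    # gap shrinks toward 0
unfair_pred = fair_fit(ratings, groups, lam=0.0)   # recovers the ratings
```

With `lam=0` the fit simply reproduces the observed ratings (gap of 4); with `lam=10` the group means are pulled together at some cost in accuracy, which is the trade-off the abstract measures.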
