
Gene-Patterns: Should Architecture be Customized for Each Application?

Added by Xiang Li
Publication date: 2019
Language: English


Providing architectural support is crucial for newly arising applications to achieve high performance and high system efficiency. There is currently a trend toward designing accelerators for specific applications, and a debate has been sparked over whether we should customize the architecture for each application. In this study, we introduce what we refer to as Gene-Patterns, the base patterns of diverse applications. We present a Recursive Reduce methodology to identify the hotspots, and we provide a HOtspot Trace Suite (HOTS) for the research community. We first extract the hotspot patterns and then remove redundancy to obtain the base patterns. We find that although the number of applications is huge and ever-increasing, the number of base patterns is relatively small, owing to the similarity among the patterns of diverse applications. The similarity stems not only from the algorithms but also from the data structures. We build the Periodic Table of Memory Access Patterns (PT-MAP), in which the indifference curves are analogous to energy levels in physics, and memory performance optimization is essentially an energy-level transition. We find that inefficiency results from the mismatch between some of the base patterns and the micro-architecture of modern processors, and we have identified the key micro-architecture demands of the base patterns. The Gene-Pattern concept, methodology, and toolkit will facilitate the design of both hardware and software for matching architectures to applications.
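The extract-then-deduplicate pipeline the abstract describes can be illustrated with a minimal sketch. This is an assumption about the general shape of the approach, not the paper's actual code: hotspots are represented here by their memory-access stride sequences, and base patterns are simply the distinct patterns that remain after redundancy across applications is removed. All names (`extract_hotspot_patterns`, `base_patterns`) are illustrative.

```python
# A minimal, hypothetical sketch of the pipeline: collect hotspot
# (memory-access) patterns per application, then deduplicate across
# applications to obtain the much smaller set of base patterns.
from collections import Counter

def extract_hotspot_patterns(trace, window=4):
    """Represent each hotspot by its sequence of address strides."""
    strides = tuple(b - a for a, b in zip(trace, trace[1:]))
    return [strides[i:i + window] for i in range(0, len(strides), window)]

def base_patterns(applications):
    """Count hotspot patterns across all applications; the distinct
    keys that remain after deduplication play the role of base patterns."""
    seen = Counter()
    for trace in applications:
        for pattern in extract_hotspot_patterns(trace):
            seen[pattern] += 1
    return set(seen)

# Two toy 'applications': one sequential, one strided. Eight hotspot
# instances collapse into just four distinct base patterns.
apps = [list(range(0, 16)), list(range(0, 64, 4))]
print(len(base_patterns(apps)))
```

The point of the sketch is the ratio: many hotspot instances, few distinct patterns, mirroring the paper's observation that the number of base patterns stays small even as applications multiply.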




Read More

Intuitively, obedience -- following the order that a human gives -- seems like a good property for a robot to have. But we humans are not perfect, and we may give orders that are not best aligned with our preferences. We show that when a human is not perfectly rational, a robot that tries to infer and act according to the human's underlying preferences can always perform better than a robot that simply follows the human's literal order. Thus, there is a tradeoff between the obedience of a robot and the value it can attain for its owner. We investigate how this tradeoff is impacted by the way the robot infers the human's preferences, showing that some methods err more on the side of obedience than others. We then analyze how performance degrades when the robot has a misspecified model of the features that the human cares about or of the level of rationality of the human. Finally, we study how robots can start detecting such model misspecification. Overall, our work suggests that there might be a middle ground in which robots intelligently decide when to obey human orders, but err on the side of obedience.
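The core tradeoff can be seen in a toy model. This is an assumption for illustration, not the paper's model: the human is noisily rational (Boltzmann-rational), an obedient robot executes the literal order, and an inferring robot acts on the inferred best option; all function names are hypothetical.

```python
# Toy model: a noisily rational human orders one of two options;
# compare the expected value of literal obedience vs. acting on the
# inferred preference. Purely illustrative, not the paper's setup.
import math, random

def boltzmann_choice(utilities, beta, rng):
    """Sample an option with probability proportional to exp(beta * u)."""
    weights = [math.exp(beta * u) for u in utilities]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(utilities) - 1

def expected_values(utilities, beta, trials=10000, seed=0):
    rng = random.Random(seed)
    obey = infer = 0.0
    best = max(range(len(utilities)), key=lambda i: utilities[i])
    for _ in range(trials):
        order = boltzmann_choice(utilities, beta, rng)
        obey += utilities[order]   # obedient robot: do what was ordered
        infer += utilities[best]   # inferring robot: do the inferred best
    return obey / trials, infer / trials

obey, infer = expected_values([1.0, 0.0], beta=1.0)
print(obey <= infer)  # -> True
```

With an imperfect human (finite `beta`), obedience occasionally executes the worse option, so the inferring robot's expected value is at least as high, matching the abstract's claim in this simplified setting.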
The refined inertia of a square real matrix $A$ is the ordered $4$-tuple $(n_+, n_-, n_z, 2n_p)$, where $n_+$ (resp., $n_-$) is the number of eigenvalues of $A$ with positive (resp., negative) real part, $n_z$ is the number of zero eigenvalues of $A$, and $2n_p$ is the number of nonzero pure imaginary eigenvalues of $A$. For $n \geq 3$, the set of refined inertias $\mathbb{H}_n = \{(0, n, 0, 0), (0, n-2, 0, 2), (2, n-2, 0, 0)\}$ is important for the onset of Hopf bifurcation in dynamical systems. We say that an $n \times n$ sign pattern ${\cal A}$ requires $\mathbb{H}_n$ if $\mathbb{H}_n = \{\text{ri}(B) \mid B \in Q({\cal A})\}$. Bodine et al. conjectured that no $n \times n$ irreducible sign pattern that requires $\mathbb{H}_n$ exists for $n$ sufficiently large, possibly $n \ge 8$. However, for each $n \geq 4$, we identify three $n \times n$ irreducible sign patterns that require $\mathbb{H}_n$, which resolves this conjecture.
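The definition of refined inertia translates directly into code. The sketch below counts the four quantities from a given list of eigenvalues; the helper name and tolerance are assumptions for illustration.

```python
# Compute the refined inertia (n_+, n_-, n_z, 2n_p) from a list of
# eigenvalues, per the definition above. Hypothetical helper name;
# 'tol' guards against floating-point noise.
def refined_inertia(eigenvalues, tol=1e-12):
    n_pos = sum(1 for ev in eigenvalues if ev.real > tol)
    n_neg = sum(1 for ev in eigenvalues if ev.real < -tol)
    n_zero = sum(1 for ev in eigenvalues if abs(ev) <= tol)
    # nonzero, purely imaginary eigenvalues (e.g. a Hopf pair +/- bi)
    n_imag = sum(1 for ev in eigenvalues
                 if abs(ev.real) <= tol and abs(ev.imag) > tol)
    return (n_pos, n_neg, n_zero, n_imag)

# Example: spectrum {-1, -2, +3i, -3i} for n = 4 gives (0, n-2, 0, 2),
# one of the three refined inertias in H_n.
evs = [complex(-1, 0), complex(-2, 0), complex(0, 3), complex(0, -3)]
print(refined_inertia(evs))  # -> (0, 2, 0, 2)
```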
Cross-view geo-localization aims to spot images of the same geographic target from different platforms, e.g., drone-view cameras and satellites. It is challenging due to the large visual appearance changes caused by extreme viewpoint variations. Existing methods usually concentrate on mining the fine-grained feature of the geographic target in the image center, but underestimate the contextual information in neighboring areas. In this work, we argue that neighboring areas can be leveraged as auxiliary information, enriching discriminative clues for geo-localization. Specifically, we introduce a simple and effective deep neural network, called Local Pattern Network (LPN), to take advantage of contextual information in an end-to-end manner. Without using extra part estimators, LPN adopts a square-ring feature partition strategy, which assigns attention according to the distance to the image center. It eases part matching and enables part-wise representation learning. Owing to the square-ring partition design, the proposed LPN is robust to rotation variations and achieves competitive results on three prevailing benchmarks, i.e., University-1652, CVUSA and CVACT. Besides, we also show that the proposed LPN can be easily embedded into other frameworks to further boost performance.
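A square-ring partition can be sketched as assigning each spatial cell of a feature map to a ring by its Chebyshev (max-coordinate) distance to the center. This is an assumption about one plausible indexing scheme, not LPN's actual implementation, and the function name is hypothetical.

```python
# Hypothetical sketch of a square-ring partition: each (y, x) cell of
# an h x w feature map gets a ring index based on its Chebyshev
# distance to the center, so ring 0 is the center and the largest
# index is the outer border.
def square_ring_index(h, w, num_rings):
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_d = max(cy, cx)
    rings = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = max(abs(y - cy), abs(x - cx))
            # scale the distance into [0, num_rings - 1]
            rings[y][x] = min(int(d / (max_d + 1e-9) * num_rings),
                              num_rings - 1)
    return rings

# A 6x6 map with 3 rings: concentric square rings around the center.
for row in square_ring_index(6, 6, 3):
    print(row)
```

Because the rings are concentric squares around the center, a rotation of the image largely permutes cells within the same ring, which is one intuition for the rotation robustness the abstract claims.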
With much large scientific equipment being constructed and put into use, astronomy has stepped into the big data era, and new methods and infrastructure for big data processing have become a requirement for many astronomers. Many new technologies, such as cloud computing, Map/Reduce, Hadoop, and Spark, have sprung up in recent years. Compared to high-performance computing (HPC), data is at the center of these new technologies, so a new computing infrastructure is necessary, one that can be shared by both HPC and big data processing. Based on the Astronomy Cloud project of the Chinese Virtual Observatory (China-VO), we have made great efforts to optimize the design of this hybrid computing platform, including the hardware architecture, cluster management, and job and resource scheduling.
In a recent Letter [G. Chiribella et al., Phys. Rev. Lett. 98, 120501 (2007)], four protocols were proposed to secretly transmit a reference frame. Here we point out that in these protocols an eavesdropper can change the transmitted reference frame without being detected, which means that the consistency of the shared reference frames should be reexamined. A way to check this consistency is discussed. It is shown that this problem is quite different from that in previous quantum cryptography protocols.
