
Pruned Continuous Haar Transform of 2D Polygonal Patterns with Application to VLSI Layouts

Posted by: Robin Scheibler
Publication date: 2011
Research field: Informatics Engineering
Paper language: English





We introduce an algorithm for the efficient computation of the continuous Haar transform of 2D patterns that can be described by polygons. These patterns are ubiquitous in VLSI processes, where they are used to describe design and mask layouts. There, speed is of paramount importance due to the magnitude of the problems to be solved, and hence very fast algorithms are needed. We show that, using techniques borrowed from computational geometry, we are not only able to compute the continuous Haar transform directly, but also to do so quickly. This is achieved by massively pruning the transform tree, dramatically decreasing the computational load when the number of vertices is small, as is the case for VLSI layouts. We call this new algorithm the pruned continuous Haar transform. We implement this algorithm and show that, for patterns found in VLSI layouts, it is in the worst case as fast as its discrete counterpart and up to 12 times faster.
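The pruning idea can be illustrated on a discrete stand-in: a recursive 2D Haar decomposition of a binary raster that skips any constant block, since all detail coefficients of a constant region vanish. This is a toy sketch of the pruning principle only, not the paper's polygon-based algorithm (which avoids rasterization entirely); the function, key layout, and normalization conventions below are ours.

```python
import numpy as np

def pruned_haar_2d(img, coeffs, x=0, y=0):
    """Collect 2D Haar detail coefficients of a square binary image of
    dyadic size, pruning every constant block: all of its detail
    coefficients are zero, so its whole subtree is skipped.
    Normalisation and sign conventions here are illustrative."""
    n = img.shape[0]
    if n == 1 or img.min() == img.max():
        return                              # constant block: prune subtree
    h = n // 2
    tl, tr = img[:h, :h], img[:h, h:]
    bl, br = img[h:, :h], img[h:, h:]
    a, b, c, d = tl.mean(), tr.mean(), bl.mean(), br.mean()
    coeffs[(x, y, n)] = ((a - b + c - d) / 2,   # horizontal detail
                         (a + b - c - d) / 2,   # vertical detail
                         (a - b - c + d) / 2)   # diagonal detail
    pruned_haar_2d(tl, coeffs, x, y)
    pruned_haar_2d(tr, coeffs, x + h, y)
    pruned_haar_2d(bl, coeffs, x, y + h)
    pruned_haar_2d(br, coeffs, x + h, y + h)
```

For a polygonal pattern, only blocks crossed by an edge survive the pruning test, so the number of visited tree nodes tracks the number of vertices rather than the image area, which is the source of the speed-up the abstract reports.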


Read also

We develop the pruned continuous Haar transform and the fast continuous Fourier series, two fast and efficient algorithms for rectilinear polygons. Rectilinear polygons are used in VLSI processes to describe design and mask layouts of integrated circuits. The Fourier representation is at the heart of many of these processes and the Haar transform is expected to play a major role in techniques envisioned to speed up VLSI design. To ensure correct printing of the constantly shrinking transistors and simultaneously handle their increasingly large number, ever more computationally intensive techniques are needed. Therefore, efficient algorithms for the Haar and Fourier transforms are vital. We derive the complexity of both algorithms and compare it to that of discrete transforms traditionally used in VLSI. We find a significant reduction in complexity when the number of vertices of the polygons is small, as is the case in VLSI layouts. This analysis is completed by an implementation and a benchmark of the continuous algorithms and their discrete counterpart. We show that on tested VLSI layouts the pruned continuous Haar transform is 5 to 25 times faster, while the fast continuous Fourier series is 1.5 to 3 times faster.
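For intuition on why continuous transforms of rectilinear polygons are cheap, note that the continuous Fourier series coefficient of an axis-aligned rectangle on the unit square has a closed form; a rectilinear polygon decomposed into disjoint rectangles then costs one term per rectangle, independent of any raster resolution. This is a hedged sketch of the principle, not the paper's algorithm; the function names and rectangle-list representation are illustrative.

```python
import numpy as np

def rect_coeff(k, l, x0, x1, y0, y1):
    """Continuous Fourier series coefficient c_{k,l} of the indicator of
    the axis-aligned rectangle [x0,x1] x [y0,y1] on the unit square,
    evaluated in closed form (no sampling, no FFT grid)."""
    def line(m, a, b):
        if m == 0:
            return b - a                     # DC term: interval length
        return (np.exp(-2j * np.pi * m * b)
                - np.exp(-2j * np.pi * m * a)) / (-2j * np.pi * m)
    return line(k, x0, x1) * line(l, y0, y1)

def polygon_coeff(k, l, rects):
    """Rectilinear polygon given as disjoint rectangles: the cost scales
    with the number of rectangles, hence with the number of vertices."""
    return sum(rect_coeff(k, l, *r) for r in rects)
```

The DC coefficient `c_{0,0}` is simply the polygon's area, which gives a quick sanity check on any decomposition.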
M. Abily, 2016
Technologies such as aerial photogrammetry allow the production of 3D topographic data covering complex environments such as urban areas. It is therefore possible to create High Resolution (HR) Digital Elevation Models (DEMs) incorporating thin above-ground elements that influence overland flow paths. Even though this category of big data has a high level of accuracy, there are still errors in measurements and hypotheses underlying DEM elaboration. Moreover, operators seek to optimize the spatial discretization resolution in order to improve flood model computation time. Errors in measurement, errors in DEM generation, and operator choices made when including these data in a 2D hydraulic model may significantly influence the variability of flood modelling results. The purpose of this study is to investigate uncertainties related to (i) the intrinsic error of high-resolution topographic data, and (ii) the modeller's choices when including topographic data in hydraulic codes. The aim is to perform a Global Sensitivity Analysis (GSA): a Monte-Carlo uncertainty propagation to quantify the impact of uncertainties, followed by a computation of Sobol indices to rank the influence of the identified parameters on result variability. A process coupling an environment for parametric computation (Prométhée) with a code solving the 2D shallow water equations (FullSWOF 2D) has been developed (the P-FS tool). The study was performed over the lower part of the Var river valley using the estimated hydrograph of the 1994 flood event. HR topographic data for the study area, which covers 17.5 km², were made available by the Nice municipality. Three uncertain parameters were studied: the measurement error (var. E), the level of detail of the above-ground element representation in the DEM (buildings, sidewalks, etc.) (var. S), and the spatial discretization resolution (grid cell size for a regular mesh) (var. R). Parameter var. E follows a probability density function, whereas parameters var. S and var. R are discrete operator choices. Combining these parameters, a database of 2,000 simulations has been produced using the P-FS tool implemented on a high-performance computing structure. In our study case, the output of interest is the maximal
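The GSA workflow described above (Monte-Carlo propagation followed by Sobol index computation) can be sketched on a toy model. The pick-and-freeze estimator below is a standard first-order Sobol estimator; the two-input linear model is a hypothetical stand-in for the flood simulation, not P-FS itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in for the flood simulation: a linear response
    # to two uncertain inputs (think var. E and var. R).
    return x[:, 0] + 2.0 * x[:, 1]

def first_order_sobol(f, dim, n):
    """Pick-and-freeze (Saltelli-type) estimator of first-order Sobol
    indices for independent uniform inputs on [0, 1]."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = []
    for i in range(dim):
        AB = A.copy()
        AB[:, i] = B[:, i]              # freeze all inputs but the i-th
        S.append(np.mean(fB * (f(AB) - fA)) / var)
    return S

S = first_order_sobol(model, dim=2, n=100_000)
# For this linear model the exact indices are S = [0.2, 0.8].
```

Each index estimates the fraction of output variance attributable to one input alone; ranking them is exactly the "rank influence of identified parameters" step of the study.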
Mengjia Xi, Hui Chen, Yuan Yuan, 2019
Recently, ghost imaging has been attracting attention because its mechanism opens up applications inaccessible to conventional imaging methods. However, high-contrast and high-resolution imaging remain challenging, due to its low signal-to-noise ratio (SNR) and the high sampling rate required in detection. To circumvent these challenges, we propose a ghost imaging scheme that exploits Haar wavelets as illuminating patterns, with a bi-frequency light projecting system and frequency-selecting single-pixel detectors. This method provides a theoretically 100% image contrast and high detection SNR, which reduces the required dynamic range of the detectors, enabling high-resolution ghost imaging. Moreover, it can greatly reduce the sampling rate (far below the Nyquist limit) for a sparse object by adaptively abandoning unnecessary patterns during the measurement. These characteristics are experimentally verified at a resolution of 512 × 512 with a sampling rate lower than 5%. A high-resolution (1000 × 1000 × 1000) 3D reconstruction of an object is also achieved from multi-angle images.
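The Haar-pattern measurement and the abandonment of weak patterns can be sketched in a simplified, noise-free simulation: each single-pixel measurement is the inner product of the object with one separable 2D Haar pattern, and the weakest coefficients are discarded before reconstruction. The projection optics and bi-frequency detection are abstracted away, and the thresholding rule is an illustrative choice, not the paper's exact adaptive scheme.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix of size n = 2**k (recursive build)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                # scaling part
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # finest-level wavelets
    return np.vstack([top, bot]) / np.sqrt(2)

def ghost_image(obj, keep=0.05):
    """Record one bucket value per separable 2D Haar pattern, abandon the
    weakest patterns (mimicking sub-Nyquist adaptive sampling), and
    reconstruct the object from the kept coefficients."""
    n = obj.shape[0]
    H = haar_matrix(n)
    coeffs = H @ obj @ H.T                      # one measurement per pattern
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0.0       # abandoned patterns
    return H.T @ coeffs @ H                     # inverse 2D Haar transform
```

For a sparse, dyadically aligned object only a handful of Haar coefficients are nonzero, which is why a small `keep` fraction (a sampling rate far below the Nyquist limit) can still reconstruct it faithfully.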
Sparse tiling is a technique to fuse loops that access common data, thus increasing data locality. Unlike traditional loop fusion or blocking, the loops may have different iteration spaces and access shared datasets through indirect memory accesses, such as A[map[i]] -- hence the name sparse. One notable example of such loops arises in discontinuous-Galerkin finite element methods, because of the computation of numerical integrals over different domains (e.g., cells, facets). The major challenge with sparse tiling is implementation -- not only is it cumbersome to understand and synthesize, but it is also onerous to maintain and generalize, as it requires a complete rewrite of the bulk of the numerical computation. In this article, we propose an approach to extend the applicability of sparse tiling based on raising the level of abstraction. Through a sequence of compiler passes, the mathematical specification of a problem is progressively lowered, and eventually sparse-tiled C for-loops are generated. Besides automation, we advance the state-of-the-art by introducing: a revisited, more efficient sparse tiling algorithm; support for distributed-memory parallelism; a range of fine-grained optimizations for increased run-time performance; implementation in a publicly-available library, SLOPE; and an in-depth study of the performance impact in Seigen, a real-world elastic wave equation solver for seismological problems, which shows speed-ups up to 1.28x on a platform consisting of 896 Intel Broadwell cores.
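A minimal sketch of the idea: two loops coupled by indirect accesses through a cell-to-vertex map, fused tile by tile so that the second loop runs for a cell only once its cross-loop dependencies are satisfied. The 1D mesh and the maps are hypothetical toys; SLOPE's actual algorithm handles general iteration spaces, expanded tiles, and distributed-memory parallelism.

```python
import numpy as np

# Toy 1D "mesh": cells access shared vertex data through an indirection
# map, like A[map[i]] in the text.  All names here are hypothetical.
n_vert, n_cell = 8, 6
cell2vert = np.array([[c, c + 1] for c in range(n_cell)])

def unfused(x):
    """Two separate loops: scatter to vertices, then gather per cell."""
    y = np.zeros(n_vert)
    for c in range(n_cell):
        for v in cell2vert[c]:
            y[v] += x[c]
    z = np.zeros(n_cell)
    for c in range(n_cell):
        z[c] = y[cell2vert[c]].sum()
    return z

def sparse_tiled(x, tile=2):
    """Fused, tile-by-tile schedule: the gather for a cell runs only once
    every scatter touching its vertices (cells c-1, c, c+1) has executed."""
    y = np.zeros(n_vert)
    z = np.zeros(n_cell)
    done = 0                                # frontier of completed gathers
    for t0 in range(0, n_cell, tile):
        t1 = min(t0 + tile, n_cell)
        for c in range(t0, t1):             # loop 1 over this tile
            for v in cell2vert[c]:
                y[v] += x[c]
        ready = n_cell if t1 == n_cell else t1 - 1
        for c in range(done, ready):        # loop 2, dependency-safe cells
            z[c] = y[cell2vert[c]].sum()
        done = ready
    return z
```

Both schedules produce identical results, but the tiled one revisits each vertex while it is still hot in cache, which is the locality gain sparse tiling is after.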
Modelling of mechanical behaviour of pre-stressed fibre-reinforced composites is considered in a geometrically exact setting. A general approach which includes two different reference configurations is employed: one configuration corresponds to the load-free state of the structure and another one to the stress-free state of each material particle. The applicability of the approach is demonstrated in terms of a viscoelastic material model; both the matrix and the fibre are modelled using a multiplicative split of the deformation gradient tensor; a transformation rule for initial conditions is elaborated and specified. Apart from its simplicity, an important advantage of the approach is that well-established numerical algorithms can be used for pre-stressed inelastic structures. The interrelation between the advocated approach and the widely used opening angle approach is clarified. A full-scale FEM simulation confirms the main predictions of the opening angle approach. A locking effect is discovered; the effect is that in some cases the opening angle of the composite is essentially smaller than the opening angles of its individual layers. Thus, the standard cutting test typically used to analyse pre-stresses does not carry enough information and more refined experimental techniques are needed.
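The multiplicative split mentioned above can be written compactly; the notation below is a generic textbook form, and the symbols are ours rather than necessarily the paper's:

```latex
% Generic multiplicative split of the deformation gradient for each
% constituent (matrix or fibre): F_e is the elastic, stress-producing
% part and F_i the inelastic (viscous) part.
\[
  \mathbf{F} = \mathbf{F}_{\mathrm{e}} \, \mathbf{F}_{\mathrm{i}} .
\]
% With two reference configurations, the gradient measured from the
% load-free configuration, here \hat{F}, is composed with a pre-stress
% map K from the stress-free to the load-free configuration:
\[
  \mathbf{F} = \hat{\mathbf{F}} \, \mathbf{K} .
\]
```

Because pre-stress enters only through the constant map between the two reference configurations, standard inelasticity algorithms formulated for a single reference configuration carry over unchanged, which is the advantage the abstract highlights.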