
Pareto-Optimization Framework for Automated Network-on-Chip Design

Added by Wolfgang Fink
Publication date: 2018
Language: English





With the advent of multi-core processors, network-on-chip design has been key in addressing network performance metrics, such as bandwidth, power consumption, and communication delays, when dealing with on-chip communication among the increasing number of processor cores. As the number of cores increases, network design becomes more complex. Therefore, there is a critical need for computer-aided determination of network configurations that afford optimal performance under given resource and design constraints. We propose a Pareto-optimization framework that explores the space of possible network configurations to determine optimal network latencies, power consumption, and the corresponding link allocations. For a given number of routers, average network latency and power consumption, as example performance objectives, can be displayed in the form of Pareto-optimal fronts, thus not only offering a design tool but also enabling trade-off studies.
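The framework's Pareto-front extraction step can be illustrated with a small sketch. This is not the authors' implementation: the brute-force link enumeration and the `evaluate_latency`/`evaluate_power` functions are hypothetical placeholders for whatever network model or simulator supplies the objective values.

```python
from itertools import combinations

def candidate_configs(num_routers, num_links):
    """Enumerate link allocations: every way to place num_links bidirectional
    links among num_routers routers (brute force, for illustration only)."""
    router_pairs = list(combinations(range(num_routers), 2))
    return combinations(router_pairs, num_links)

def pareto_front(configs, evaluate_latency, evaluate_power):
    """Return the non-dominated (latency, power, config) triples, i.e. the
    Pareto-optimal front with both objectives minimized."""
    scored = [(evaluate_latency(c), evaluate_power(c), c) for c in configs]
    front = []
    for lat, pwr, cfg in scored:
        # A configuration is kept if no other configuration is at least as good
        # in both objectives and strictly better in one.
        dominated = any(l <= lat and p <= pwr and (l < lat or p < pwr)
                        for l, p, _ in scored)
        if not dominated:
            front.append((lat, pwr, cfg))
    return sorted(front, key=lambda t: t[:2])
```

Plotting the (latency, power) pairs returned for a fixed number of routers gives the kind of Pareto-optimal front the abstract describes, with each point tied to a concrete link allocation.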



Related Research

Xinru Wu, Chaoran Huang, Ke Xu (2017)
Optical interconnects are a potential solution for attaining the large-bandwidth on-chip communications needed in high-performance computers in a low-power, low-cost manner. Mode-division multiplexing (MDM) is an emerging technology that scales the capacity of a single wavelength carrier by the number of modes in a multimode waveguide, and is attractive as a cost-effective means for high-bandwidth-density on-chip communications. Advanced modulation formats with high spectral efficiency in MDM networks can further improve the data rates of the optical link. Here, we demonstrate an intra-chip MDM communication link employing advanced modulation formats with two waveguide modes. The compact single-wavelength-carrier link is expected to support 2x100 Gb/s mode-multiplexed capacity. The network comprises integrated microring modulators at the transmitter, mode multiplexers, a multimode waveguide interconnect, mode demultiplexers, and integrated germanium-on-silicon photodetectors. Each mode channel achieves a 100 Gb/s line rate with an 84 Gb/s net payload data rate at 7% overhead for hard-decision forward error correction (HD-FEC) in OFDM/16-QAM signal transmission.
Hui Li (2015)
Optical Network-on-Chip (ONoC) is an emerging technology considered one of the key solutions for future generations of on-chip interconnects. However, silicon photonic devices in ONoCs are highly sensitive to temperature variation, which leads to lower efficiency of Vertical-Cavity Surface-Emitting Lasers (VCSELs) and a resonant wavelength shift of Microring Resonators (MRs), resulting in a lower Signal-to-Noise Ratio (SNR). In this paper, we propose a methodology enabling thermal-aware design for optical interconnects relying on CMOS-compatible VCSELs. Thermal simulations allow designing ONoC interfaces with a low temperature gradient, and analytical models allow evaluating the SNR.
Network Function Virtualization (NFV) can cost-efficiently provide network services by running different virtual network functions (VNFs) at different virtual machines (VMs) in the correct order. This can result in strong couplings between the decisions of the VMs on the placement and operation of VNFs. This paper presents a new fully decentralized online approach for optimal placement and operation of VNFs. Building on a new stochastic dual gradient method, our approach decouples the real-time decisions of VMs, asymptotically minimizes the time-average cost of NFV, and stabilizes the backlogs of network services with a cost-backlog tradeoff of $[\epsilon, 1/\epsilon]$, for any $\epsilon > 0$. Our approach can be relaxed into multiple timescales to have VNFs (re)placed at a larger timescale and hence alleviate service interruptions. While proven to preserve the asymptotic optimality, the larger timescale can slow down the optimal placement of VNFs. A learn-and-adapt strategy is further designed to speed up the placement with an improved tradeoff of $[\epsilon, \log^2(\epsilon)/\sqrt{\epsilon}]$. Numerical results show that the proposed method is able to reduce the time-average cost of NFV by 30% and reduce the queue length (or delay) by 83%, as compared to existing benchmarks.
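For intuition, the queue-weighted decision rule typical of stochastic dual gradient (drift-plus-penalty style) methods can be sketched as below; the action set, the `cost`/`arrivals`/`service` functions, and the exact update are illustrative assumptions, not the paper's algorithm.

```python
def online_step(Q, candidate_actions, cost, arrivals, service, V):
    """One slot of a queue-weighted online rule (illustrative sketch).

    Q: current backlogs of the network services.
    V: tradeoff weight (roughly 1/epsilon); larger V lowers the time-average
       cost at the price of larger backlogs.
    """
    def weighted(action):
        # Instantaneous cost traded off against expected backlog growth.
        return V * cost(action) + sum(
            q * (arrivals(action, i) - service(action, i))
            for i, q in enumerate(Q))

    best = min(candidate_actions, key=weighted)
    # Standard backlog update: queues never go negative.
    Q_next = [max(q + arrivals(best, i) - service(best, i), 0.0)
              for i, q in enumerate(Q)]
    return best, Q_next
```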
Giovanni Iacca (2018)
Wireless Sensor Networks (WSNs) are an emerging technology in several application domains, ranging from urban surveillance to environmental and structural monitoring. Computational Intelligence (CI) techniques are particularly suitable for enhancing these systems. However, when embedding CI into wireless sensors, severe hardware limitations must be taken into account. In this paper we investigate the possibility of performing an online, distributed optimization process within a WSN. Such a system might be used, for example, to implement advanced network features like distributed modelling, self-optimizing protocols, and anomaly detection. The proposed approach, called DOWSN (Distributed Optimization for WSN), is an island-model infrastructure in which each node executes a simple, computationally cheap (in terms of both CPU and memory) optimization algorithm and shares promising solutions with its neighbors. We perform extensive tests of different DOWSN configurations on a benchmark of continuous optimization problems; we analyze the influence of the network parameters (number of nodes, inter-node communication period, and probability of accepting incoming solutions) on the optimization performance. Finally, we profile the energy and memory consumption of DOWSN to show the efficient usage of the limited hardware resources available on the sensor nodes.
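A single node's step in an island-model scheme of this kind might look like the sketch below; the (1+1)-style Gaussian mutation, the acceptance probability, and the function names are illustrative assumptions rather than DOWSN's actual algorithm or parameters.

```python
import random

def node_step(x_best, f_best, objective, incoming, accept_prob=0.5, step=0.1):
    """One optimization step on a single sensor node (minimization)."""
    # Probabilistically adopt better solutions received from neighbors.
    for x_in, f_in in incoming:
        if f_in < f_best and random.random() < accept_prob:
            x_best, f_best = x_in, f_in
    # Cheap local search: Gaussian perturbation, kept only if it improves.
    candidate = [xi + random.gauss(0.0, step) for xi in x_best]
    f_cand = objective(candidate)
    if f_cand < f_best:
        x_best, f_best = candidate, f_cand
    # The node would then broadcast (x_best, f_best) to its neighbors.
    return x_best, f_best
```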
Memristors have recently received significant attention as ubiquitous device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high density, and excellent scalability. The ability to control and modify biasing voltages at the two terminals of memristors makes them promising candidates for performing matrix-vector multiplications and solving systems of linear equations. In this article, we discuss how networks of memristors arranged in crossbar arrays can be used to efficiently solve optimization and machine learning problems. We introduce a new memristor-based optimization framework that combines the computational merit of memristor crossbars with the advantages of an operator-splitting method, the alternating direction method of multipliers (ADMM). Here, ADMM helps in splitting a complex optimization problem into subproblems that involve the solution of systems of linear equations. The capability of this framework is shown by applying it to linear programming, quadratic programming, and sparse optimization. In addition to ADMM, implementation of a customized power iteration (PI) method for eigenvalue/eigenvector computation using memristor crossbars is discussed. The memristor-based PI method can further be applied to principal component analysis (PCA). The use of memristor crossbars yields a significant speed-up in computation and thus, we believe, has the potential to advance optimization and machine learning research in artificial intelligence (AI).
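The power iteration component lends itself to a short sketch: the dominant eigenpair is obtained from repeated matrix-vector products, which is exactly the operation a memristor crossbar accelerates. The version below simulates that product in NumPy; the tolerance and iteration cap are illustrative choices, not values from the article.

```python
import numpy as np

def power_iteration(matvec, dim, num_iters=1000, tol=1e-9):
    """Estimate the dominant eigenvalue/eigenvector of the symmetric linear
    map `matvec` (here a NumPy stand-in for the crossbar's analog product)."""
    v = np.random.rand(dim)
    v /= np.linalg.norm(v)
    eig = 0.0
    for _ in range(num_iters):
        w = matvec(v)                    # crossbar would compute G @ v in one step
        v_new = w / np.linalg.norm(w)
        eig_new = v_new @ matvec(v_new)  # Rayleigh-quotient eigenvalue estimate
        if abs(eig_new - eig) < tol:
            return eig_new, v_new
        eig, v = eig_new, v_new
    return eig, v

# Example (PCA flavor): dominant principal direction of a data covariance matrix.
# G = np.cov(data, rowvar=False); power_iteration(lambda x: G @ x, G.shape[0])
```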
