
A New Theoretic Foundation for Cross-Layer Optimization

Added by Fangwen Fu
Publication date: 2007
Language: English





Cross-layer optimization solutions have been proposed in recent years to improve the performance of network users operating in a time-varying, error-prone wireless environment. However, these solutions often rely on ad hoc optimization approaches, which ignore the different environmental dynamics experienced at the various layers by a user and violate the layered architecture of the protocol stack by requiring layers to expose their internal protocol parameters to other layers. This paper presents a new theoretic foundation for cross-layer optimization that allows each layer to make autonomous decisions individually, while maximizing the utility of the wireless user by optimally determining what information needs to be exchanged among layers. Hence, this cross-layer framework does not change the current layered architecture. Specifically, because the wireless user interacts with the environment at various layers of the protocol stack, the cross-layer optimization problem is formulated as a layered Markov decision process (MDP) in which each layer adapts its own protocol parameters and exchanges information (messages) with other layers in order to cooperatively maximize the performance of the wireless user. The message-exchange mechanism for determining the optimal cross-layer transmission strategies is designed for both off-line optimization and on-line dynamic adaptation. We also show that many existing cross-layer optimization algorithms can be formulated as simplified, sub-optimal variants of the proposed layered MDP framework.
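The layered-MDP formulation can be made concrete with a small toy example. The sketch below runs value iteration for a wireless user whose PHY layer picks a modulation level and whose application layer picks a source rate, against a two-state channel. All states, actions, rewards and transition probabilities here are invented for illustration; they are not the paper's actual model or message-exchange protocol.

```python
import itertools

# Toy layered MDP (hypothetical numbers, not the paper's model).
# Two layers jointly act: PHY picks modulation, APP picks source rate.
STATES = [0, 1]            # channel state: 0 = good, 1 = bad
PHY_ACTIONS = [0, 1]       # modulation: 0 = robust/low, 1 = aggressive/high
APP_ACTIONS = [0, 1]       # source rate: 0 = low, 1 = high
GAMMA = 0.9                # discount factor

def reward(s, a_phy, a_app):
    # High source rate pays off only when the channel is good and the
    # PHY layer uses the aggressive modulation (illustrative values).
    return a_app * (2.0 if (s == 0 and a_phy == 1) else 0.5)

def transition(s, a_phy):
    # Aggressive modulation makes staying in the good state less likely.
    p_good = 0.8 if a_phy == 0 else 0.6
    return {0: p_good, 1: 1.0 - p_good}

def value_iteration(n_iters=200):
    V = {s: 0.0 for s in STATES}
    for _ in range(n_iters):
        V = {
            s: max(
                reward(s, ap, aa)
                + GAMMA * sum(p * V[s2] for s2, p in transition(s, ap).items())
                for ap, aa in itertools.product(PHY_ACTIONS, APP_ACTIONS)
            )
            for s in STATES
        }
    return V

V = value_iteration()
```

The good channel state ends up with the higher value, as expected; in the full framework the per-layer decisions would be coordinated by message exchange rather than by a centralized maximization over joint actions.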



related research

Recently, utilizing renewable energy for wireless systems has attracted extensive attention. However, due to the unstable energy supply and the limited battery capacity, renewable energy alone cannot guarantee perpetual operation of wireless sensor networks (WSNs). The coexistence of renewable energy and the electricity grid is a promising energy-supply arrangement for maintaining operation over a potentially infinite lifetime. In this paper, we propose a new system model suitable for WSNs, taking into account multiple energy consumers (sensing, transmission and reception), heterogeneous energy supplies (renewable energy, the electricity grid and mixed energy), and multidimensional stochastic dynamics (the energy-harvesting profile, electricity price and channel condition). A discrete-time stochastic cross-layer optimization problem is formulated to achieve the optimal trade-off between the time-average rate utility and the electricity cost, subject to data and energy queue stability constraints. The Lyapunov drift-plus-penalty-with-perturbation technique and the block coordinate descent method are applied to obtain a fully distributed, low-complexity cross-layer algorithm that requires only knowledge of the instantaneous system state. The explicit trade-off between the optimization objective and the queue backlog is proven theoretically. Finally, extensive simulations verify the theoretical claims.
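The drift-plus-penalty idea can be sketched in a few lines: at each slot the node greedily minimizes V·cost − Q·service, which trades electricity cost against queue backlog. The sketch below is a single-queue simplification, not the paper's full multi-queue model, and the arrival, harvesting and price distributions are invented for illustration.

```python
import random

# Single-queue drift-plus-penalty sketch (illustrative distributions).
# Each slot the node serves a data queue using free harvested energy,
# optionally buying one unit of priced grid energy for extra service.
random.seed(1)
V = 10.0                  # cost/backlog trade-off weight
Q = 0.0                   # data queue backlog
total_cost, T = 0.0, 5000

for t in range(T):
    arrival = random.uniform(0.0, 1.0)   # stochastic data arrival
    harvest = random.uniform(0.0, 0.6)   # stochastic harvested energy
    price = random.uniform(0.5, 2.0)     # stochastic grid price
    # Option A: harvested energy only -> service = harvest, cost 0.
    # Option B: also buy 1 unit of grid energy -> service + 1, cost = price.
    # Drift-plus-penalty: pick the option minimizing V*cost - Q*service.
    score_a = -Q * harvest
    score_b = V * price - Q * (harvest + 1.0)
    buy = score_b < score_a
    service = harvest + (1.0 if buy else 0.0)
    total_cost += price if buy else 0.0
    Q = max(Q - service, 0.0) + arrival  # queue update
```

The greedy rule only buys grid energy once the backlog exceeds roughly V times the current price, which is what keeps the queue stable while deferring cost.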
Yuzhe Ma, Subhendu Roy, Jin Miao (2018)
Despite the maturity of modern electronic design automation (EDA) tools, designs optimized at the architectural stage may become sub-optimal after going through the physical design flow. Adder design is a long-studied fundamental problem in the VLSI industry, yet designers cannot obtain optimal solutions by running EDA tools on the set of available prefix adder architectures. In this paper, we enhance a state-of-the-art prefix adder synthesis algorithm to obtain a much wider solution space in the architectural domain. On top of that, a machine-learning-based design-space exploration methodology is applied to predict the Pareto frontier of the adders in the physical domain, which is infeasible by exhaustively running EDA tools on innumerable architectural solutions. Given the high cost of obtaining true labels for learning, an active learning algorithm is used to select representative data during the learning process, using less labeled data while achieving a better-quality Pareto frontier. Experimental results demonstrate that our framework achieves a high-quality Pareto frontier over a wide design space, bridging the gap between architectural and physical design.
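The Pareto-frontier extraction step itself is simple to state: keep every design not dominated in all objectives. A minimal sketch for two objectives to be minimized (say, delay and area) follows; the candidate design points are hypothetical numbers, not EDA results.

```python
# Pareto frontier for two minimization objectives (hypothetical points).
def pareto_frontier(points):
    """Return the points not dominated by any other point
    (q dominates p if q is no worse in both objectives and differs)."""
    frontier = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            frontier.append(p)
    return sorted(frontier)

# (delay, area) candidates; (3.0, 5.0) is dominated by (2.0, 4.0).
designs = [(1.0, 9.0), (2.0, 4.0), (3.0, 5.0), (4.0, 2.0), (5.0, 1.0)]
front = pareto_frontier(designs)
```

In the paper's setting the objective values for unlabeled designs are predicted by the learned model rather than measured, and the active learner picks which designs to actually push through the physical flow.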
In this paper, we propose a general cross-layer optimization framework in which we explicitly consider both the heterogeneous and dynamically changing characteristics of delay-sensitive applications and the underlying time-varying network conditions. We consider both independently decodable data units (DUs, e.g. packets) and interdependent DUs whose dependencies are captured by a directed acyclic graph (DAG). We first formulate the cross-layer design as a non-linear constrained optimization problem by assuming complete knowledge of the application characteristics and the underlying network conditions. The constrained cross-layer optimization is decomposed into several cross-layer optimization subproblems, one per DU, and two master problems. The proposed decomposition method determines the necessary message exchanges between layers for achieving the optimal cross-layer solution. However, the attributes (e.g., distortion impact and delay deadline) of future DUs, as well as the network conditions, are often unknown in the considered real-time applications. The impact of current cross-layer actions on future DUs can be characterized by a state-value function in the Markov decision process (MDP) framework. Based on the dynamic programming solution to the MDP, we develop a low-complexity cross-layer optimization algorithm using online learning for each DU transmission. This online algorithm can be implemented in real time in order to cope with unknown source characteristics, network dynamics and resource constraints. Our numerical results demonstrate the efficiency of the proposed online algorithm.
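A minimal sketch of handling interdependent DUs: the DAG below (with hypothetical distortion impacts, loosely modeled on an I/P/B frame pattern) is traversed so that a DU is only sent after all of its parents, and among ready DUs the one with the largest distortion impact goes first. This greedy rule is just an illustration of DAG-constrained scheduling, not the paper's MDP-based policy.

```python
from collections import defaultdict

# DU dependency DAG and distortion impacts (hypothetical values).
deps = {"I": [], "P1": ["I"], "P2": ["P1"], "B": ["I", "P1"]}
impact = {"I": 10.0, "P1": 6.0, "B": 4.0, "P2": 3.0}

def schedule(deps, impact):
    """Transmit DUs in a DAG-respecting order, largest impact first."""
    indeg = {u: len(ps) for u, ps in deps.items()}
    children = defaultdict(list)
    for u, ps in deps.items():
        for p in ps:
            children[p].append(u)
    ready = [u for u, d in indeg.items() if d == 0]
    order = []
    while ready:
        ready.sort(key=lambda u: -impact[u])  # greedy: biggest impact first
        u = ready.pop(0)
        order.append(u)
        for c in children[u]:                 # release DUs whose parents
            indeg[c] -= 1                     # are now all transmitted
            if indeg[c] == 0:
                ready.append(c)
    return order

order = schedule(deps, impact)
```

Here "B" is sent before "P2" despite being added to the ready set later, because its distortion impact is higher once its parents are transmitted.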
Nowadays, Dynamic Adaptive Streaming over HTTP (DASH) is the most prevalent solution for multimedia streaming on the Internet and is responsible for the majority of global traffic. DASH uses adaptive bit rate (ABR) algorithms, which select the video quality based on performance metrics such as throughput and playout buffer level. Pensieve is a system that trains ABR algorithms using reinforcement learning within a simulated network environment and outperforms existing approaches in terms of achieved performance. In this paper, we demonstrate that the performance of the trained ABR algorithms depends on the implementation of the simulated environment used to train the neural network. We also show that the congestion control algorithm in use impacts the algorithms' performance due to cross-layer effects.
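For contrast with learned policies such as Pensieve, a classical throughput-and-buffer-based ABR rule fits in a few lines: pick the highest bitrate under a safety-discounted throughput estimate, falling back to the lowest rung when the buffer nears empty. The bitrate ladder and thresholds below are hypothetical, not values from any real player.

```python
# Simple rule-based ABR sketch (hypothetical ladder and thresholds).
BITRATES = [300, 750, 1200, 2850]  # kbps, lowest to highest

def select_bitrate(throughput_kbps, buffer_s, safety=0.9, low_buffer=5.0):
    """Pick the highest sustainable bitrate given throughput and buffer."""
    if buffer_s < low_buffer:          # near-stall: be maximally conservative
        return BITRATES[0]
    budget = throughput_kbps * safety  # leave headroom for throughput variance
    feasible = [b for b in BITRATES if b <= budget]
    return feasible[-1] if feasible else BITRATES[0]
```

A learned policy replaces this hand-tuned rule with a neural network mapping observed throughput and buffer histories to the chunk bitrate, which is exactly the part the paper shows is sensitive to the training environment.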
We consider a data-aggregating wireless network where all nodes have data to send to a single destination node, the sink. We consider a linear placement of nodes with the sink at one end. The nodes communicate directly with the sink (single-hop transmission), and we assume that the nodes are scheduled one at a time by a central scheduler (possibly the sink). The wireless nodes are power-limited, and our network objective (notion of fairness) is to maximize the minimum throughput of the nodes subject to the node power constraints. In this work, we consider network designs that permit adapting node transmission time, node transmission power and node placement, and study cross-layer strategies that seek to maximize the network throughput. Using simulations, we characterize the performance of the different strategies and comment on their applicability for various network scenarios.
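For the single-hop TDMA case, the max-min time allocation has a simple closed form: giving each node a time share proportional to the inverse of its rate equalizes all throughputs, and any other allocation lowers some node's throughput below that common value. The sketch below assumes a power-law path-loss model with unit noise, an illustrative assumption rather than the paper's exact channel model.

```python
import math

# Max-min fair TDMA time shares for single-hop uplink to the sink.
# Assumed (hypothetical) rate model: node at distance d transmits at
# r = log2(1 + P / d**alpha), i.e. power-law path loss over unit noise.
def max_min_throughput(distances, power=1.0, alpha=2.0):
    rates = [math.log2(1.0 + power / d ** alpha) for d in distances]
    inv_sum = sum(1.0 / r for r in rates)
    shares = [(1.0 / r) / inv_sum for r in rates]  # time share per node
    common = 1.0 / inv_sum  # throughput achieved by every node
    return shares, common

# Linear placement: nodes at distances 1, 2, 3 from the sink.
shares, thr = max_min_throughput([1.0, 2.0, 3.0])
```

Farther nodes have lower rates and therefore receive larger time shares; adapting power or placement, as studied in the paper, changes the rates and hence this allocation.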
