
Asynchronous Adaptation and Learning over Networks - Part II: Performance Analysis

Added by Xiaochuan Zhao
Publication date: 2013
Language: English





In Part I \cite{Zhao13TSPasync1}, we introduced a fairly general model for asynchronous events over adaptive networks including random topologies, random link failures, random data arrival times, and agents turning on and off randomly. We performed a stability analysis and established the notable fact that the network is still able to converge in the mean-square-error sense to the desired solution. Once stable behavior is guaranteed, it becomes important to evaluate how fast the iterates converge and how close they get to the optimal solution. This is a demanding task due to the various asynchronous events and due to the fact that agents influence each other. In this Part II, we carry out a detailed analysis of the mean-square-error performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks. We derive analytical expressions for the mean-square convergence rate and the steady-state mean-square-deviation. The expressions reveal how the various parameters of the asynchronous behavior influence network performance. In the process, we establish the interesting conclusion that even under the influence of asynchronous events, all agents in the adaptive network can still reach an $O(\nu^{1 + \gamma_o})$ near-agreement with some $\gamma_o > 0$ while approaching the desired solution within $O(\nu)$ accuracy, where $\nu$ is proportional to the small step-size parameter for adaptation.
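The strategies analyzed here interleave a local stochastic-gradient (LMS-type) adaptation step at each agent with a combination step over whichever neighbors happen to be reachable. The following is a minimal simulation sketch of one such asynchronous adapt-then-combine diffusion recursion under Bernoulli agent activation and link failures; the ring topology, the probabilities p_active and p_link, the step-size mu, and the uniform combination weights are illustrative assumptions of this sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 10, 2                      # number of agents, dimension of the unknown parameter
w_star = rng.standard_normal(M)   # common parameter vector the agents estimate
mu = 0.01                         # nominal small step-size (the role played by nu above)
p_active = 0.8                    # probability that an agent updates at a given instant
p_link = 0.9                      # probability that a given link is alive at a given instant

# Nominal ring topology: each agent keeps itself and its two ring neighbors.
neighbors = [{k, (k - 1) % N, (k + 1) % N} for k in range(N)]

w = np.zeros((N, M))              # current estimates, one row per agent

for _ in range(5000):
    # Adaptation step: each active agent takes a stochastic-gradient (LMS) step.
    psi = w.copy()
    for k in range(N):
        if rng.random() < p_active:
            u = rng.standard_normal(M)                        # regressor
            d = u @ w_star + 0.1 * rng.standard_normal()      # noisy measurement
            psi[k] = w[k] + mu * u * (d - u @ w[k])

    # Combination step: average over whichever neighbors are reachable right now.
    w_next = np.empty_like(w)
    for k in range(N):
        alive = [l for l in neighbors[k] if l == k or rng.random() < p_link]
        w_next[k] = sum(psi[l] for l in alive) / len(alive)   # uniform weights
    w = w_next

msd = np.mean(np.sum((w - w_star) ** 2, axis=1))
print(f"empirical mean-square deviation after adaptation: {msd:.2e}")
```

Rerunning with a smaller mu should shrink the reported deviation roughly in proportion, consistent with the $O(\nu)$ accuracy behavior mentioned in the abstract.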

Related research


In Part II [3] we carried out a detailed mean-square-error analysis of the performance of asynchronous adaptation and learning over networks under a fairly general model for asynchronous events including random topologies, random link failures, random data arrival times, and agents turning on and off randomly. In this Part III, we compare the performance of synchronous and asynchronous networks. We also compare the performance of decentralized adaptation against centralized stochastic-gradient (batch) solutions. Two interesting conclusions stand out. First, the results establish that the performance of adaptive networks is largely immune to the effect of asynchronous events: the mean and mean-square convergence rates and the asymptotic bias values are not degraded relative to synchronous or centralized implementations. Only the steady-state mean-square-deviation suffers a degradation on the order of $\nu$, which represents the small step-size parameters used for adaptation. Second, the results show that the adaptive distributed network matches the performance of the centralized solution. These conclusions highlight another critical benefit of cooperation by networked agents: cooperation not only enhances performance in comparison to stand-alone single-agent processing, but it also endows the network with remarkable resilience to various forms of random failure events and enables it to deliver performance as powerful as that of batch solutions.
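One of the baselines referred to here is a centralized stochastic-gradient (batch-style) solution in which a fusion center processes the data of all agents at every iteration. A minimal sketch of such a reference implementation, using the same toy linear-regression data model as the sketch above, is given below; it is illustrative only and is not the specific centralized benchmark used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, mu = 10, 2, 0.01            # agents, parameter dimension, small step-size
w_star = rng.standard_normal(M)   # parameter vector to be estimated
w_c = np.zeros(M)                 # fusion-center estimate

for _ in range(5000):
    grad = np.zeros(M)
    for k in range(N):            # the center sees every agent's data at every instant
        u = rng.standard_normal(M)
        d = u @ w_star + 0.1 * rng.standard_normal()
        grad += u * (d - u @ w_c)
    w_c += (mu / N) * grad        # averaged stochastic-gradient (LMS) update

print(f"centralized mean-square deviation: {np.sum((w_c - w_star) ** 2):.2e}")
```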
In this work and the supporting Parts II [2] and III [3], we provide a rather detailed analysis of the stability and performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks. We examine asynchronous networks that are subject to fairly general sources of uncertainty, such as changing topologies, random link failures, random data arrival times, and agents turning on and off randomly. Under this model, agents in the network may stop updating their solutions or may stop sending or receiving information in a random manner and without coordination with other agents. We establish in Part I conditions on the first- and second-order moments of the relevant parameter distributions to ensure mean-square stable behavior. We derive in Part II expressions that reveal how the various parameters of the asynchronous behavior influence network performance. We compare in Part III the performance of asynchronous networks to the performance of both centralized solutions and synchronous networks. One notable conclusion is that the mean-square-error performance of asynchronous networks shows a degradation only of the order of $O(\nu)$, where $\nu$ is a small step-size parameter, while the convergence rate remains largely unaltered. The results provide a solid justification for the remarkable resilience of cooperative networks in the face of random failures at multiple levels: agents, links, data arrivals, and topology.
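The stability conditions mentioned for Part I are expressed in terms of the first- and second-order moments of the random quantities involved (for example, the combination matrices and step-sizes). As a toy illustration of the first-order moment alone, the sketch below generates random combination matrices under Bernoulli link failures with uniform weights over the surviving links (an assumption of this example) and checks that their mean is left-stochastic with unit spectral radius.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p_link = 6, 0.7        # agents and link-availability probability (illustrative)

def random_combination_matrix():
    """One realization A_i: column k holds the weights agent k assigns to neighbors."""
    A = np.zeros((N, N))
    for k in range(N):
        alive = [l for l in range(N) if l == k or rng.random() < p_link]
        A[alive, k] = 1.0 / len(alive)     # uniform averaging over surviving links
    return A

A_bar = sum(random_combination_matrix() for _ in range(20000)) / 20000
print("mean matrix is left-stochastic:", np.allclose(A_bar.sum(axis=0), 1.0))
print("spectral radius of the mean matrix:", np.abs(np.linalg.eigvals(A_bar)).max())
```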
Strong resilience properties of dynamical flow networks are analyzed for distributed routing policies. The latter are characterized by the property that the way the inflow at a non-destination node gets split among its outgoing links is allowed to depend only on local information about the current particle densities on the outgoing links. The strong resilience of the network is defined as the infimum sum of link-wise flow capacity reductions under which the network cannot maintain the asymptotic total inflow to the destination node to be equal to the inflow at the origin. A class of distributed routing policies that are locally responsive to local information is shown to yield the maximum possible strong resilience under such local information constraints for an acyclic dynamical flow network with a single origin-destination pair. The maximal strong resilience achievable is shown to be equal to the minimum node residual capacity of the network. The latter depends on the limit flow of the unperturbed network and is defined as the minimum, among all the non-destination nodes, of the sum, over all the links outgoing from the node, of the differences between the maximum flow capacity and the limit flow of the unperturbed network. We propose a simple convex optimization problem to solve for equilibrium limit flows of the unperturbed network that minimize average delay subject to strong resilience guarantees, and discuss the use of tolls to induce such an equilibrium limit flow in transportation networks. Finally, we present illustrative simulations to discuss the connection between cascaded failures and the resilience properties of the network.
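The quantity driving this result, the minimum node residual capacity, is straightforward to evaluate once the link capacities and the limit flows of the unperturbed network are known. The snippet below computes it for a small hypothetical network; the node names, capacities, and limit flows are made up for illustration.

```python
# Hypothetical acyclic flow network: for every non-destination node, list the
# (max capacity, unperturbed limit flow) of each of its outgoing links.
outgoing = {
    "origin": [(4.0, 2.0), (3.0, 1.0)],
    "a":      [(3.0, 2.0)],
    "b":      [(2.0, 1.0)],
}

# Node residual capacity: sum over outgoing links of (capacity - limit flow).
residual = {v: sum(c - f for c, f in links) for v, links in outgoing.items()}
min_node_residual = min(residual.values())

print(residual)                                            # per-node residual capacities
print("maximal strong resilience =", min_node_residual)    # per the stated result
```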
Vidit Saxena, Hugo Tullberg, 2020
Wireless links adapt the data transmission parameters to the dynamic channel state -- this is called link adaptation. Classical link adaptation relies on tuning parameters that are challenging to configure for optimal link performance. Recently, reinforcement learning has been proposed to automate link adaptation, where the transmission parameters are modeled as discrete arms of a multi-armed bandit. In this context, we propose a latent learning model for link adaptation that exploits the correlation between data transmission parameters. Further, motivated by the recent success of Thompson sampling for multi-armed bandit problems, we propose a latent Thompson sampling (LTS) algorithm that quickly learns the optimal parameters for a given channel state. We extend LTS to fading wireless channels through a tuning-free mechanism that automatically tracks the channel dynamics. In numerical evaluations with fading wireless channels, LTS improves the link throughput by up to 100% compared to state-of-the-art link adaptation algorithms.
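LTS builds on Thompson sampling over a discrete set of transmission parameters. For reference, here is a minimal sketch of plain (non-latent) Thompson sampling for rate selection with Beta-Bernoulli ACK/NACK feedback; the rate table, true success probabilities, and horizon are illustrative, and the latent correlation structure and fading-tracking mechanism that distinguish LTS are not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

rates = np.array([1.0, 2.0, 3.0, 4.5, 6.0])      # spectral efficiency of each rate "arm"
p_true = np.array([0.95, 0.85, 0.6, 0.35, 0.1])  # hidden ACK probability per arm

alpha = np.ones(len(rates))   # Beta posterior parameters (successes + 1)
beta = np.ones(len(rates))    # Beta posterior parameters (failures + 1)

total = 0.0
for t in range(2000):
    theta = rng.beta(alpha, beta)           # sample a success probability per arm
    arm = int(np.argmax(theta * rates))     # pick the arm with the highest sampled throughput
    ack = rng.random() < p_true[arm]        # simulate ACK/NACK feedback
    alpha[arm] += ack
    beta[arm] += 1 - ack
    total += rates[arm] * ack

print(f"average throughput ~ {total / 2000:.2f} bits/s/Hz")
```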
In the context of event-triggered control, the timing of the triggering events carries information about the state of the system that can be used for stabilization. At each triggering event, not only can information be transmitted by the message content (data payload) but also by its timing. We demonstrate this in the context of stabilization of a laboratory-scale inverted pendulum around its equilibrium point over a digital communication channel with bounded unknown delay. Our event-triggering control strategy encodes timing information by transmitting in a state-dependent fashion and can achieve stabilization using a data payload transmission rate lower than what the data-rate theorem prescribes for classical periodic control policies that do not exploit timing information. Through experimental results, we show that as the delay in the communication channel increases, a higher data payload transmission rate is required to fulfill the proposed event-triggering policy requirements. This confirms the theoretical intuition that a larger delay brings larger uncertainty about the value of the state at the controller, as less timing information is carried in the communication. In addition, our results provide a novel encoding-decoding scheme to achieve input-to-state practical stability (ISpS) for nonlinear continuous-time systems under appropriate assumptions.
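A common way to realize the state-dependent transmissions described here is a relative-threshold triggering rule: transmit only when the error between the plant state and the value last sent to the controller exceeds a fraction of the state's magnitude. The sketch below applies such a rule to a hypothetical scalar unstable plant; the plant model, gain, and threshold are illustrative, and the sketch does not implement the paper's timing-encoding scheme, the inverted pendulum, or the channel delay.

```python
# Hypothetical scalar unstable plant dx/dt = a*x + u, stabilized by u = -K*x_hat,
# where x_hat is the last state value transmitted to the controller.
a, K = 1.0, 3.0
dt, T = 1e-3, 5.0
sigma = 0.2          # relative event-triggering threshold (illustrative)

x, x_hat = 1.0, 1.0
events = 0
for _ in range(int(T / dt)):
    # Trigger a transmission when the estimation error grows too large relative to the state.
    if abs(x - x_hat) > sigma * abs(x):
        x_hat = x
        events += 1
    u = -K * x_hat
    x += dt * (a * x + u)          # forward-Euler integration of the plant

print(f"final |x| = {abs(x):.3e}, transmissions = {events}")
```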


