
Asynchronous Adaptation and Learning over Networks --- Part I: Modeling and Stability Analysis

Posted by: Xiaochuan Zhao
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





In this work and the supporting Parts II [2] and III [3], we provide a rather detailed analysis of the stability and performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks. We examine asynchronous networks that are subject to fairly general sources of uncertainties, such as changing topologies, random link failures, random data arrival times, and agents turning on and off randomly. Under this model, agents in the network may stop updating their solutions or may stop sending or receiving information in a random manner and without coordination with other agents. We establish in Part I conditions on the first- and second-order moments of the relevant parameter distributions to ensure mean-square stable behavior. We derive in Part II expressions that reveal how the various parameters of the asynchronous behavior influence network performance. We compare in Part III the performance of asynchronous networks to the performance of both centralized solutions and synchronous networks. One notable conclusion is that the mean-square-error performance of asynchronous networks shows a degradation only of the order of $O(\nu)$, where $\nu$ is a small step-size parameter, while the convergence rate remains largely unaltered. The results provide a solid justification for the remarkable resilience of cooperative networks in the face of random failures at multiple levels: agents, links, data arrivals, and topology.
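To make the setting above concrete, the following is a minimal simulation sketch (not the authors' code) of an adapt-then-combine diffusion LMS recursion in which data arrivals, updates, and link availability are all random at each iteration, in the spirit of the asynchronous model described in the abstract. The network size, the probabilities p_update and p_link, the ring topology, and the step-size value are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper).
num_agents, dim = 10, 5
mu = 0.01                       # nominal step-size
p_update = 0.7                  # prob. an agent receives data and updates at a given iteration
p_link = 0.8                    # prob. a given link is active at a given iteration
w_o = rng.standard_normal(dim)  # common model all agents try to estimate

# Fixed underlying topology: a ring with self-loops.
adjacency = np.eye(num_agents, dtype=bool)
for k in range(num_agents):
    adjacency[k, (k - 1) % num_agents] = True
    adjacency[k, (k + 1) % num_agents] = True

w = np.zeros((num_agents, dim))  # current estimates, one row per agent

for _ in range(5000):
    # Adaptation step: an agent updates only if data arrive this iteration.
    psi = w.copy()
    for k in range(num_agents):
        if rng.random() < p_update:
            u = rng.standard_normal(dim)                 # regressor
            d = u @ w_o + 0.1 * rng.standard_normal()    # noisy measurement
            psi[k] = w[k] + mu * u * (d - u @ w[k])      # LMS gradient step

    # Combination step: links fail at random; the surviving neighbor
    # weights are renormalized so that each row sums to one.
    for k in range(num_agents):
        active = adjacency[k] & (rng.random(num_agents) < p_link)
        active[k] = True                                 # agents always keep their own estimate
        a = active / active.sum()
        w[k] = a @ psi

msd = np.mean(np.sum((w - w_o) ** 2, axis=1))
print(f"network mean-square deviation: {msd:.3e}")

Running the sketch with smaller values of mu illustrates the kind of trade-off analyzed in Parts II and III: the steady-state deviation shrinks while convergence slows down.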




Read also

In Part I \cite{Zhao13TSPasync1}, we introduced a fairly general model for asynchronous events over adaptive networks including random topologies, random link failures, random data arrival times, and agents turning on and off randomly. We performed a stability analysis and established the notable fact that the network is still able to converge in the mean-square-error sense to the desired solution. Once stable behavior is guaranteed, it becomes important to evaluate how fast the iterates converge and how close they get to the optimal solution. This is a demanding task due to the various asynchronous events and due to the fact that agents influence each other. In this Part II, we carry out a detailed analysis of the mean-square-error performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks. We derive analytical expressions for the mean-square convergence rate and the steady-state mean-square-deviation. The expressions reveal how the various parameters of the asynchronous behavior influence network performance. In the process, we establish the interesting conclusion that even under the influence of asynchronous events, all agents in the adaptive network can still reach an $O(\nu^{1+\gamma_o})$ near-agreement with some $\gamma_o > 0$ while approaching the desired solution within $O(\nu)$ accuracy, where $\nu$ is proportional to the small step-size parameter for adaptation.
In Part II [3] we carried out a detailed mean-square-error analysis of the performance of asynchronous adaptation and learning over networks under a fairly general model for asynchronous events, including random topologies, random link failures, random data arrival times, and agents turning on and off randomly. In this Part III, we compare the performance of synchronous and asynchronous networks. We also compare the performance of decentralized adaptation against centralized stochastic-gradient (batch) solutions. Two interesting conclusions stand out. First, the results establish that the performance of adaptive networks is largely immune to the effect of asynchronous events: the mean and mean-square convergence rates and the asymptotic bias values are not degraded relative to synchronous or centralized implementations. Only the steady-state mean-square-deviation suffers a degradation of the order of $\nu$, where $\nu$ denotes the small step-size parameter used for adaptation. Second, the results show that the adaptive distributed network matches the performance of the centralized solution. These conclusions highlight another critical benefit of cooperation among networked agents: cooperation not only enhances performance in comparison to stand-alone single-agent processing, but also endows the network with remarkable resilience to various forms of random failure events and enables it to deliver performance as powerful as that of batch solutions.
Songyang Zhang, Qinwen Deng (2021)
Signal processing over single-layer graphs has become a mainstream tool owing to its power in revealing obscure underlying structures within data signals. More generally, many real-life datasets and systems are characterized by more complex interactions among distinct entities. Such complex interactions may represent multiple levels of interactions that are difficult to model with a single-layer graph and can instead be captured by multiple layers of graph connections. Such a multilayer/multi-level data structure can be more naturally modeled and captured by a high-dimensional multilayer network (MLN). This work generalizes traditional graph signal processing (GSP) to multilayer networks for the analysis of such multilayer signal features and their interactions. We propose a tensor-based framework for this multilayer network signal processing (M-GSP) in this two-part series. Specifically, Part I introduces the fundamentals of M-GSP and studies spectral properties of the MLN Fourier space. We further describe its connections to traditional digital signal processing and GSP. Part II focuses on several major tools within the M-GSP framework for signal processing and data analysis. We provide results to demonstrate the efficacy and benefits of applying multilayer networks and M-GSP in practical scenarios.
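As a rough illustration of the multilayer view, the sketch below stores a toy two-layer network as a fourth-order adjacency tensor, flattens it into a supra-adjacency matrix, and uses the eigenvectors of that matrix as a Fourier basis for signals indexed by (layer, node) pairs. This is only a simplified stand-in: the actual M-GSP framework of the paper is built on tensor decompositions rather than this naive flattening, and the toy network and names are assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Toy multilayer network: L layers, N nodes per layer (illustrative sizes).
L, N = 2, 4

# Fourth-order adjacency tensor A[l, i, m, j]: weight of the connection
# between node i in layer l and node j in layer m.
A = np.zeros((L, N, L, N))
intra = (rng.random((N, N)) < 0.5).astype(float)
intra = np.triu(intra, 1)
intra = intra + intra.T                      # symmetric intra-layer pattern
for l in range(L):
    A[l, :, l, :] = intra                    # same intra-layer connections in each layer
for i in range(N):
    A[0, i, 1, i] = A[1, i, 0, i] = 1.0      # inter-layer links between node replicas

# Flatten to an (L*N) x (L*N) supra-adjacency matrix and eigendecompose it;
# its eigenvectors serve as a multilayer Fourier basis in this sketch.
S = A.reshape(L * N, L * N)
eigvals, V = np.linalg.eigh(S)

# A multilayer signal assigns one value to every (layer, node) pair.
x = rng.standard_normal(L * N)
x_hat = V.T @ x                              # forward transform (spectral coefficients)
x_rec = V @ x_hat                            # inverse transform
print(np.allclose(x, x_rec))                 # True: the basis is orthonormal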
Robustness of distributed routing policies is studied for dynamical flow networks, with respect to adversarial disturbances that reduce the link flow capacities. A dynamical flow network is modeled as a system of ordinary differential equations derived from mass conservation laws on a directed acyclic graph with a single origin-destination pair and a constant inflow at the origin. Routing policies regulate the way the inflow at a non-destination node gets split among its outgoing links as a function of the current particle density, while the outflow of a link is modeled to depend on the current particle density on that link through a flow function. The dynamical flow network is called partially transferring if the total inflow at the destination node is asymptotically bounded away from zero, and its weak resilience is measured as the minimum sum of the link-wise magnitudes of all disturbances that make it not partially transferring. The weak resilience of a dynamical flow network with arbitrary routing policy is shown to be upper-bounded by the network's min-cut capacity, independently of the initial flow conditions. Moreover, a class of distributed routing policies that rely exclusively on local information about the particle densities, and are locally responsive to it, is shown to yield such maximal weak resilience. These results imply that locality constraints on the information available to the routing policies do not cause loss of weak resilience. Some fundamental properties of dynamical flow networks driven by locally responsive distributed policies are analyzed in detail, including global convergence to a unique limit flow.
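A minimal numerical sketch of such a dynamical flow network is given below, assuming a toy graph with two parallel links from the origin to the destination, a saturating flow function, and a locally responsive routing policy that shifts inflow toward the less congested link. The specific flow function and policy are illustrative choices, not the constructions analyzed in the paper.

import numpy as np

# Toy DAG: two parallel links from the origin to the destination.
capacity = np.array([2.0, 1.0])   # link flow capacities
inflow = 1.5                      # constant inflow at the origin
dt, steps = 0.01, 20000

def outflow(rho):
    # Flow function: increasing in the link density and saturating at the capacity.
    return capacity * rho / (1.0 + rho)

def routing_split(rho):
    # Locally responsive policy: send a larger share of the inflow to the
    # link whose current density is lower (softmax of negative densities).
    weights = np.exp(-rho)
    return weights / weights.sum()

rho = np.zeros(2)                 # particle densities on the two links
for _ in range(steps):
    split = routing_split(rho)
    rho += dt * (inflow * split - outflow(rho))   # mass-conservation ODE

throughput = outflow(rho).sum()   # total flow arriving at the destination
print(f"equilibrium densities: {rho}, throughput: {throughput:.3f}")

Reducing the entries of capacity plays the role of the adversarial disturbance: once the remaining min-cut capacity drops below the inflow, the densities grow and the network stops being partially transferring.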
Vidit Saxena, Hugo Tullberg (2020)
Wireless links adapt the data transmission parameters to the dynamic channel state -- this is called link adaptation. Classical link adaptation relies on tuning parameters that are challenging to configure for optimal link performance. Recently, reinforcement learning has been proposed to automate link adaptation, where the transmission parameters are modeled as discrete arms of a multi-armed bandit. In this context, we propose a latent learning model for link adaptation that exploits the correlation between data transmission parameters. Further, motivated by the recent success of Thompson sampling for multi-armed bandit problems, we propose a latent Thompson sampling (LTS) algorithm that quickly learns the optimal parameters for a given channel state. We extend LTS to fading wireless channels through a tuning-free mechanism that automatically tracks the channel dynamics. In numerical evaluations with fading wireless channels, LTS improves the link throughput by up to 100% compared to state-of-the-art link adaptation algorithms.
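The bandit formulation can be sketched as follows: each candidate transmission rate is an arm, every transmission returns an ACK or NACK, and Thompson sampling keeps a Beta posterior per arm over the success probability, transmitting in each slot with the arm whose sampled expected throughput is largest. Note this is a plain per-arm Thompson sampler; the latent LTS algorithm of the paper additionally couples the arms through a shared channel-quality variable, which is not modeled here, and the rates and success probabilities below are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

# Candidate transmission rates (arms) and the true, unknown success
# probabilities for the current channel state (illustrative values).
rates = np.array([1.0, 2.0, 3.0, 4.5])
p_success = np.array([0.95, 0.85, 0.55, 0.15])

# Beta(1, 1) priors on the success probability of each arm.
alpha = np.ones_like(rates)
beta = np.ones_like(rates)

total_throughput = 0.0
num_slots = 2000
for t in range(num_slots):
    # Thompson sampling: draw a success probability per arm from its
    # posterior and transmit with the arm maximizing expected throughput.
    theta = rng.beta(alpha, beta)
    arm = int(np.argmax(theta * rates))

    # Observe ACK/NACK and update the posterior of the chosen arm.
    ack = rng.random() < p_success[arm]
    alpha[arm] += ack
    beta[arm] += 1 - ack
    total_throughput += rates[arm] * ack

print(f"average throughput: {total_throughput / num_slots:.2f}")
print("posterior means:", alpha / (alpha + beta))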
