
Separation of timescales in a two-layered network

Added by Maria Vlasiou
Publication date: 2014
Language: English





We investigate a computer network consisting of two layers, as occurs, for example, in application servers. The first layer captures the arrival of jobs at a network of multi-server nodes, which we model as a many-server Jackson network. At the second layer, the active servers at these nodes in turn act as customers served by a common CPU. Our main result shows a separation of time scales in heavy traffic: the main source of randomness occurs at the (aggregate) CPU layer, while the interactions between the different types of nodes at the other layer are shown to converge to a fixed point on a faster time scale; this also yields a state-space collapse property. Apart from these fundamental insights, we obtain an explicit approximation for the joint law of the number of jobs in the system, which is provably accurate for heavily loaded systems and performs well numerically for moderately loaded systems. The results can be applied to thread-pool dimensioning in application servers, and the technique appears applicable to other layered systems as well.
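As a rough illustration of the layered dynamics (not the paper's analysis), the sketch below assumes Poisson arrivals, exponential service requirements, and made-up parameters: jobs occupy servers at two multi-server nodes, and all busy servers share a single CPU in processor-sharing fashion, so every ongoing service runs at speed 1/Z when Z servers are busy in total.

```python
import random

# Hypothetical parameters (not taken from the paper).
LAMBDA = [0.8, 0.5]   # Poisson arrival rates at the two nodes
MU     = [1.0, 1.5]   # per-server service rates at full CPU speed
S      = [20, 20]     # number of servers at each node
T_END  = 10_000.0     # simulated time horizon

def simulate(seed=1):
    rng = random.Random(seed)
    q = [0, 0]                      # jobs present at each node
    t, area = 0.0, [0.0, 0.0]
    while t < T_END:
        busy = [min(q[i], S[i]) for i in range(2)]
        z = sum(busy)               # busy servers competing for the shared CPU
        # Layer 1: arrivals; layer 2: completions slowed down by the CPU share 1/z.
        rates = LAMBDA + [busy[i] * MU[i] / z if z else 0.0 for i in range(2)]
        dt = rng.expovariate(sum(rates))
        area = [a + n * dt for a, n in zip(area, q)]
        t += dt
        event = rng.choices(range(4), weights=rates)[0]
        if event < 2:
            q[event] += 1           # arrival at node `event`
        else:
            q[event - 2] -= 1       # service completion at node `event - 2`
    return [a / t for a in area]    # time-average number of jobs per node

print(simulate())
```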



Related research

Motivated by a web-server model, we present a queueing network consisting of two layers. The first layer incorporates the arrival of customers at a network of two single-server nodes. We assume that the inter-arrival and service times have general distributions. Customers are served in their order of arrival at each node, and after finishing service they may re-enter the nodes several times (as new customers) for additional services. At the second layer, the active servers act as jobs served by a single server working at speed one in a Processor-Sharing fashion. We further assume that the degree of resource sharing is limited by choice, leading to a Limited Processor-Sharing discipline. Our main result is a diffusion approximation for the process describing the number of customers in the system. Assuming a single bottleneck node and studying the system as it approaches heavy traffic, we prove a state-space collapse property. The key to deriving this property is to study the model at the second layer and to prove a diffusion limit theorem, which yields an explicit approximation for the number of customers in the system.
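To make the Limited Processor-Sharing discipline at the second layer concrete, the toy function below (with a hypothetical concurrency limit, not a parameter from the paper) shows how a single unit-speed server is divided: at most k jobs are served simultaneously, each at rate 1/min(n, k), while any remaining jobs wait at rate zero.

```python
def lps_service_rates(n_active, k_limit):
    """Speeds assigned to the jobs present under Limited Processor-Sharing.

    A single server of speed 1 is shared equally by at most `k_limit` jobs;
    jobs beyond that limit wait at speed 0 until a service slot frees up.
    """
    in_service = min(n_active, k_limit)
    rate = 1.0 / in_service if in_service else 0.0
    return [rate] * in_service + [0.0] * (n_active - in_service)

# Example: 5 active jobs, resource sharing limited to 3 concurrent jobs.
print(lps_service_rates(5, 3))   # [0.333..., 0.333..., 0.333..., 0.0, 0.0]
```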
This study is concerned with the dynamical behavior of epidemic spreading over a two-layered interconnected network. Three models at different levels are proposed to describe cooperative spreading processes over the interconnected network, wherein the disease in one network can spread to the other. Theoretical analysis is provided for each model, revealing that the global epidemic threshold of the interconnected network is no larger than the epidemic thresholds of the two isolated layered networks. In particular, for an interconnected homogeneous network, detailed theoretical analysis is presented that allows quick and accurate calculation of the global epidemic threshold. Moreover, in an interconnected heterogeneous network with inter-layer correlation between node degrees, the inter-layer correlation coefficient is found to have little impact on the epidemic threshold but a significant impact on the total prevalence. Simulations further verify the analytical results, showing that cooperative epidemic processes promote the spreading of diseases.
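The threshold comparison can be checked numerically under a standard mean-field (NIMFA-style) SIS approximation, in which a network's epidemic threshold is 1/λ_max of its adjacency matrix; this approximation and the random graphs below are illustrative assumptions, not the exact models of the study. Because each isolated layer is a principal submatrix of the two-layer supra-adjacency matrix, the coupled spectral radius is at least as large, so the global threshold is never larger than either isolated threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_adjacency(n, p, rng):
    """Symmetric 0/1 adjacency matrix of an Erdos-Renyi graph (illustrative)."""
    a = (rng.random((n, n)) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T

n = 50
A = random_adjacency(n, 0.10, rng)   # layer 1 (hypothetical)
B = random_adjacency(n, 0.06, rng)   # layer 2 (hypothetical)
C = np.eye(n)                        # one-to-one inter-layer links

# Supra-adjacency matrix of the interconnected two-layer network.
supra = np.block([[A, C], [C.T, B]])

def threshold(m):
    """SIS epidemic threshold under the NIMFA approximation: 1 / lambda_max."""
    return 1.0 / np.max(np.linalg.eigvalsh(m))

print("isolated layer 1:", threshold(A))
print("isolated layer 2:", threshold(B))
print("interconnected  :", threshold(supra))  # never larger than the two above
```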
We study a token-based central queue with multiple customer types. Customers of each type arrive according to a Poisson process and have an associated set of compatible tokens. Customers may only receive service once they have claimed a compatible token. If, upon arrival, more than one compatible token is available, an assignment rule determines which token is claimed. The service rate obtained by a customer is state-dependent, i.e., it depends on the set of claimed tokens and on the number of customers in the system. Our first main result shows that, provided the assignment rule and the service rates satisfy certain conditions, the steady-state distribution has a product form. We show that our model subsumes known families of models with product-form steady-state distributions, including the order-independent queue of Krzesinski (2011) and the model of Visschers et al. (2012). Our second main contribution is the derivation of expressions for relevant performance measures such as the sojourn time and the number of customers present in the system. We apply our framework to relevant models, including an M/M/K queue with heterogeneous service rates, the MSCCC queue, multi-server models with redundancy, and matching models. For some of these models, we present expressions for performance measures that have not been derived before.
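The token mechanism itself can be sketched as a small data structure. The toy below uses a hypothetical "first compatible free token" assignment rule and says nothing about the conditions under which the product form holds; it only illustrates how customers claim and release tokens.

```python
from collections import deque

class TokenQueue:
    """Toy sketch of a token-based central queue with compatibility sets."""

    def __init__(self, tokens, compatible):
        self.free = list(tokens)          # tokens not currently claimed
        self.compatible = compatible      # customer type -> set of compatible tokens
        self.waiting = deque()            # customers without a token, FCFS

    def arrive(self, ctype):
        # Claim the first free compatible token, if any (hypothetical assignment rule).
        for tok in self.free:
            if tok in self.compatible[ctype]:
                self.free.remove(tok)
                return tok                # customer starts service holding this token
        self.waiting.append(ctype)        # no compatible token free: wait
        return None

    def depart(self, token):
        # Hand the released token to the longest-waiting compatible customer, if any.
        for i, ctype in enumerate(self.waiting):
            if token in self.compatible[ctype]:
                del self.waiting[i]
                return ctype              # this waiting customer claims the token
        self.free.append(token)
        return None

# Example: type 'a' is compatible with both tokens, type 'b' only with 't2'.
q = TokenQueue(["t1", "t2"], {"a": {"t1", "t2"}, "b": {"t2"}})
print(q.arrive("a"))   # 't1'
print(q.arrive("b"))   # 't2'
print(q.arrive("b"))   # None: waits
print(q.depart("t2"))  # 'b': the waiting customer claims the released token
```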
We consider a distribution grid used to charge electric vehicles (EVs) subject to the requirement that voltage drops stay bounded. We model this as a class of resource-sharing networks, known as bandwidth-sharing networks in the communication-network literature. We focus on resource-sharing networks driven by a class of greedy control rules that can be implemented in a decentralized fashion. For a large number of such control rules, the performance of the system can be characterized by a fluid approximation. This leads to a set of dynamic equations that take the stochastic behavior of EVs into account. We show that the invariant point of these equations is unique and can be computed by solving a specific ACOPF problem, which admits an exact convex relaxation. We illustrate our findings with a case study of the SCE 47-bus network and several special cases that allow explicit computations.
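As a loose illustration of the kind of fluid dynamics involved (a made-up two-class example with a single capacity bound standing in for the voltage-drop and power-flow constraints, and a simple greedy rule that shares capacity equally up to a per-EV maximum charging rate; this is not the SCE 47-bus case study), the sketch below integrates the dynamic equations until they settle at their invariant point.

```python
import numpy as np

# Hypothetical parameters for a toy two-class EV fluid model.
LAM   = np.array([0.9, 0.7])   # arrival rates of the two EV classes
BETA  = np.array([2.0, 3.0])   # mean energy demand per EV
NU    = np.array([0.2, 0.1])   # rate at which EVs leave regardless of charge
CAP   = 1.5                    # total feeder capacity (stand-in constraint)
R_MAX = 0.5                    # maximum charging rate per EV

def drift(z):
    total = z.sum()
    # Greedy rule: share capacity equally, capped at R_MAX per EV.
    rate_per_ev = min(R_MAX, CAP / total) if total > 0 else R_MAX
    power = z * rate_per_ev
    return LAM - NU * z - power / BETA   # arrivals - early departures - completed charges

# Forward-Euler integration of the fluid dynamic equations.
z, dt = np.zeros(2), 0.01
for _ in range(200_000):
    z = np.maximum(z + dt * drift(z), 0.0)

print("invariant point (approx.):", z)   # drift(z) is ~0 here
```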