The power of networks manifests itself in a highly non-linear amplification of a number of effects, and their weakness in the propagation of cascading failures. The potential systemic risk effects can be either exacerbated or mitigated, depending on the resilience characteristics of the network. The goal of this paper is to study some characteristics of network amplification and resilience. We simulate random Erdős-Rényi networks and measure amplification by varying node capacity, transaction volume, and expected failure rates. We find that network throughput scales almost quadratically with node capacity, and that the effects of excessive network load and of random, irreparable node faults are equivalent and almost perfectly anticorrelated. Capacity planners can use this knowledge to determine reliability requirements that maximize the operational region.
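A minimal Python sketch of the kind of simulation described above, not the authors' code: the graph size, capacity values, shortest-path routing, and fault model are illustrative assumptions used only to show how throughput can be measured while node capacity and fault rate are varied.

```python
import random
import networkx as nx

def simulate_throughput(n=200, p=0.05, capacity=50, volume=5000,
                        fault_rate=0.01, seed=0):
    """Count committed transactions on a random Erdős-Rényi network."""
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(n, p, seed=seed)
    alive = {v for v in g if rng.random() > fault_rate}  # irreparable node faults
    load = {v: 0 for v in g}
    nodes = list(g)
    committed = 0
    for _ in range(volume):
        s, t = rng.sample(nodes, 2)
        if s not in alive or t not in alive:
            continue
        try:
            path = nx.shortest_path(g, s, t)
        except nx.NetworkXNoPath:
            continue
        # a transaction commits only if every node on its path is alive
        # and still has spare capacity
        if all(v in alive and load[v] < capacity for v in path):
            for v in path:
                load[v] += 1
            committed += 1
    return committed

if __name__ == "__main__":
    for cap in (10, 20, 40, 80):
        print(cap, simulate_throughput(capacity=cap))
```

Sweeping `capacity` (or `fault_rate`) in this way is one straightforward route to the kind of throughput-versus-capacity scaling curves the abstract refers to.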
Abundant examples of complex transaction-oriented networks (TONs) can be found in a variety of disciplines, including information and communication technology, finance, commodity trading, and real estate. A transaction in a TON is executed as a sequence of subtransactions associated with the network nodes, and is committed only if every subtransaction is committed. A subtransaction incurs a two-fold overhead on the host node: a fixed transient operational cost and the cost of long-term management (e.g. archiving and support), which potentially grows exponentially with the transaction length. If the overall cost exceeds the node capacity, the node fails and all subtransactions incident to the node, together with their parent distributed transactions, are aborted. TON resilience can be measured in terms of either the external workload or the intrinsic node fault rate that causes the TON to partially or fully choke. We demonstrate that under certain conditions these two measures are equivalent. We further show that the exponential growth of the long-term management costs can be mitigated by adjusting the effective operational cost: in other words, that future maintenance costs can be absorbed into the transient operational costs.
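A short sketch of the two-part cost model described above, under stated assumptions: the per-subtransaction cost `c_op`, the exponential base `b`, and the all-or-nothing commit check are placeholders chosen to illustrate the mechanism, not constants from the paper.

```python
def try_commit(transaction, load, capacity, c_op=1.0, b=1.1):
    """transaction: list of node ids hosting its subtransactions.
    Returns True (committed) or False (aborted)."""
    L = len(transaction)
    cost = c_op + b ** L          # transient cost + long-term management cost
    # if any host node would exceed its capacity, the whole parent
    # transaction is aborted
    if any(load[v] + cost > capacity for v in transaction):
        return False
    for v in transaction:
        load[v] += cost
    return True
```

Absorbing future maintenance into the transient cost, as the abstract suggests, would correspond here to folding the expected `b ** L` term into an enlarged effective `c_op`.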
The last decade has seen vast interest in Blockchain-based cryptocurrencies, with a specific focus on applications of this technology. However, slow transaction confirmation times and unpredictably high fees hamper their wide adoption for micro-payments. Payment channel networks are one of the many proposed solutions to this scalability issue: nodes use smart contracts to establish payment channels between each other and perform off-chain transactions. However, due to the way these channels are created, both sides have a certain one-way capacity for making transactions. Consequently, if one side exceeds this one-way capacity, the channel becomes useless in that direction, which causes payment failures and eventually creates an imbalance in the overall network. To keep the payment channel network sustainable, in this paper we aim to increase the overall success rate of payments by exploiting the fact that end-users are usually connected to the network at multiple points (i.e., gateways), any of which can be used to initiate a payment. We propose an efficient method for selecting a user's gateway based on the gateway's ratio of inbound to outbound payment traffic. We then augment this method with a split-payment capability to further increase the success rate, especially for large transactions. The evaluation of the proposed method shows that, compared to greedy and maxflow-based approaches, we achieve much higher success rates, which are further improved with split payments.
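A minimal sketch of the gateway-selection idea, assuming each gateway exposes its remaining one-way (outbound) capacity and recent inbound/outbound traffic counters; the scoring rule and field names below are one plausible reading of the traffic-ratio criterion, not the paper's exact formulation.

```python
def traffic_ratio(g):
    """Prefer gateways that receive more than they send, so spending
    through them rebalances rather than depletes their channels."""
    return g["in_traffic"] / max(g["out_traffic"], 1)

def pick_gateway(gateways, amount):
    """gateways: list of dicts with 'outbound_capacity', 'in_traffic', 'out_traffic'."""
    feasible = [g for g in gateways if g["outbound_capacity"] >= amount]
    if feasible:
        return max(feasible, key=traffic_ratio)
    return None

def split_payment(gateways, amount):
    """Fallback for large payments: split across gateways, best ratio first."""
    shares, remaining = [], amount
    for g in sorted(gateways, key=traffic_ratio, reverse=True):
        part = min(g["outbound_capacity"], remaining)
        if part > 0:
            shares.append((g, part))
            remaining -= part
        if remaining == 0:
            return shares
    return None   # insufficient aggregate one-way capacity
```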
Ad-hoc networks, a promising trend in wireless technology, fail to work properly in a global setting. In most cases, self-organization and cost-free local communication cannot compensate for the need to stay connected and to gather urgent information just in time. Additionally equipping mobile devices with GSM or UMTS adapters, so that they can communicate with arbitrary remote devices or even a fixed network infrastructure, provides such an opportunity. Devices that operate as intermediate nodes between the ad-hoc network and a reliable backbone network are potential injection points: they can disseminate received information within their local neighborhood. Devices differ substantially in their effectiveness as injection points. For practical reasons, injection points should be determined locally, within the ad-hoc network partitions. We analyze different localized algorithms that use at most 2-hop neighborhood information. Results show that devices selected this way spread information more efficiently through the ad-hoc network. Our results can also be applied to support the election of clusterheads in clustering mechanisms.
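A sketch of one localized 2-hop heuristic of the kind analyzed above, assuming the ad-hoc topology is available as a networkx graph and that `candidates` are the backbone-equipped devices within a partition; the concrete scoring (size of the 2-hop neighborhood) is an illustrative choice, not necessarily one of the paper's algorithms.

```python
import networkx as nx

def two_hop_coverage(g, v):
    """Number of distinct nodes reachable from v within two hops."""
    cover = set(g[v])
    for u in g[v]:
        cover.update(g[u])
    cover.discard(v)
    return len(cover)

def elect_injection_point(g, candidates):
    """candidates: devices that also have a GSM/UMTS uplink to the backbone."""
    return max(candidates, key=lambda v: two_hop_coverage(g, v))
```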
In the load balancing problem, each node in a network is assigned a load, and the goal is to distribute the loads equally among the nodes by performing local load exchanges. While load balancing has been extensively studied in static networks, a load-balancing algorithm for dynamic networks with bounded convergence time was presented only recently. In this paper, we further study the time complexity of load balancing in the context of dynamic networks. First, we show that randomness is not necessary and present a deterministic algorithm that slightly improves the running time of the previous algorithm, at the price of not being matching-based. Then, we consider integral loads, i.e., loads that cannot be split indefinitely, and prove that no matching-based algorithm can have a bounded convergence time in this case. To circumvent both this impossibility result and a known one for the non-integral case, we apply the method of smoothed analysis, in which random perturbations are made over the worst-case choices of network topologies. We show that neither impossibility result holds under this kind of analysis, suggesting that load balancing in real-world systems might be faster than the lower bounds suggest.
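For concreteness, a sketch of one round of local load exchange on the current round's topology, using standard Metropolis diffusion weights; this is a textbook diffusion step used to illustrate the setting, not the paper's matching-based or deterministic algorithm.

```python
def diffusion_round(adj, load):
    """adj: dict node -> set of neighbors in this round's (dynamic) topology.
    load: dict node -> current (divisible) load. Returns the next load vector."""
    new = dict(load)
    for v, nbrs in adj.items():
        for u in nbrs:
            # symmetric Metropolis weight: conserves total load and
            # converges to the average on a connected topology
            w = 1.0 / (max(len(adj[v]), len(adj[u])) + 1)
            new[v] += w * (load[u] - load[v])
    return new
```

Integral loads, as discussed above, are exactly the case where such fractional exchanges are not allowed, which is what makes bounded convergence harder to guarantee.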
Vehicular cloud computing has emerged as a promising solution to fulfill users' demands for processing computation-intensive applications in modern driving environments. Such applications are commonly represented by graphs consisting of components and edges. However, encouraging vehicles to share resources poses significant challenges owing to users' selfishness. In this paper, an auction-based graph job allocation problem is studied in vehicular cloud-assisted networks, considering resource reutilization. Our goal is to map each buyer (component) to a feasible seller (virtual machine) while maximizing the buyers' utility-of-service, which concerns the execution time and commission cost. First, we formulate the auction-based graph job allocation as an integer programming (IP) problem. Then, a Vickrey-Clarke-Groves-based payment rule is proposed that satisfies the desired economic properties of truthfulness and individual rationality. We face two challenges: 1) the above-mentioned IP problem is NP-hard; 2) one constraint of the IP problem requires solving the subgraph isomorphism problem. Thus, obtaining the optimal solution is practically infeasible in large-scale networks. Motivated by this, we develop a structure-preserved matching algorithm that maximizes the utility-of-service gain, together with a corresponding payment rule that offers the same economic properties at low computational complexity. Extensive simulations demonstrate that the proposed algorithm outperforms the benchmark methods across various problem sizes.
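A minimal sketch of a greedy utility-of-service-gain assignment in the spirit of the matching algorithm described above; the `utility_gain` and `feasible` callbacks stand in for the paper's execution-time/commission model and its structure-preservation (subgraph) check, and are assumptions for illustration only.

```python
def greedy_allocation(components, vms, utility_gain, feasible):
    """Map each job component (buyer) to the feasible VM (seller) with the
    largest positive utility-of-service gain, given the assignments so far."""
    assignment = {}
    for c in components:
        candidates = [(utility_gain(c, m), m) for m in vms
                      if feasible(c, m, assignment)]
        positive = [(g, m) for g, m in candidates if g > 0]
        if positive:
            _, best = max(positive, key=lambda gm: gm[0])
            assignment[c] = best
    return assignment
```

A VCG-style payment on top of such a heuristic would charge each winner the externality it imposes on the others; the sketch deliberately leaves the payment rule out, since that is where the paper's specific construction matters.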