We consider the design of a fair sensor schedule for a number of sensors monitoring different linear time-invariant processes, where the largest average remote estimation error among all processes is to be minimized. We first consider a general setup for the max-min fair allocation problem. By reformulating the problem in an equivalent form, we transform the fair resource allocation problem into a zero-sum game between a judge and a resource allocator. We propose an equilibrium-seeking procedure and show that this game has a unique Nash equilibrium in pure strategies. We then apply the result to the sensor scheduling problem and show that a max-min fair sensor scheduling policy can be achieved.
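As a minimal illustration of the zero-sum-game viewpoint (a sketch, not the paper's equilibrium-seeking procedure), the snippet below computes the allocator's max-min fair mixed schedule by linear programming; the error matrix E and the candidate schedules are hypothetical toy data.

```python
# Illustrative sketch: max-min fair scheduling as a zero-sum game.
# The allocator mixes over candidate schedules to minimize the worst
# (judge-selected) average estimation error.  E is made-up toy data.
import numpy as np
from scipy.optimize import linprog

# E[i, j] = average remote estimation error of process i under schedule j
E = np.array([[4.0, 1.0, 3.0],
              [1.0, 5.0, 2.0],
              [2.0, 2.0, 4.0]])
m, n = E.shape

# Variables: x (mixed strategy over schedules) and t (worst-case error).
# minimize t  subject to  E @ x <= t,  sum(x) = 1,  x >= 0
c = np.concatenate([np.zeros(n), [1.0]])             # objective: t
A_ub = np.hstack([E, -np.ones((m, 1))])              # E x - t <= 0
b_ub = np.zeros(m)
A_eq = np.concatenate([np.ones(n), [0.0]])[None, :]  # sum(x) = 1
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, t = res.x[:n], res.x[-1]
print("allocator's mixed schedule:", np.round(x, 3))
print("max-min fair error value:", round(t, 3))
```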
The restricted max-min fair allocation problem seeks an allocation of resources to players that maximizes the minimum total value obtained by any player. It is NP-hard to approximate the problem to within a ratio less than 2. There is a sizable gap between the approximation ratios currently achievable in polynomial time for estimating the optimal value and for constructing an allocation: roughly 4 for estimation versus roughly $6 + 2\sqrt{10}$ for construction. We propose an algorithm that constructs an allocation with value within a factor of $6 + \delta$ of the optimum for any constant $\delta > 0$. The running time is polynomial in the input size for any chosen constant $\delta$.
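To make the problem definition concrete, here is a brute-force sketch for tiny instances; the instance data are made up, and the polynomial-time $(6+\delta)$-approximation itself is substantially more involved.

```python
# Brute force over all assignments of resources to interested players,
# maximizing the minimum total value.  Only feasible for tiny instances.
from itertools import product

values = [4, 3, 2, 2]                        # value of each resource
interested = [{0, 1}, {1}, {0, 2}, {1, 2}]   # players each resource is restricted to

def max_min_value(values, interested, n_players):
    best = -1
    # try every assignment of each resource to one of its interested players
    for assign in product(*[sorted(s) for s in interested]):
        totals = [0] * n_players
        for r, p in enumerate(assign):
            totals[p] += values[r]
        best = max(best, min(totals))
    return best

print(max_min_value(values, interested, n_players=3))  # -> 3 on this toy instance
```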
We consider channel pricing and selection over fading channels in a Stackelberg game framework. A channel server decides the channel prices, and a client chooses which channel to use based on the remote estimation quality. We prove the existence of an optimal deterministic Markovian policy for the client, and show that the optimal policies of both the server and the client have threshold structures when the time horizon is finite. The value iteration algorithm is applied to obtain the optimal solutions for both the server and the client, and numerical simulations and examples are given to demonstrate the developed results.
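The following sketch shows finite-horizon value iteration (backward induction) on a stand-in model, not the paper's: states track the age of the client's estimate, the action is whether to buy the channel at an assumed fixed price, and all costs are hypothetical. With holding costs monotone in the age, the computed policy exhibits the threshold structure mentioned above.

```python
# Generic finite-horizon value iteration (backward induction) on a toy MDP.
# States: age of the latest estimate.  Actions: 0 = stay idle, 1 = buy the
# channel at price p.  Costs and transitions are assumptions for illustration.
import numpy as np

T, S = 10, 8            # horizon and number of age states
p = 1.5                 # channel price charged by the server (assumed fixed)

def hold_cost(s):
    return 0.3 * s**2   # estimation-error cost grows with the age

V = np.zeros((T + 1, S))            # V[t, s]: optimal cost-to-go
policy = np.zeros((T, S), dtype=int)

for t in range(T - 1, -1, -1):
    for s in range(S):
        idle = hold_cost(s) + V[t + 1, min(s + 1, S - 1)]  # age keeps growing
        buy = hold_cost(0) + p + V[t + 1, 1]               # age resets
        policy[t, s] = int(buy < idle)
        V[t, s] = min(idle, buy)

# The client's policy is a threshold in the age s:
print(policy[0])   # e.g. [0 0 1 1 1 1 1 1] -> buy once the age exceeds a threshold
```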
Recent advances in blockchain research have been made in two important directions. One is refined resilience analysis using game theory to study the consequences of selfish behavior by users (miners), and the other is the extension from a linear (chain) structure to a non-linear (graphical) structure for performance improvements, as in IOTA and Graphcoin. The first question that comes to mind is what improvements a blockchain system would see by leveraging these new advances. In this paper, we consider three major metrics for a blockchain system: full verification, scalability, and finality-duration. We establish a formal framework and prove that no blockchain system can achieve full verification, high scalability, and low finality-duration simultaneously. We observe that classical blockchain systems such as Bitcoin achieve full verification and low finality-duration, while Harmony and Ethereum 2.0 achieve low finality-duration and high scalability. As a complement, we design a non-linear blockchain system that achieves full verification and high scalability. We also establish, for the first time, the trade-off between scalability and finality-duration.
The floating operation is critical to power management in hard disk drives (HDDs): no control command is applied to the read/write head other than a fixed current that counteracts the actuator flex bias. Drift of the head induced by external disturbances may cause the head to hit a bump on the disk, leading to scratches and head degradation, which is a severe reliability concern in HDDs. This paper proposes a systematic methodology to minimize the chance of hitting a bump on the disk while the drive is floating. Essentially, it provides a heuristic solution to a class of max-min optimization problems that achieves a desirable trade-off between optimality and computational complexity. A multivariable nonlinear optimization problem of this sort is reduced from an NP-hard problem to an arithmetic one. A worst-case analysis is also derived for arbitrary bump locations.
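A hedged sketch of the max-min flavor of the problem (not the paper's heuristic): choose a bias current that maximizes the worst-case clearance between the drifting head and any bump. The trajectory model drift(), the bump positions, and the search grids are all assumptions made for illustration.

```python
# Max-min by grid search: maximize, over candidate bias currents u, the
# minimum clearance between the head trajectory and any bump location.
import numpy as np

def drift(u, t):
    # hypothetical head trajectory under bias current u (made up)
    return 0.5 * t**2 - u * t

bumps = np.array([2.0, 5.0, 9.0])        # assumed bump positions on the disk
times = np.linspace(0.0, 3.0, 200)       # samples over the floating interval
candidates = np.linspace(0.0, 4.0, 400)  # candidate bias currents

def worst_clearance(u):
    pos = drift(u, times)
    return np.abs(pos[:, None] - bumps[None, :]).min()  # closest approach to any bump

best_u = max(candidates, key=worst_clearance)  # max-min: maximize worst-case clearance
print(f"bias current {best_u:.2f}, worst-case clearance {worst_clearance(best_u):.3f}")
```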
In small-cell wireless networks where users are connected to multiple base stations (BSs), it is often advantageous to dynamically switch off a subset of BSs to minimize energy costs. We consider two types of energy cost: (i) the cost of maintaining a BS in the active state, and (ii) the cost of switching a BS from the active state to the inactive state. The problem is to operate the network at the lowest possible energy cost (the sum of activation and switching costs) subject to queue stability. In this setting, the traditional approach -- a Max-Weight algorithm along with a Lyapunov-based stability argument -- does not suffice to show queue stability, essentially due to the temporal co-evolution between channel scheduling and the BS activation decisions induced by the switching cost. Instead, we develop a learning and BS activation algorithm with slow temporal dynamics, together with a Max-Weight based channel scheduler with fast temporal dynamics. Using the convergence of time-inhomogeneous Markov chains, we show that the co-evolving dynamics of learning, BS activation, and queue lengths lead to near-optimal average energy costs along with queue stability.
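The fast time scale can be illustrated in isolation: the sketch below runs a Max-Weight channel scheduler against a fixed active-BS set, with toy arrival and fading statistics; the slow learning/activation layer that would update the active set is omitted.

```python
# Max-Weight channel scheduling sketch: each active BS serves the user
# maximizing (queue length) * (channel rate).  All statistics are toy values.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_bs = 4, 3
Q = np.zeros(n_users)                    # queue lengths
active = np.array([True, True, False])   # assumed output of the slow activation layer

for t in range(1000):
    rates = rng.uniform(0, 2, size=(n_users, n_bs))  # fading channel rates
    rates[:, ~active] = 0.0                          # inactive BSs serve nothing
    service = np.zeros(n_users)
    for b in np.flatnonzero(active):
        u = np.argmax(Q * rates[:, b])               # Max-Weight user selection
        service[u] += rates[u, b]
    arrivals = rng.poisson(0.6, size=n_users)
    Q = np.maximum(Q + arrivals - service, 0.0)

print("final queue lengths:", np.round(Q, 2))        # bounded queues indicate stability
```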