
Optimizing Floating Locations in Hard Disk Drive by Solving Max-min Optimization

Added by Dr. Chifu Yang
Publication date: 2018
Research language: English





The floating operation is critical to power management in hard disk drives (HDDs): during floating, no control command is applied to the read/write head, only a fixed current that counteracts the actuator flex bias. Drift of the head induced by external disturbances can cause the head to interfere with bumps on the disk, leading to scratches and head degradation, which is a severe reliability concern in HDDs. This paper proposes a systematic methodology to minimize the chance of hitting a bump on the disk while the drive is floating. Essentially, it provides a heuristic solution to a class of max-min optimization problems that achieves a desirable trade-off between optimality and computational complexity. The multivariable nonlinear optimization problem of this sort is reduced from an NP-hard problem to a simple arithmetic one. A worst-case result is also derived for arbitrary bump locations.
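
To make the structure concrete, here is a sketch of the kind of max-min formulation the abstract describes, with illustrative symbols that are not taken from the paper ($x$ for the floating location, $b_i$ for the bump locations, and $d$ for the disturbance-induced drift confined to a set $\mathcal{D}$): the floating location is chosen to maximize the worst-case clearance between the drifted head and the nearest bump.

```latex
% Illustrative formulation; symbols are assumptions, not the paper's notation.
\max_{x \in \mathcal{X}} \;\; \min_{i,\; d \in \mathcal{D}} \; \bigl\| (x + d) - b_i \bigr\|
```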



Related research

Many recent AI architectures are inspired by zero-sum games; however, the behavior of their dynamics is still not well understood. Motivated by this, we study standard gradient descent ascent (GDA) dynamics in a specific class of non-convex non-concave zero-sum games that we call hidden zero-sum games. In this class, players control the inputs of smooth but possibly non-linear functions whose outputs are fed into a convex-concave game. Unlike general zero-sum games, these games have a well-defined notion of solution: outcomes that implement the von Neumann equilibrium of the hidden convex-concave game. We prove that if the hidden game is strictly convex-concave, then vanilla GDA converges not merely to local Nash equilibria but, typically, to the von Neumann solution. If the game lacks strict convexity properties, GDA may fail to converge to any equilibrium; however, by applying standard regularization techniques we can prove convergence to a von Neumann solution of a slightly perturbed zero-sum game. Our convergence guarantees are non-local, which to our knowledge is the first result of its kind for non-convex non-concave games. Finally, we discuss connections between our framework and generative adversarial networks.
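
A minimal sketch of vanilla GDA on a toy hidden zero-sum game, assuming the simplest instance of the setup described above: each player passes its parameter through a smooth map (here tanh), and the hidden game on the outputs is strictly convex-concave. The functions, initial point, and step size are illustrative, not the paper's setup.

```python
import numpy as np

# Hidden zero-sum game: f(x, y) = g(u(x), v(y)) with u = v = tanh and
# g(u, v) = u^2 - v^2 + u*v, a strictly convex-concave game whose
# von Neumann solution is (u, v) = (0, 0).
def f_grad_x(x, y):
    u, v = np.tanh(x), np.tanh(y)
    return (1 - u**2) * (2*u + v)        # chain rule through u(x) = tanh(x)

def f_grad_y(x, y):
    u, v = np.tanh(x), np.tanh(y)
    return (1 - v**2) * (u - 2*v)        # chain rule through v(y) = tanh(y)

x, y, eta = 1.0, -0.5, 0.05              # illustrative initial point and step size
for _ in range(5000):
    gx, gy = f_grad_x(x, y), f_grad_y(x, y)
    x, y = x - eta * gx, y + eta * gy    # simultaneous gradient descent ascent
print(np.tanh(x), np.tanh(y))            # hidden outputs approach the von Neumann solution (0, 0)
```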
We consider the design of a fair sensor schedule for a number of sensors monitoring different linear time-invariant processes. The largest average remote estimation error among all processes is to be minimized. We first consider a general setup for the max-min fair allocation problem. By reformulating the problem as its equivalent form, we transform the fair resource allocation problem into a zero-sum game between a judge and a resource allocator. We propose an equilibrium seeking procedure and show that there exists a unique Nash equilibrium in pure strategy for this game. We then apply the result to the sensor scheduling problem and show that the max-min fair sensor scheduling policy can be achieved.
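
As a sketch of the standard reformulation the abstract alludes to (illustrative symbols, not the paper's notation: $\theta$ for the sensor schedule, $e_i(\theta)$ for the average remote estimation error of process $i$, and $\lambda$ on the probability simplex $\Delta$ for the judge's weights), minimizing the largest error is equivalent to a zero-sum game in which the judge picks the weights and the allocator picks the schedule:

```latex
% Illustrative symbols, not the paper's notation.
\min_{\theta} \, \max_{i} \; e_i(\theta)
  \;=\;
\min_{\theta} \, \max_{\lambda \in \Delta} \; \sum_{i} \lambda_i \, e_i(\theta)
```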
Adversarial formulations such as generative adversarial networks (GANs) have rekindled interest in two-player min-max games. A central obstacle in the optimization of such games is the rotational dynamics that hinder their convergence. Existing methods typically employ intuitive, carefully hand-designed mechanisms for controlling such rotations. In this paper, we take a novel approach to address this issue by casting min-max optimization as a physical system. We leverage tools from physics to introduce LEAD (Least-Action Dynamics), a second-order optimizer for min-max games. Next, using Lyapunov stability theory and spectral analysis, we study LEAD's convergence properties in continuous- and discrete-time settings for bilinear games to demonstrate linear convergence to the Nash equilibrium. Finally, we empirically evaluate our method on synthetic setups and CIFAR-10 image generation to demonstrate improvements over baseline methods.
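
A rough sketch of a LEAD-style update on the bilinear toy game f(x, y) = x*y, assuming an update that augments plain simultaneous GDA with a momentum term and a coupling term built from the mixed second derivative; the game, coefficients, and iteration count are illustrative, not the paper's tuned setup or exact derivation.

```python
# Bilinear toy game f(x, y) = x * y: plain simultaneous GDA spirals away from
# the equilibrium (0, 0); adding momentum plus a mixed-derivative coupling term
# (a LEAD-style second-order correction) damps the rotation. Coefficients are
# illustrative, not taken from the paper.
eta, alpha, beta = 0.1, 0.3, 0.1

x, y = 1.0, 1.0
x_prev, y_prev = x, y
for _ in range(2000):
    gx, gy = y, x                      # grad_x f = y, grad_y f = x, grad_xy f = 1
    x_new = x + beta * (x - x_prev) - eta * gx - alpha * (y - y_prev)
    y_new = y + beta * (y - y_prev) + eta * gy + alpha * (x - x_prev)
    x_prev, y_prev, x, y = x, y, x_new, y_new

print(x, y)                            # approaches the Nash equilibrium (0, 0)
```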
We provide a first-order oracle complexity lower bound for finding stationary points of min-max optimization problems where the objective function is smooth, nonconvex in the minimization variable, and strongly concave in the maximization variable. We establish a lower bound of $\Omega\left(\sqrt{\kappa}\,\epsilon^{-2}\right)$ for deterministic oracles, where $\epsilon$ defines the level of approximate stationarity and $\kappa$ is the condition number. Our analysis shows that the upper bound achieved in (Lin et al., 2020b) is optimal in the $\epsilon$ and $\kappa$ dependence up to logarithmic factors. For stochastic oracles, we provide a lower bound of $\Omega\left(\sqrt{\kappa}\,\epsilon^{-2} + \kappa^{1/3}\epsilon^{-4}\right)$. It suggests that there is a significant gap between the upper bound $\mathcal{O}(\kappa^3 \epsilon^{-4})$ in (Lin et al., 2020a) and our lower bound in the condition number dependence.
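
In display form, the comparison the abstract draws (with $\tilde{\mathcal{O}}$ hiding logarithmic factors):

```latex
\text{deterministic: } \Omega\!\bigl(\sqrt{\kappa}\,\epsilon^{-2}\bigr)
  \;\text{ vs. }\; \tilde{\mathcal{O}}\!\bigl(\sqrt{\kappa}\,\epsilon^{-2}\bigr)
  \text{ in (Lin et al., 2020b)};
\qquad
\text{stochastic: } \Omega\!\bigl(\sqrt{\kappa}\,\epsilon^{-2} + \kappa^{1/3}\epsilon^{-4}\bigr)
  \;\text{ vs. }\; \mathcal{O}\!\bigl(\kappa^{3}\epsilon^{-4}\bigr)
  \text{ in (Lin et al., 2020a)}.
```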
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications -- reflecting the need for classifiers and predictive models that are robust to the distribution shifts that arise from phenomena such as selection bias or nonstationarity. Existing algorithms for solving Wasserstein DRSL -- one of the most popular DRSL frameworks based around robustness to perturbations in the Wasserstein distance -- involve solving complex subproblems or fail to make use of stochastic gradients, limiting their use in large-scale machine learning problems. We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable stochastic extra-gradient algorithms which provably achieve faster convergence rates than existing approaches. We demonstrate their effectiveness on synthetic and real data when compared to existing DRSL approaches. Key to our results is the use of variance reduction and random reshuffling to accelerate stochastic min-max optimization, the analysis of which may be of independent interest.
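
A minimal sketch of the deterministic extra-gradient step on a generic smooth min-max objective, as a stand-in for the stochastic, variance-reduced variants the paper develops; the quadratic toy objective and step size below are illustrative, not the Wasserstein DRSL formulation itself.

```python
import numpy as np

# Extra-gradient on a toy smooth min-max objective
#   f(w, z) = 0.5*||w||^2 + w @ z - 0.5*||z||^2   (min over w, max over z).
# The paper's algorithms use stochastic gradients of the Wasserstein DRSL
# objective plus variance reduction and random reshuffling; this sketch only
# shows the basic extrapolate-then-update structure.
def grad_w(w, z): return w + z
def grad_z(w, z): return w - z

w, z, eta = np.ones(3), -np.ones(3), 0.2
for _ in range(200):
    # 1) extrapolation (half) step from the current point
    w_half = w - eta * grad_w(w, z)
    z_half = z + eta * grad_z(w, z)
    # 2) update the current point using gradients at the extrapolated point
    w = w - eta * grad_w(w_half, z_half)
    z = z + eta * grad_z(w_half, z_half)

print(w, z)  # both approach the saddle point at the origin
```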