We consider the problem of dividing limited resources among individuals arriving over $T$ rounds. Each round, a random number of individuals arrive, and individuals can be characterized by their type (i.e., their preferences over the different resources). A standard notion of fairness in this setting is that an allocation simultaneously satisfy envy-freeness and efficiency. The former is an individual guarantee, requiring that each agent prefers her own allocation over the allocation of any other; in contrast, efficiency is a global property, requiring that the allocations clear the available resources. For divisible resources, when the number of individuals of each type is known upfront, the above desiderata are simultaneously achievable for a large class of utility functions. However, in an online setting, when the number of individuals of each type is only revealed round by round, no policy can guarantee these desiderata simultaneously, and hence the best one can do is to allocate so as to approximately satisfy the two properties. We show that in the online setting, the two desired properties (envy-freeness and efficiency) are in direct contention: any algorithm achieving additive envy-freeness up to a factor of $L_T$ necessarily suffers an efficiency loss of at least $1 / L_T$. We complement this uncertainty principle with a simple algorithm, HopeGuardrail, which allocates resources based on an adaptive threshold policy. We show that our algorithm can achieve any fairness-efficiency point on this frontier and, moreover, in simulation results, provides allocations close to the optimal fair solution in hindsight. This motivates its use in practical applications, as the algorithm can adapt to any desired fairness-efficiency trade-off.
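The abstract above leaves the adaptive threshold policy unspecified. As a rough illustration only, the Python sketch below implements a guardrail-style policy for a single divisible resource and a single type: it computes a fair share from the expected total demand, and allocates an upper threshold while the remaining budget can still cover a lower threshold for all expected future arrivals, falling back to the lower threshold otherwise. The function name, the single-resource simplification, and the specific threshold choices are assumptions for illustration, not HopeGuardrail as defined in the paper.

```python
def guardrail_sketch(budget, expected_arrivals, arrivals, L_T):
    """Toy guardrail policy: one divisible resource, one type.

    x_bar is the per-person fair share implied by expected total demand.
    Allocate the upper threshold x_bar + L_T while the remaining budget can
    still cover the lower threshold x_bar - L_T for all expected future
    arrivals; otherwise fall back to the lower threshold.
    """
    x_bar = budget / sum(expected_arrivals)      # fair share under expected demand
    upper, lower = x_bar + L_T, max(x_bar - L_T, 0.0)
    remaining, per_round = budget, []
    for t, n_t in enumerate(arrivals):
        expected_future = sum(expected_arrivals[t + 1:])
        give = upper if remaining - n_t * upper >= expected_future * lower else lower
        give = min(give, remaining / n_t) if n_t else 0.0
        per_round.append(give)
        remaining -= give * n_t
    return per_round


# Example: budget 100, three rounds with ~10 expected arrivals each.
print(guardrail_sketch(100.0, [10, 10, 10], [12, 9, 11], L_T=0.5))
```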
We study real-time routing policies in smart transit systems, where the platform has a combination of cars and high-capacity vehicles (e.g., buses or shuttles) and seeks to serve a set of incoming trip requests. The platform can use its fleet of cars as a feeder to connect passengers to its high-capacity fleet, which operates on fixed routes. Our goal is to find the optimal set of (bus) routes and corresponding frequencies to maximize the social welfare of the system in a given time window. This generalizes the Line Planning Problem, a widely studied topic in the transportation literature, for which existing solutions are either heuristic (with no performance guarantees), or require extensive computation time (and hence are impractical for real-time use). To this end, we develop a $1-\frac{1}{e}-\varepsilon$ approximation algorithm for the Real-Time Line Planning Problem, using ideas from randomized rounding and the Generalized Assignment Problem. Our guarantee holds under two assumptions: $(i)$ no inter-bus transfers and $(ii)$ access to a pre-specified set of feasible bus lines. We moreover show that these two assumptions are crucial by proving that, if either assumption is relaxed, the Real-Time Line Planning Problem does not admit any constant-factor approximation. Finally, we demonstrate the practicality of our algorithm via numerical experiments on real-world and synthetic datasets, in which we show that, given a fixed time budget, our algorithm outperforms Integer Linear Programming-based exact methods.
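As a sketch of the randomized-rounding ingredient mentioned above, the snippet below rounds a fractional line plan (assumed to come from some LP relaxation, which is not shown) by opening each candidate line independently with probability equal to its fractional value and then greedily enforcing a budget constraint. This is only a hedged illustration of the rounding step, not the paper's $1-\frac{1}{e}-\varepsilon$ algorithm or its GAP-based analysis.

```python
import random

def round_line_plan(fractional_x, line_cost, budget, seed=0):
    """Independent randomized rounding of a fractional line plan (sketch).

    fractional_x: candidate line -> x_l in [0, 1] from an LP relaxation
    (assumed given). Each line is opened with probability x_l; cheaper
    opened lines are then kept greedily so the budget is respected.
    """
    rng = random.Random(seed)
    opened = [line for line, x in fractional_x.items() if rng.random() < x]
    opened.sort(key=lambda line: line_cost[line])    # keep cheap lines first
    kept, spent = [], 0.0
    for line in opened:
        if spent + line_cost[line] <= budget:
            kept.append(line)
            spent += line_cost[line]
    return kept


# Example with three hypothetical candidate bus lines.
print(round_line_plan({"A": 0.9, "B": 0.4, "C": 0.7},
                      {"A": 3.0, "B": 2.0, "C": 4.0}, budget=6.0))
```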
The performance of multimodal mobility systems relies on the seamless integration of conventional mass transit services with emerging Mobility-on-Demand (MoD) services. Prior work is limited to individually improving various transport network operations or linking a new mode to an existing system. In this work, we attempt to solve the transit network design and pricing problems of multimodal mobility systems en masse. An operator (public transit agency or private transit operator) determines the frequency settings of the mass transit system, flows of the MoD service, and prices for each trip to optimize the overall welfare. A primal-dual approach, inspired by the market design literature, yields a compact mixed integer linear programming (MILP) formulation. However, a key computational challenge remains in allocating an exponential number of hybrid modes accessible to travelers. We provide a tractable solution approach through a decomposition scheme and approximation algorithm that accelerates the computation and enables optimization of large-scale problem instances. Using a case study in Nashville, Tennessee, we demonstrate the value of the proposed model. We also show that our algorithm reduces the average runtime by 60% compared to advanced MILP solvers. This work seeks to establish a generic and simple-to-implement way of revamping and redesigning regional mobility systems in order to meet the increase in travel demand and integrate traditional fixed-line mass transit systems with new demand-responsive services.
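A minimal sketch of the decomposition idea, under heavy simplifying assumptions: once the mass-transit frequency settings are fixed, each traveler's choice among her feasible hybrid modes becomes an independent subproblem. The callables `feasible_modes`, `welfare`, and `operating_cost` are hypothetical stand-ins, and the brute-force loop over frequency vectors replaces the paper's primal-dual and approximation machinery, so this is viable only for tiny instances.

```python
from itertools import product

def decomposed_design(lines, freq_options, travelers, feasible_modes, welfare,
                      operating_cost, budget):
    """Decomposition sketch: fix frequencies, then traveler subproblems separate.

    feasible_modes(t, freqs), welfare(t, mode), and operating_cost(freqs) are
    hypothetical callables supplied by the caller. The brute-force loop over
    frequency vectors stands in for the primal-dual/approximation machinery
    and is only viable for tiny instances.
    """
    best_welfare, best_freqs = float("-inf"), None
    for combo in product(*(freq_options[line] for line in lines)):
        freqs = dict(zip(lines, combo))
        if operating_cost(freqs) > budget:           # skip infeasible designs
            continue
        # Inner subproblems: each traveler independently picks her best mode.
        total = sum(max(welfare(t, m) for m in feasible_modes(t, freqs))
                    for t in travelers)
        if total > best_welfare:
            best_welfare, best_freqs = total, freqs
    return best_freqs, best_welfare
```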
With the growing capabilities of intelligent systems, the integration of robots into our everyday life is increasing. However, when interacting in such complex human environments, the occasional failure of robotic systems is inevitable. The field of explainable AI has sought to make complex decision-making systems more interpretable, but most existing techniques target domain experts. In contrast, in many failure cases, robots will require recovery assistance from non-expert users. In this work, we introduce a new type of explanation that explains the cause of an unexpected failure during an agent's plan execution to non-experts. In order for error explanations to be meaningful, we investigate what types of information within a set of hand-scripted explanations are most helpful to non-experts for failure and solution identification. Additionally, we investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model, and generalized across environments. We investigate such questions in the context of a robot performing a pick-and-place manipulation task in a home environment. Our results show that explanations capturing the context of a failure and the history of past actions are the most effective for failure and solution identification among non-experts. Furthermore, through a second user evaluation, we verify that our model-generated explanations can generalize to an unseen office environment, and are just as effective as the hand-scripted explanations.
We consider the problem of dividing limited resources between a set of agents arriving sequentially with unknown (stochastic) utilities. Our goal is to find a fair allocation - one that is simultaneously Pareto-efficient and envy-free. When all utilities are known upfront, the above desiderata are simultaneously achievable (and efficiently computable) for a large class of utility functions. In a sequential setting, however, no policy can guarantee these desiderata simultaneously for all possible utility realizations. A natural online fair allocation objective is to minimize the deviation of each agent's final allocation from their fair allocation in hindsight. This translates into simultaneous guarantees for both Pareto-efficiency and envy-freeness. However, the resulting dynamic program has a state space that is exponential in the number of agents. We propose a simple policy, HopeOnline, that instead aims to 'match' the ex-post fair allocation vector using the currently available resources and a 'predicted histogram' of future utilities. We demonstrate the effectiveness of our policy compared to other heuristics on a dataset inspired by mobile food-bank allocations.
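The following Python sketch conveys the spirit of HopeOnline in the simplest possible setting (one divisible resource, fair shares taken to be proportional to demand): at each arrival it re-solves the proportional rule using only the remaining budget and a prediction of future demand. The prediction oracle `predict_future_total` and the proportional-share simplification are assumptions made for illustration, not the policy as defined in the paper.

```python
def hope_online_sketch(budget, demands, predict_future_total):
    """Spirit of HopeOnline for one divisible resource (illustration only).

    Agent t arrives with demand demands[t]; the hindsight-fair allocation is
    taken to be proportional to demand. Each round, re-solve that proportional
    rule using the remaining budget and predict_future_total(t), an assumed
    prediction of the total demand still to come.
    """
    remaining, allocations = budget, []
    for t, d in enumerate(demands):
        future = predict_future_total(t)
        share = remaining * d / (d + future) if d + future > 0 else 0.0
        allocations.append(min(share, remaining))
        remaining -= allocations[-1]
    return allocations


# Example: 4 agents; naive prediction that every remaining round has demand 1.
print(hope_online_sketch(10.0, [1.0, 2.0, 1.0, 3.0], lambda t: 3 - t))
```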
With the growing capabilities of intelligent systems, the integration of artificial intelligence (AI) and robots into everyday life is increasing. However, when interacting in such complex human environments, the failure of intelligent systems, such as robots, can be inevitable, requiring recovery assistance from users. In this work, we develop automated, natural language explanations for failures encountered during an AI agent's plan execution. These explanations are developed with a focus on helping non-expert users understand different points of failure to better provide recovery assistance. Specifically, we introduce a context-based information type for explanations that can both help non-expert users understand the underlying cause of a system failure, and select proper failure recoveries. Additionally, we extend an existing sequence-to-sequence methodology to automatically generate our context-based explanations. By doing so, we are able to develop a model that can generalize context-based explanations over both different failure types and failure scenarios.
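To make the sequence-to-sequence idea concrete, here is a minimal PyTorch sketch of an encoder-decoder that maps an execution trace to explanation tokens, with a learned embedding of a failure-context identifier injected into the decoder's initial hidden state. The architecture, layer sizes, and the way context is injected are illustrative assumptions, not the model from the paper.

```python
import torch
import torch.nn as nn

class Seq2SeqExplainer(nn.Module):
    """Minimal encoder-decoder sketch: execution trace -> explanation tokens.

    A GRU encodes the trace; a learned embedding of a failure-context id is
    added to the encoder's final hidden state before decoding (an assumed,
    simple way of injecting context, not the paper's exact mechanism).
    """

    def __init__(self, vocab_in, vocab_out, n_contexts, hidden=128, emb=64):
        super().__init__()
        self.embed_in = nn.Embedding(vocab_in, emb)
        self.embed_out = nn.Embedding(vocab_out, emb)
        self.embed_ctx = nn.Embedding(n_contexts, hidden)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_out)

    def forward(self, trace, context_id, target):
        _, h = self.encoder(self.embed_in(trace))        # h: (1, B, hidden)
        h = h + self.embed_ctx(context_id).unsqueeze(0)   # inject failure context
        dec_out, _ = self.decoder(self.embed_out(target), h)
        return self.out(dec_out)                          # (B, T_out, vocab_out) logits


# Example forward pass with teacher forcing on random token ids.
model = Seq2SeqExplainer(vocab_in=50, vocab_out=80, n_contexts=6)
logits = model(torch.randint(0, 50, (4, 12)),   # 4 traces of length 12
               torch.randint(0, 6, (4,)),        # failure-context ids
               torch.randint(0, 80, (4, 9)))     # target explanation prefixes
print(logits.shape)                              # torch.Size([4, 9, 80])
```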
We study the limits of an information intermediary in Bayesian auctions. Formally, we consider the standard single-item auction, with a revenue-maximizing seller and $n$ buyers with independent private values; in addition, we now have an intermediary who knows the buyers' true values, and can map these to a public signal so as to try to increase buyer surplus. This model was proposed by Bergemann et al., who present a signaling scheme for the single-buyer setting that raises the optimal consumer surplus by guaranteeing the item is always sold while ensuring the seller gets the same revenue as without signaling. Our work aims to understand how this result ports to the setting with multiple buyers. Our first result is an impossibility: we show that such a signaling scheme need not exist even for $n=2$ buyers with $2$-point valuation distributions. Indeed, no signaling scheme can always allocate the item to the highest-valued buyer while preserving any non-trivial fraction of the original consumer surplus; further, no signaling scheme can achieve consumer surplus better than a factor of $\frac{1}{2}$ compared to the maximum achievable. These results are existential (and not computational) impossibilities, and thus provide a sharp separation between the single- and multi-buyer settings. On the positive side, for discrete valuation distributions, we develop signaling schemes with good approximation guarantees for the consumer surplus compared to the maximum achievable, in settings where either the number of agents or the support size of valuations is small. Formally, for i.i.d. buyers, we present an $O(\min(\log n, K))$-approximation, where $K$ is the support size of the valuations. Moreover, for general distributions, we present an $O(\min(n \log n, K^2))$-approximation. Our signaling schemes are conceptually simple and computable in polynomial (in $n$ and $K$) time.
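For readers who want to experiment with the benchmarks in this abstract, the snippet below estimates, by Monte Carlo, the revenue, consumer surplus, and full surplus of the no-signaling baseline (a second-price auction with a reserve) for an arbitrary i.i.d. value distribution. It does not implement any of the paper's signaling schemes; it only produces the baseline quantities such schemes are compared against. The sampler and reserve in the example are hypothetical.

```python
import random

def auction_baseline(value_sampler, reserve, n_buyers, trials=100_000, seed=0):
    """Monte Carlo for the no-signaling baseline: second-price auction with reserve.

    Returns (expected revenue, expected consumer surplus, expected full surplus
    E[max value]). value_sampler(rng) is an assumed i.i.d. value sampler.
    """
    rng = random.Random(seed)
    rev = cs = full = 0.0
    for _ in range(trials):
        vals = sorted(value_sampler(rng) for _ in range(n_buyers))
        v1 = vals[-1]
        v2 = vals[-2] if n_buyers > 1 else 0.0
        full += v1
        if v1 >= reserve:                 # item sold to the top bidder
            price = max(v2, reserve)
            rev += price
            cs += v1 - price
    return rev / trials, cs / trials, full / trials


# Example: two buyers with a hypothetical 2-point value distribution {1, 2}.
print(auction_baseline(lambda r: 2.0 if r.random() < 0.5 else 1.0,
                       reserve=2.0, n_buyers=2))
```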
We consider the problem of allocating a set of divisible goods to $N$ agents in an online manner, aiming to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. The goods arrive in a sequence of $T$ periods, and the value of each agent for a good is adversarially chosen when the good arrives. We first observe that no online algorithm can achieve a competitive ratio better than the trivial $O(N)$, unless it is given additional information about the agents' values. Then, in line with the emerging area of algorithms with predictions, we consider a setting where for each agent, the online algorithm is only given a prediction of her monopolist utility, i.e., her utility if all goods were given to her alone (corresponding to the sum of her values over the $T$ periods). Our main result is an online algorithm whose competitive ratio is parameterized by the multiplicative errors in these predictions. The algorithm achieves a competitive ratio of $O(\log N)$ and $O(\log T)$ if the predictions are perfectly accurate. Moreover, the competitive ratio degrades smoothly with the errors in the predictions, and is surprisingly robust: the logarithmic competitive ratio holds even if the predictions are very inaccurate. We complement this positive result by showing that our bounds are essentially tight: no online algorithm, even if provided with perfectly accurate predictions, can achieve a competitive ratio of $O(\log^{1-\epsilon} N)$ or $O(\log^{1-\epsilon} T)$ for any constant $\epsilon>0$.
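As a purely illustrative sketch (explicitly not the paper's algorithm), the following greedy shows one way monopolist-utility predictions can be used online: each arriving good goes to the agent with the largest value after normalizing by her predicted monopolist utility, which prevents agents with large absolute values from monopolizing every good. The normalization rule and the example numbers are assumptions for illustration.

```python
def allocate_with_predictions(values_stream, predicted_monopolist, n_agents):
    """Illustrative greedy (not the paper's algorithm): each arriving good goes
    to the agent with the highest value normalized by her predicted monopolist
    utility, so agents with large absolute values cannot monopolize every good.
    """
    bundles = [0.0] * n_agents                 # accumulated utility per agent
    for values in values_stream:               # values[i] = agent i's value for the good
        scores = [values[i] / max(predicted_monopolist[i], 1e-12)
                  for i in range(n_agents)]
        winner = max(range(n_agents), key=scores.__getitem__)
        bundles[winner] += values[winner]
    return bundles


# Example: 2 agents, 3 goods, hypothetical predictions of monopolist utility.
preds = [10.0, 40.0]
stream = [[3.0, 6.0], [1.0, 8.0], [4.0, 4.0]]
print(allocate_with_predictions(stream, preds, n_agents=2))   # [7.0, 8.0]
```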
Robot task execution when situated in real-world environments is fragile. As such, robot architectures must rely on robust error recovery, adding non-trivial complexity to highly complex robot systems. To handle this complexity in development, we introduce Recovery-Driven Development (RDD), an iterative task scripting process that facilitates rapid task and recovery development by leveraging hierarchical specification, separation of nominal task and recovery development, and situated testing. We validate our approach with our challenge-winning mobile manipulator software architecture developed using RDD for the FetchIt! Challenge at the IEEE 2019 International Conference on Robotics and Automation. We attribute the success of our system to the level of robustness achieved using RDD, and conclude with lessons learned for developing such systems.
We consider the problem of selling perishable items to a stream of buyers in order to maximize social welfare. A seller starts with a set of identical items, and each arriving buyer wants any one item and has a valuation drawn i.i.d. from a known distribution. Each item, however, disappears after an a priori unknown amount of time that we term the horizon for that item. The seller knows the (possibly different) distribution of the horizon for each item, but not its realization until the item actually disappears. As with the classic prophet inequalities, the goal is to design an online pricing scheme that competes with the prophet that knows the horizon and extracts full social surplus (or welfare). Our main results are for the setting where items have independent horizon distributions satisfying the monotone-hazard-rate (MHR) condition. Here, for any number of items, we achieve a constant-competitive bound via a conceptually simple policy that balances the rate at which buyers are accepted with the rate at which items are removed from the system. We implement this policy via a novel technique of matching via probabilistically simulating departures of the items at future times. Moreover, for a single item and MHR horizon distribution with mean $\mu$, we show a tight result: there is a fixed pricing scheme that has competitive ratio at most $2 - 1/\mu$, and this is the best achievable in this class. We further show that our results are best possible. First, we show that the competitive ratio is unbounded without the MHR assumption, even for one item. Further, even when the horizon distributions are i.i.d. MHR and the number of items becomes large, the competitive ratio of any policy is lower bounded by a constant greater than $1$, which is in sharp contrast to the setting with identical deterministic horizons.
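For the single-item case, the Monte Carlo sketch below compares a fixed posted price against the prophet benchmark (the highest value arriving before the item vanishes) when one buyer arrives per period and the horizon is random. The samplers and the specific price in the example are hypothetical choices, meant only to illustrate the kind of experiment behind the $2 - 1/\mu$ bound, not to reproduce the paper's analysis.

```python
import random

def fixed_price_vs_prophet(value_sampler, horizon_sampler, price,
                           trials=20_000, seed=0):
    """Single-item Monte Carlo: empirical competitive ratio of a fixed price.

    One buyer arrives per period with an i.i.d. value; the item vanishes after
    a random horizon. The prophet benchmark is the highest value arriving
    before the item vanishes; the policy sells to the first buyer whose value
    clears `price`. Both samplers are assumed inputs.
    """
    rng = random.Random(seed)
    alg = opt = 0.0
    for _ in range(trials):
        horizon = horizon_sampler(rng)
        vals = [value_sampler(rng) for _ in range(horizon)]
        if vals:
            opt += max(vals)
            sale = next((v for v in vals if v >= price), None)
            if sale is not None:
                alg += sale
    return opt / max(alg, 1e-12)             # empirical competitive ratio


# Example: uniform[0,1] values and a (roughly geometric, hence MHR-like)
# horizon with mean ~5; price 0.5 is a hypothetical choice.
print(fixed_price_vs_prophet(lambda r: r.random(),
                             lambda r: 1 + int(r.expovariate(0.25)),
                             price=0.5))
```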