We study a market of investments on networks, where each agent (vertex) can invest in any enterprise linked to him and, at the same time, raise capital for his own enterprise from other agents he is linked to. Failing to raise sufficient capital results in the enterprise defaulting and being unable to invest in others. Our main objective is to examine the role of collaterals in handling the strategic risk that can propagate into systemic risk throughout the network in a cascade of defaults. We take a mechanism design approach and solve for the optimal scheme of collateral contracts that capital raisers offer their investors. These contracts aim to sustain the efficient level of investment as a unique Nash equilibrium while minimizing the total collateral. Our main results contrast the network environment with its non-network counterpart (where the sets of investors and capital raisers are disjoint). We show that for acyclic investment networks, the network environment does not necessitate any additional collateral, and systemic risk can be fully handled by optimal bilateral collateral contracts between capital raisers and their investors. This is, unfortunately, not the case for cyclic investment networks. We show that bilateral contracting will not suffice to resolve systemic risk, and the market will need an external entity to design a global collateral scheme for all capital raisers. Furthermore, even in simple cyclic investment networks, the minimum total collateral that sustains the efficient level of investment as a unique equilibrium may be arbitrarily higher than in the corresponding non-network environment. Additionally, we prove computational-complexity results, both for a single enterprise and for networks.
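The following is a minimal sketch, not the paper's formal model, of how a cascade of defaults can propagate through such an investment network: the capital thresholds and committed amounts are hypothetical, and defaulting agents simply withdraw all of their own investments.

```python
# Toy cascade of defaults on a directed investment network (illustrative only).
# An enterprise defaults if the capital raised from non-defaulting investors
# falls below its (hypothetical) threshold; defaulting agents stop investing
# in others, which may trigger further defaults.

def cascade_of_defaults(investors, amounts, thresholds):
    """investors[v]  -> set of agents investing in v's enterprise
       amounts[u][v] -> capital that u commits to v's enterprise
       thresholds[v] -> minimum capital v needs to operate"""
    defaulted = set()
    changed = True
    while changed:
        changed = False
        for v, thr in thresholds.items():
            if v in defaulted:
                continue
            raised = sum(amounts[u][v] for u in investors[v] if u not in defaulted)
            if raised < thr:
                defaulted.add(v)   # v defaults and withdraws its own investments
                changed = True
    return defaulted

# A 3-cycle where every agent both raises capital and invests:
investors  = {"A": {"C"}, "B": {"A"}, "C": {"B"}}
amounts    = {"A": {"B": 1.0}, "B": {"C": 1.0}, "C": {"A": 1.0}}
thresholds = {"A": 0.5, "B": 0.5, "C": 1.5}   # C is under-capitalized
print(cascade_of_defaults(investors, amounts, thresholds))  # all three default
```

In this toy cycle a single under-capitalized enterprise brings down every other agent, which is the kind of systemic risk the collateral scheme is designed to rule out.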
A patient seller aims to sell a good to an impatient buyer (i.e., one who discounts utility over time). The buyer will remain in the market for a period of time $T$, and her private value is drawn from a publicly known distribution. What is the revenue-optimal pricing curve (sequence of (price, time) pairs) for the seller? Does randomization help? Is the revenue-optimal pricing curve computable in polynomial time? We answer these questions in this paper. We give an efficient algorithm for computing the revenue-optimal pricing curve. We show that pricing curves, which post a price at each point in time and let the buyer pick her utility-maximizing time to buy, are revenue-optimal within a much broader class of sequential lottery mechanisms: mechanisms that allow the seller to post a menu of lotteries at each point in time cannot earn any more revenue than pricing curves. We also show that the even broader class of mechanisms that allow the menu of lotteries to be set adaptively can earn strictly higher revenue than pricing curves, and the revenue gap can be as large as the support size of the buyer's value distribution.
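To make the model concrete, here is a small sketch, under the assumption of exponential discounting with a per-period factor `delta`, of how a buyer type best-responds to a given pricing curve and how the (patient, undiscounted) seller's revenue is evaluated. This only evaluates candidate curves; it is not the paper's efficient algorithm for computing the optimal one.

```python
# Buyer best response and seller revenue for a given pricing curve (sketch).
# Assumption: buyer utility from buying at (price p, time t) is delta**t * (value - p),
# and she buys only if that utility is strictly positive.

def buyer_purchase(value, curve, delta):
    """curve: list of (price, time) pairs; returns the buyer's choice or None."""
    best, best_u = None, 0.0
    for p, t in curve:
        u = (delta ** t) * (value - p)
        if u > best_u:
            best, best_u = (p, t), u
    return best

def expected_revenue(curve, values, probs, delta):
    rev = 0.0
    for v, q in zip(values, probs):
        choice = buyer_purchase(v, curve, delta)
        if choice is not None:
            p, _ = choice
            rev += q * p          # the seller is patient, so revenue is undiscounted
    return rev

# Value uniform on {1, 2}, price 2 now or price 1 at time 3, delta = 0.9:
values, probs, delta = [1.0, 2.0], [0.5, 0.5], 0.9
print(expected_revenue([(2.0, 0), (1.0, 3)], values, probs, delta))  # 0.5
```

In the example, the high-value buyer prefers to wait for the discounted price, which is exactly the screening tension the optimal pricing curve must manage.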
We consider the algorithmic question of choosing a subset of candidates of a given size $k$ from a set of $m$ candidates, with knowledge of voters' ordinal rankings over all candidates. We consider the well-known and classic scoring rule for achieving diverse representation: the Chamberlin-Courant (CC) or $1$-Borda rule, where the score of a committee is the average, over the voters, of the rank of the best candidate in the committee for that voter; and its generalization to the average of the top $s$ best candidates, called the $s$-Borda rule. Our first result is an improved analysis of the natural and well-studied greedy heuristic. We show that greedy achieves a $\left(1 - \frac{2}{k+1}\right)$-approximation to the maximization (or satisfaction) version of the CC rule, and a $\left(1 - \frac{2s}{k+1}\right)$-approximation to the $s$-Borda score. Our result improves on the best known approximation algorithm for this problem. We show that these bounds are almost tight. For the dissatisfaction (or minimization) version of the problem, we show that the score of $\frac{m+1}{k+1}$ can be viewed as an optimal benchmark for the CC rule, as it is essentially the best achievable score of any polynomial-time algorithm even when the optimal score is a polynomial factor smaller (under standard computational complexity assumptions). We show that another well-studied algorithm for this problem, called the Banzhaf rule, attains this benchmark. We finally show that for the $s$-Borda rule, when the optimal value is small, these algorithms can be improved by a factor of $\tilde{\Omega}(\sqrt{s})$ via LP rounding. Our upper and lower bounds are a significant improvement over previous results, and taken together, not only enable us to perform a finer comparison of greedy algorithms for these problems, but also provide analytic justification for using such algorithms in practice.
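Below is a sketch of the natural greedy heuristic analyzed here, for the satisfaction version of the CC rule. The scoring convention is assumed: a voter's satisfaction from a committee is taken as the Borda-type score $m - r$, where $r$ is the rank (in that voter's ordering) of her favorite committee member; other conventions differ only by an affine shift.

```python
# Greedy heuristic for the satisfaction (maximization) version of the
# Chamberlin-Courant rule: repeatedly add the candidate with the largest
# marginal gain in total satisfaction.

def cc_satisfaction(committee, rankings, m):
    total = 0
    for ranking in rankings:                       # ranking: best to worst
        best_rank = min(ranking.index(c) + 1 for c in committee)
        total += m - best_rank                     # Borda-type satisfaction
    return total

def greedy_cc(rankings, k):
    m = len(rankings[0])
    candidates = set(rankings[0])
    committee = []
    for _ in range(k):
        best_c = max(candidates - set(committee),
                     key=lambda c: cc_satisfaction(committee + [c], rankings, m))
        committee.append(best_c)
    return committee

# Four voters, four candidates, committee of size 2:
rankings = [["a", "b", "c", "d"], ["a", "c", "b", "d"],
            ["d", "b", "a", "c"], ["d", "c", "a", "b"]]
print(greedy_cc(rankings, 2))   # ['a', 'd']: each voter gets a top choice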
We study online pricing algorithms for the Bayesian selection problem with production constraints and its generalization to the laminar matroid Bayesian online selection problem. Consider a firm producing (or receiving) multiple copies of different product types over time. The firm can offer the products to arriving buyers, where each buyer is interested in one product type and has a private valuation drawn independently from a possibly different but known distribution. Our goal is to find an adaptive pricing for serving the buyers that maximizes the expected social welfare (or revenue) subject to two constraints. First, at any time the total number of sold items of each type is no more than the number of produced items. Second, the total number of sold items does not exceed the total shipping capacity. This problem is a special case of the well-known matroid Bayesian online selection problem studied in [Kleinberg and Weinberg, 2012], when the underlying matroid is laminar. We give the first Polynomial-Time Approximation Scheme (PTAS) for the above problem, as well as for its generalization to the laminar matroid Bayesian online selection problem when the depth of the laminar family is bounded by a constant. Our approach is based on rounding the solution of a hierarchy of linear programming relaxations that systematically strengthen the commonly used ex-ante linear programming formulation of these problems and approximate the optimum online solution to any degree of accuracy. Our rounding algorithm respects the relaxed constraints at higher levels of the laminar tree only in expectation, and exploits the negative dependence of the selection rule at lower levels to achieve the concentration that guarantees feasibility with high probability.
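To make the base relaxation concrete, here is a sketch of an ex-ante LP of the kind the hierarchy strengthens, written for the simplest instance of the production-constrained problem (a single product type and discrete value distributions). The exact formulation used in the paper may differ; the variable names and the small example are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Ex-ante LP relaxation (sketch): one product type, prod[t] copies produced
# before buyer t arrives, global shipping capacity C. Variable x[t][j] is the
# ex-ante probability that buyer t is served when her value is vals[t][j].

def ex_ante_lp(vals, probs, prod, C):
    n = len(vals)
    idx = [(t, j) for t in range(n) for j in range(len(vals[t]))]
    # objective: maximize expected welfare  sum_t,j  probs[t][j] * vals[t][j] * x[t][j]
    c = np.array([-probs[t][j] * vals[t][j] for t, j in idx])
    A, b = [], []
    # production constraints: expected sales up to time t <= units produced by time t
    for t in range(n):
        A.append([probs[s][j] if s <= t else 0.0 for s, j in idx])
        b.append(sum(prod[: t + 1]))
    # shipping capacity: total expected sales <= C
    A.append([probs[s][j] for s, j in idx])
    b.append(C)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=(0, 1))
    return -res.fun

# Two buyers with values uniform on {1, 3}; one unit produced before each
# arrival, shipping capacity 1. The relaxation serves each buyer only at value 3.
print(ex_ante_lp([[1, 3], [1, 3]], [[0.5, 0.5], [0.5, 0.5]], [1, 1], 1))  # 3.0
```

The hierarchy described in the abstract strengthens relaxations of this kind so that their value approaches the optimum online policy, and the rounding step converts the fractional solution into prices.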
Designing and optimizing different flows in networks is a relevant problem in many contexts. While a number of methods have been proposed in the physics and optimal transport literature for the one-commodity case, we lack similar results for the multi-commodity scenario. In this paper we present a model based on optimal transport theory for finding optimal multi-commodity flow configurations on networks. The model introduces a dynamics that regulates the edge conductivities so as to achieve, in the infinite-time limit, a minimum of a Lyapunov functional given by the sum of a convex transport cost and a concave infrastructure cost. We show that the long-time asymptotics of these dynamics are solutions of a standard constrained optimization problem that generalizes the one-commodity framework. Our results provide new insights into the nature and properties of optimal network topologies. In particular, they show that loops can arise as a consequence of distinguishing different flow types, complementing previous results where loops, in the one-commodity case, were obtained as a consequence of imposing dynamical rules on the sources and sinks or of enforcing robustness to damage. Finally, we provide an efficient implementation of our model which converges faster than standard optimization methods based on gradient descent.
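The sketch below illustrates the general shape of such conductivity-regulating dynamics, not the paper's exact equations: the specific update rule (conductivities relax toward a power of the multi-commodity flux norm, with exponent `gamma`) is an assumed stand-in, chosen only to show the mechanism by which edges carrying more aggregate flux are reinforced while unused edges decay.

```python
import numpy as np

# Illustrative multi-commodity conductivity dynamics (assumed update rule):
# at each step, solve Kirchhoff's law per commodity on the conductivity-weighted
# graph, compute edge fluxes, then move conductivities toward ||F_e||^gamma.

def simulate(edges, n_nodes, sources, gamma=0.8, steps=2000, dt=0.05):
    """edges: list of (u, v, length); sources: array (n_commodities, n_nodes),
    each row summing to zero (inflows/outflows of one commodity)."""
    m = len(edges)
    B = np.zeros((n_nodes, m))                    # signed incidence matrix
    lengths = np.array([l for _, _, l in edges])
    for e, (u, v, _) in enumerate(edges):
        B[u, e], B[v, e] = 1.0, -1.0
    mu = np.ones(m)                               # edge conductivities
    for _ in range(steps):
        W = mu / lengths
        L = B @ np.diag(W) @ B.T                  # weighted graph Laplacian
        p = np.linalg.pinv(L) @ sources.T         # potentials, one column per commodity
        F = W[:, None] * (B.T @ p)                # fluxes, one row per edge
        flux_norm = np.linalg.norm(F, axis=1)     # commodities coupled through this norm
        mu += dt * (flux_norm ** gamma - mu)      # assumed relaxation rule
    return mu

# Triangle with two commodities, both terminating at node 2:
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
sources = np.array([[1.0, 0.0, -1.0],            # commodity 1: node 0 -> node 2
                    [0.0, 1.0, -1.0]])           # commodity 2: node 1 -> node 2
print(simulate(edges, 3, sources).round(3))
```

Because the different commodities are coupled only through the per-edge flux norm, edges that would be pruned in a one-commodity run can survive here, which is the mechanism behind the loops mentioned above.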
Supply chains are the backbone of the global economy, and disruptions to them can be costly. Centrally managed supply chains invest in ensuring their resilience. Decentralized supply chains, however, must rely on the self-interest of their individual components to maintain the resilience of the entire chain. We examine the incentives that independent, self-interested agents have in forming a resilient supply chain network in the face of production disruptions and competition. In our model, competing suppliers are subject to yield uncertainty (they may deliver less than ordered) and congestion (lead-time uncertainty, or soft supply caps). Competing retailers must decide which suppliers to link to based on both price and reliability. In the presence of yield uncertainty only, the resulting supply chain networks are sparse: retailers concentrate their links on a single supplier, counter to the idea that they should mitigate yield uncertainty by diversifying their supply base. This happens because retailers benefit from supply variance, and it suggests that competition will amplify output uncertainty. When congestion is included as well, the resulting networks are denser and resemble the bipartite expander graphs that have been proposed in the supply chain literature, thereby providing the first example of endogenous formation of resilient supply chain networks without resilience being explicitly encoded in payoffs. Finally, we show that a supplier's investment in improved yield can make it worse off. This happens because high production output saturates the market, which in turn lowers prices and profits for participants.
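For intuition, the toy simulation below (with a hypothetical Uniform[0.5, 1] yield, not the paper's distributions) shows the textbook diversification logic that the retailers in this model end up defying: splitting an order across independent suppliers keeps the expected delivery the same but lowers its variance. The result described above is that competing retailers concentrate anyway, precisely because they benefit from that variance.

```python
import random

# Delivered quantity when an order of fixed size is split evenly across
# n_suppliers, each delivering an independent Uniform[0.5, 1] fraction of
# its share (illustrative yield distribution, not from the paper).

random.seed(0)

def delivered(order, n_suppliers, trials=100_000):
    outcomes = [sum(random.uniform(0.5, 1.0) * order / n_suppliers
                    for _ in range(n_suppliers)) for _ in range(trials)]
    mean = sum(outcomes) / trials
    var = sum((x - mean) ** 2 for x in outcomes) / trials
    return round(mean, 3), round(var, 3)

print(delivered(10, 1))   # single supplier: mean ~7.5, higher variance
print(delivered(10, 2))   # two suppliers:   same mean, roughly half the variance
```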