
Divide-and-Conquer Large Scale Capacitated Arc Routing Problems with Route Cutting Off Decomposition

 Added by Yi Mei
Publication date: 2019
Language: English





The capacitated arc routing problem (CARP) is an important problem with many practical applications. This paper focuses on the large scale capacitated arc routing problem. Traditional solution optimization approaches usually fail because of their poor scalability. The divide-and-conquer strategy has achieved great success in solving large scale optimization problems by decomposing the original large problem into smaller sub-problems and solving them separately. For arc routing, a commonly used divide-and-conquer strategy is to divide the tasks into subsets, and then solve the sub-problems induced by the task subsets separately. However, the success of a divide-and-conquer strategy relies on a proper task division, which is non-trivial due to the complex interactions between the tasks. This paper proposes a novel problem decomposition operator, named the route cutting off operator, which considers the interactions between the tasks in a sophisticated way. To examine the effectiveness of the route cutting off operator, we integrate it with two state-of-the-art divide-and-conquer algorithms and compare them with their original counterparts on a wide range of benchmark instances. The results show that the route cutting off operator can improve the effectiveness of the decomposition and lead to significantly better results, especially when the problem size is very large and the time budget is very tight.
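The divide-and-conquer loop described in the abstract can be illustrated with a minimal sketch. The spatial grouping rule and greedy sub-solver below are hypothetical placeholders (the paper's route cutting off operator is not reproduced here), intended only to show how tasks are split into subsets and the induced sub-problems are solved independently.

```python
# Minimal, hypothetical sketch of divide-and-conquer for CARP: split the tasks
# into subsets, solve each induced sub-problem separately, then concatenate the
# resulting routes. The grouping rule and greedy route builder are illustrative
# placeholders, not the route cutting off operator from the paper.
from collections import namedtuple
import random

Task = namedtuple("Task", ["tid", "x", "y", "demand"])  # task with a planar midpoint

def split_tasks(tasks, k):
    """Group tasks into k subsets by a simple spatial rule (placeholder decomposition)."""
    tasks = sorted(tasks, key=lambda t: (t.x, t.y))
    size = (len(tasks) + k - 1) // k
    return [tasks[i:i + size] for i in range(0, len(tasks), size)]

def solve_subproblem(tasks, capacity):
    """Greedy route builder for one task subset: open a new route when capacity would be exceeded."""
    routes, current, load = [], [], 0
    for t in tasks:
        if current and load + t.demand > capacity:
            routes.append(current)
            current, load = [], 0
        current.append(t.tid)
        load += t.demand
    if current:
        routes.append(current)
    return routes

def divide_and_conquer(tasks, capacity, k=4):
    solution = []
    for subset in split_tasks(tasks, k):
        solution.extend(solve_subproblem(subset, capacity))
    return solution

if __name__ == "__main__":
    random.seed(0)
    tasks = [Task(i, random.random(), random.random(), random.randint(1, 5))
             for i in range(40)]
    print(divide_and_conquer(tasks, capacity=10))
```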



Related research

The capacitated arc routing problem (CARP) is a challenging combinatorial optimisation problem abstracted from typical real-world applications, like waste collection and mail delivery. However, few studies have considered dynamic changes during the vehicles' service, which can make the original schedule infeasible or obsolete. The few existing studies are limited to dynamic scenarios with only a single type of dynamic event, and to algorithms that rely on special operators or representations and are therefore unable to benefit from the wealth of contributions provided by the static CARP literature. Here, we provide the first mathematical formulation for the dynamic CARP (DCARP) and design a simulation system to execute CARP solutions and generate DCARP instances with several common dynamic events. We then propose a novel framework able to generalise all existing static CARP optimisation algorithms so that they can cope with DCARP instances. The framework can enhance optimisation performance for DCARP instances through either a restart strategy that makes no use of past history or a sequence transfer strategy that benefits from past optimisation experience. Empirical studies are conducted on a wide range of DCARP instances. The results highlight the need for tackling dynamic changes and show that the proposed framework significantly improves over existing algorithms.
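The two re-optimisation options mentioned in the abstract can be sketched as follows. This is an assumption-laden illustration: the random ordering stands in for an arbitrary static CARP optimiser, and the repair rule in the transfer strategy is a hypothetical simplification, not the paper's exact mechanism.

```python
# Hypothetical illustration of "restart" (ignore history) versus "sequence
# transfer" (seed the new search with the previous best task sequence after
# removing tasks invalidated by the dynamic event).
import random

def random_sequence(tasks):
    seq = list(tasks)
    random.shuffle(seq)
    return seq

def restart_strategy(remaining_tasks):
    """Start from scratch: no use of past history."""
    return random_sequence(remaining_tasks)

def sequence_transfer_strategy(previous_best, remaining_tasks):
    """Reuse experience: keep the old ordering of still-valid tasks, then append new tasks."""
    kept = [t for t in previous_best if t in remaining_tasks]
    new = [t for t in remaining_tasks if t not in kept]
    return kept + new

if __name__ == "__main__":
    random.seed(1)
    previous_best = [3, 7, 1, 5, 2, 6, 4]
    remaining = {1, 2, 4, 5, 8, 9}   # tasks 3, 6, 7 already served; 8, 9 newly appeared
    print("restart :", restart_strategy(remaining))
    print("transfer:", sequence_transfer_strategy(previous_best, remaining))
```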
Spectral clustering is one of the most popular clustering methods. However, how to balance efficiency and effectiveness in large-scale spectral clustering with limited computing resources has long remained an open problem. In this paper, we propose a divide-and-conquer based large-scale spectral clustering method to strike a good balance between efficiency and effectiveness. In the proposed method, a divide-and-conquer based landmark selection algorithm and a novel approximate similarity matrix approach are designed to construct a sparse similarity matrix at extremely low cost. Clustering results can then be computed quickly through a bipartite graph partition process. The proposed method achieves lower computational complexity than most existing large-scale spectral clustering methods. Experimental results on ten large-scale datasets demonstrate the efficiency and effectiveness of the proposed method. The MATLAB code of the proposed method and experimental datasets are available at https://github.com/Li-Hongmin/MyPaperWithCode.
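The landmark-based, bipartite-graph pipeline this abstract describes can be sketched in a few lines. Note the simplifications: landmarks are chosen by plain uniform sampling and the point-to-landmark affinity is dense, whereas the paper uses a divide-and-conquer landmark selection and a sparse approximate similarity matrix; the sketch only shows the overall structure.

```python
# Minimal sketch of landmark-based spectral clustering: build an affinity matrix
# between points and a small set of landmarks, normalise it, embed points via an
# SVD of the resulting bipartite graph, and run k-means on the embedding.
import numpy as np
from sklearn.cluster import KMeans

def landmark_spectral_clustering(X, n_clusters, n_landmarks=100, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_landmarks, len(X)), replace=False)
    landmarks = X[idx]

    # Point-to-landmark affinity (Gaussian kernel), row-normalised.
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    Z = np.exp(-d2 / (2 * sigma ** 2))
    Z /= Z.sum(axis=1, keepdims=True)

    # Bipartite-graph spectral embedding via SVD of the column-scaled affinity.
    Z_hat = Z / np.sqrt(Z.sum(axis=0))
    U, _, _ = np.linalg.svd(Z_hat, full_matrices=False)
    embedding = U[:, :n_clusters]

    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(embedding)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
    labels = landmark_spectral_clustering(X, n_clusters=2, n_landmarks=20)
    print(np.bincount(labels))
```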
We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study its implications for learning. This principle creates a powerful inductive bias that we leverage with neural architectures that are defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and Euclidean TSP. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity.
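The recursive split/merge control flow that this architecture learns can be illustrated with a small, purely hypothetical skeleton. In the actual Divide-and-Conquer Network both atomic operations are learned neural modules; here they are replaced by handcrafted functions (a median split and an ordered merge) just to show the scale-invariant recursion.

```python
# Hypothetical skeleton of the split/solve/merge recursion: split the input,
# solve each piece recursively, and merge the partial solutions.
def dac_solve(items, split_fn, merge_fn, base_size=1):
    if len(items) <= base_size:
        return list(items)
    left, right = split_fn(items)
    return merge_fn(dac_solve(left, split_fn, merge_fn, base_size),
                    dac_solve(right, split_fn, merge_fn, base_size))

def median_split(items):
    mid = len(items) // 2
    return items[:mid], items[mid:]

def ordered_merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

if __name__ == "__main__":
    # With these handcrafted operations the recursion reduces to merge sort.
    print(dac_solve([5, 2, 9, 1, 7, 3], median_split, ordered_merge))
```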
Zhaoxing Gao, Ruey S. Tsay (2021)
This paper proposes a hierarchical approximate-factor approach to analyzing high-dimensional, large-scale heterogeneous time series data using distributed computing. The new method employs a multiple-fold dimension reduction procedure using Principal Component Analysis (PCA) and shows great promise for modeling large-scale data that cannot be stored or analyzed on a single machine. Each computer at the basic level performs a PCA to extract common factors among the time series assigned to it and transfers those factors to one and only one node of the second level. Each 2nd-level computer collects the common factors from its subordinates and performs another PCA to select the 2nd-level common factors. This process is repeated until the central server is reached, which collects common factors from its direct subordinates and performs a final PCA to select the global common factors. The noise terms of the 2nd-level approximate factor model are the unique common factors of the 1st-level clusters. We focus on the case of 2 levels in our theoretical derivations, but the idea can easily be generalized to any finite number of hierarchies. We discuss some clustering methods when the group memberships are unknown and introduce a new diffusion index approach to forecasting. We further extend the analysis to unit-root nonstationary time series. Asymptotic properties of the proposed method are derived as the dimension of the data in each computing unit and the sample size $T$ diverge. We use both simulated data and real examples to assess the performance of the proposed method in finite samples, and compare our method with commonly used ones in the literature in terms of the forecastability of the extracted factors.
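A minimal two-level sketch of this hierarchical PCA idea is shown below: each "worker" extracts local principal-component factors from its block of series, and a "server" applies PCA again to the stacked local factors to obtain global factors. The group sizes and factor counts are arbitrary illustrative choices, and the clustering, diffusion-index forecasting and nonstationary extensions are not covered.

```python
# Two-level hierarchical factor extraction: level-1 PCA per block of series,
# level-2 PCA on the stacked level-1 factors.
import numpy as np

def pca_factors(X, k):
    """Top-k principal-component factor series of X (T x N), columns centred."""
    Xc = X - X.mean(axis=0)
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * S[:k]          # T x k

def hierarchical_factors(X, groups, k_local=2, k_global=2):
    local = [pca_factors(X[:, idx], k_local) for idx in groups]  # level-1 workers
    stacked = np.hstack(local)                                   # sent up to the server
    return pca_factors(stacked, k_global)                        # level-2 global factors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, N = 200, 60
    common = rng.normal(size=(T, 2))                 # true global factors
    loadings = rng.normal(size=(2, N))
    X = common @ loadings + 0.5 * rng.normal(size=(T, N))
    groups = np.array_split(np.arange(N), 3)         # three level-1 workers
    print(hierarchical_factors(X, groups).shape)     # (200, 2)
```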
We consider the classic School Bus Routing Problem (SBRP) with a multi-modal generalization, where students are either picked up by a fleet of school buses or transported by an alternate transportation mode, subject to a set of constraints. The constraints typically imposed for school buses are a maximum fleet size, a maximum walking distance to a pickup point and a maximum commute time for each student. This is a special case of the Vehicle Routing Problem (VRP) with a common destination. We propose a decomposition approach for solving this problem based on the existing notion of a shareability network, which has been used recently in the context of dynamic ridepooling problems. Moreover, we simplify the problem by introducing the connection between the SBRP and the weighted set covering problem (WSCP). To scale this method to large-scale problem instances, we propose i) a node compression method for the shareability-network-based decomposition approach, and ii) heuristic-based edge compression techniques that perform well in practice. We show that the compressed problem leads to an Integer Linear Programming (ILP) formulation of reduced dimensionality that can be solved efficiently using off-the-shelf ILP solvers. Numerical experiments on small-scale, large-scale and benchmark networks are used to evaluate the performance of our approach and compare it to existing large-scale SBRP solving techniques.
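The set-covering view mentioned above can be sketched as follows. This is a hypothetical illustration: each candidate bus route is treated as a weighted set of students, and a greedy weighted set-cover heuristic stands in for the reduced ILP that the paper solves with an off-the-shelf solver.

```python
# Greedy weighted set cover over candidate routes: repeatedly pick the route with
# the best cost per newly covered student until every student is covered.
def greedy_route_cover(students, candidate_routes):
    """candidate_routes: list of (cost, set_of_students). Returns chosen route indices."""
    uncovered, chosen = set(students), []
    while uncovered:
        best = min(
            (i for i, (_, cov) in enumerate(candidate_routes) if cov & uncovered),
            key=lambda i: candidate_routes[i][0] / len(candidate_routes[i][1] & uncovered),
            default=None,
        )
        if best is None:          # remaining students cannot be covered by any route
            break
        chosen.append(best)
        uncovered -= candidate_routes[best][1]
    return chosen

if __name__ == "__main__":
    students = range(1, 7)
    routes = [(4.0, {1, 2, 3}), (3.0, {3, 4}), (5.0, {4, 5, 6}), (2.0, {1, 6})]
    print(greedy_route_cover(students, routes))   # chosen route indices
```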
