Vehicle Re-Identification (Re-ID) aims to identify the same vehicle across different cameras and hence plays an important role in modern traffic management systems. The technical challenges require algorithms to be robust to variations in viewpoint, resolution, occlusion, and illumination. In this paper, we first analyze the main factors hindering Vehicle Re-ID performance. We then present our solutions, specifically targeting the dataset of Track 2 of the 5th AI City Challenge, including (1) reducing the domain gap between real and synthetic data, (2) network modification by stacking multiple heads with an attention mechanism, and (3) adaptive loss weight adjustment. Our method achieves 61.34% mAP on the private CityFlow test set without using external datasets or pseudo-labeling, and outperforms all previous works at 87.1% mAP on the VeRi benchmark. The code is available at https://github.com/cybercore-co-ltd/track2_aicity_2021.
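To make the multi-head idea concrete, here is a minimal PyTorch sketch, not the authors' released code: `ReIDHead`, the squeeze-and-excite style attention, and all dimensions are illustrative assumptions for stacking several attention-equipped heads on a shared backbone feature map.

```python
# Illustrative sketch of stacking multiple attention heads for Re-ID
# embeddings; architecture details are assumptions, not the paper's code.
import torch
import torch.nn as nn

class ReIDHead(nn.Module):
    """One head: channel attention, pooling, then a BN-neck embedding."""
    def __init__(self, in_ch: int, embed_dim: int = 512):
        super().__init__()
        self.attn = nn.Sequential(              # squeeze-and-excite style gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, in_ch // 16, 1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch // 16, in_ch, 1), nn.Sigmoid(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_ch, embed_dim)
        self.bn = nn.BatchNorm1d(embed_dim)     # BN-neck, common in Re-ID

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        fmap = fmap * self.attn(fmap)           # reweight channels
        return self.bn(self.fc(self.pool(fmap).flatten(1)))

heads = nn.ModuleList(ReIDHead(2048) for _ in range(3))
fmap = torch.randn(8, 2048, 16, 16)             # backbone output, batch of 8
embedding = torch.cat([h(fmap) for h in heads], dim=1)   # shape (8, 3*512)
```

Concatenating per-head embeddings lets each head specialize on different channels; consult the linked repository for the actual architecture.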
In this work, we introduce a nonlinear Lanchester model of NCW type and study the problem of finding the optimal fire allocation for this model. A Blue party $B$ fights against a Red party consisting of $A$ and $R$, where $A$ is an independent force and $R$ fights with support from a supply unit $N$. A battle may consist of several stages, but we consider the problem of finding the optimal fire allocation for $B$ in the first stage only. An optimal fire allocation is a set of three non-negative numbers whose sum equals one, such that the remaining force of $B$ is maximal at every instant. To tackle this problem, we introduce the notion of \textit{threatening rates}, which are computed for $A$, $R$, and $N$ at the beginning of the battle. Numerical illustrations are presented to justify the theoretical findings.
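The abstract does not reproduce the governing equations; the sketch below is a hedged numerical illustration of an NCW-type engagement in which $B$ splits its fire among $A$, $R$, and $N$ with allocation fractions summing to one. All dynamics and rate constants are assumptions for demonstration only.

```python
# Hedged numerical sketch of a Lanchester-type engagement with a fire
# allocation vector; the specific equations are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

aA, aR, bB, s = 0.04, 0.05, 0.03, 0.02   # assumed attrition and supply rates
p = np.array([0.5, 0.3, 0.2])            # B's allocation to A, R, N (sums to 1)

def rhs(t, y):
    B, A, R, N = y
    dB = -(aA * A + aR * R)              # B is attrited by A and R
    dA = -bB * p[0] * B                  # B's aimed fire at A
    dR = -bB * p[1] * B + s * N          # R is attrited by B, resupplied by N
    dN = -bB * p[2] * B                  # B's fire at the supply unit
    return [dB, dA, dR, dN]

# First-stage horizon only; a full model would also stop a force at zero.
sol = solve_ivp(rhs, (0.0, 10.0), [100.0, 60.0, 60.0, 40.0])
print("remaining B force:", sol.y[0, -1])
```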
Quantum annealing (QA) is a quantum computing algorithm that works on the principle of Adiabatic Quantum Computation (AQC), and it has shown significant computational advantages over classical algorithms in solving combinatorial optimization problems such as vehicle routing problems (VRP). This paper presents a QA approach for solving a VRP variant known as the multi-depot capacitated vehicle routing problem (MDCVRP). This is an NP-hard optimization problem with real-world applications in transportation, logistics, and supply chain management. We consider heterogeneous depots and vehicles with different capacities. Given a set of heterogeneous depots, the number of vehicles in each depot, heterogeneous depot/vehicle capacities, and a set of spatially distributed customer locations, the MDCVRP attempts to identify routes for the various vehicles that satisfy the capacity constraints such that all customers are served. We model MDCVRP as a quadratic unconstrained binary optimization (QUBO) problem, which minimizes the overall distance traveled by all vehicles across all depots subject to the capacity constraints. Furthermore, we formulate a QUBO model for the dynamic version of MDCVRP, known as D-MDCVRP, which involves dynamically rerouting vehicles to real-time customer requests. We discuss the problem complexity and a solution approach to solving MDCVRP and D-MDCVRP on quantum annealing hardware from D-Wave.
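As a much-reduced illustration of the QUBO modeling step, the following sketch poses a toy one-hot routing choice as a QUBO using D-Wave's open-source dimod package; the three-variable $Q$ matrix is hypothetical and far simpler than the full MDCVRP formulation.

```python
# Toy QUBO: pick exactly one of three customers at minimum travel cost.
import dimod

# Diagonal terms carry travel costs (2.0, 3.0, 1.5); the penalty
# P * (x0 + x1 + x2 - 1)^2 enforces a one-hot constraint and expands
# into the -P linear terms, 2P pairwise terms, and +P offset below.
P = 10.0
Q = {(0, 0): 2.0 - P, (1, 1): 3.0 - P, (2, 2): 1.5 - P,
     (0, 1): 2 * P, (0, 2): 2 * P, (1, 2): 2 * P}

bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=P)
best = dimod.ExactSolver().sample(bqm).first      # brute-force, classical
print(best.sample, best.energy)                   # x2 wins: cheapest cost 1.5
```

On D-Wave hardware, the same `bqm` would be handed to a quantum sampler instead of `ExactSolver`; the real MDCVRP QUBO adds route-ordering variables and capacity penalties.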
Bayesian Networks (BN) are probabilistic graphical models that are widely used for uncertainty modeling, stochastic prediction, and probabilistic inference. A Quantum Bayesian Network (QBN) is a quantum version of the Bayesian network that utilizes the principles of quantum mechanical systems to improve the computational performance of various analyses. In this paper, we experimentally evaluate the performance of a QBN on various IBM QX hardware against the Qiskit simulator and classical analysis. We consider a 4-node BN for stock prediction in our experimental evaluation. We construct a quantum circuit to represent the 4-node BN using Qiskit and run the circuit on nine IBM quantum devices: Yorktown, Vigo, Ourense, Essex, Burlington, London, Rome, Athens, and Melbourne. We also compare the performance of each device across the four levels of optimization performed by the IBM Transpiler when mapping a given quantum circuit to a given device. We use the root mean square percentage error as the metric for the performance comparison of the various hardware.
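For context on the transpiler comparison, the sketch below (with made-up rotation angles and a generic basis-gate set, not the paper's exact 4-node circuit) transpiles a small circuit at Qiskit's four optimization levels and reports depth and gate counts; on hardware, the measured distribution would then be compared with the simulator baseline via the standard RMSPE, $\sqrt{\frac{1}{m}\sum_i \big(\frac{p_i^{\mathrm{hw}} - p_i^{\mathrm{sim}}}{p_i^{\mathrm{sim}}}\big)^2}$.

```python
# Compare Qiskit transpiler optimization levels 0-3 on a small circuit.
from qiskit import QuantumCircuit, transpile

qc = QuantumCircuit(2)
qc.ry(1.0472, 0)              # root-node marginal (angle arbitrary here)
qc.cry(0.9273, 0, 1)          # a conditional probability
qc.measure_all()

for level in range(4):        # the four transpiler optimization levels
    tqc = transpile(qc, basis_gates=["cx", "rz", "sx", "x"],
                    optimization_level=level)
    print(level, tqc.depth(), dict(tqc.count_ops()))
```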
Probabilistic graphical models such as Bayesian networks are widely used to model stochastic systems and to perform various types of analysis, such as probabilistic prediction, risk analysis, and system health monitoring, which can become computationally expensive in large-scale systems. While demonstrations of true quantum supremacy remain rare, quantum computing applications that exploit the advantages of amplitude amplification have shown significant computational benefits when compared against their classical counterparts. We develop a systematic method for designing a quantum circuit to represent a generic discrete Bayesian network with nodes that may have two or more states, where nodes with more than two states are mapped to multiple qubits. The marginal probabilities associated with root nodes (nodes without any parent nodes) are represented using rotation gates, and the conditional probability tables associated with non-root nodes are represented using controlled rotation gates. Controlled rotation gates with more than one control qubit are implemented using ancilla qubits. The proposed approach is demonstrated on three examples: a 4-node oil company stock prediction, a 10-node network for liquidity risk assessment, and a 9-node naive Bayes classifier for bankruptcy prediction. The circuits were designed and simulated using Qiskit, a quantum computing platform that enables simulation and can also run on real quantum hardware. The results were validated against those obtained from classical Bayesian network implementations.
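The rotation-gate encoding described above has a standard closed form: a probability $p$ of observing $|1\rangle$ corresponds to the angle $\theta = 2\arcsin(\sqrt{p})$. Below is a minimal Qiskit sketch for a hypothetical 2-node network (probability values invented for illustration); the paper's multi-state and ancilla constructions generalize this pattern.

```python
# 2-node Bayesian network as a quantum circuit: RY for the root marginal,
# controlled-RY for the child's conditional probability table.
import numpy as np
from qiskit import QuantumCircuit

def ry_angle(p: float) -> float:
    """RY angle placing amplitude sqrt(p) on |1>."""
    return 2 * np.arcsin(np.sqrt(p))

qc = QuantumCircuit(2)
qc.ry(ry_angle(0.3), 0)           # root node A: P(A=1) = 0.3

# Child node B: P(B=1 | A=1) = 0.8 and P(B=1 | A=0) = 0.1.
qc.cry(ry_angle(0.8), 0, 1)       # fires when qubit 0 is |1>
qc.x(0)                           # flip so the |0> branch controls
qc.cry(ry_angle(0.1), 0, 1)
qc.x(0)                           # undo the flip
qc.measure_all()                  # sampling reproduces the joint distribution
```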
It has been empirically observed that the flatness of minima obtained from training deep networks seems to correlate with better generalization. However, for deep networks with positively homogeneous activations, most measures of sharpness/flatness are not invariant to rescalings of the network parameters that correspond to the same function. This means that the measure of flatness/sharpness can be made as small or as large as desired through rescaling, rendering such quantitative measures meaningless. In this paper we show that for deep networks with positively homogeneous activations, these rescalings constitute equivalence relations, and that these equivalence relations induce a quotient manifold structure on the parameter space. Using this manifold structure and an appropriate metric, we propose a Hessian-based measure of flatness that is invariant to rescaling. We use this new measure to confirm the proposition that Large-Batch SGD minima are indeed sharper than Small-Batch SGD minima.
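The rescaling issue can be seen directly in a small numpy demonstration: for a two-layer ReLU network, scaling the first layer by $\alpha$ and the second by $1/\alpha$ leaves the function unchanged while moving the parameters arbitrarily far apart, so any sharpness measure computed naively in parameter space can be manipulated at will (the network and inputs below are arbitrary).

```python
# ReLU(a*z) = a*ReLU(z) for a > 0, so rescaling layers preserves the function.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(5, 3)), rng.normal(size=(2, 5))
x = rng.normal(size=3)

def net(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)   # ReLU is positively homogeneous

alpha = 100.0
print(np.allclose(net(W1, W2, x), net(alpha * W1, W2 / alpha, x)))  # True
print(np.linalg.norm(W1), np.linalg.norm(alpha * W1))  # parameters far apart
```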
In this paper, we consider a general stochastic optimization problem that is often at the core of supervised learning, such as deep learning and linear classification. We consider a standard stochastic gradient descent (SGD) method with a fixed, large step size and propose a novel assumption on the objective function under which this method achieves improved convergence rates (to a neighborhood of the optimal solutions). We then empirically demonstrate that these assumptions hold for logistic regression and standard deep neural networks on classical data sets. Our analysis thus helps to explain when efficient behavior can be expected from the SGD method in training classification models and deep neural networks.
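As a self-contained illustration of the setting (not the paper's experiments), here is constant-step-size SGD on a synthetic logistic regression problem; the step size is deliberately large and fixed.

```python
# Fixed large-step SGD on logistic regression with synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 10
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)   # synthetic labels

def grad(w, i):
    """Stochastic gradient of the logistic loss at sample i."""
    p = 1.0 / (1.0 + np.exp(-X[i] @ w))
    return (p - y[i]) * X[i]

w = np.zeros(d)
eta = 1.0                                        # fixed, deliberately large step
for _ in range(20000):
    w -= eta * grad(w, rng.integers(n))

print("train accuracy:", np.mean((X @ w > 0) == (y == 1)))
```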
Noise and decoherence are two major obstacles to the implementation of large-scale quantum computing. Because of the no-cloning theorem, which states that we cannot make an exact copy of an arbitrary quantum state, simple redundancy will not work in a quantum context, and unwanted interactions with the environment can destroy coherence and thus the quantum nature of the computation. Because of the parallel and distributed nature of classical neural networks, they have long been used successfully to deal with incomplete or damaged data. In this work, we show that our model of a quantum neural network (QNN) is similarly robust to noise and that, in addition, it is robust to decoherence. Moreover, robustness to noise and decoherence is not only maintained but improved as the size of the system is increased. Noise and decoherence may even be advantageous in training, as they help correct for overfitting. We demonstrate this robustness using entanglement as a means of pattern storage in a qubit array. Our results provide evidence that machine learning approaches can obviate otherwise recalcitrant problems in quantum computing.
In this paper, we propose a general collaborative sparse representation framework for multi-sensor classification, which simultaneously accounts for the correlations as well as the complementary information between heterogeneous sensors while considering joint sparsity within each sensor's observations. We also robustify our models to deal with the presence of sparse noise and low-rank interference signals. Specifically, we demonstrate that incorporating the noise or interference signal as a low-rank component in our models is essential in a multi-sensor classification problem when multiple co-located sources/sensors simultaneously record the same physical event. We further extend our frameworks to kernelized models, which rely on sparsely representing a test sample in terms of all the training samples in a feature space induced by a kernel function. A fast and efficient algorithm based on the alternating direction method is proposed, and its convergence to an optimal solution is guaranteed. Extensive experiments are conducted on several real multi-sensor data sets, and the results are compared with conventional classifiers to verify the effectiveness of the proposed methods.
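For intuition about the sparse-representation core of the framework, the following is a stripped-down, single-sensor sketch of sparse representation classification, using scikit-learn's Lasso in place of the paper's ADMM-based solver and synthetic data: a test sample is coded over all training samples and assigned to the class with the smallest reconstruction residual.

```python
# Single-sensor sparse representation classification (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
d, per_class, n_classes = 20, 15, 3
D = rng.normal(size=(d, per_class * n_classes))     # columns: training samples
labels = np.repeat(np.arange(n_classes), per_class)
test = D[:, 4] + 0.05 * rng.normal(size=d)          # noisy class-0 sample

coef = Lasso(alpha=0.01, max_iter=10000).fit(D, test).coef_  # sparse code

residuals = [np.linalg.norm(test - D[:, labels == c] @ coef[labels == c])
             for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))  # expected: 0
```

The paper's framework extends this idea with joint sparsity across sensors, a low-rank interference term, and kernelized dictionaries.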
This paper studies the problem of accurately recovering a sparse vector $\beta^{\star}$ from highly corrupted linear measurements $y = X \beta^{\star} + e^{\star} + w$, where $e^{\star}$ is a sparse error vector whose nonzero entries may be unbounded and $w$ is a bounded noise. We propose a so-called extended Lasso optimization which takes into consideration sparse prior information on both $\beta^{\star}$ and $e^{\star}$. Our first result shows that the extended Lasso can faithfully recover both the regression and the corruption vector. Our analysis relies on the notion of an extended restricted eigenvalue for the design matrix $X$. Our second set of results applies to a general class of Gaussian design matrices $X$ with i.i.d. rows $\mathcal{N}(0, \Sigma)$, for which we can establish a surprising result: the extended Lasso can recover the exact signed supports of both $\beta^{\star}$ and $e^{\star}$ from only $\Omega(k \log p \log n)$ observations, even when the fraction of corruption is arbitrarily close to one. Our analysis also shows that this number of observations required to achieve exact signed support recovery is indeed optimal.
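The abstract does not spell out the optimization program; one standard form of the extended Lasso, stated here as our reading rather than a quotation from the paper, is

$$\min_{\beta \in \mathbb{R}^p,\; e \in \mathbb{R}^n} \;\tfrac{1}{2}\,\lVert y - X\beta - e \rVert_2^2 \;+\; \lambda_{\beta}\,\lVert \beta \rVert_1 \;+\; \lambda_{e}\,\lVert e \rVert_1,$$

where $\lambda_{\beta}$ and $\lambda_{e}$ separately control the sparsity of the regression vector and of the corruption vector.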