The belief function in Dempster-Shafer evidence theory can express more information than the traditional Bayesian distribution. It is widely used in approximate reasoning, decision-making, and information fusion. However, its exponential explosion characteristic leads to extremely high computational complexity when handling large numbers of elements on classical computers. To solve this problem, we encode the basic belief assignment (BBA) into quantum states, so that each qubit corresponds to one element. Besides being efficient, this quantum representation is well suited to measuring the similarity between two BBAs, and the quantum measuring algorithm we propose offers a theoretical exponential speedup over the corresponding classical algorithm. In addition, we simulate our quantum version of the BBA on the Qiskit platform, which experimentally confirms the soundness of our algorithm. We believe our results will shed light on using the characteristics of quantum computation to handle belief functions more conveniently.
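To make the idea concrete, here is a minimal NumPy sketch of one natural amplitude encoding: each element of the frame of discernment gets one qubit, each focal set maps to the computational basis state whose bits mark its members, and that basis state's amplitude is the square root of the mass. Similarity of two BBAs is then illustrated by the fidelity (squared inner product) of the two states. The encoding convention and the `bba_to_state` / `fidelity` helpers are illustrative assumptions, not the paper's actual circuit.

```python
import numpy as np

def bba_to_state(bba, frame):
    """Encode a BBA over `frame` as a 2**len(frame) amplitude vector.

    Assumed convention: a focal set maps to the basis state whose i-th bit
    is 1 iff the i-th element of `frame` belongs to the set, with amplitude
    sqrt(mass). (Illustration only, not the paper's exact encoding.)
    """
    state = np.zeros(2 ** len(frame))
    for focal_set, mass in bba.items():
        index = sum(1 << i for i, e in enumerate(frame) if e in focal_set)
        state[index] = np.sqrt(mass)
    return state

def fidelity(state_a, state_b):
    """Squared inner product of two encoded BBAs, used as a similarity score."""
    return np.abs(np.dot(state_a, state_b)) ** 2

frame = ("a", "b", "c")
m1 = {("a",): 0.5, ("a", "b"): 0.3, ("a", "b", "c"): 0.2}
m2 = {("a",): 0.4, ("b",): 0.3, ("a", "b", "c"): 0.3}

s1, s2 = bba_to_state(m1, frame), bba_to_state(m2, frame)
print("similarity:", fidelity(s1, s2))
```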
288 - Qinyuan Wu , Yong Deng 2021
Categorization is a significant task in decision-making, which is a key part of human behavior. Categorization causes an interference effect in some cases, which breaks the total probability principle. A negation quantum model (NQ model) is developed in this article to predict this interference. Taking advantage of negation to bring more information into the distribution from a different perspective, the proposed model combines the negation of a probability distribution with the quantum decision model. The phase information contained in quantum probability, together with its special calculation method, can easily represent the interference effect. The results of the proposed NQ model are close to the real experimental data and have less error than existing models.
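For intuition, the sketch below shows the two ingredients named above in their simplest textbook forms: Yager's negation of a discrete distribution, where each probability p_i becomes (1 - p_i)/(n - 1), and the quantum-style interference term 2*sqrt(p_a*p_b)*cos(theta) that appears when two amplitudes are superposed and that breaks the classical total probability law. How the NQ model combines them is not reproduced here; this is only an illustrative assumption.

```python
import numpy as np

def negation(p):
    """Yager's negation of a discrete distribution: (1 - p_i) / (n - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (len(p) - 1)

def quantum_total_probability(p_a, p_b, theta):
    """Probability of an event reachable through two paths with amplitudes
    sqrt(p_a), sqrt(p_b) and relative phase theta. The last term is the
    interference that breaks the classical total probability principle."""
    return p_a + p_b + 2.0 * np.sqrt(p_a * p_b) * np.cos(theta)

p = [0.7, 0.2, 0.1]
print("negation:", negation(p))                                        # [0.15 0.4 0.45]
print("classical:", quantum_total_probability(0.3, 0.2, np.pi / 2))    # cos = 0, no interference
print("with interference:", quantum_total_probability(0.3, 0.2, 2 * np.pi / 3))
```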
141 - Jixiang Deng , Yong Deng 2021
Because of its efficiency in modeling fuzziness and vagueness, the Z-number plays an important role in real practice. However, Z-numbers, defined in the real number field, lack the ability to process quantum information in a quantum environment. It is therefore reasonable to generalize the Z-number to a quantum counterpart. In this paper, we propose quantum Z-numbers (QZNs), the quantum generalization of Z-numbers. In addition, seven basic quantum fuzzy operations of QZNs and their corresponding quantum circuits are presented and illustrated by numerical examples. Moreover, based on QZNs, a novel quantum multi-attribute decision-making (MADM) algorithm is proposed and applied to medical diagnosis. The results show that, with the help of quantum computation, the proposed algorithm can make diagnoses correctly and efficiently.
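As a rough illustration of the underlying idea: a classical Z-number pairs a fuzzy restriction A with a fuzzy reliability B, and one common convention in quantum fuzzy-set work is to encode a membership grade mu into a single qubit as sqrt(1 - mu)|0> + sqrt(mu)|1>, so that the probability of measuring |1> recovers mu. The sketch below applies that convention to a toy (A, B) pair; it is an assumed illustration, not the paper's QZN construction or its seven operations.

```python
import numpy as np

def encode_membership(mu):
    """Encode a fuzzy membership grade mu in [0, 1] as a single-qubit state
    sqrt(1 - mu)|0> + sqrt(mu)|1> (a common quantum fuzzy-set convention)."""
    return np.array([np.sqrt(1.0 - mu), np.sqrt(mu)])

def decode_membership(state):
    """Recover the membership grade as the probability of measuring |1>."""
    return float(np.abs(state[1]) ** 2)

# A toy Z-number-like pair: restriction grade and reliability grade.
restriction, reliability = 0.8, 0.6
qA, qB = encode_membership(restriction), encode_membership(reliability)
print(decode_membership(qA), decode_membership(qB))  # 0.8 0.6 (up to rounding)
```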
72 - Yong Deng , Min Dong 2021
For a caching system with multiple users, we aim to characterize the memory-rate tradeoff for caching with uncoded cache placement under nonuniform file popularity. Focusing on the modified coded caching scheme (MCCS) recently proposed by Yu et al., we formulate the cache placement optimization problem for the MCCS to minimize the average delivery rate under nonuniform file popularity, restricting to a class of popularity-first placements. We then present two information-theoretic lower bounds on the average rate for caching with uncoded placement, one for general cache placements and the other restricted to popularity-first placements. By comparing the average rate of the optimized MCCS with the lower bounds, we prove that the optimized MCCS attains the general lower bound for the two-user case, providing the exact memory-rate tradeoff. Furthermore, it attains the popularity-first-based lower bound for the case of general K users with distinct file requests. In these two cases, our results also reveal that the popularity-first placement is optimal for the MCCS, and that zero-padding used in coded delivery incurs no loss of optimality. For the case of K users with redundant file requests, our analysis shows that there may exist a gap between the optimized MCCS and the lower bounds due to zero-padding. We next fully characterize the optimal popularity-first cache placement for the MCCS, which is shown to possess a simple file-grouping structure and can be computed via an efficient algorithm using closed-form expressions. Finally, we extend our study to accommodate both nonuniform file popularity and nonuniform file sizes, where we show that the optimized MCCS attains the lower bound for the two-user case, providing the exact memory-rate tradeoff. Numerical results show that, for general settings, the gap between the optimized MCCS and the lower bound exists only in limited cases and is very small.
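For background on the scheme being optimized, the delivery rates of the standard symmetric uncoded-placement schemes at integer cache points t = KM/N are easy to state: the Maddah-Ali and Niesen scheme achieves R = (K - t)/(t + 1), and the MCCS of Yu et al. reduces this to R = [C(K, t+1) - C(K - Ne, t+1)] / C(K, t) when only Ne distinct files are requested. The snippet below just evaluates these closed forms under uniform placement for context; the paper's own contribution, the optimized placement under nonuniform popularity, is not reproduced.

```python
from math import comb

def man_rate(K, t):
    """Maddah-Ali--Niesen delivery rate at the integer cache point t = K*M/N."""
    return (K - t) / (t + 1)

def mccs_rate(K, t, num_distinct):
    """MCCS (Yu et al.) rate at integer t when only `num_distinct` files are requested."""
    return (comb(K, t + 1) - comb(K - num_distinct, t + 1)) / comb(K, t)

K, t = 4, 1
print(man_rate(K, t))        # 1.5
print(mccs_rate(K, t, 2))    # 1.25: smaller than 1.5 because requests are redundant
print(mccs_rate(K, t, K))    # 1.5: with all-distinct requests the two rates coincide
```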
In this paper, we propose a progressive margin loss (PML) approach for unconstrained facial age classification. Conventional methods make the strong assumption that each class has adequate instances to outline its data distribution, which likely leads to biased predictions when the training samples are sparse across age classes. Instead, our PML aims to adaptively refine the age label pattern by enforcing a couple of margins, which fully accounts for the interplay of intra-class variance, inter-class variance, and class centers. Our PML incorporates an ordinal margin and a variational margin, simultaneously plugged into the globally-tuned deep neural network paradigm. More specifically, the ordinal margin learns to exploit the correlated relationship of real-world age labels. The variational margin, in turn, is leveraged to minimize the influence of head classes that misleads the prediction of tailed samples. Moreover, our optimization carefully seeks a series of indicator curricula to achieve robust and efficient model training. Extensive experimental results on three face aging datasets demonstrate that our PML achieves compelling performance compared to the state of the art. Code will be made publicly available.
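The ordinal-margin idea can be illustrated generically: unlike a fixed margin, the required separation between two age classes grows with their label distance, so confusing age 30 with 31 costs less than confusing 30 with 60. The hinge-style penalty below is a simplified, hedged illustration of that principle, not the paper's actual PML formulation; the function name and the `base_margin` parameter are assumptions made for the example.

```python
import numpy as np

def ordinal_margin_penalty(logits, label, base_margin=0.2):
    """Hinge penalty that demands a larger margin between the true age class
    and classes that are farther away in label space (illustrative only)."""
    logits = np.asarray(logits, dtype=float)
    penalty = 0.0
    for cls, score in enumerate(logits):
        if cls == label:
            continue
        required = base_margin * abs(cls - label)   # margin grows with age distance
        penalty += max(0.0, score - logits[label] + required)
    return penalty

# Scores for 5 age classes; the true class is 2. Nearby classes 1 and 3 score
# almost as high, so they contribute small hinge violations.
print(ordinal_margin_penalty([1.0, 2.9, 3.0, 3.1, 0.5], label=2))
```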
359 - Qianli Zhou , Yong Deng 2020
At a given moment, the information volume represented in a probability space can be accurately measured by Shannon entropy. But in real life, outcomes usually change over time, and predicting the information volume contained in the future is still an open question. Deng entropy, proposed by Deng in recent years, is widely applied to measuring uncertainty, but its physical explanation is controversial. In this paper, we give Deng entropy a new explanation based on the fractal idea and propose its generalization, called time fractal-based (TFB) entropy. TFB entropy predicts the uncertainty over a period of time by splitting time, and its maximum value, called the higher-order information volume of the mass function (HOIVMF), can express more uncertain information than all existing methods.
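For reference, the two baseline quantities the abstract builds on are Shannon entropy, H(p) = -sum_i p_i log2 p_i for a probability distribution, and Deng entropy, E_d(m) = -sum_A m(A) log2[ m(A) / (2^|A| - 1) ] for a mass function, where |A| is the cardinality of the focal set A. The snippet below computes both for a small example; the TFB generalization itself is not reproduced here.

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum p_i * log2(p_i) for a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def deng_entropy(bba):
    """E_d(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) ) over focal sets A.
    For singletons (|A| = 1) each term reduces to the Shannon term."""
    return float(-sum(m * np.log2(m / (2 ** len(A) - 1))
                      for A, m in bba.items() if m > 0))

print(shannon_entropy([0.5, 0.3, 0.2]))
# Mass function on the frame {a, b, c}; focal sets are tuples of elements.
print(deng_entropy({("a",): 0.5, ("a", "b"): 0.3, ("a", "b", "c"): 0.2}))
```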
53 - Qianli Zhou , Yong Deng 2020
The total uncertainty measurement of the basic probability assignment (BPA) in evidence theory has always been an open issue. Although many scholars have put forward various measures and requirements on bodies of evidence (BoE), none of them are widely recognized. Transforming a BPA into a probability distribution is therefore a widely used way to express the uncertainty in evidence theory, but all previous probability transformation methods directly allocate focal elements to their constituent elements without a specific transformation process. Based on the above, this paper simulates the pignistic probability transformation (PPT) process using the idea of fractals, making the PPT process and the information volume lost during transformation more intuitive. We then apply this idea to the total uncertainty measure in evidence theory. A new belief entropy called fractal-based (FB) entropy is proposed, which is the first application of the fractal idea to belief entropy. After verification, the new entropy is superior to all existing total uncertainty measurements.
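The classical pignistic probability transformation referred to above splits each focal set's mass uniformly among its elements: BetP(x) = sum over focal sets A containing x of m(A)/|A|, for a normalized BPA with m(empty set) = 0. The short snippet below implements that standard PPT; the fractal view and FB entropy themselves are not reproduced.

```python
def pignistic_transform(bba):
    """BetP(x) = sum over focal sets A containing x of m(A) / |A|,
    for a normalized BPA with m(empty set) = 0."""
    betp = {}
    for focal_set, mass in bba.items():
        share = mass / len(focal_set)          # split the mass evenly over the set
        for element in focal_set:
            betp[element] = betp.get(element, 0.0) + share
    return betp

bba = {("a",): 0.5, ("a", "b"): 0.3, ("a", "b", "c"): 0.2}
print(pignistic_transform(bba))
# {'a': 0.7167, 'b': 0.2167, 'c': 0.0667} up to rounding; the values sum to 1.
```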
406 - Yusheng Huang 2020
The Physarum polycephalum inspired algorithm (PPA), also known as the Physarum Solver, has attracted great attention. By modelling real-world problems as a graph with network flow and adopting proper equations to calculate the distance between the nodes in the graph, the PPA can be used to solve system optimization problems or user equilibrium problems. However, some problems, such as the maximum flow (MF) problem, the minimum-cost-maximum-flow (MCMF) problem, and the link-capacitated traffic assignment problem (CTAP), require the flow through links to obey capacity constraints. Motivated by the lack of related PPA-based research, a novel framework, the capacitated Physarum polycephalum inspired algorithm (CPPA), is proposed to allow capacity constraints on link flow in the PPA. To prove the validity of the CPPA, we developed three applications of it, i.e., the CPPA for the MF problem (CPPA-MF), the CPPA for the MCMF problem, and the CPPA for the link-capacitated traffic assignment problem (CPPA-CTAP). In the experiments, all applications of the CPPA solve their problems successfully, and some of them are more efficient than the baseline algorithms. The experimental results thus validate using the CPPA framework to control link flow in the PPA. The CPPA is also robust and easy to implement, since it can be successfully applied in three different scenarios. The proposed method shows that, by being able to constrain the flow through links in the PPA, the CPPA can tackle more complex real-world problems in the future.
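For readers unfamiliar with the baseline, the uncapacitated Physarum dynamics on which the CPPA builds works roughly as follows: given edge lengths L and conductivities D, node pressures are obtained from Kirchhoff's law, edge fluxes are Q_ij = D_ij (p_i - p_j) / L_ij, and conductivities are reinforced toward |Q_ij|, so flux concentrates on efficient paths. The sketch below implements that classical loop on a toy graph with a unit source-to-sink flow; the capacity-constrained CPPA mechanism itself is not reproduced, and the graph and discrete update rule are illustrative assumptions.

```python
import numpy as np

# Toy undirected graph: (u, v, length); node 0 is the source, node 3 the sink.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 3.0)]
n_nodes, source, sink = 4, 0, 3

D = np.ones(len(edges))          # edge conductivities
b = np.zeros(n_nodes)            # net in/out flow per node
b[source], b[sink] = 1.0, -1.0   # push one unit of flow from source to sink

for _ in range(30):
    # Kirchhoff's law: for each node i, sum_j g_ij (p_i - p_j) = b_i, with g = D / L.
    A = np.zeros((n_nodes, n_nodes))
    for (u, v, length), d in zip(edges, D):
        g = d / length
        A[u, u] += g
        A[v, v] += g
        A[u, v] -= g
        A[v, u] -= g
    A[sink, :] = 0.0
    A[sink, sink] = 1.0           # ground the sink: p_sink = 0
    rhs = b.copy()
    rhs[sink] = 0.0
    p = np.linalg.solve(A, rhs)

    # Edge fluxes Q_ij = g_ij (p_i - p_j), then reinforce conductivity: D <- |Q|.
    Q = np.array([d / length * (p[u] - p[v]) for (u, v, length), d in zip(edges, D)])
    D = np.abs(Q)

for (u, v, _), q in zip(edges, Q):
    print(f"flux on edge ({u}, {v}): {abs(q):.4f}")
# The flux concentrates on the shorter route 0-1-3; the longer route 0-2-3 decays.
```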
The ability to quantify and predict the progression of a disease is fundamental for selecting an appropriate treatment. Many clinical metrics cannot be acquired frequently, either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray). In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression. Most existing methods for estimating trajectories do not account for events in between observations, which dramatically decreases their adequacy for clinical practice. In this study, we develop a machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events. CSI is guaranteed to converge to the global minimum of the corresponding optimization problem. Experimental results also demonstrate the effectiveness of CSI on both simulated and real datasets.
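As background, the classical Soft-Impute algorithm that the CSI framework builds on alternates between filling the missing entries with the current low-rank estimate and soft-thresholding the singular values of the filled-in matrix. The sketch below implements that baseline in NumPy; the confounding-event handling and convergence guarantees of CSI itself are not reproduced, and the regularization weight and iteration count are illustrative choices.

```python
import numpy as np

def soft_impute(X, observed_mask, lam=1.0, n_iters=200):
    """Classical Soft-Impute: iteratively complete X (with missing entries)
    by soft-thresholding the singular values of the filled-in matrix."""
    Z = np.zeros_like(X)
    for _ in range(n_iters):
        filled = np.where(observed_mask, X, Z)   # keep observed entries, impute the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)             # soft-threshold the singular values
        Z = (U * s) @ Vt
    return Z

# Toy example: a rank-1 patient-by-visit matrix with missing measurements (NaN).
rng = np.random.default_rng(0)
truth = np.outer(rng.random(8), rng.random(6))
mask = rng.random(truth.shape) < 0.6             # roughly 60% of entries observed
X = np.where(mask, truth, np.nan)

Z = soft_impute(X, mask, lam=0.05)
print("recovery error on missing entries:", np.linalg.norm((Z - truth)[~mask]))
```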
Jets often occur repeatedly from almost the same location. In this paper, a complex solar jet was observed with two phases to the west of NOAA AR 11513 on July 2nd, 2012. If it had been observed at only moderate resolution, the two phases and their points of origin would have been regarded as identical. However, at high resolution we find the two phases merge into one another and the accompanying footpoint brightenings occur at different locations. The phases originate from different magnetic patches rather than being one phase originating from the same patch. Photospheric line of sight (LOS) magnetograms show that the bases of the two phases lie in two different patches of magnetic flux which decrease in size during the occurrence of the two phases. Based on these observations, we suggest the driving mechanism of the two successive phases is magnetic cancellation of two separate magnetic fragments with an opposite-polarity fragment between them.