188 - Weiyu Chen, Hisao Ishibuchi, 2021
Subset selection is an important component in evolutionary multiobjective optimization (EMO) algorithms. Clustering, as a classic method to group similar data points together, has been used for subset selection in some fields. However, clustering-based methods have not been evaluated in the context of subset selection from solution sets obtained by EMO algorithms. In this paper, we first review some classic clustering algorithms. We also point out that another popular subset selection method, i.e., inverted generational distance (IGD)-based subset selection, can be viewed as clustering. Then, we perform a comprehensive experimental study to evaluate the performance of various clustering algorithms in different scenarios. Experimental results are analyzed in detail, and some suggestions about the use of clustering algorithms for subset selection are derived. Additionally, we demonstrate that the decision maker's preference can be introduced into clustering-based subset selection.
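The sketch below illustrates the general clustering-based subset selection idea described above: cluster the objective vectors of the candidate set and keep one representative solution per cluster. k-means is used purely as an illustrative choice (the paper compares several clustering algorithms), and all function and variable names are hypothetical.

```python
# Illustrative sketch (not the paper's exact procedure): cluster the objective
# vectors of the candidate solution set and keep the solution nearest to each
# cluster centre. k-means is used only as one example of a clustering method.
import numpy as np
from sklearn.cluster import KMeans

def cluster_subset_selection(objectives: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Return the indices of k representative solutions."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(objectives)
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # the member closest to the cluster centre represents the cluster
        dists = np.linalg.norm(objectives[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.array(selected)

# usage: reduce 1000 random bi-objective points to a subset of 10
subset = cluster_subset_selection(np.random.rand(1000, 2), k=10)
```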
Hypervolume is widely used in the evolutionary multi-objective optimization (EMO) field to evaluate the quality of a solution set. For a solution set with $\mu$ solutions on a Pareto front, a larger hypervolume means a better solution set. Investigating the distribution of the solution set with the largest hypervolume is an important topic in EMO, which is the so-called hypervolume optimal $\mu$-distribution. Theoretical results have shown that the $\mu$ solutions are uniformly distributed on a linear Pareto front in two dimensions. However, the $\mu$ solutions are not always uniformly distributed on a single-line Pareto front in three dimensions. They are only uniform when the single-line Pareto front has one constant objective. In this paper, we further investigate the hypervolume optimal $\mu$-distribution in three dimensions. We consider the line- and plane-based Pareto fronts. For the line-based Pareto fronts, we extend the single-line Pareto front to two-line and three-line Pareto fronts, where each line has one constant objective. For the plane-based Pareto fronts, the linear triangular and inverted triangular Pareto fronts are considered. First, we show that the $\mu$ solutions are not always uniformly distributed on the line-based Pareto fronts. The uniformity depends on how the lines are combined. Then, we show that a uniform solution set on the plane-based Pareto front is not always optimal for hypervolume maximization. It is locally optimal with respect to a $(\mu+1)$ selection scheme. Our results can help researchers in the community to better understand and utilize the hypervolume indicator.
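Below is a small numerical illustration of the two-dimensional situation mentioned above, assuming minimization, the linear Pareto front $f_1 + f_2 = 1$, and reference point $(1, 1)$: evenly spaced solutions yield a larger hypervolume than a skewed placement. The routine and the parameter choices are illustrative and are not taken from the paper.

```python
# Numerical illustration of the cited two-dimensional result: on the linear
# Pareto front f1 + f2 = 1 (minimization) with reference point (1, 1), evenly
# spaced solutions give a larger hypervolume than a skewed placement.
import numpy as np

def hypervolume_2d(points: np.ndarray, ref=(1.0, 1.0)) -> float:
    """Hypervolume of a non-dominated 2-D solution set (minimization)."""
    pts = points[np.argsort(points[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # slab between prev_f2 and f2, right of f1
        prev_f2 = f2
    return hv

def front(t: np.ndarray) -> np.ndarray:
    return np.column_stack([t, 1.0 - t])       # points on f1 + f2 = 1

mu = 5
t_uniform = np.linspace(0.0, 1.0, mu)          # evenly spaced along the front
t_skewed = t_uniform ** 2                      # bunched towards one extreme
print(hypervolume_2d(front(t_uniform)))        # 0.375
print(hypervolume_2d(front(t_skewed)))         # ~0.336
```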
143 - Weiyu Chen, Hisao Ishibuchi, 2021
Subset selection is an interesting and important topic in the field of evolutionary multi-objective optimization (EMO). In particular, in an EMO algorithm with an unbounded external archive, subset selection is an essential post-processing procedure to select a pre-specified number of solutions as the final result. In this paper, we discuss the efficiency of greedy subset selection for the hypervolume, IGD and IGD+ indicators. Greedy algorithms usually handle subset selection efficiently. However, when a large number of solutions are given (e.g., subset selection from tens of thousands of solutions in an unbounded external archive), they often become time-consuming. Our idea is to use the submodular property, which is known for the hypervolume indicator, to improve their efficiency. First, we prove that the IGD and IGD+ indicators are also submodular. Next, based on the submodular property, we propose an efficient greedy inclusion algorithm for each indicator. Then, we demonstrate through computational experiments that the proposed algorithms are much faster than the standard greedy subset selection algorithms.
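The following sketch shows the standard (baseline) greedy inclusion for IGD-based subset selection, i.e., the kind of algorithm the paper accelerates: at each step, the candidate whose inclusion most reduces the IGD value of the current subset is added. The brute-force contribution evaluation and all names are illustrative only; the paper's efficient algorithm avoids recomputing IGD from scratch.

```python
# Baseline greedy inclusion for IGD-based subset selection (brute force).
import numpy as np

def igd(subset: np.ndarray, reference: np.ndarray) -> float:
    """Inverted generational distance of a subset w.r.t. a reference point set."""
    d = np.linalg.norm(reference[:, None, :] - subset[None, :, :], axis=2)
    return d.min(axis=1).mean()

def greedy_igd_selection(candidates: np.ndarray, reference: np.ndarray, k: int) -> list:
    selected: list = []
    for _ in range(k):
        best_i, best_val = None, np.inf
        for i in range(len(candidates)):
            if i in selected:
                continue
            # IGD of the current subset extended by candidate i
            val = igd(candidates[selected + [i]], reference)
            if val < best_val:
                best_i, best_val = i, val
        selected.append(best_i)
    return selected
```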
106 - Weiyu Chen, Hisao Ishibuchi, 2020
Subset selection has been a popular topic in recent years, and a number of subset selection methods have been proposed. Among those methods, hypervolume subset selection is widely used. Greedy hypervolume subset selection algorithms can achieve good approximations to the optimal subset. However, when the candidate set is large (e.g., an unbounded external archive with a large number of solutions), the algorithm is very time-consuming. In this paper, we propose a new lazy greedy algorithm exploiting the submodular property of the hypervolume indicator. The core idea is to avoid unnecessary hypervolume contribution calculations when finding the solution with the largest contribution. Experimental results show that the proposed algorithm is hundreds of times faster than the original greedy inclusion algorithm and several times faster than the fastest known greedy inclusion algorithm on many test problems.
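A minimal two-objective sketch of the lazy greedy idea is given below: because the hypervolume indicator is submodular, a solution's contribution can only shrink as the subset grows, so stale contribution values kept in a max-heap are valid upper bounds and most re-evaluations can be skipped. The 2-D hypervolume routine, the data, and all names are illustrative simplifications rather than the paper's implementation.

```python
# Lazy greedy hypervolume subset selection in two objectives (minimization).
import heapq
import numpy as np

def hv2d(points: np.ndarray, ref=(1.0, 1.0)) -> float:
    """Hypervolume of a 2-D point set w.r.t. a reference point."""
    pts = points[np.argsort(points[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                        # dominated points add nothing
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def lazy_greedy_hv_selection(candidates: np.ndarray, k: int, ref=(1.0, 1.0)) -> list:
    selected, base_hv = [], 0.0
    # max-heap (stored as negated values) of stale contribution upper bounds
    heap = [(-hv2d(candidates[i:i + 1], ref), i) for i in range(len(candidates))]
    heapq.heapify(heap)
    while len(selected) < k and heap:
        _, i = heapq.heappop(heap)
        contrib = hv2d(candidates[selected + [i]], ref) - base_hv  # re-evaluate
        if not heap or contrib >= -heap[0][0]:  # still the best, so select it
            selected.append(i)
            base_hv += contrib
        else:                                   # otherwise push the fresh value back
            heapq.heappush(heap, (-contrib, i))
    return selected

# usage: select 10 of 1000 random bi-objective points
subset = lazy_greedy_hv_selection(np.random.rand(1000, 2), k=10)
```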
In this paper, a backscatter cooperation (BC) scheme is proposed for non-orthogonal multiple access (NOMA) downlink transmission. The key idea is to enable one user to split and then backscatter part of its received signals to improve the reception at another user. To evaluate the performance of the proposed BC-NOMA scheme, three benchmark schemes are introduced: the non-cooperation (NC)-NOMA scheme, the conventional relaying (CR)-NOMA scheme, and the incremental relaying (IR)-NOMA scheme. For all these schemes, analytical expressions of the minimum total power to avoid information outage are derived, based on which their respective outage performance, expected rates, and diversity-multiplexing trade-off (DMT) are investigated. Analytical results show that the proposed BC-NOMA scheme strictly outperforms the NC-NOMA scheme in terms of all three metrics. Furthermore, the theoretical analyses are validated via Monte-Carlo simulations. It is shown that, unlike the CR-NOMA scheme and the IR-NOMA scheme, the proposed BC-NOMA scheme can enhance the transmission reliability without impairing the transmission rate, which makes backscattering an appealing solution to cooperative NOMA downlinks.
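The sketch below is only a generic Monte-Carlo estimate of the outage probabilities of the non-cooperation (NC)-NOMA benchmark under Rayleigh fading, power-domain superposition, and successive interference cancellation at the near user; it does not reproduce the BC scheme or the paper's analytical expressions, and all numbers (power allocation, rate targets, average channel gains) are assumptions.

```python
# Generic Monte-Carlo outage estimate for a two-user NC-NOMA downlink.
import numpy as np

rng = np.random.default_rng(0)
trials, snr_db = 100_000, 20.0
p = 10 ** (snr_db / 10)                 # transmit SNR (noise power normalized to 1)
a_far, a_near = 0.8, 0.2                # power allocation coefficients (a_far > a_near)
r_far, r_near = 0.5, 1.0                # target rates in bits/s/Hz
g_far = rng.exponential(0.3, trials)    # |h|^2 of the far user (weaker channel)
g_near = rng.exponential(1.0, trials)   # |h|^2 of the near user

# far user decodes its own signal, treating the near user's signal as interference
sinr_far = a_far * p * g_far / (a_near * p * g_far + 1)
# near user first removes the far user's signal (SIC), then decodes its own
sinr_sic = a_far * p * g_near / (a_near * p * g_near + 1)
snr_near = a_near * p * g_near
out_far = np.log2(1 + sinr_far) < r_far
out_near = (np.log2(1 + sinr_sic) < r_far) | (np.log2(1 + snr_near) < r_near)
print("outage (far, near):", out_far.mean(), out_near.mean())
```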
Latent factor models are the driving forces of state-of-the-art recommender systems, with the important insight of vectorizing raw input features into dense embeddings. The dimensions of different feature embeddings are often empirically set to the same value, which limits the predictive performance of latent factor models. Existing works have proposed heuristic or reinforcement learning-based methods to search for mixed feature embedding dimensions. For efficiency reasons, these methods typically choose embedding dimensions from a restricted set of candidate dimensions. However, this restriction hurts the flexibility of dimension selection, leading to suboptimal search results. In this paper, we propose Differentiable Neural Input Search (DNIS), a method that searches for mixed feature embedding dimensions in a more flexible space through continuous relaxation and differentiable optimization. The key idea is to introduce a soft selection layer that controls the significance of each embedding dimension and to optimize this layer according to the model's validation performance. DNIS is model-agnostic and can thus be seamlessly incorporated into existing latent factor models for recommendation. We conduct experiments with various architectures of latent factor models on three public real-world datasets for rating prediction, click-through-rate (CTR) prediction, and top-k item recommendation. The results demonstrate that our method achieves the best predictive performance compared with existing neural input search approaches, with fewer embedding parameters and less time cost.
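A hedged sketch of the core mechanism described above is given below: a soft selection layer that multiplies each embedding dimension by a learnable gate in [0, 1], so the effective dimension of every feature embedding is learned jointly with the model. Class and parameter names are illustrative and do not come from the authors' code.

```python
# Soft selection layer over feature embedding dimensions (illustrative sketch).
import torch
import torch.nn as nn

class SoftSelectionEmbedding(nn.Module):
    def __init__(self, num_features: int, base_dim: int):
        super().__init__()
        self.embedding = nn.Embedding(num_features, base_dim)
        # one gate per feature and per dimension, initialised fully "on"
        self.alpha = nn.Parameter(torch.ones(num_features, base_dim))

    def forward(self, feature_ids: torch.Tensor) -> torch.Tensor:
        gates = torch.clamp(self.alpha, 0.0, 1.0)          # keep gates in [0, 1]
        return self.embedding(feature_ids) * gates[feature_ids]

# usage: embeddings for a batch of three feature ids
layer = SoftSelectionEmbedding(num_features=1000, base_dim=64)
out = layer(torch.tensor([3, 17, 42]))                      # shape (3, 64)
```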
Recently, the discretization of decision and objective spaces has been discussed in the literature. Some studies show that decision space discretization improves the performance of evolutionary multi-objective optimization (EMO) algorithms on continuous multi-objective test problems, while others show that objective space discretization improves the performance on combinatorial multi-objective problems. However, the effect of the simultaneous discretization of both spaces has not been examined in the literature. In this paper, we examine the effects of decision space discretization, objective space discretization, and simultaneous discretization on the performance of NSGA-II through computational experiments on the DTLZ and WFG problems. Using various settings of the number of decision variables and the number of objectives, our experiments are performed on four types of problems: standard problems, large-scale problems, many-objective problems, and large-scale many-objective problems. We show that decision space discretization has a positive effect for large-scale problems and objective space discretization has a positive effect for many-objective problems. We also show that the discretization of both spaces is useful for large-scale many-objective problems.
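The snippet below is a minimal sketch of the two discretization operators discussed above: decision space discretization snaps each decision variable to a uniform grid inside its bounds, and objective space discretization rounds each objective value to a fixed granularity before selection. Grid sizes and names are illustrative assumptions.

```python
# Decision-space and objective-space discretization operators (sketch).
import numpy as np

def discretize_decision(x: np.ndarray, lower: np.ndarray, upper: np.ndarray,
                        levels: int = 100) -> np.ndarray:
    """Snap decision variables to `levels` evenly spaced values per dimension."""
    step = (upper - lower) / (levels - 1)
    return lower + np.round((x - lower) / step) * step

def discretize_objective(f: np.ndarray, granularity: float = 0.01) -> np.ndarray:
    """Round objective values to the given granularity."""
    return np.round(f / granularity) * granularity
```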
The increasing popularity of e-learning has created demand for improving online education through techniques such as predictive analytics and content recommendations. In this paper, we study learner outcome prediction, i.e., predicting how learners will perform at the end of a course. We propose a novel Two Branch Decision Network for performance prediction that incorporates two important factors: how learners progress through the course and how the content progresses through the course. We combine clickstream features, which log every action the learner takes while learning, with textual features generated from pre-trained GloVe word embeddings. To assess the performance of our proposed network, we collect data from a short online course designed for corporate training and evaluate both neural network and non-neural network based algorithms on it. Our proposed algorithm achieves 95.7% accuracy and a 0.958 AUC score, outperforming all other models. The results also indicate that the combination of behavior features and text features is more predictive than behavior features alone, and that neural network models are powerful in capturing the joint relationship between user behavior and course content.
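The sketch below illustrates a generic two-branch design in the spirit described above: one branch encodes clickstream (behavior) features, the other encodes pre-computed GloVe text vectors of course content, and the two are fused for a binary performance prediction. Layer sizes and names are assumptions, not the paper's architecture.

```python
# Generic two-branch fusion network for learner outcome prediction (sketch).
import torch
import torch.nn as nn

class TwoBranchDecisionNet(nn.Module):
    def __init__(self, click_dim: int, text_dim: int = 300, hidden: int = 64):
        super().__init__()
        self.click_branch = nn.Sequential(nn.Linear(click_dim, hidden), nn.ReLU())
        self.text_branch = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, 1), nn.Sigmoid())

    def forward(self, clicks: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.click_branch(clicks), self.text_branch(text)], dim=-1)
        return self.head(fused)                  # probability of a positive outcome

# usage: batch of 8 learners, 20 clickstream features, 300-d GloVe text vectors
model = TwoBranchDecisionNet(click_dim=20)
probs = model(torch.rand(8, 20), torch.rand(8, 300))     # shape (8, 1)
```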
In this paper, we propose an impartial simultaneous wireless information and power transfer (SWIPT)-assisted cooperation mechanism for a non-orthogonal multiple access (NOMA) downlink scenario. Specifically, both a cell-center user and a cell-edge user apply the power-splitting technique and utilize the harvested energy to forward the other user's information on the premise of successfully decoding their own information. Both analytical and numerical results show that the proposed impartial user cooperation mechanism (IUCM) outperforms the traditional partial cooperation mechanism in terms of outage probability, diversity order, and diversity-multiplexing trade-off (DMT). For comparison, we further incorporate the IUCM into an orthogonal frequency-division multiple access (OFDMA) framework, which is shown to preserve the same diversity order while having a worse but more flexible DMT performance compared with the IUCM in the NOMA framework. Although the IUCM in OFDMA has a worse outage performance, it is proved to have the same optimal system outage probability as the IUCM in NOMA when the relaying channel between the two users is error-free.
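As a minimal sketch of the power-splitting SWIPT model underlying the mechanism above: a fraction rho of the received power is harvested (with some conversion efficiency) and can later be reused to forward the partner's signal, while the remaining fraction feeds the information decoder. All numbers and names below are illustrative assumptions, not the paper's system model.

```python
# Power-splitting SWIPT receiver model (illustrative).
import numpy as np

def power_splitting(p_rx: float, rho: float, eta: float = 0.7, noise: float = 1e-3):
    """Return (harvested power, SNR of the information-decoding branch)."""
    harvested = eta * rho * p_rx            # portion converted to usable energy
    info_snr = (1.0 - rho) * p_rx / noise   # portion left for information decoding
    return harvested, info_snr

# usage: received power 10 mW, half of it routed to energy harvesting
print(power_splitting(p_rx=0.01, rho=0.5))
```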
Various factorization-based methods have been proposed to leverage second-order or higher-order cross features to boost the performance of predictive models. They generally enumerate all the cross features under a predefined maximum order and then identify useful feature interactions through model training, which suffers from two drawbacks. First, they have to make a trade-off between the expressiveness of higher-order cross features and the computational cost, resulting in suboptimal predictions. Second, enumerating all the cross features, including irrelevant ones, may introduce noisy feature combinations that degrade model performance. In this work, we propose the Adaptive Factorization Network (AFN), a new model that learns arbitrary-order cross features adaptively from data. The core of AFN is a logarithmic transformation layer that converts the power of each feature in a feature combination into a coefficient to be learned. Experimental results on four real datasets demonstrate the superior predictive performance of AFN over the state of the art.
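The following is a hedged sketch of the logarithmic transformation idea described above: taking logs of (positive-valued) feature embeddings, applying a linear layer, and exponentiating turns the power of each feature in a cross feature into a learnable coefficient, i.e., each output neuron models a product of features raised to learned exponents. The clamping trick and all names are illustrative choices rather than the authors' exact code.

```python
# Logarithmic transformation layer: learnable-exponent cross features (sketch).
import torch
import torch.nn as nn

class LogTransformLayer(nn.Module):
    def __init__(self, num_fields: int, num_log_neurons: int):
        super().__init__()
        # one exponent per input field and per logarithmic neuron
        self.weights = nn.Parameter(torch.randn(num_log_neurons, num_fields) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_fields, embed_dim); values are made strictly positive
        log_x = torch.log(torch.clamp(x.abs(), min=1e-7))
        # weighted sum of logs over fields equals the log of the cross feature
        out = torch.einsum("bfd,nf->bnd", log_x, self.weights)
        return torch.exp(out)                   # (batch, num_log_neurons, embed_dim)

# usage: 4 feature fields, 8-dim embeddings, 3 logarithmic neurons
x = torch.rand(2, 4, 8) + 0.1
out = LogTransformLayer(num_fields=4, num_log_neurons=3)(x)   # shape (2, 3, 8)
```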