
Space-borne gravitational wave detectors, such as (e)LISA, are designed to operate in the low-frequency band (mHz to Hz), where there is a variety of gravitational wave sources of great scientific value. To achieve the extraordinary sensitivity of these detectors, the precise synchronization of the clocks on the separate spacecraft and the accurate determination of the inter-spacecraft distances are important ingredients. In our previous paper (Phys. Rev. D 90, 064016 (2014)), we described a hybrid-extended Kalman filter with a full state vector to do this job. In this paper, we explore several different state vectors and their corresponding (phenomenological) dynamic models, to reduce the redundancy in the full state vector, to accelerate the algorithm, and to make the algorithm easily extendable to more complicated scenarios.
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and the MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
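The robustness to outlying labels comes from the Gaussian-kernel form of correntropy, which saturates for large errors instead of growing quadratically. A minimal sketch of this quantity (not the authors' implementation; the kernel width `sigma` and the sample values are illustrative):

```python
import numpy as np

def correntropy(y_true, y_pred, sigma=1.0):
    """Correntropy between predictions and labels (Gaussian kernel).
    Higher is better; a grossly wrong label contributes almost nothing,
    whereas it would dominate a squared loss applied equally to all samples."""
    err = y_true - y_pred
    return np.mean(np.exp(-err**2 / (2.0 * sigma**2)))

y = np.array([1.0, 1.0, 1.0, 1.0])
clean = np.array([0.9, 1.1, 1.0, 1.0])
noisy = np.array([0.9, 1.1, 1.0, -5.0])  # one corrupted label
# The corrupted sample barely lowers the correntropy, so maximizing it
# effectively down-weights the outlier.
```

Maximizing this criterion over predictor parameters, plus a norm penalty on those parameters, gives the regularized MCC objective described above.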
60 - Jim Jing-Yan Wang 2015
In this paper, we propose the problem of optimizing multivariate performance measures from multi-view data, and an effective method to solve it. This problem has two features: the data points are presented by multiple views, and the target of learning is to optimize complex multivariate performance measures. We propose to learn a linear discriminant function for each view, and combine them to construct an overall multivariate mapping function for multi-view data. To learn the parameters of the linear discriminant functions of different views to optimize multivariate performance measures, we formulate an optimization problem. In this problem, we propose to minimize the complexity of the linear discriminant function of each view, encourage the consistency of the responses of different views over the same data points, and minimize the upper bound of a given multivariate performance measure. To optimize this problem, we employ the cutting-plane method in an iterative algorithm. In each iteration, we update a set of constraints, and optimize the mapping function parameter of each view one by one.
149 - Jim Jing-Yan Wang, Xin Gao 2014
In this chapter we discuss how to learn an optimal manifold presentation to regularize nonnegative matrix factorization (NMF) for data representation problems. NMF, which tries to represent a nonnegative data matrix as a product of two low-rank nonnegative matrices, has been a popular method for data representation due to its ability to explore the latent part-based structure of data. Recent studies show that many data distributions have manifold structures, and we should respect the manifold structure when the data are represented. Recently, manifold-regularized NMF used a nearest neighbor graph to regulate the learning of the factorization parameter matrices and has shown its advantage over traditional NMF methods for data representation problems. However, how to construct an optimal graph to present the manifold properly remains a difficult problem due to graph model selection, noisy features, and nonlinearly distributed data. In this chapter, we introduce three effective methods to solve these problems of graph construction for manifold-regularized NMF. Multiple graph learning is proposed to solve the problem of graph model selection, adaptive graph learning via feature selection is proposed to solve the problem of constructing a graph from noisy features, while multi-kernel learning-based graph construction is used to solve the problem of learning a graph from nonlinearly distributed data.
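The graph-regularization idea underlying all three methods can be sketched with the standard multiplicative updates for graph-regularized NMF (one of several equivalent update rules in the literature; the function name, the penalty weight `lam`, and the iteration count are illustrative, not the chapter's code):

```python
import numpy as np

def gnmf(X, A, k, lam=0.1, iters=200, seed=0):
    """Graph-regularized NMF: X ~ W @ H with a tr(H L H^T) penalty,
    where L = D - A is the Laplacian of the sample affinity graph A.
    Multiplicative updates keep W and H nonnegative; the graph term
    pulls codes of neighboring samples toward each other."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    D = np.diag(A.sum(axis=1))  # degree matrix of the graph
    eps = 1e-9                  # avoid division by zero
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # Laplacian penalty split: -H A goes to the numerator, H D below
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H
```

The chapter's contribution is then how to choose or learn the graph `A` itself (multiple graphs, feature selection, multi-kernel construction), with this factorization as the common backbone.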
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between the classification responses and the true class labels of the training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
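The regularized quantity can be illustrated with a plug-in estimate of mutual information for discrete responses and labels (a sketch only; the paper works with entropy estimation suitable for gradient descent, which this histogram version is not):

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) for discrete variables:
    sum over the joint histogram of p(x,y) * log(p(x,y) / (p(x)p(y))).
    When responses fully determine labels, I(X; Y) equals the label entropy;
    when they are independent, it is zero."""
    xi = np.unique(x, return_inverse=True)[1]
    yi = np.unique(y, return_inverse=True)[1]
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    for i, j in zip(xi, yi):
        joint[i, j] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())
```

Maximizing this term alongside the classification loss rewards classifiers whose responses carry as much information about the true label as possible.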
In the design flow of integrated circuits, chip-level verification is an important step that checks that the performance is as expected. Power grid verification is one of the most expensive and time-consuming steps of chip-level verification, due to its extremely large size. Efficient power grid analysis technology is in high demand, as it saves computing resources and enables faster iteration. In this paper, a topology-based power grid transient analysis algorithm is proposed. Nodal analysis is adopted to analyze the topology, which is mathematically equivalent to iteratively solving a positive semi-definite linear equation. The convergence of the method is proved.
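The paper's topology-based algorithm is not reproduced here, but the linear-algebra core it is equivalent to can be sketched with a standard iterative solver for symmetric positive (semi-)definite systems such as the nodal conductance equation G v = i (conjugate gradient, shown for illustration):

```python
import numpy as np

def conjugate_gradient(G, b, tol=1e-10, max_iter=1000):
    """Solve G x = b for symmetric positive (semi-)definite G, e.g. a
    power grid's nodal conductance matrix. Only products G @ p are
    needed, which is why iterative methods scale to very large grids."""
    x = np.zeros_like(b)
    r = b - G @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Gp = G @ p
        alpha = rs / (p @ Gp)
        x += alpha * p
        r -= alpha * Gp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

For a grid with n nodes, each iteration costs one sparse matrix-vector product, versus the cubic cost of a direct factorization.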
Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.
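The bridging assumption, that ranking scores are locally linear in the sparse codes, can be sketched in isolation (function name, neighborhood size `k`, and ridge weight `reg` are illustrative; the full method optimizes codes, dictionary, and scores jointly):

```python
import numpy as np

def local_ranking_fit(codes, scores, idx, k=5, reg=1e-3):
    """Fit a local linear map w from sparse codes to ranking scores in
    the neighborhood of point `idx` (ridge-regularized least squares),
    so that scores[j] ~ codes[j] @ w for neighbors j."""
    # k nearest neighbors of codes[idx] in the sparse code space
    d = np.linalg.norm(codes - codes[idx], axis=1)
    nbrs = np.argsort(d)[:k]
    S = codes[nbrs]   # (k, p) local codes
    f = scores[nbrs]  # (k,)  local ranking scores
    w = np.linalg.solve(S.T @ S + reg * np.eye(S.shape[1]), S.T @ f)
    return w, nbrs
```

Summing this local approximation error over all points, together with the reconstruction error, the sparsity penalty, and the query supervision, yields the unified objective described above.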
175 - Yan Wang 2014
In this paper, we study almost periodic solutions for semilinear stochastic differential equations driven by Lévy noise with exponential dichotomy property. Under suitable conditions on the coefficients, we obtain the existence and uniqueness of bounded solutions. Furthermore, this unique bounded solution is almost periodic in distribution under slightly stronger conditions. We also give two examples to illustrate our results.
135 - Yan Wang, Gerhard Heinzel, 2014
In this paper, we describe a hybrid-extended Kalman filter algorithm to synchronize the clocks and to precisely determine the inter-spacecraft distances for space-based gravitational wave detectors, such as (e)LISA. According to the simulation, the algorithm has significantly improved the ranging accuracy and synchronized the clocks, making the phase-meter raw measurements qualified for time-delay interferometry algorithms.
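The extended-Kalman-filter backbone of such an algorithm can be sketched as a generic predict/update cycle (a textbook EKF step, not the paper's hybrid filter, which additionally couples continuous spacecraft dynamics with discrete phase-meter measurements; all function and matrix names are illustrative):

```python
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One predict/update cycle of an extended Kalman filter.
    f, h: (nonlinear) dynamics and measurement functions;
    F, H: their Jacobians at the current estimate;
    Q, R: process and measurement noise covariances; z: measurement."""
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the ranging context, the state would collect clock offsets and inter-spacecraft distances, and z the raw phase-meter readouts; each update both tightens the range estimate and shrinks the state covariance.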
We study for the first time a three-dimensional octahedron constellation for a space-based gravitational wave detector, which we call the Octahedral Gravitational Observatory (OGO). With six spacecraft the constellation is able to remove laser frequency noise and acceleration disturbances from the gravitational wave signal without needing LISA-like drag-free control, thereby simplifying the payloads and placing less stringent demands on the thrusters. We generalize LISA's time-delay interferometry to displacement-noise free interferometry (DFI) by deriving a set of generators for those combinations of the data streams that cancel laser and acceleration noise. However, the three-dimensional configuration makes orbit selection complicated. So far, only a halo orbit near the Lagrangian point L1 has been found to be stable enough, and this allows only short arms up to 1400 km. We derive the sensitivity curve of OGO with this arm length, resulting in a peak sensitivity of about $2\times10^{-23}\,\mathrm{Hz}^{-1/2}$ near 100 Hz. We compare this version of OGO to the present generation of ground-based detectors and to some future detectors. We also investigate the scientific potential of such a detector, which includes observing gravitational waves from compact binary coalescences, the stochastic background and pulsars, as well as the possibility to test alternative theories of gravity. We find a mediocre performance level for this short-arm-length detector, between those of initial and advanced ground-based detectors. Thus, actually building a space-based detector of this specific configuration does not seem very efficient. However, if alternative orbits that allow for longer detector arms can be found, a detector with much improved science output could be constructed using the octahedron configuration and the DFI solutions demonstrated in this paper. (abridged)
