97 - Qi Tao, Yong Liu, Yan Jiang 2021
In this paper, we develop an oscillation-free local discontinuous Galerkin (OFLDG) method for solving nonlinear degenerate parabolic equations. Following the idea of our recent work [J. Lu, Y. Liu, and C.-W. Shu, SIAM J. Numer. Anal. 59 (2021), pp. 1299-1324], we add damping terms to the LDG scheme to control the spurious oscillations that arise when solutions have a large gradient. The $L^2$-stability and optimal a priori error estimates for the semi-discrete scheme are established. Numerical experiments demonstrate that the proposed method maintains high-order accuracy and controls the spurious oscillations well.
We propose a unified model for three inter-related tasks: 1) to \textit{separate} individual sound sources from mixed music audio, 2) to \textit{transcribe} each sound source to MIDI notes, and 3) to \textit{synthesize} new pieces based on the timbre of the separated sources. The model is inspired by the fact that when humans listen to music, our minds not only separate the sounds of different instruments but also simultaneously perceive high-level representations such as score and timbre. To mirror this capability computationally, we design a pitch-timbre disentanglement module based on a popular encoder-decoder neural architecture for source separation. The key inductive biases are vector quantization for the pitch representation and pitch-transformation invariance for the timbre representation. In addition, we adopt a query-by-example method to achieve \textit{zero-shot} learning, i.e., the model is capable of performing source separation, transcription, and synthesis for \textit{unseen} instruments. The current design focuses on audio mixtures of two monophonic instruments. Experimental results show that our model outperforms existing multi-task baselines, and that the transcribed score serves as a powerful auxiliary for the separation task.
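As a concrete illustration of the vector-quantization inductive bias, the sketch below maps continuous latent vectors to their nearest codebook entries. The codebook size, dimensionality, and values are hypothetical stand-ins for illustration, not the model's actual configuration.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z        : (n, d) continuous latent vectors
    codebook : (K, d) discrete codes
    returns  : (indices, quantized vectors)
    """
    # Squared Euclidean distance from every latent to every code.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

# Toy example: two codes, each latent close to one of them.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
idx, zq = vector_quantize(z, codebook)
```

In a trained model the codebook is learned jointly with the encoder; the snapping step above is what makes the pitch code discrete.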
Sparse Principal Component Analysis (SPCA) is widely used in data processing and dimension reduction; it uses the lasso to produce modified principal components with sparse loadings for better interpretability. However, sparse PCA does not consider an additional grouping structure in which loadings share similar coefficients (i.e., feature grouping), beyond the special group with all coefficients equal to zero (i.e., feature selection). In this paper, we propose a novel method called Feature Grouping and Sparse Principal Component Analysis (FGSPCA), which allows the loadings to belong to disjoint homogeneous groups, with sparsity as a special case. The proposed FGSPCA is a subspace learning method designed to simultaneously perform grouping pursuit and feature selection, by imposing a non-convex regularization with naturally adjustable sparsity and grouping effects. To solve the resulting non-convex optimization problem, we propose an alternating algorithm that combines difference-of-convex programming, the augmented Lagrangian method, and coordinate descent. Experimental results on real data sets show that the proposed FGSPCA benefits from the grouping effect compared with methods that lack it.
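For intuition about lasso-style sparse loadings, the following sketch computes a first sparse principal component by soft-thresholded power iteration. This is a generic sparse-PCA baseline, not the FGSPCA algorithm itself, which additionally pursues the grouping structure via its non-convex penalty; the data and penalty level are illustrative.

```python
import numpy as np

def sparse_pc(X, lam=0.1, iters=200):
    """First sparse loading vector via soft-thresholded power iteration."""
    S = X.T @ X / len(X)  # sample covariance
    v = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    for _ in range(iters):
        u = S @ v
        # Lasso-style shrinkage: small loadings are set exactly to zero.
        u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        n = np.linalg.norm(u)
        if n == 0:
            break
        v = u / n
    return v

# Toy data: features 0 and 1 carry the signal, the rest are noise.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z, z, 0.05 * rng.normal(size=(500, 3))])
v = sparse_pc(X, lam=0.1)
```

On this toy data the recovered loading vector is supported on the two signal features, with the noise coordinates shrunk to zero.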
Although many techniques have been applied to matrix factorization (MF), they may not fully exploit the feature structure. In this paper, we incorporate the grouping effect into MF and propose a novel method called Robust Matrix Factorization with Grouping effect (GRMF). The grouping effect is a generalization of the sparsity effect: it denoises by clustering similar values around multiple centers instead of only around 0. Compared with existing algorithms, the proposed GRMF can automatically learn the grouping structure and sparsity in MF without prior knowledge, by introducing a naturally adjustable non-convex regularization that achieves simultaneous sparsity and grouping effects. Specifically, GRMF uses an efficient alternating minimization framework to perform MF, in which the original non-convex problem is first converted into a convex problem through Difference-of-Convex (DC) programming and then solved by the Alternating Direction Method of Multipliers (ADMM). In addition, GRMF can be easily extended to Non-negative Matrix Factorization (NMF) settings. Extensive experiments on real-world data sets with outliers and contaminated noise show that GRMF improves performance and robustness compared to five benchmark algorithms.
Estimation of the precision matrix (or inverse covariance matrix) is of great importance in statistical data analysis. However, as the number of parameters scales quadratically with the dimension p, computation becomes very challenging when p is large. In this paper, we propose an adaptive sieving reduction algorithm to generate a solution path for the estimation of precision matrices under the $\ell_1$-penalized D-trace loss, with each subproblem solved by a second-order algorithm. In each iteration of our algorithm, we greatly reduce the number of variables in the problem based on the Karush-Kuhn-Tucker (KKT) conditions and the sparse structure of the precision matrix estimated in the previous iteration. As a result, our algorithm can handle datasets with very high dimensions that may go beyond the capacity of existing methods. Moreover, for the subproblem in each iteration, rather than solving the primal problem directly, we develop a semismooth Newton augmented Lagrangian algorithm with global linear convergence on the dual problem to improve efficiency. Theoretical properties of the proposed algorithm are established; in particular, we show that its convergence rate is asymptotically superlinear. The high efficiency and promising performance of our algorithm are illustrated via extensive simulation studies and real data applications, with comparisons to several state-of-the-art solvers.
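The KKT-based screening idea behind adaptive sieving can be illustrated on a simpler problem. The sketch below shows, for a plain lasso rather than the $\ell_1$-penalized D-trace loss, how currently inactive variables that violate the KKT conditions would be identified and added to the next subproblem; the data and names are illustrative assumptions.

```python
import numpy as np

def kkt_violators(X, y, beta, lam, tol=1e-6):
    """Screening step of an adaptive-sieving scheme for the lasso.

    After solving a subproblem on the current active set, any inactive
    coordinate j must satisfy |grad_j| <= lam at a KKT point; coordinates
    violating this are returned so they can join the next subproblem.
    """
    grad = X.T @ (X @ beta - y) / len(y)  # gradient of the smooth loss
    inactive = beta == 0
    return np.where(inactive & (np.abs(grad) > lam + tol))[0]

# Toy problem with orthogonal columns: y depends on feature 0 only,
# so starting from beta = 0 the screen flags exactly that feature.
X = np.vstack([np.eye(5)] * 20)
y = X[:, 0].copy()
beta = np.zeros(5)
viol = kkt_violators(X, y, beta, lam=0.1)
```

The payoff is that each subproblem is solved over a small active set while the KKT check over all variables certifies optimality for the full problem.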
By definition, reciprocal matrices are tridiagonal $n$-by-$n$ matrices $A$ with constant main diagonal and such that $a_{i,i+1}a_{i+1,i}=1$ for $i=1,\ldots,n-1$. For $n\leq 6$, we establish criteria under which the numerical range generating curves (also called Kippenhahn curves) of such matrices consist of elliptical components only. As a corollary, we also provide a complete description of the higher-rank numerical ranges when the criteria are met.
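For readers unfamiliar with numerical range generating curves, the sketch below traces boundary points of the numerical range $W(A)$ from the supporting lines determined by the top eigenvector of the Hermitian part of $e^{-it}A$; the Kippenhahn curve is the envelope of these supporting lines. The diagonal test matrix is only a sanity check (for a normal matrix $W(A)$ is the convex hull of the eigenvalues), not a reciprocal matrix from the paper.

```python
import numpy as np

def numerical_range_boundary(A, n_angles=360):
    """Boundary points of the numerical range W(A).

    For each angle t, the top eigenvector x of the Hermitian part of
    e^{-it} A gives the boundary point x* A x on the supporting line
    with outward normal direction e^{it}.
    """
    pts = []
    for t in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        B = np.exp(-1j * t) * A
        H = (B + B.conj().T) / 2      # Hermitian part
        w, V = np.linalg.eigh(H)      # eigenvalues in ascending order
        x = V[:, -1]                  # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ A @ x)
    return np.array(pts)

# Sanity check: for diag(0, 1) the boundary points lie on the segment [0, 1].
pts = numerical_range_boundary(np.diag([0.0, 1.0]).astype(complex))
```

For the tridiagonal reciprocal matrices of the paper, plotting these points would reveal whether the curve decomposes into the elliptical components the criteria describe.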
Consider the transmission eigenvalue problem \[ (\Delta+k^2\mathbf{n}^2) w=0,\quad (\Delta+k^2)v=0 \ \mbox{in}\ \Omega;\qquad w=v,\ \partial_\nu w=\partial_\nu v=0 \ \mbox{on}\ \partial\Omega. \] It is shown in [12] that there exists a sequence of eigenfunctions $(w_m, v_m)_{m\in\mathbb{N}}$ associated with $k_m\rightarrow \infty$ such that either $\{w_m\}_{m\in\mathbb{N}}$ or $\{v_m\}_{m\in\mathbb{N}}$ are surface-localized, depending on whether $\mathbf{n}>1$ or $0<\mathbf{n}<1$. In this paper, we discover a new type of surface-localized transmission eigenmodes by constructing a sequence of transmission eigenfunctions $(w_m, v_m)_{m\in\mathbb{N}}$ associated with $k_m\rightarrow \infty$ such that both $\{w_m\}_{m\in\mathbb{N}}$ and $\{v_m\}_{m\in\mathbb{N}}$ are surface-localized, regardless of whether $\mathbf{n}>1$ or $0<\mathbf{n}<1$. Although our study is confined to the radial geometry, the construction is subtle and technical.
Learning-based representation has become key to the success of many computer vision systems. While many 3D representations have been proposed, how to represent a dynamically changing 3D object remains an open problem. In this paper, we introduce a compositional representation for 4D captures, i.e., a 3D object deforming over a temporal span, that disentangles shape, initial state, and motion. Each component is represented by a latent code produced by a trained encoder. To model the motion, a neural Ordinary Differential Equation (ODE) is trained to update the initial state conditioned on the learned motion code, and a decoder takes the shape code and the updated state code to reconstruct the 3D model at each time stamp. We further propose an Identity Exchange Training (IET) strategy to encourage the network to effectively decouple the components. Extensive experiments demonstrate that the proposed method outperforms existing state-of-the-art deep learning based methods on 4D reconstruction and significantly improves on various tasks, including motion transfer and completion.
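The role of the neural ODE can be pictured with a plain Euler integrator: a state code is rolled forward under dynamics conditioned on a motion code, and a decoder (omitted here) would consume each updated state together with the shape code. The dynamics function `f` below is a hypothetical placeholder for the trained network, and the code dimensions are arbitrary.

```python
import numpy as np

def evolve_state(s0, motion_code, f, t_steps, dt=0.1):
    """Roll an initial-state code forward under ds/dt = f(s, m).

    Plain Euler integration stands in for the neural ODE solver;
    `f` is conditioned on the fixed motion code m at every step.
    """
    states = [s0]
    s = s0
    for _ in range(t_steps):
        s = s + dt * f(s, motion_code)
        states.append(s)
    return np.stack(states)

# Toy dynamics: the motion code sets a constant drift direction.
f = lambda s, m: m
traj = evolve_state(np.zeros(3), np.array([1.0, 0.0, -1.0]), f, t_steps=10)
```

Swapping the motion code while keeping the shape and initial-state codes fixed is exactly the kind of recombination that motion transfer exploits.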
Surface terminations have dramatic impacts on the physicochemical properties of 2D MXenes. Conventional etching methods usually introduce -F surface terminations or metallic impurities into MXene. Here, we present a new molten-salt-assisted electrochemical etching (MS-E-etching) method to synthesize fluorine-free Ti3C2Tx without metallic impurities. Because electrons serve as the reaction agent, the cathode reduction and anode etching can be spatially isolated, so no metallic impurities remain in the Ti3C2Tx product. Moreover, the Tx surface terminations can be directly modified from -Cl to -O and/or -S in a one-pot process. The obtained -O terminated MXenes exhibited capacitances of 225 and 205 F/g at 1 and 10 A/g, confirming the high reversibility of the redox reactions. This one-pot process greatly shortens the modification procedure and enriches the surface functional terminations. More importantly, the salt recovered after synthesis can be recycled and reused, making this a green, sustainable method.
70 - Li Zeng, Yan Jiang, Weixin Lu 2020
Subgraph isomorphism is a well-known NP-hard problem that is widely used in many applications, such as social network analysis and knowledge graph querying. Its performance is often limited by its inherent hardness. Several insightful works have appeared since 2012, mainly optimizing pruning rules and matching orders to accelerate enumerating all isomorphic subgraphs. Nevertheless, their correctness and performance have not been well studied. First, different languages are used in implementation, with different compilation flags. Second, experiments are not run on the same platform or the same datasets. Third, some ideas from different works are even complementary. Last but not least, errors arise when applying some of the algorithms. In this paper, we address these problems by re-implementing seven representative subgraph isomorphism algorithms as well as their improvements.
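As a reference point for what these algorithms optimize, a minimal backtracking matcher for subgraph isomorphism looks as follows; the surveyed works differ precisely in the pruning rules and matching orders layered on this skeleton. The adjacency-set graph encoding is an assumption for illustration.

```python
def subgraph_isomorphisms(pattern, target):
    """Enumerate all (non-induced) embeddings of `pattern` into `target`.

    Graphs are dicts mapping a node to its set of neighbours. A baseline
    backtracking matcher only: no candidate filtering, no matching-order
    heuristics.
    """
    p_nodes = list(pattern)
    results = []

    def extend(mapping):
        if len(mapping) == len(p_nodes):
            results.append(dict(mapping))
            return
        u = p_nodes[len(mapping)]          # next pattern node to map
        for v in target:
            if v in mapping.values():      # keep the mapping injective
                continue
            # Every already-mapped pattern neighbour of u must map
            # to a target neighbour of v.
            if all(mapping[w] in target[v] for w in pattern[u] if w in mapping):
                mapping[u] = v
                extend(mapping)
                del mapping[u]             # backtrack

    extend({})
    return results

# A triangle embeds into a 4-clique in 4 * 3 * 2 = 24 ways.
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
k4 = {a: {b for b in range(4) if b != a} for a in range(4)}
matches = subgraph_isomorphisms(tri, k4)
```

Everything the surveyed papers compete on, candidate filtering, matching order, failure-set pruning, amounts to shrinking the loop over `target` nodes in `extend`.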