Imprecise vowel articulation can be observed in people with Parkinson's disease (PD). Acoustic features measuring vowel articulation have been demonstrated to be effective indicators for the assessment of PD. The standard clinical vowel articulation features, vowel working space area (VSA), vowel articulation index (VAI) and formant centralization ratio (FCR), are derived from the first two formants of the three corner vowels /a/, /i/ and /u/. Conventionally, manual annotation of the corner vowels in the speech data is required before vowel articulation can be measured, a time-consuming process. The present work aims to reduce human effort in the clinical analysis of PD speech by proposing an automatic pipeline for vowel articulation assessment. The method is based on automatic corner vowel detection using a language-universal phoneme recognizer, followed by statistical analysis of the formant data. The approach removes the need for prior knowledge of the speaking content and the language in question. Experimental results on a Finnish PD speech corpus demonstrate the efficacy and reliability of the proposed automatic method in deriving VAI, VSA, FCR and F2i/F2u (the ratio of the second formants of /i/ and /u/). The automatically computed parameters are shown to be highly correlated with features computed from manual annotations of the corner vowels. In addition, automatically and manually computed vowel articulation features have comparable correlations with expert ratings of speech intelligibility, voice impairment and overall severity of the communication disorder. The language independence of the proposed approach is further validated on a Spanish PD database, PC-GITA, as well as on the TORGO corpus of English dysarthric speech.
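The features named in this abstract have standard closed-form definitions in terms of the corner-vowel formants. A minimal sketch (formant values in Hz; the function and argument names are ours, not from the paper):

```python
def vowel_articulation_features(f1a, f2a, f1i, f2i, f1u, f2u):
    """Compute standard vowel articulation features from the first two
    formants (Hz) of the corner vowels /a/, /i/ and /u/."""
    # Triangular vowel space area: shoelace formula over the three corner
    # vowels plotted in the (F1, F2) plane.
    vsa = 0.5 * abs(f1i * (f2a - f2u) + f1a * (f2u - f2i) + f1u * (f2i - f2a))
    # Vowel articulation index: "peripheral" formants over "central" ones.
    vai = (f2i + f1a) / (f1i + f1u + f2a + f2u)
    # Formant centralization ratio: the reciprocal of VAI.
    fcr = (f2u + f2a + f1i + f1u) / (f2i + f1a)
    # Second-formant ratio for /i/ and /u/.
    f2_ratio = f2i / f2u
    return {"VSA": vsa, "VAI": vai, "FCR": fcr, "F2i/F2u": f2_ratio}

# Illustrative (textbook-style) formant values for an adult male speaker.
feats = vowel_articulation_features(730, 1090, 270, 2290, 300, 870)
```

Centralized (imprecise) articulation shrinks VSA and VAI and raises FCR, which is why these features track PD severity.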
Federated Learning (FL) has become an active and promising distributed machine learning paradigm. Recent studies clearly show that, as a result of statistical heterogeneity, the performance of popular FL methods (e.g., FedAvg) deteriorates dramatically due to the client drift caused by local updates. This paper proposes a novel Federated Learning algorithm (called IGFL), which leverages both Individual and Group behaviors to mimic the overall distribution, thereby improving the ability to deal with heterogeneity. Unlike existing FL methods, IGFL can be applied to both client and server optimization. As a by-product, we propose a new attention-based federated learning mechanism for the server-side optimization of IGFL. To the best of our knowledge, this is the first work to incorporate attention mechanisms into federated optimization. We conduct extensive experiments and show that IGFL can significantly improve the performance of existing federated learning methods. In particular, when the data distributions across individuals are diverse, IGFL improves classification accuracy by about 13% over prior baselines.
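The client drift that motivates IGFL can be reproduced on a toy problem. The sketch below runs plain FedAvg (not IGFL, whose update rules are not given in the abstract) on two heterogeneous quadratic clients; all names and constants are illustrative:

```python
def fedavg_round(w, optima, curv, lr=0.1, local_steps=20):
    """One FedAvg round on toy client objectives f_k(w) = 0.5*a_k*(w - c_k)^2,
    so client k's local optimum is c_k and its curvature is a_k."""
    client_models = []
    for c_k, a_k in zip(optima, curv):
        wk = w
        for _ in range(local_steps):
            wk -= lr * a_k * (wk - c_k)        # local gradient step
        client_models.append(wk)
    return sum(client_models) / len(client_models)  # server-side averaging

# Two statistically heterogeneous clients: different optima and curvatures.
optima, curv = [1.0, -1.0], [1.0, 0.1]
w = 0.0
for _ in range(200):
    w = fedavg_round(w, optima, curv)

# True minimizer of the summed objective (curvature-weighted mean of optima).
global_opt = sum(a * c for a, c in zip(curv, optima)) / sum(curv)
```

With multiple local steps, FedAvg's fixed point is biased toward the client optima rather than the global minimizer; that gap is exactly the "client drift" the abstract refers to.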
Mining physics and biology for ways to accelerate algorithms for non-deterministic polynomial-time (NP) hard problems has inspired a great number of special-purpose machine models. The Ising machine has become an efficient solver for various combinatorial optimization problems. As a computing accelerator, large-scale photonic spatial Ising machines have great advantages and potential owing to their excellent scalability and compact systems. However, a fundamental limitation of current photonic spatial Ising machines is the flexibility with which problems can be configured on the accelerator. Arbitrary spin interactions are highly desirable for solving various NP-hard problems. Moreover, the absence of an external magnetic field in previously proposed photonic Ising machines further narrows the freedom to map optimization applications. In this paper, we propose a novel quadrature photonic spatial Ising machine that breaks through this limitation of photonic Ising accelerators by synchronous phase manipulation in two and three sections. A Max-cut problem with graph order 100 and density from 0.5 to 1 is solved experimentally in about 100 iterations. We derive, and verify in simulation, the solution of Max-cut problems with more than 1600 nodes and the system's tolerance to light misalignment. Moreover, the vertex cover problem, modeled as an Ising model with an external magnetic field, has been successfully implemented and reaches the optimal solution. Our work suggests flexible problem solving with large-scale photonic spatial Ising machines.
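The Ising formulation that such machines minimize, and the Max-cut mapping used above, can be stated compactly. A minimal brute-force sketch (unit edge weights, zero external field; function names are ours):

```python
import itertools

def ising_energy(J, h, s):
    """H(s) = sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i, spins s_i in {-1,+1}.
    The field term h is what vertex-cover-style problems additionally need."""
    n = len(s)
    pair = sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    field = sum(h[i] * s[i] for i in range(n))
    return pair - field

def max_cut_brute_force(edges, n):
    """Map unit-weight Max-cut to Ising couplings J_ij = +1 (no field) and
    minimize H exhaustively; the cut size is (|E| - H_min) / 2, because each
    cut edge contributes -1 and each uncut edge +1 to H."""
    J = [[0] * n for _ in range(n)]
    for i, j in edges:
        J[min(i, j)][max(i, j)] += 1
    h = [0] * n
    h_min = min(ising_energy(J, h, s)
                for s in itertools.product([-1, 1], repeat=n))
    return (len(edges) - h_min) // 2
```

The photonic machine replaces this exponential search with analog optical dynamics; the exhaustive loop here is only to make the mapping concrete on graphs small enough to verify by hand.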
We consider the robust filtering problem for a state-space model with outliers in correlated measurements. We propose a new robust filtering framework that further improves the robustness of conventional robust filters. Specifically, the measurement fitting error is processed separately during the reweighting procedure, which differs from existing solutions, where a jointly processed scheme is used. Simulation results reveal that, under the same setup, the proposed method outperforms the existing robust filter when the outlier-contaminated measurements are correlated, while it matches the performance of the existing filter for uncorrelated measurements, since the two types of robust filters are equivalent in that case.
In this paper, we study efficient differentially private alternating direction methods of multipliers (ADMM) via gradient perturbation for many machine learning problems. For smooth convex loss functions with (non-)smooth regularization, we propose the first differentially private ADMM (DP-ADMM) algorithm with a guarantee of $(\epsilon,\delta)$-differential privacy ($(\epsilon,\delta)$-DP). For the theoretical analysis, we use the Gaussian mechanism and the conversion relationship between Rényi Differential Privacy (RDP) and DP to perform a comprehensive privacy analysis of our algorithm. We then establish a new criterion to prove the convergence of the proposed algorithms, including DP-ADMM, and give a utility analysis of DP-ADMM. Moreover, we propose an accelerated DP-ADMM (DP-AccADMM) using Nesterov's acceleration technique. Finally, we conduct numerical experiments on many real-world datasets to show the privacy-utility trade-off of the two proposed algorithms; the comparative analysis shows that DP-AccADMM converges faster and achieves better utility than DP-ADMM when the privacy budget $\epsilon$ is larger than a threshold.
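Gradient perturbation with the Gaussian mechanism, which the privacy analysis above relies on, typically means clipping each gradient's L2 norm and adding calibrated Gaussian noise. A generic sketch, not the paper's exact DP-ADMM step (`clip_norm` and `sigma` are illustrative parameters whose calibration to a given $(\epsilon,\delta)$ depends on the RDP accounting):

```python
import numpy as np

def gaussian_mechanism(grad, clip_norm, sigma, rng):
    """Clip grad to L2 norm clip_norm (bounding sensitivity), then add
    zero-mean Gaussian noise with std sigma * clip_norm per coordinate."""
    norm = np.linalg.norm(grad)
    g = grad * min(1.0, clip_norm / norm) if norm > 0 else grad
    return g + rng.normal(0.0, sigma * clip_norm, size=g.shape)
```

Clipping bounds each example's influence on the released gradient; the noise scale relative to that bound is what the RDP-to-DP conversion turns into an $(\epsilon,\delta)$ guarantee.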
Higher-order topological insulators are a new class of topological phases of matter, originally conceived for electrons in solids. It has been suggested that the $\mathbb{Z}_N$ Berry phase (a Berry phase quantized into multiples of $2\pi/N$) is a useful tool for characterizing symmetry-protected topological states, but experimental evidence has remained elusive. Recently, topolectrical circuits have emerged as a simple yet very powerful platform for studying topological physics that is challenging to realize in condensed matter systems. Here, we present the first experimental observation of second-order corner states characterized by a $\mathbb{Z}_3$ Berry phase in topolectrical circuits. We demonstrate theoretically and experimentally that the localized second-order topological states are protected by a generalized chiral symmetry of tripartite lattices and are pinned to zero energy. By introducing extra capacitors within sublattices of the circuit, we examine the robustness of the zero modes against both chiral-symmetry-conserving and chiral-symmetry-breaking disturbances. Our work paves the way for testing exotic topological band theory in electrical-circuit experiments.
Recently, research on accelerated stochastic gradient descent methods (e.g., SVRG) has made exciting progress (e.g., linear convergence for strongly convex problems). However, the best-known methods (e.g., Katyusha) require at least two auxiliary variables and two momentum parameters. In this paper, we propose a fast stochastic variance-reduced gradient (FSVRG) method, in which we design a novel update rule with Nesterov's momentum and incorporate a growing epoch size. FSVRG has only one auxiliary variable and one momentum weight, and is therefore much simpler, with much lower per-iteration complexity. We prove that FSVRG achieves linear convergence for strongly convex problems and the optimal $\mathcal{O}(1/T^2)$ convergence rate for non-strongly convex problems, where $T$ is the number of outer iterations. We also extend FSVRG to directly solve problems with non-smooth component functions, such as SVM. Finally, we empirically study the performance of FSVRG on various machine learning problems, such as logistic regression, ridge regression, Lasso and SVM. Our results show that FSVRG outperforms state-of-the-art stochastic methods, including Katyusha.
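An SVRG-style loop combining the three ingredients mentioned above, variance-reduced gradients, a single momentum variable, and a growing epoch size, can be sketched as follows. The exact FSVRG update rule is in the paper; this variant and all its constants are only illustrative:

```python
import numpy as np

def fsvrg_like(grad_fi, full_grad, w0, n, lr=0.1, theta=0.9, epochs=5, m0=10):
    """Illustrative SVRG-style method: one auxiliary variable y, one
    momentum weight theta, and an epoch size m that grows geometrically."""
    rng = np.random.default_rng(0)
    w = w0.copy()
    y = w0.copy()
    m = m0
    for _ in range(epochs):
        w_tilde = w.copy()
        mu = full_grad(w_tilde)                 # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced gradient: unbiased estimate of the full gradient.
            g = grad_fi(y, i) - grad_fi(w_tilde, i) + mu
            w_new = y - lr * g
            y = w_new + theta * (w_new - w)      # single momentum extrapolation
            w = w_new
        m = int(m * 1.5)                         # growing epoch size
    return w

# Toy problem: f_i(w) = 0.5 * (w - b_i)^2, minimized at mean(b).
b = np.array([1.0, 2.0, 3.0, 6.0])
grad_fi = lambda w, i: w - b[i]
full_grad = lambda w: w - b.mean()
w = fsvrg_like(grad_fi, full_grad, np.array([0.0]), len(b))
```

On this toy quadratic the control variate cancels the stochastic noise exactly, so the iterates converge to the minimizer; in general the variance shrinks as the snapshot approaches the optimum, which is what yields the linear rate in the strongly convex case.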