
An Enhancement Algorithm of Cyclic Adaptive Fourier Decomposition

Published by: Jianzhong Wang
Publication date: 2018
Paper language: English





The paper investigates the complex gradient descent method (CGD) for finding the best rational approximation of a given order to a function in the Hardy space on the unit disk, which is equivalent to finding the best Blaschke form with free poles. The adaptive Fourier decomposition (AFD) and cyclic AFD methods in the literature are based on a grid-search technique, so their precision is limited by the grid spacing. The proposed method employs a fast search algorithm to find an initial value for CGD and then locates the target poles by gradient-descent optimization; it therefore reaches higher precision at lower computational cost. Its validity and effectiveness are confirmed by several examples.
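The two-stage scheme (coarse search for an initial pole, then gradient refinement past the grid resolution) can be sketched in the simplest one-pole case. Everything below is an illustrative assumption, not the paper's algorithm: a single pole instead of a cyclic sweep over several, a numerical central-difference gradient instead of the analytic complex gradient, and a synthetic target whose best pole is known.

```python
import numpy as np

# Illustrative one-pole sketch of the two-stage scheme: a coarse grid
# search supplies the initial pole, then gradient descent refines it past
# the grid resolution. The target f, the single-pole restriction, and the
# central-difference gradient are all assumptions for illustration.

N = 512
z = np.exp(2j * np.pi * np.arange(N) / N)       # samples on the unit circle

def szego(a):
    """Normalized Szego kernel e_a on the circle samples."""
    return np.sqrt(1 - abs(a) ** 2) / (1 - np.conj(a) * z)

def inner(f, g):
    """Discrete Hardy-space inner product <f, g>."""
    return np.mean(f * np.conj(g))

b = 0.47                                        # hidden pole parameter of the target
f = 1.0 / (1 - b * z)                           # target: an (unnormalized) Szego kernel

def residual(a):
    """Energy of f not captured by the one-pole Blaschke form at a."""
    return (inner(f, f) - abs(inner(f, szego(a))) ** 2).real

# Stage 1: grid search with spacing 0.1 -- the precision floor of
# purely grid-based AFD.
grid = [x + 1j * y for x in np.arange(-0.9, 1.0, 0.1)
                   for y in np.arange(-0.9, 1.0, 0.1) if x * x + y * y < 1]
a = min(grid, key=residual)

# Stage 2: gradient descent on (Re a, Im a) via central differences.
h, lr = 1e-6, 0.1
for _ in range(200):
    gx = (residual(a + h) - residual(a - h)) / (2 * h)
    gy = (residual(a + 1j * h) - residual(a - 1j * h)) / (2 * h)
    a = a - lr * (gx + 1j * gy)

print(abs(a - b))   # far below the 0.1 grid spacing
```

The grid stage alone can do no better than half the grid spacing; the descent stage recovers the pole to many more digits at the cost of a few hundred cheap gradient evaluations, which is the trade-off the abstract describes.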


Read also

In the slice Hardy space over the unit ball of quaternions, we introduce the slice hyperbolic backward shift operators $\mathcal{S}_a$ based on the identity $$f = e_a\langle f, e_a\rangle + B_a * \mathcal{S}_a f,$$ where $e_a$ denotes the slice normalized Szegő kernel and $B_a$ the slice Möbius transformation. By iterating the identity above, the greedy algorithm gives rise to the slice adaptive Fourier decomposition via the maximum selection principle. This leads to the slice Takenaka-Malmquist orthonormal system.
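In the classical complex Hardy space, the splitting identity above can be checked numerically. The sketch below is a simplified analog under stated assumptions (the pointwise product in place of the slice $*$-product, and a hand-picked rational test function); the reconstruction is exact by algebra, so the substantive check is that the backward-shift part $\mathcal{S}_a f$ has no negative Fourier modes, i.e. stays in the Hardy space.

```python
import numpy as np

# Complex-case check of the splitting identity f = e_a <f, e_a> + B_a (S_a f).
# The quaternionic (slice) version replaces the pointwise product by the
# slice *-product; the test function f here is an illustrative assumption.

N = 256
z = np.exp(2j * np.pi * np.arange(N) / N)       # samples on the unit circle

a = 0.3 + 0.2j
e_a = np.sqrt(1 - abs(a) ** 2) / (1 - np.conj(a) * z)   # normalized Szego kernel
B_a = (z - a) / (1 - np.conj(a) * z)                    # Mobius (Blaschke) factor

f = 1.0 / (1 - 0.6 * z) + z ** 2                # a rational Hardy-space function
coef = np.mean(f * np.conj(e_a))                # discrete <f, e_a>
S_a_f = (f - e_a * coef) / B_a                  # backward-shift component

# f - e_a <f, e_a> vanishes at z = a, so the division by B_a leaves an
# analytic function: its negative Fourier modes should be ~ 0.
modes = np.fft.fft(S_a_f) / N
print(np.max(np.abs(modes[N // 2:])))
```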
Let $E$ be a continuum in the closed unit disk $|z| \le 1$ of the complex $z$-plane which divides the open disk $|z| < 1$ into $n \ge 2$ pairwise non-intersecting simply connected domains $D_k$, such that each domain $D_k$ contains some point $a_k$ on a prescribed circle $|z| = \rho$, $0 < \rho < 1$, $k = 1, \ldots, n$. It is shown that for some increasing function $\Psi$, independent of $E$ and the choice of the points $a_k$, the mean value of the harmonic measures $$ \Psi^{-1}\Big[ \frac{1}{n} \sum_{k=1}^{n} \Psi(\omega(a_k, E, D_k))\Big] $$ is greater than or equal to the harmonic measure $\omega(\rho, E^*, D^*)$, where $E^* = \{z : z^n \in [-1, 0]\}$ and $D^* = \{z : |z| < 1, |\arg z| < \pi/n\}$. This implies, for instance, a solution to a problem of R.W. Barnard, L. Cole, and A. Yu. Solynin concerning a lower estimate of the quantity $\inf_{E} \max_{k=1,\ldots,n} \omega(a_k, E, D_k)$ for arbitrary points of the circle $|z| = \rho$. These authors stated this hypothesis in the particular case when the points are equally distributed on the circle $|z| = \rho$.
An abstract theory of Fourier series in locally convex topological vector spaces is developed. An analog of Fejér's theorem is proved for these series. The theory is applied to distributional solutions of the Cauchy-Riemann equations to recover basic results of complex analysis. Some classical results of function theory are also shown to be consequences of the series expansion.
Automatic algorithms attempt to provide approximate solutions that differ from exact solutions by no more than a user-specified error tolerance. This paper describes an automatic, adaptive algorithm for approximating the solution to a general linear problem on Hilbert spaces. The algorithm employs continuous linear functionals of the input function, specifically Fourier coefficients. We assume that the Fourier coefficients of the solution decay sufficiently fast, but do not require the decay rate to be known a priori. We also assume that the Fourier coefficients decay steadily, although not necessarily monotonically. Under these assumptions, our adaptive algorithm is shown to produce an approximate solution satisfying the desired error tolerance, without prior knowledge of the norm of the function to be approximated. Moreover, the computational cost of our algorithm is shown to be essentially no worse than that of the optimal algorithm. We provide a numerical experiment to illustrate our algorithm.
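A toy version of such an adaptive stopping rule, under the steady-decay assumption: estimate the decay rate from the last few observed coefficient magnitudes, bound the unseen tail geometrically, and stop once the bound falls below the user's tolerance. The constants, the stopping rule, and the geometric test sequence below are illustrative, not the paper's.

```python
# Toy adaptive algorithm: sum coefficient magnitudes until a data-driven
# geometric tail bound drops below the tolerance. No a-priori knowledge of
# the decay rate or of the norm of the target is used.

def adaptive_sum(coef, tol, block=4):
    """Sum coef(n) for n = 1, 2, ... until the estimated tail < tol.

    coef  -- function returning the n-th coefficient magnitude
    tol   -- user-specified error tolerance
    block -- how many recent coefficients feed the decay estimate
    """
    total, n, recent = 0.0, 1, []
    while True:
        c = coef(n)
        total += c
        recent = (recent + [c])[-block:]
        if len(recent) == block and recent[0] > 0:
            r = (recent[-1] / recent[0]) ** (1 / (block - 1))  # observed decay rate
            if r < 1:
                tail = recent[-1] * r / (1 - r)   # geometric tail bound
                if tail < tol:
                    return total, n
        n += 1

coef = lambda n: 0.5 ** n          # steadily (geometrically) decaying coefficients
approx, used = adaptive_sum(coef, 1e-6)
exact = 1.0                        # sum_{n >= 1} 0.5**n
print(exact - approx, used)
```

For this geometric example the bound is tight, so the algorithm stops at the first $n$ with $0.5^n$ below the tolerance; for merely steady (non-monotone) decay the paper's analysis inflates the bound by a safety constant instead.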
Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image via the statistics of a batch of images, and hence BN introduces noise into the gradient of the training loss. Previous works indicate that this noise is important for the optimization and generalization of deep neural networks, but too much noise harms network performance. In our paper, we offer a new point of view: the self-attention mechanism can help regulate the noise by enhancing instance-specific information, yielding a better regularization effect. We therefore propose an attention-based BN called Instance Enhancement Batch Normalization (IEBN), which recalibrates the information of each channel by a simple linear transformation. IEBN has a good capacity for regulating noise and stabilizing network training to improve generalization, even in the presence of two kinds of noise attack during training. Finally, IEBN outperforms BN with only a light parameter increment in image classification tasks across different network structures and benchmark datasets.
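The recalibration idea can be sketched in NumPy: standard batch normalization followed by a per-channel gate computed from each instance's own statistics by a simple linear transformation. The parameter names and the exact form of the gate are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

# Sketch of instance-enhanced BN: batch statistics normalize, then each
# sample's own channel means -- linearly transformed and squashed -- rescale
# that sample's channels. Parameter names (a, b) are assumptions.

def iebn(x, gamma, beta, a, b, eps=1e-5):
    """x: (N, C, H, W) batch of feature maps."""
    # Batch normalization: statistics over the whole batch, per channel.
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    xhat = (x - mu) / np.sqrt(var + eps)
    # Instance enhancement: per-sample channel means gate the channels.
    inst = xhat.mean(axis=(2, 3), keepdims=True)        # (N, C, 1, 1)
    attn = 1.0 / (1.0 + np.exp(-(a * inst + b)))        # sigmoid gate
    return attn * (gamma * xhat + beta)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 4, 4))
gamma = np.ones((1, 3, 1, 1)); beta = np.zeros((1, 3, 1, 1))
a = np.ones((1, 3, 1, 1));     b = np.zeros((1, 3, 1, 1))
y = iebn(x, gamma, beta, a, b)
print(y.shape)   # (2, 3, 4, 4)
```

Note the light parameter cost: only two extra scalars per channel (`a`, `b`) on top of BN's `gamma` and `beta`, which matches the abstract's claim of a small parameter increment.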