
Optimization of drug controlled release from multi-laminated devices based on the modified Tikhonov regularization method

Posted by Xinming Zhang
Publication date: 2019
Language: English





From the viewpoint of inverse problems, the optimization of drug release from multi-laminated drug controlled release devices is treated as an initial-value inverse problem for the diffusion equation. In view of the ill-posedness of this inverse problem, a modified Tikhonov regularization method is proposed by constructing a new regularizing filter function based on the singular value theory of compact operators. The convergence and the optimal asymptotic order of the regularized solution are established. The classical and modified Tikhonov regularization methods are then applied to the optimization of the initial drug concentration distribution. For three desired release profiles (constant release, linearly decreasing release, and linear increase followed by constant release), the modified Tikhonov regularization method yields better results. The numerical results demonstrate that the modified Tikhonov regularization method not only attains the optimal asymptotic order but is also well suited to the optimization and design of multi-laminated drug controlled release devices.
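The filter-function framework the abstract refers to can be sketched with the classical Tikhonov filter applied through the SVD of the discretized operator; the paper's modified filter function is not given in the abstract and is not reproduced here, and the Hilbert-type test matrix below is purely illustrative of the ill-posedness.

```python
import numpy as np

# Classical Tikhonov regularization written as SVD filter factors.
# A modified method replaces the filter f below with a different
# regularizing filter function; that filter is not reproduced here.

def tikhonov_svd(A, b, alpha):
    """x_alpha = sum_i f_i * (u_i^T b / s_i) v_i with the classical
    filter f_i = s_i^2 / (s_i^2 + alpha)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + alpha)          # damps components with small s_i
    return Vt.T @ (f * (U.T @ b) / s)

# Severely ill-conditioned illustrative problem (Hilbert-type matrix).
n = 20
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)

x_naive = np.linalg.solve(A, b)         # noise is amplified enormously
x_reg = tikhonov_svd(A, b, alpha=1e-8)  # filtered, stable reconstruction
```

Even tiny data noise destroys the unfiltered solve, while the filtered solution stays close to the true coefficient vector; the quality of a modified filter is measured exactly as in the paper, by its convergence and asymptotic order as the noise level tends to zero.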




Read also

Jiantang Zhang (2021)
With the rapid growth of data, extracting effective information from data is one of the most fundamental problems. In this paper, based on Tikhonov regularization, we propose an effective method for reconstructing a function and its derivative from scattered data with random noise. Since the noise level is not assumed to be small, we use the amount of data to reduce the random error and a relatively small number of knots for interpolation. An indicator function for our algorithm is constructed; it indicates where the numerical results are reliable and where they may not be. The corresponding error estimates are obtained. We show how to choose the number of interpolation knots in the reconstruction process so as to balance the random errors and the interpolation errors. Numerical examples show the effectiveness and speed of our method. It should be remarked that the algorithm in this paper can be used for online data.
A main drawback of classical Tikhonov regularization is that the parameters required to apply theoretical results, e.g., the smoothness of the sought-after solution and the noise level, are often unknown in practice. In this paper we investigate in new detail the residuals in Tikhonov regularization viewed as functions of the regularization parameter. We show that the residual carries, with some restrictions, the information on both the unknown solution and the noise level. By calculating approximate solutions for a large range of regularization parameters, we can extract both parameters from the residual given only one set of noisy data and the forward operator. The smoothness of the residual allows us to revisit parameter choice rules and relate a-priori, a-posteriori, and heuristic rules in a novel way that blurs the lines of the classical division of parameter choice rules. All results are accompanied by numerical experiments.
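The residual-as-a-function-of-alpha idea can be illustrated in a few lines: sweep the regularization parameter, record the residual norm of each Tikhonov solution, and read off the parameter where the residual reaches the noise level (the discrepancy principle). The integration-operator test problem and all names are illustrative, not taken from the paper.

```python
import numpy as np

# Residual norm ||A x_alpha - b|| swept over the regularization parameter,
# computed cheaply in the SVD basis: r(alpha) = ||(1 - f(alpha)) U^T b||.

def residual_curve(A, b, alphas):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res = []
    for a in alphas:
        f = s**2 / (s**2 + a)
        res.append(np.linalg.norm((1.0 - f) * beta))
    return np.array(res)

n = 30
A = np.tril(np.ones((n, n))) / n             # discretized integration operator
x_true = np.sin(np.linspace(0, np.pi, n))
noise = 1e-3 * np.random.default_rng(1).standard_normal(n)
b = A @ x_true + noise

alphas = np.logspace(-12, 0, 60)
res = residual_curve(A, b, alphas)           # monotonically increasing in alpha

# Discrepancy principle: smallest alpha whose residual reaches the noise level.
delta = np.linalg.norm(noise)
alpha_dp = alphas[np.argmax(res >= delta)]
```

The monotone residual curve crosses the noise level exactly once, which is why a single sweep over alpha lets one recover a usable parameter (and, per the paper, more information about the solution and noise) from one noisy data set.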
Congpei An, Hao-Ning Wu (2020)
This paper is concerned with the introduction of Tikhonov regularization into the least squares approximation scheme on $[-1,1]$ by orthonormal polynomials, in order to handle noisy data. This scheme includes interpolation and hyperinterpolation as special cases. With Gauss quadrature points employed as nodes, the coefficients of the approximation polynomial with respect to a given basis are derived in an entry-wise closed form. Under interpolatory conditions, the solution to the regularized approximation problem is rewritten in the form of two kinds of barycentric interpolation formulae, by introducing only a multiplicative correction factor into both classical barycentric formulae. An $L_2$ error bound and a uniform error bound are derived, showing that Tikhonov regularization is able to reduce the operator norm (Lebesgue constant) and the error term related to the level of noise, both by multiplying a correction factor which is less than one. Numerical examples show the benefits of Tikhonov regularization when the data is noisy or the data size is relatively small.
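A minimal sketch of the scheme, assuming the simplest penalty: with orthonormal Legendre polynomials and Gauss nodes, discrete orthonormality makes the regularized coefficients a uniform shrinkage of the plain quadrature (hyperinterpolation) coefficients. The factor $1/(1+\alpha)$ below illustrates the entry-wise closed form under that assumption; it is not the paper's exact formulation.

```python
import numpy as np
from numpy.polynomial import legendre

# Tikhonov-regularized least squares by orthonormal Legendre polynomials
# on [-1,1] with Gauss quadrature nodes (illustrative sketch).

def orthonormal_legendre(x, degree):
    # p_k = P_k * sqrt((2k+1)/2), so that the p_k are L2-orthonormal
    return np.stack([legendre.legval(x, [0] * k + [1]) * np.sqrt((2 * k + 1) / 2)
                     for k in range(degree + 1)], axis=1)

def reg_coeffs(f_vals, x, w, degree, alpha):
    V = orthonormal_legendre(x, degree)
    a = V.T @ (w * f_vals)        # hyperinterpolation coefficients
    return a / (1.0 + alpha)      # Tikhonov shrinks each coefficient equally

x, w = legendre.leggauss(40)      # 40 Gauss-Legendre nodes and weights
f_vals = np.exp(x)

c0 = reg_coeffs(f_vals, x, w, 10, alpha=0.0)   # plain hyperinterpolation
c1 = reg_coeffs(f_vals, x, w, 10, alpha=0.1)   # regularized, smaller norm

p0 = orthonormal_legendre(x, 10) @ c0          # evaluate at the nodes
err0 = np.max(np.abs(p0 - f_vals))             # tiny truncation error
```

With exact data the unregularized scheme reproduces the smooth target to near machine precision; with noisy data the shrinkage factor (less than one) is what reduces both the Lebesgue constant and the noise term in the abstract's error bounds.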
Most of the literature on the solution of linear ill-posed operator equations, or their discretizations, focuses only on the infinite-dimensional setting or only on the solution of the algebraic linear system of equations obtained by discretization. This paper discusses the influence of the discretization error on the computed solution. We consider the situation when the discretization used yields an algebraic linear system of equations with a large matrix. An approximate solution of this system is computed by first determining a reduced system of fairly small size by carrying out a few steps of the Arnoldi process. Tikhonov regularization is applied to the reduced problem and the regularization parameter is determined by the discrepancy principle. Errors incurred in each step of the solution process are discussed. Computed examples illustrate the error bounds derived.
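The Arnoldi-then-Tikhonov pipeline can be sketched as follows: a few Arnoldi steps build a small projected system, Tikhonov is applied only to that small system, and the discrepancy principle fixes the parameter. The Gaussian smoothing-kernel test matrix, the bisection search, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def arnoldi(A, b, k):
    # k steps of Arnoldi: A V_k = V_{k+1} H, with H of size (k+1) x k.
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]     # assumes no breakdown (sketch only)
    return V, H

def arnoldi_tikhonov(A, b, k, delta, tau=1.01):
    V, H = arnoldi(A, b, k)
    e1 = np.zeros(k + 1)
    e1[0] = np.linalg.norm(b)
    def solve(alpha):                     # Tikhonov on the small system
        y = np.linalg.solve(H.T @ H + alpha * np.eye(k), H.T @ e1)
        return y, np.linalg.norm(H @ y - e1)   # equals the full residual norm
    lo, hi = 1e-14, 1e2                   # bisect log(alpha): residual ~ tau*delta
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        if solve(mid)[1] < tau * delta:
            lo = mid
        else:
            hi = mid
    return V[:, :k] @ solve(lo)[0]

n = 60
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)   # smoothing kernel
x_true = np.exp(-0.5 * ((i - n / 2) / 6.0) ** 2)
noise = 1e-4 * np.random.default_rng(3).standard_normal(n)
b = A @ x_true + noise

x_reg = arnoldi_tikhonov(A, b, k=12, delta=np.linalg.norm(noise))
x_naive = np.linalg.solve(A, b)           # unregularized: noise dominates
```

By the Arnoldi relation, the residual of the small projected problem equals the residual of the full problem, so the discrepancy principle can be evaluated entirely on the reduced system at negligible cost.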
We propose regularization strategies for learning discriminative models that are robust to in-class variations of the input data. We use the Wasserstein-2 geometry to capture semantically meaningful neighborhoods in the space of images, and define a corresponding input-dependent additive noise data augmentation model. Expanding and integrating the augmented loss yields an effective Tikhonov-type Wasserstein diffusion smoothness regularizer. This approach allows us to apply high levels of regularization and train functions that have low variability within classes but remain flexible across classes. We provide efficient methods for computing the regularizer at a negligible cost in comparison to training with adversarial data augmentation. Initial experiments demonstrate improvements in generalization performance under adversarial perturbations and also large in-class variations of the input data.