Machine learning-based method for linearization and error compensation of an absolute rotary encoder

Published by: Lorenzo Iafolla
Publication date: 2020
Language: English

The main objective of this work is to develop a miniaturized, high-accuracy, single-turn, absolute rotary encoder called ASTRAS360. Its measurement principle is based on capturing an image that uniquely identifies the rotation angle. To evaluate this angle, the image first has to be classified into its sector based on its color, and only then can the angle be regressed. Inspired by machine learning, we built a calibration setup able to generate labeled training data automatically. We used these training data to test, characterize, and compare several machine learning algorithms for the classification and the regression. In an additional experiment, we also characterized the tolerance of our rotary encoder to eccentric mounting. Our findings demonstrate that various algorithms can perform these tasks with high accuracy and reliability; furthermore, providing extra inputs (e.g., rotation direction) allows the machine learning algorithms to compensate for the mechanical imperfections of the rotary encoder.
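
As an illustration of the classify-then-regress principle described in the abstract, the following Python sketch shows a two-stage pipeline: a classifier picks the sector from image-derived features, and a per-sector regressor then estimates the angle. The feature representation, estimator choices, and the SectorThenAngle class are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

class SectorThenAngle:
    """Hypothetical two-stage model: classify the sector, then regress the angle."""

    def __init__(self, n_sectors):
        # One shared sector classifier, one angle regressor per sector
        self.classifier = RandomForestClassifier()
        self.regressors = {s: MLPRegressor(max_iter=2000) for s in range(n_sectors)}

    def fit(self, features, sectors, angles):
        self.classifier.fit(features, sectors)
        for s, reg in self.regressors.items():
            mask = sectors == s
            reg.fit(features[mask], angles[mask])
        return self

    def predict(self, features):
        sectors = self.classifier.predict(features)
        return np.array([
            self.regressors[s].predict(f.reshape(1, -1))[0]
            for s, f in zip(sectors, features)
        ])
```

Splitting the problem this way keeps each regressor's target continuous within its sector, which sidesteps the wrap-around discontinuity of a single 0-360 degree regression.
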



Read also

Large-scale integration of converter-based renewable energy sources (RESs) into the power system will lead to a higher risk of frequency nadir limit violation, and even frequency instability, after a large power disturbance. It is therefore essential to consider the frequency nadir constraint (FNC) in power system scheduling. Nevertheless, the FNC is highly nonlinear and non-convex. The state-of-the-art method to simplify the constraint is to first construct a low-order frequency response model and then linearize the frequency nadir equation. In this letter, an extreme learning machine (ELM)-based network is built to derive the linear formulation of the FNC, where the two-step fitting process is integrated into a single training process and more details of the generator's physical model are considered to reduce the fitting error. Simulation results show the superiority of the proposed method in fitting accuracy.
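
For readers unfamiliar with extreme learning machines, the sketch below shows the core mechanism this letter builds on: a single hidden layer with random, fixed weights and a linear readout solved in closed form by least squares. It is a generic ELM under assumed interfaces, not the authors' network for the frequency nadir constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=64):
    # Input weights and biases are random and fixed: an ELM never trains them
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form linear readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only the output layer is fitted, training reduces to one linear least-squares solve, which is what makes an ELM attractive for deriving a linear constraint formulation.
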
In this paper, we propose a model-based machine-learning approach for dual-polarization systems by parameterizing the split-step Fourier method for the Manakov-PMD equation. The resulting method combines hardware-friendly time-domain nonlinearity mitigation via the recently proposed learned digital backpropagation (LDBP) with distributed compensation of polarization-mode dispersion (PMD). We refer to the resulting approach as LDBP-PMD. We train LDBP-PMD on multiple PMD realizations and show that it converges within 1% of its peak dB performance after 428 training iterations on average, yielding a peak effective signal-to-noise ratio only 0.30 dB below the PMD-free case. Similar to state-of-the-art lumped PMD compensation algorithms in practical systems, our approach does not assume any knowledge about the particular PMD realization along the link, nor about the total accumulated PMD. This is a significant improvement over prior work on distributed PMD compensation, where knowledge of the accumulated PMD is typically assumed. We also compare different parameterization choices in terms of performance, complexity, and convergence behavior. Lastly, we demonstrate that the learned models can be successfully retrained after an abrupt change of the PMD realization along the fiber.
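
A minimal structural sketch of one distributed PMD-compensation section follows, assuming a learned 2x2 polarization rotation followed by short real-valued FIR filters that apply opposite fractional delays to the two polarizations. The actual LDBP-PMD parameterization of the Manakov-PMD equation is more elaborate; the function and parameter names here are hypothetical.

```python
import numpy as np

def pmd_section(x_pol, y_pol, theta, delay_taps):
    # Learned 2x2 polarization rotation (a real Jones rotation for simplicity)
    c, s = np.cos(theta), np.sin(theta)
    x_rot = c * x_pol + s * y_pol
    y_rot = -s * x_pol + c * y_pol
    # Opposite fractional delays on the two polarizations emulate one
    # differential-group-delay section; the taps would be learned in training
    x_out = np.convolve(x_rot, delay_taps, mode="same")
    y_out = np.convolve(y_rot, delay_taps[::-1], mode="same")
    return x_out, y_out
```
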
Efficient nonlinearity compensation in fiber-optic communication systems is considered a key element to go beyond the capacity crunch. One guiding principle in previous work on the design of practical nonlinearity compensation schemes is that fewer steps lead to better systems. In this paper, we challenge this assumption and show how to carefully design multi-step approaches that provide better performance-complexity trade-offs than their few-step counterparts. We consider the recently proposed learned digital backpropagation (LDBP) approach, where the linear steps in the split-step method are re-interpreted as general linear functions, similar to the weight matrices in a deep neural network. Our main contribution lies in an experimental demonstration of this approach for a 25 Gbaud single-channel optical transmission system. It is shown how LDBP can be integrated into a coherent receiver DSP chain and successfully trained in the presence of various hardware impairments. Our results show that LDBP with limited complexity can achieve better performance than standard DBP by using very short, but jointly optimized, finite-impulse-response filters in each step. This paper also provides an overview of recently proposed extensions of LDBP, and we comment on potentially interesting avenues for future work.
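
The split-step structure that LDBP learns can be sketched as alternating a short, trainable FIR filter (the "general linear function" replacing the dispersion-compensation step) with a Kerr nonlinear phase rotation. This is a schematic single step under assumed parameter names, not the trained receiver from the experiment.

```python
import numpy as np

def ldbp_step(signal, fir_taps, gamma_eff):
    # Linear sub-step: a short, jointly optimized FIR filter standing in for
    # the chromatic-dispersion compensation filter of standard DBP
    signal = np.convolve(signal, fir_taps, mode="same")
    # Nonlinear sub-step: inverse Kerr phase rotation with a learned scale
    return signal * np.exp(-1j * gamma_eff * np.abs(signal) ** 2)

# A full LDBP receiver chains many such steps and optimizes all fir_taps
# and gamma_eff values jointly, e.g. with stochastic gradient descent.
```
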
We present an introduction to model-based machine learning for communication systems. We begin by reviewing existing strategies for combining model-based algorithms and machine learning from a high-level perspective, and compare them to the conventional deep learning approach, which utilizes established deep neural network (DNN) architectures trained in an end-to-end manner. Then, we focus on symbol detection, which is one of the fundamental tasks of communication receivers. We show how the different strategies of conventional deep architectures, deep unfolding, and DNN-aided hybrid algorithms can be applied to this problem. The last two approaches constitute a middle ground between purely model-based and solely DNN-based receivers. By focusing on this specific task, we highlight the advantages and drawbacks of each strategy, and present guidelines to facilitate the design of future model-based deep learning systems for communications.
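
As a concrete example of the deep-unfolding strategy mentioned above, the sketch below unrolls a fixed number of projected-gradient iterations for linear symbol detection under a model y = Hx + n, with one learnable step size per iteration. The detector and the BPSK decision rule are illustrative assumptions, not the specific receivers from the paper.

```python
import numpy as np

def unfolded_detector(y, H, step_sizes):
    # Deep unfolding: a fixed number of projected-gradient iterations on
    # ||y - Hx||^2, with one learnable step size per unrolled iteration
    x = np.zeros(H.shape[1])
    for mu in step_sizes:
        x = x - mu * H.T @ (H @ x - y)   # gradient step
        x = np.clip(x, -1.0, 1.0)        # keep estimates in the symbol range
    return np.sign(x)                    # hard decision for BPSK symbols
```

Training would tune the step_sizes (and possibly richer per-iteration parameters) from data, which is how unfolding injects learning into an otherwise model-based iterative algorithm.
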
In the quest to realize a comprehensive EEG signal processing framework, in this paper we demonstrate a toolbox and graphical user interface, EEGsig, for the full processing of EEG signals. Our goal is to provide a comprehensive, free, and open-source framework for EEG signal processing where users, especially physicians without programming experience, can focus on their practical requirements to speed up medical projects. Developed in MATLAB, EEGsig aggregates all three EEG signal processing steps: preprocessing, feature extraction, and classification. In addition to a varied list of useful features, EEGsig implements three popular classification algorithms (K-NN, SVM, and ANN) to assess the performance of the features. Our experimental results demonstrate that our framework attained excellent classification results and feature extraction robustness under different machine learning classifier algorithms. Moreover, to help select the best extracted features, EEGsig can display all EEG signal channels simultaneously, so the effect of each processing task on the signal is visible. We believe that our user-centered MATLAB package is an encouraging platform for novice users, while offering the highest level of control to expert users.
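
EEGsig itself is a MATLAB toolbox; purely to illustrate its classifier-comparison step, here is a hypothetical Python sketch that scores the same three classifier families (K-NN, SVM, ANN) on one extracted feature matrix via cross-validation.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(features, labels):
    # Cross-validated accuracy for the three classifier families EEGsig offers
    models = {
        "K-NN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf"),
        "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
    }
    return {name: cross_val_score(m, features, labels, cv=5).mean()
            for name, m in models.items()}
```
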