
Recurrent-type Neural Networks for Real-time Short-term Prediction of Ship Motions in High Sea State

Published by Andrea Serani
Publication date: 2021
Research language: English





The prediction capability of recurrent-type neural networks is investigated for real-time short-term prediction (nowcasting) of ship motions in high sea state. Specifically, the performance of recurrent neural network, long short-term memory, and gated recurrent unit models is assessed and compared using a data set obtained from computational fluid dynamics simulations of a self-propelled destroyer-type vessel in stern-quartering sea state 7. Time series of the incident wave, ship motions, rudder angle, and immersion probes are used as variables for the nowcasting problem. The objective is to predict the ship motions about 20 s ahead. Overall, the three methods provide promising and comparable results.
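A minimal sketch, assuming an illustrative setup rather than the authors' actual models: a small GRU-based nowcaster in PyTorch that maps a window of past signals (incident wave, ship motions, rudder angle, immersion probes) to a multi-step-ahead forecast of the motions. The signal counts, hidden sizes, and sampling rate are assumptions for the example.

```python
# Illustrative sketch only: a GRU nowcaster for multivariate ship-motion time series.
import torch
import torch.nn as nn

class MotionNowcaster(nn.Module):
    def __init__(self, n_inputs=10, n_outputs=6, hidden=64, horizon=200):
        # horizon = number of future samples to predict (e.g. ~20 s at an assumed 10 Hz)
        super().__init__()
        self.rnn = nn.GRU(n_inputs, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs * horizon)
        self.n_outputs, self.horizon = n_outputs, horizon

    def forward(self, x):
        # x: (batch, past_steps, n_inputs) window of past measurements
        _, h = self.rnn(x)                      # h: (num_layers, batch, hidden)
        out = self.head(h[-1])                  # decode from the last layer's final state
        return out.view(-1, self.horizon, self.n_outputs)

# Example: predict 200 future samples of 6-DOF motions from 500 past samples of 10 signals.
model = MotionNowcaster()
forecast = model(torch.randn(8, 500, 10))       # shape: (8, 200, 6)
```

An LSTM or vanilla RNN variant follows by swapping the recurrent layer (for nn.LSTM the final hidden state is the first element of the returned state tuple), which mirrors the three-way comparison described in the abstract.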




Read also

Recommender systems objectives can be broadly characterized as modeling user preferences over a short- or long-term time horizon. A large body of previous research has studied long-term recommendation through dimensionality reduction techniques applied to the historical user-item interactions. A recently introduced session-based recommendation setting highlighted the importance of modeling short-term user preferences. In this task, recurrent neural networks (RNNs) have been shown to be successful at capturing the nuances of users' interactions within a short time window. In this paper, we evaluate RNN-based models on both short-term and long-term recommendation tasks. Our experimental results suggest that RNNs are capable of predicting immediate as well as distant user interactions. We also find the best-performing configuration to be a stacked RNN with layer normalization and tied item embeddings.
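As a rough illustration of that best-performing configuration (layout and sizes are assumed, not taken from the paper), a stacked GRU session recommender with layer normalization and tied item embeddings can be sketched as follows; tying means next-item scores are computed against the same embedding matrix used for the inputs.

```python
# Illustrative sketch only: stacked RNN recommender with layer norm and tied embeddings.
import torch
import torch.nn as nn

class SessionRNN(nn.Module):
    def __init__(self, n_items=50000, dim=256, layers=2):
        super().__init__()
        self.embed = nn.Embedding(n_items, dim)
        self.rnn = nn.GRU(dim, dim, num_layers=layers, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, item_ids):
        # item_ids: (batch, session_length) indices of the items interacted with so far
        h, _ = self.rnn(self.embed(item_ids))
        h = self.norm(h[:, -1])                  # normalized state after the last interaction
        return h @ self.embed.weight.t()         # tied embeddings: logits over all items

model = SessionRNN()
scores = model(torch.randint(0, 50000, (4, 12)))  # (4, 50000) next-item scores
```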
Recurrent neural networks (RNNs) are at the core of modern automatic speech recognition (ASR) systems. In particular, long short-term memory (LSTM) recurrent neural networks have achieved state-of-the-art results in many speech recognition tasks, due to their efficient representation of long- and short-term dependencies in sequences of inter-dependent features. Nonetheless, internal dependencies within the elements composing multidimensional features are only weakly considered by traditional real-valued representations. We propose a novel quaternion long short-term memory (QLSTM) recurrent neural network that takes into account both the external relations between the features composing a sequence and their internal latent structural dependencies, using quaternion algebra. QLSTMs are compared to LSTMs on a memory copy task and on a realistic speech recognition application using the Wall Street Journal (WSJ) dataset. The QLSTM reaches better performance in both experiments with up to $2.8$ times fewer learning parameters, leading to a more expressive representation of the information.
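For context, the core operation that quaternion layers substitute for the ordinary real-valued product is the Hamilton product on the $(r, i, j, k)$ components; a minimal sketch (illustration only, not the paper's implementation) is:

```python
# Illustrative sketch only: the Hamilton product underlying quaternion neural layers.
import torch

def hamilton_product(q, p):
    # q, p: (..., 4) tensors holding the real and the three imaginary parts
    r1, i1, j1, k1 = q.unbind(-1)
    r2, i2, j2, k2 = p.unbind(-1)
    return torch.stack([
        r1 * r2 - i1 * i2 - j1 * j2 - k1 * k2,   # real part
        r1 * i2 + i1 * r2 + j1 * k2 - k1 * j2,   # i part
        r1 * j2 - i1 * k2 + j1 * r2 + k1 * i2,   # j part
        r1 * k2 + i1 * j2 - j1 * i2 + k1 * r2,   # k part
    ], dim=-1)
```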
We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states. Specifically, we consider RNNs that can be viewed as discretizations of stochastic differential equations driven by input data. This framework allows us to study the implicit regularization effect of general noise injection schemes by deriving an approximate explicit regularizer in the small-noise regime. We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases towards models with more stable dynamics; and, in classification tasks, it favors models with a larger classification margin. Sufficient conditions for global stability are obtained, highlighting the phenomenon of stochastic stabilization, where noise injection can improve stability during training. Our theory is supported by empirical results which demonstrate improved robustness with respect to various input perturbations, while maintaining state-of-the-art performance.
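A minimal sketch of the kind of training-time noise injection described here (the exact scheme, noise scale, and architecture are assumptions): a vanilla RNN whose hidden state is perturbed by small Gaussian noise at every step during training, and left deterministic at inference time.

```python
# Illustrative sketch only: Gaussian noise injected into the hidden state during training.
import torch
import torch.nn as nn

class NoisyRNN(nn.Module):
    def __init__(self, n_in, n_hidden, sigma=0.05):
        super().__init__()
        self.cell = nn.RNNCell(n_in, n_hidden)
        self.sigma = sigma

    def forward(self, x):
        # x: (batch, steps, n_in)
        h = x.new_zeros(x.size(0), self.cell.hidden_size)
        for t in range(x.size(1)):
            h = self.cell(x[:, t], h)
            if self.training:                    # inject noise only while training
                h = h + self.sigma * torch.randn_like(h)
        return h                                 # final state, fed to a downstream head
```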
Reservoir computing systems, a class of recurrent neural networks, have recently been exploited for model-free, data-based prediction of the state evolution of a variety of chaotic dynamical systems. The prediction horizon demonstrated so far has been about half a dozen Lyapunov times. Is it possible to significantly extend the prediction time beyond what has been achieved so far? We articulate a scheme incorporating time-dependent but sparse data inputs into reservoir computing and demonstrate that such rare updates of the actual state practically enable an arbitrarily long prediction horizon for a variety of chaotic systems. A physical understanding based on the theory of temporal synchronization is developed.
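The sparse-update scheme can be sketched as follows (all matrices and signals are illustrative, and the readout would normally be fitted by ridge regression on training data): the reservoir runs in closed loop on its own predictions, and the actual system state is injected only at rare instants.

```python
# Illustrative sketch only: closed-loop reservoir prediction with rare true-state updates.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_var = 300, 3
W_in = rng.normal(scale=0.1, size=(n_res, n_var))
W_res = rng.normal(scale=1.0 / np.sqrt(n_res), size=(n_res, n_res))
W_out = rng.normal(scale=0.1, size=(n_var, n_res))   # placeholder; train by ridge regression

def predict(u0, true_states, n_steps, update_every=200):
    r, u, preds = np.zeros(n_res), u0.copy(), []
    for k in range(n_steps):
        if k % update_every == 0:                # rare injection of the actual state
            u = true_states[k]
        r = np.tanh(W_res @ r + W_in @ u)        # reservoir update
        u = W_out @ r                            # prediction, fed back as the next input
        preds.append(u)
    return np.array(preds)

# usage sketch: preds = predict(x[0], x, n_steps=len(x))  # x: (T, 3) array of true states
```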
A stochastic approach is implemented to address the problem of a marine structure exposed to water wave impacts. The focus is on (i) the average frequency of wave impacts, and (ii) the related probability distribution of impact kinematic variables. The wave field is assumed to be Gaussian. The seakeeping motions of the considered body are taken into account in the analysis. The coupling of the stochastic model with a water entry model is demonstrated through the case study of a foil exposed to wave impacts.
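For a Gaussian wave field, the average frequency of impacts is typically related to Rice's mean up-crossing rate of a threshold $a$ (a standard result quoted here for context, not taken from the paper): $$\nu^+(a) = \frac{1}{2\pi}\sqrt{\frac{m_2}{m_0}}\,\exp\!\left(-\frac{a^2}{2 m_0}\right),$$ where $m_0$ and $m_2$ are the zeroth and second spectral moments of the free-surface elevation.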
