
Shadowing the rotating annulus. Part II: Gradient descent in the perfect model scenario

Posted by Roland Young
Publication date: 2019
Research field: Physics
Paper language: English





Shadowing trajectories are model trajectories consistent with a sequence of observations of a system, given a distribution of observational noise. The existence of such trajectories is a desirable property of any forecast model. Gradient descent of indeterminism is a well-established technique for finding shadowing trajectories in low-dimensional analytical systems. Here we apply it to the thermally-driven rotating annulus, a laboratory experiment intermediate in model complexity and physical idealisation between analytical systems and global, comprehensive atmospheric models. We work in the perfect model scenario using the MORALS model to generate a sequence of noisy observations in a chaotic flow regime. We demonstrate that the gradient descent technique recovers a pseudo-orbit of model states significantly closer to a model trajectory than the initial sequence. Gradient-free descent is used, where the adjoint model is set to $\lambda I$ in the absence of a full adjoint model. The indeterminism of the pseudo-orbit falls by two orders of magnitude during the descent, but we find that the distance between the pseudo-orbit and the initial, true, model trajectory reaches a minimum and then diverges from truth. We attribute this to the use of the $\lambda$-adjoint, which is well suited to noise reduction but not to finely-tuned convergence towards a model trajectory. We find that $\lambda = 0.25$ gives optimal results, and that candidate model trajectories begun from this pseudo-orbit shadow the observations for up to 80 s, about the length of the longest timescale of the system, and similar to expected shadowing times based on the distance between the pseudo-orbit and the truth. There is great potential for using this method with real laboratory data.
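As a rough illustration of the descent described above, the sketch below applies the same $\lambda$-adjoint update to a pseudo-orbit of a low-dimensional stand-in model. The Lorenz-63 map replaces MORALS here, and the noise level, step size, iteration count and variable names are illustrative assumptions only; the point is the structure of the update, in which the adjoint $DF^T$ acting on each mismatch is replaced by $\lambda I$.

```python
# Minimal sketch of gradient descent of indeterminism with a lambda-adjoint.
# A forward-Euler Lorenz-63 map stands in for MORALS; all values are illustrative.
import numpy as np

def model_step(u, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One step of the stand-in model F (Lorenz-63, forward Euler)."""
    x, y, z = u
    return u + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

def indeterminism(U):
    """Mean squared model mismatch of a pseudo-orbit U (one state per row)."""
    e = U[1:] - np.array([model_step(u) for u in U[:-1]])
    return float(np.mean(np.sum(e ** 2, axis=1)))

def descend(pseudo_orbit, lam=0.25, alpha=0.1, n_iter=500):
    """Descend the indeterminism, replacing the adjoint DF^T by lam * I."""
    U = pseudo_orbit.copy()
    for _ in range(n_iter):
        e = U[1:] - np.array([model_step(u) for u in U[:-1]])  # mismatches e_i
        grad = np.zeros_like(U)
        grad[1:] += e            # dC/du_j contribution from e_{j-1}
        grad[:-1] -= lam * e     # -DF(u_j)^T e_j, with DF^T replaced by lam * I
        U -= alpha * grad
    return U

# Perfect-model setup: noisy "observations" of a true model trajectory.
rng = np.random.default_rng(0)
truth = np.empty((200, 3))
truth[0] = [1.0, 1.0, 20.0]
for i in range(truth.shape[0] - 1):
    truth[i + 1] = model_step(truth[i])
obs = truth + rng.normal(scale=0.5, size=truth.shape)
pseudo = descend(obs)
print(f"indeterminism: {indeterminism(obs):.3e} -> {indeterminism(pseudo):.3e}")
```

The $\lambda I$ substitution avoids needing the model's adjoint (the transpose of its tangent-linear), at the cost of the less finely-tuned convergence toward a model trajectory noted in the abstract.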




Read also

An intuitively necessary requirement of models used to provide forecasts of a system's future is the existence of shadowing trajectories that are consistent with past observations of the system: given a system-model pair, do model trajectories exist that stay reasonably close to a sequence of observations of the system? Techniques for finding such trajectories are well-understood in low-dimensional systems, but there is significant interest in their application to high-dimensional weather and climate models. We build on work by Smith et al. [2010, Phys. Lett. A, 374, 2618-2623] and develop a method for measuring the time that individual candidate trajectories of high-dimensional models shadow observations, using a model of the thermally-driven rotating annulus in the perfect model scenario. Models of the annulus are intermediate in complexity between low-dimensional systems and global atmospheric models. We demonstrate our method by measuring shadowing times against artificially-generated observations for candidate trajectories beginning a fixed distance from truth in one of the annulus's chaotic flow regimes. The distribution of candidate shadowing times we calculated using our method corresponds closely to (1) the range of times over which the trajectories visually diverge from the observations and (2) the divergence time using a simple metric based on the distance between model trajectory and observations. An empirical relationship between the expected candidate shadowing times and the initial distance from truth confirms that the method behaves reasonably as parameters are varied.
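A hedged sketch of a simple distance-based divergence criterion of the kind mentioned above: the shadowing time is taken as the first time the candidate trajectory leaves a noise-consistent tolerance around the observations. The tolerance definition and all names are illustrative assumptions, not the exact metric developed in the paper.

```python
# Rough sketch of a distance-based shadowing-time measurement (illustrative only).
import numpy as np

def shadowing_time(candidate, observations, times, noise_std, k=2.0):
    """Time up to which a candidate trajectory shadows the observations.

    The candidate is deemed to stop shadowing at the first instant its distance
    from the observations exceeds k noise standard deviations (scaled by the
    square root of the state dimension) -- an assumed tolerance, not the
    paper's exact criterion.
    """
    dist = np.linalg.norm(candidate - observations, axis=1)
    tol = k * noise_std * np.sqrt(candidate.shape[1])
    diverged = np.nonzero(dist > tol)[0]
    return times[-1] if diverged.size == 0 else times[diverged[0]]

# Example: a candidate that drifts slowly away from noisy observations.
rng = np.random.default_rng(1)
t = np.arange(0.0, 100.0, 0.5)
obs = rng.normal(scale=0.1, size=(t.size, 3))
cand = obs + 0.005 * t[:, None]
print(f"shadowing time ~ {shadowing_time(cand, obs, t, noise_std=0.1):.1f} s")
```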
Scaling regions -- intervals on a graph where the dependent variable depends linearly on the independent variable -- abound in dynamical systems, notably in calculations of invariants like the correlation dimension or a Lyapunov exponent. In these applications, scaling regions are generally selected by hand, a process that is subjective and often challenging due to noise, algorithmic effects, and confirmation bias. In this paper, we propose an automated technique for extracting and characterizing such regions. Starting with a two-dimensional plot -- e.g., the values of the correlation integral, calculated using the Grassberger-Procaccia algorithm over a range of scales -- we create an ensemble of intervals by considering all possible combinations of endpoints, generating a distribution of slopes from least-squares fits weighted by the length of the fitting line and the inverse square of the fit error. The mode of this distribution gives an estimate of the slope of the scaling region (if it exists). The endpoints of the intervals that correspond to the mode provide an estimate for the extent of that region. When there is no scaling region, the distributions will be wide and the resulting error estimates for the slope will be large. We demonstrate this method for computations of dimension and Lyapunov exponent for several dynamical systems, and show that it can be useful in selecting values for the parameters in time-delay reconstructions.
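The interval-ensemble idea can be sketched roughly as below for a generic curve (such as a correlation integral in log-log coordinates): every admissible pair of endpoints contributes a least-squares slope, weighted by the fitting-line length over the squared fit error, and the weighted mode of the resulting slope distribution estimates the scaling-region slope and extent. The histogram-based mode and all parameter choices are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a weighted slope-distribution estimate of a scaling region (illustrative).
import numpy as np

def scaling_region_slope(x, y, min_len=5, n_bins=200):
    """Estimate the slope and extent of a scaling region in (x, y).

    Every pair of endpoints defines a candidate interval; each interval's
    least-squares slope is weighted by its length over its squared fit error,
    and the weighted mode of the slope distribution gives the estimate.
    """
    slopes, weights, intervals = [], [], []
    n = len(x)
    for i in range(n - min_len):
        for j in range(i + min_len, n):
            xi, yi = x[i:j + 1], y[i:j + 1]
            m, c = np.polyfit(xi, yi, 1)
            err = np.sqrt(np.mean((yi - (m * xi + c)) ** 2)) + 1e-12
            slopes.append(m)
            weights.append((x[j] - x[i]) / err ** 2)
            intervals.append((x[i], x[j]))
    slopes = np.array(slopes)
    hist, edges = np.histogram(slopes, bins=n_bins, weights=np.array(weights))
    k = int(np.argmax(hist))
    in_mode = (slopes >= edges[k]) & (slopes <= edges[k + 1])
    extent = (min(a for (a, b), ok in zip(intervals, in_mode) if ok),
              max(b for (a, b), ok in zip(intervals, in_mode) if ok))
    return 0.5 * (edges[k] + edges[k + 1]), extent

# Example: an assumed piecewise-linear curve with slope 2 between x = 1 and x = 3.
x = np.linspace(0.0, 4.0, 60)
y = np.where(x < 1, 0.5 * x, np.where(x < 3, 2 * x - 1.5, 0.3 * x + 3.6))
y += np.random.default_rng(3).normal(scale=0.02, size=x.size)
slope, extent = scaling_region_slope(x, y)
print(f"slope ~ {slope:.2f}, extent ~ ({extent[0]:.2f}, {extent[1]:.2f})")
```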
Recent studies demonstrate that trends in indicators extracted from measured time series can indicate the approach to an impending transition. Kendall's $\tau$ coefficient is often used to study the trend of statistics related to the critical slowing down phenomenon and other methods to forecast critical transitions. Because statistics are estimated from time series, the values of Kendall's $\tau$ are affected by parameters such as window size, sample rate and length of the time series, resulting in challenges and uncertainties in interpreting results. In this study, we examine the effects of different parameters on the distribution of the trend obtained from Kendall's $\tau$, and provide insights into how to choose these parameters. We also suggest the use of the non-parametric Mann-Kendall test to evaluate the significance of a Kendall's $\tau$ value. The non-parametric test is computationally much faster compared to the traditional parametric ARMA test.
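A minimal sketch of the kind of trend measurement discussed above, assuming a sliding-window variance as the critical-slowing-down indicator: Kendall's $\tau$ is computed between time and the indicator, and the accompanying rank-based p-value (with time as one variable, this coincides with the non-parametric Mann-Kendall trend test up to tie handling) gauges significance. The window size and the synthetic series are illustrative choices.

```python
# Sketch: Kendall's tau trend of a sliding-window indicator, with its
# non-parametric significance (illustrative window, data and indicator).
import numpy as np
from scipy.stats import kendalltau

def indicator_trend(series, window=50):
    """Kendall's tau between time and a sliding-window variance indicator.

    With time as one variable, the associated p-value is the non-parametric
    rank test that (up to tie handling) coincides with the Mann-Kendall test.
    """
    var = np.array([np.var(series[i:i + window])
                    for i in range(len(series) - window)])
    return kendalltau(np.arange(var.size), var)

# Example: an assumed noise series whose variance slowly increases.
rng = np.random.default_rng(2)
x = rng.normal(scale=np.linspace(1.0, 2.0, 1000))
tau, p = indicator_trend(x)
print(f"Kendall's tau = {tau:.2f}, p = {p:.1e}")
```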
Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher Information Matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the Likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm provides a tractable alternative. We finally apply our methods to a large-scale, benchmark Systems Biology model of NF-$\kappa$B, uncovering previously unknown unidentifiabilities.
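For contrast with the multiscale approach described above, the baseline local analysis via the Fisher Information Matrix can be sketched as follows: near-zero FIM eigenvalues flag locally sloppy or unidentifiable parameter directions. The toy model (two parameters entering only through their product) and all names are assumptions for illustration; this is the local baseline the abstract argues is insufficient, not the authors' multiscale-sloppiness algorithm.

```python
# Sketch of the local, FIM-based sensitivity baseline (illustrative toy model).
import numpy as np

def fisher_information(model, theta, t, sigma=0.1, eps=1e-6):
    """Gaussian-noise Fisher Information Matrix J^T J / sigma^2, with the
    parameter Jacobian J approximated by forward finite differences."""
    y0 = model(theta, t)
    J = np.empty((y0.size, theta.size))
    for k in range(theta.size):
        d = np.zeros_like(theta)
        d[k] = eps
        J[:, k] = (model(theta + d, t) - y0) / eps
    return J.T @ J / sigma ** 2

# Toy model in which only the product a * b is identifiable (a structural
# unidentifiability): the FIM has a near-zero eigenvalue along (a, -b).
model = lambda theta, t: theta[0] * theta[1] * np.exp(-t)
t = np.linspace(0.0, 5.0, 50)
eigvals, eigvecs = np.linalg.eigh(fisher_information(model, np.array([2.0, 3.0]), t))
print("FIM eigenvalues:", eigvals)              # smallest ~ 0 -> sloppy direction
print("unidentifiable direction:", eigvecs[:, 0])
```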
Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as the learning rate. There exist many techniques for automated hyperparameter optimization, but they typically introduce even more hyperparameters to control the hyperparameter optimization process. We propose to instead learn the hyperparameters themselves by gradient descent, and furthermore to learn the hyper-hyperparameters by gradient descent as well, and so on ad infinitum. As these towers of gradient-based optimizers grow, they become significantly less sensitive to the choice of top-level hyperparameters, hence decreasing the burden on the user to search for optimal values.
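One level of the tower described above can be sketched as plain hypergradient descent on the learning rate: since $\theta_t = \theta_{t-1} - \alpha g_{t-1}$, the derivative of the loss at $\theta_t$ with respect to $\alpha$ is $-g_t \cdot g_{t-1}$, so $\alpha$ can itself be updated by gradient descent. The quadratic test loss, the hyper-learning-rate $\beta$ and all names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of hypergradient descent: the learning rate alpha is itself learned
# by gradient descent (illustrative loss, beta and names).
import numpy as np

def hypergradient_descent(grad, theta, alpha=0.01, beta=1e-4, steps=200):
    """Since theta_t = theta_{t-1} - alpha * g_{t-1}, we have
    d f(theta_t) / d alpha = -g_t . g_{t-1}; update alpha accordingly."""
    g_prev = None
    for _ in range(steps):
        g = grad(theta)
        if g_prev is not None:
            alpha += beta * g @ g_prev    # hypergradient step on the learning rate
        theta = theta - alpha * g         # ordinary gradient step on the parameters
        g_prev = g
    return theta, alpha

# Example: quadratic loss f(theta) = 0.5 * ||A theta||^2 with an assumed A.
A = np.diag([1.0, 3.0])
grad = lambda th: A.T @ (A @ th)
theta, alpha = hypergradient_descent(grad, np.array([1.0, 1.0]))
print("theta:", theta, "learned alpha:", alpha)
```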