
On statistical approaches to generate Level 3 products from satellite remote sensing retrievals

Publication date: 2017
Language: English





Satellite remote sensing of trace gases such as carbon dioxide (CO$_2$) has increased our ability to observe and understand Earth's climate. However, these remote sensing data, specifically Level 2 retrievals, tend to be irregular in space and time, and hence spatio-temporal prediction is required to infer values at any location and time point. Such inferences are not only required to answer important questions about our climate; they are also needed for validating the satellite instrument, since Level 2 retrievals are generally not co-located with ground-based remote sensing instruments. Here, we discuss statistical approaches to construct Level 3 products from Level 2 retrievals, placing particular emphasis on the strengths and potential pitfalls of using statistical prediction in this context. Following this discussion, we use a spatio-temporal statistical modelling framework known as fixed rank kriging (FRK) to obtain global predictions and prediction standard errors of column-averaged carbon dioxide based on Version 7r and Version 8r retrievals from the Orbiting Carbon Observatory-2 (OCO-2) satellite. The FRK predictions allow us to statistically validate the Level 2 retrievals globally, even though the data are at locations and time points that do not coincide with validation data. Importantly, the validation takes into account the prediction uncertainty, which depends both on the temporally-varying density of observations around the ground-based measurement sites and on the spatio-temporal high-frequency components of the trace gas field that are not explicitly modelled. Here, for validation of remotely-sensed CO$_2$ data, we use observations from the Total Carbon Column Observing Network (TCCON). We demonstrate that the resulting FRK product based on Version 8r compares better with TCCON data than that based on Version 7r.
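The low-rank construction behind FRK can be sketched as follows. This is a minimal 1-D illustration, not the paper's implementation: the synthetic field, the Gaussian basis functions, and all variance parameters below are assumptions chosen only to show the mechanics of predicting a smooth field (with standard errors) from irregular noisy retrievals via a small set of basis-function weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "retrieval" data: irregular observations of a smooth field plus noise
obs_x = rng.uniform(0.0, 10.0, size=200)
truth = lambda x: 400.0 + 2.0 * np.sin(x)          # stand-in for an XCO2-like field
z = truth(obs_x) + rng.normal(0.0, 0.5, size=200)  # Level 2-style noisy retrievals

# r Gaussian basis functions (the "fixed rank" part: r << n)
centres = np.linspace(0.0, 10.0, 15)
scale = 1.0
S = lambda x: np.exp(-0.5 * ((x[:, None] - centres[None, :]) / scale) ** 2)

# Prior on the basis weights eta ~ N(0, K); here K = tau^2 I for simplicity
tau2, sigma2 = 25.0, 0.25   # weight variance, measurement-error variance
S_obs = S(obs_x)
K = tau2 * np.eye(len(centres))

# Posterior for eta given the data (a known constant mean is assumed for brevity)
mu = z.mean()
A = S_obs.T @ S_obs / sigma2 + np.linalg.inv(K)
cov_eta = np.linalg.inv(A)
eta_hat = cov_eta @ S_obs.T @ (z - mu) / sigma2

# Level 3-style predictions and standard errors on a regular grid
grid = np.linspace(0.0, 10.0, 101)
S_grid = S(grid)
pred = mu + S_grid @ eta_hat
se = np.sqrt(np.einsum("ij,jk,ik->i", S_grid, cov_eta, S_grid))  # diag(S K* S')
```

Note that `se` here reflects only the smooth low-rank component; as the abstract stresses, a full FRK product also accounts for the fine-scale variation that the basis functions do not capture.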



Related research

This paper introduces a modular processing chain to derive global high-resolution maps of leaf traits. In particular, we present global maps at 500 m resolution of specific leaf area, leaf dry matter content, leaf nitrogen and phosphorus content per dry mass, and leaf nitrogen/phosphorus ratio. The processing chain exploits machine learning techniques along with optical remote sensing data (MODIS/Landsat) and climate data for gap filling and up-scaling of in-situ measured leaf traits. The chain first uses random forest regression with surrogates to fill gaps in the database (more than 45% of entries missing) and maximize the global representativeness of the trait dataset. Along with the estimated global maps of leaf traits, we provide associated uncertainty estimates derived from the regression models. The processing chain is modular and can easily accommodate new traits, data streams (trait databases and remote sensing data), and methods. The machine learning techniques applied allow attributing information gain to the data inputs and thus provide the opportunity to understand trait-environment relationships at the plant and ecosystem scales.
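The gap-filling step described above can be sketched with scikit-learn. This is a toy illustration under stated assumptions, not the paper's pipeline: the trait, the two climate covariates, and the generating relationship are all hypothetical, and only a single trait column is imputed.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy trait table: two climate covariates and one leaf trait with missing entries
n = 500
temp = rng.uniform(0, 30, n)        # hypothetical mean annual temperature
precip = rng.uniform(200, 2000, n)  # hypothetical annual precipitation
sla = 10 + 0.3 * temp + 0.002 * precip + rng.normal(0, 0.5, n)  # "specific leaf area"

mask = rng.random(n) < 0.45         # ~45% missing, as in the paper's database
sla_obs = sla.copy()
sla_obs[mask] = np.nan

# Fit a random forest on the observed entries, then predict into the gaps
X = np.column_stack([temp, precip])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[~mask], sla_obs[~mask])
sla_filled = sla_obs.copy()
sla_filled[mask] = rf.predict(X[mask])
```

In the paper's chain the same idea is applied trait-by-trait with surrogate splits, so that traits with missing covariates can still be imputed.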
D. Sornette (2008)
This entry in the Encyclopedia of Complexity and Systems Science (Springer) presents a summary of some of the concepts and calculational tools that have been developed in attempts to apply statistical physics approaches to seismology. We summarize the leading theoretical physical models of the space-time organization of earthquakes. We present a general discussion and several examples of the new metrics proposed by statistical physicists, underlining their strengths and weaknesses. The entry concludes by briefly outlining future directions. The presentation is organized as follows:
I. Glossary
II. Definition and Importance of the Subject
III. Introduction
IV. Concepts and Calculational Tools: renormalization, scaling and the role of small earthquakes in models of triggered seismicity; universality; intermittent periodicity and chaos; turbulence; self-organized criticality
V. Competing Mechanisms and Models: roots of complexity in seismicity (dynamics or heterogeneity?); critical earthquakes; spinodal decomposition; dynamics, stress interaction and thermal fluctuation effects
VI. Empirical Studies of Seismicity Inspired by Statistical Physics: early successes and subsequent challenges; the entropy method for the distribution of time intervals between mainshocks; scaling of the PDF of waiting times; scaling of the PDF of distances between subsequent earthquakes; the network approach
VII. Future Directions
The rotational Doppler effect associated with light's orbital angular momentum (OAM) has been found to be a powerful tool for detecting rotating bodies. However, this method has so far only been demonstrated experimentally on the laboratory scale under well-controlled conditions, while its real potential lies in practical applications in the field of remote sensing. We have established a 120-meter free-space link between the rooftops of two buildings and show that both the rotation speed and the rotational symmetry of objects can be identified from the detected rotational Doppler frequency-shift signal at the photon-count level. Effects of possible slight misalignments and atmospheric turbulence are quantitatively analyzed in terms of mode power spreading to adjacent modes as well as the transfer of rotational frequency shifts. Moreover, our results demonstrate that, with prior knowledge of the object's rotational symmetry, one can always deduce the rotation speed no matter how strong the coupling to neighboring modes is. Without any information about the rotating object, its symmetry and rotation speed can still be deduced as long as the mode spreading efficiency does not exceed 50%. Our work supports the feasibility of a practical sensor to remotely detect both the speed and symmetry of rotating bodies.
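The rotation speed enters through the standard rotational Doppler relation; this is a sketch of the textbook result, not of the paper's specific analysis. When light carrying two OAM modes with topological charges $\ell_1$ and $\ell_2$ is scattered off a body rotating at angular velocity $\Omega$, the detected intensity is modulated at
$$ f_{\mathrm{mod}} = \frac{|\ell_1 - \ell_2|\,\Omega}{2\pi}, $$
so that, with the mode difference known, the rotation speed follows as $\Omega = 2\pi f_{\mathrm{mod}}/|\ell_1 - \ell_2|$; an $N$-fold rotational symmetry of the object shows up in which mode differences contribute to the signal.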
Yuxing Chen (2021)
Our planet is viewed by satellites through multiple sensors (e.g., multi-spectral, Lidar, and SAR) and at different times. Multi-view observations provide complementary information beyond that of any single view; at the same time, different views share common features, such as geometry and semantics. Recently, contrastive learning methods have been proposed for the alignment of multi-view remote sensing images and for improving the feature representation of single-sensor images by modeling view-invariant factors. However, these methods rely on pretraining with predefined tasks or focus only on image-level classification, and they lack research on uncertainty estimation. In this work, a pixel-wise contrastive approach based on an unlabeled multi-view setting is proposed to overcome these limitations. This is achieved through a contrastive loss that encourages feature alignment and uniformity between multi-view images. In this approach, a pseudo-Siamese ResUnet is trained to learn a representation that aligns features from shifted positive pairs and makes the induced distribution of the features uniform on the hypersphere. The learned features of multi-view remote sensing images are evaluated on a linear-protocol evaluation and an unsupervised change detection task. We analyze key properties of the approach that make it work, finding that the requirement of shift equivariance ensures the success of the proposed approach and that uncertainty estimation of the representations leads to performance improvements. Moreover, the performance of multi-view contrastive learning is affected by the choice of sensors. Results demonstrate improvements in both efficiency and accuracy over state-of-the-art multi-view contrastive methods.
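The alignment and uniformity terms mentioned in the abstract can be sketched numerically. This is a toy illustration under stated assumptions, not the paper's training code: the embeddings are random stand-ins for per-pixel features from two views, and the two terms follow the generic alignment/uniformity formulation for unit-normalised features on the hypersphere.

```python
import numpy as np

rng = np.random.default_rng(2)

def normalize(v):
    # Project feature vectors onto the unit hypersphere
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy pixel embeddings from two views (e.g. optical and SAR), unit-normalised;
# f2 perturbs f1 so that row i of each array forms a positive pair
n, d = 128, 16
f1 = normalize(rng.normal(size=(n, d)))
f2 = normalize(f1 + 0.1 * rng.normal(size=(n, d)))

# Alignment: positive pairs should be close on the hypersphere
align = np.mean(np.sum((f1 - f2) ** 2, axis=1))

# Uniformity: embeddings should spread over the hypersphere
# (log of the mean pairwise Gaussian potential; more negative = more uniform)
sq_dists = np.sum((f1[:, None, :] - f1[None, :, :]) ** 2, axis=-1)
uniform = np.log(np.mean(np.exp(-2.0 * sq_dists)))

loss = align + uniform  # combined objective to minimise
```

In the paper's setting these terms would be computed on the pseudo-Siamese ResUnet's per-pixel outputs rather than on random vectors.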
We show how two techniques from statistical physics can be adapted to solve a variant of the notorious Unique Games problem, potentially opening new avenues towards the Unique Games Conjecture. The variant, which we call Count Unique Games, is a promise problem in which the yes case guarantees a certain number of highly satisfiable assignments to the Unique Games instance. In the standard Unique Games problem, the yes case only guarantees at least one such assignment. We exhibit efficient algorithms for Count Unique Games based on approximating a suitable partition function for the Unique Games instance via (i) a zero-free region and polynomial interpolation, and (ii) the cluster expansion. We also show that a modest improvement to the parameters for which we give results would refute the Unique Games Conjecture.
