
Mapping Leaf Area Index with a Smartphone and Gaussian Processes

Added by Gustau Camps-Valls
Publication date: 2020
Research language: English





Leaf area index (LAI) is a key biophysical parameter used to determine foliage cover and crop growth in environmental studies. Smartphones are nowadays ubiquitous sensor devices with high computational power, moderate cost, and high-quality sensors. A smartphone app, called PocketLAI, was recently presented and tested for acquiring ground LAI estimates. In this letter, we explore the use of state-of-the-art nonlinear Gaussian process regression (GPR) to derive spatially explicit LAI estimates over rice using ground data from PocketLAI and Landsat 8 imagery. GPR has gained popularity in recent years because of its solid Bayesian foundation, which offers not only high accuracy but also confidence intervals for the retrievals. We show the first LAI maps obtained with ground data from a smartphone combined with advanced machine learning. This work compares the LAI predictions and the confidence intervals of the retrievals obtained with PocketLAI to those obtained with classical instruments, such as digital hemispherical photography (DHP) and the LI-COR LAI-2000. This letter shows that all three instruments yield comparable results, but PocketLAI is far cheaper. The proposed methodology hence opens a wide range of possible applications at moderate cost.
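The workflow above amounts to training a GP regressor on band reflectances paired with ground LAI readings and then applying it pixel-wise, reporting the predictive mean together with a confidence interval. The following is a minimal sketch of that idea with scikit-learn; the synthetic band values, LAI targets, and kernel settings are placeholders standing in for PocketLAI ground data and Landsat 8 reflectances, not the authors' actual data or model configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(42)

# Synthetic training set: reflectance in 4 Landsat-like bands -> ground LAI
X_train = rng.uniform(0.0, 0.6, size=(60, 4))            # stand-in reflectances
y_train = (3.0 * X_train[:, 3] - 1.5 * X_train[:, 2]     # stand-in LAI readings
           + 2.0 + 0.1 * rng.standard_normal(60))

# Anisotropic RBF kernel plus a learned noise term
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2] * 4) + WhiteKernel(0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# "Image" pixels to map: each row is one pixel's band vector
X_pixels = rng.uniform(0.0, 0.6, size=(5, 4))
lai_mean, lai_std = gpr.predict(X_pixels, return_std=True)

# Report the retrieval and its 95% confidence interval per pixel
for m, s in zip(lai_mean, lai_std):
    print(f"LAI = {m:.2f}  (95% CI: {m - 1.96 * s:.2f} .. {m + 1.96 * s:.2f})")
```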


Related Research

Earth observation from satellite sensor data poses challenging problems, where machine learning is currently a key player. In recent years, Gaussian Process (GP) regression has excelled in biophysical parameter estimation tasks from airborne and satellite observations. GP regression is based on solid Bayesian statistics and generally yields efficient and accurate parameter estimates. However, GPs are typically used for inverse modeling based on concurrent observations and in situ measurements only. Very often, though, a forward model encoding the well-understood physical relations between the state vector and the radiance observations is available and could be used to improve predictions and understanding. In this work, we review three GP models that respect and learn the physics of the underlying processes in the context of both forward and inverse modeling. After reviewing the traditional application of GPs for parameter retrieval, we introduce a Joint GP (JGP) model that combines in situ measurements and simulated data in a single GP model. Then, we present a latent force model (LFM) for GP modeling that encodes ordinary differential equations to blend data-driven modeling and the physical constraints of the system's governing equations. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Finally, we present an Automatic Gaussian Process Emulator (AGAPE) that approximates the forward physical model using concepts from Bayesian optimization and, at the same time, builds an optimally compact look-up table for inversion. We give empirical evidence of the performance of these models through illustrative examples of vegetation monitoring and atmospheric modeling.
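As a rough illustration of the Joint GP idea (combining a few trusted in situ samples with many noisier simulated samples in a single model), the sketch below reuses scikit-learn's per-sample noise term `alpha` as a simple stand-in for the JGP source weighting; the data, noise levels, and kernel are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Few, trusted in situ samples and many, noisier simulated samples (toy data)
X_insitu = rng.uniform(0, 1, size=(30, 3))
y_insitu = X_insitu.sum(axis=1) + 0.05 * rng.standard_normal(30)
X_sim = rng.uniform(0, 1, size=(300, 3))
y_sim = X_sim.sum(axis=1) + 0.30 * rng.standard_normal(300)

X = np.vstack([X_insitu, X_sim])
y = np.concatenate([y_insitu, y_sim])

# Larger per-sample noise variance on simulated rows -> they pull the fit less
alpha = np.concatenate([np.full(len(X_insitu), 0.05 ** 2),
                        np.full(len(X_sim), 0.30 ** 2)])

jgp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=alpha).fit(X, y)
mean, std = jgp.predict(rng.uniform(0, 1, size=(5, 3)), return_std=True)
print(mean, std)
```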
We consider the problem of optimizing a vector-valued objective function $\boldsymbol{f}$ sampled from a Gaussian Process (GP) whose index set is a well-behaved, compact metric space $({\cal X},d)$ of designs. We assume that $\boldsymbol{f}$ is not known beforehand and that evaluating $\boldsymbol{f}$ at design $x$ results in a noisy observation of $\boldsymbol{f}(x)$. Since identifying the Pareto optimal designs via exhaustive search is infeasible when the cardinality of ${\cal X}$ is large, we propose an algorithm, called Adaptive $\boldsymbol{\epsilon}$-PAL, that exploits the smoothness of the GP-sampled function and the structure of $({\cal X},d)$ to learn fast. In essence, Adaptive $\boldsymbol{\epsilon}$-PAL employs a tree-based adaptive discretization technique to identify an $\boldsymbol{\epsilon}$-accurate Pareto set of designs in as few evaluations as possible. We provide both information-type and metric dimension-type bounds on the sample complexity of $\boldsymbol{\epsilon}$-accurate Pareto set identification. We also experimentally show that our algorithm outperforms other Pareto set identification methods on several benchmark datasets.
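The adaptive, tree-based discretization is the heart of the method, but the confidence-rectangle filtering it builds on can be sketched compactly. The snippet below is a simplified, non-adaptive stand-in (not the authors' Adaptive $\boldsymbol{\epsilon}$-PAL): it fits one GP per objective over a finite toy design set and discards any design whose optimistic outcome is $\epsilon$-dominated by another design's pessimistic outcome; the data, kernels, and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))                 # finite design set (toy)
f = np.column_stack([np.sin(3 * X[:, 0]),            # two toy objectives to maximize
                     np.cos(3 * X[:, 1])])

# Observe a small noisy sample and fit an independent GP per objective
idx = rng.choice(len(X), size=40, replace=False)
y = f[idx] + 0.05 * rng.standard_normal((40, 2))
gps = [GaussianProcessRegressor(kernel=RBF(0.3), alpha=0.05 ** 2).fit(X[idx], y[:, k])
       for k in range(2)]

mu = np.column_stack([gp.predict(X) for gp in gps])
sd = np.column_stack([gp.predict(X, return_std=True)[1] for gp in gps])
beta, eps = 2.0, 0.05
upper, lower = mu + beta * sd, mu - beta * sd        # per-design confidence rectangles

# Discard design i if some other design's pessimistic (lower) corner
# epsilon-dominates design i's optimistic (upper) corner in every objective
keep = [i for i in range(len(X))
        if not np.any(np.all(lower + eps >= upper[i], axis=1)
                      & (np.arange(len(X)) != i))]
print(f"{len(keep)} candidate epsilon-accurate Pareto designs out of {len(X)}")
```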
Earth observation (EO) by airborne and satellite remote sensing and in situ observations play a fundamental role in monitoring our planet. In the last decade, machine learning, and Gaussian processes (GPs) in particular, have attained outstanding results in the estimation of bio-geo-physical variables from the acquired images at local and global scales in a time-resolved manner. GPs provide not only accurate estimates but also principled uncertainty estimates for the predictions; they can easily accommodate multimodal data coming from different sensors and from multitemporal acquisitions, allow the introduction of physical knowledge, and enable a formal treatment of uncertainty quantification and error propagation. Despite great advances in forward and inverse modelling, GP models still have to face important challenges, which are reviewed in this perspective paper. GP models should evolve towards data-driven, physics-aware models that respect signal characteristics, are consistent with elementary laws of physics, and move from pure regression to observational causal inference.
In indoor positioning, signal fluctuation is highly location-dependent. However, signal uncertainty is a critical yet commonly overlooked dimension of the radio signal to be fingerprinted. This paper reviews the commonly used Gaussian Processes (GP) for probabilistic positioning and points out the pitfall of using GPs to model signal fingerprint uncertainty. It also proposes Deep Gaussian Processes (DGP) as a more informative alternative to address the issue. How DGP better measures uncertainty in signal fingerprinting is evaluated on both simulated and real-world datasets.
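For context, a plain GP fingerprinting baseline of the kind the paper critiques can be sketched as follows: one GP per access point models RSSI over 2-D location, and candidate positions are scored by the Gaussian likelihood of an observed fingerprint, with the location-dependent predictive standard deviation entering the score. The layout, signal model, and measurements below are toy assumptions; the paper's Deep GP alternative is not shown.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
locs = rng.uniform(0, 10, size=(80, 2))                      # survey points
aps = np.array([[1.0, 1.0], [9.0, 8.0], [2.0, 9.0]])         # 3 access points
rssi = np.stack([-40 - 20 * np.log10(np.linalg.norm(locs - ap, axis=1) + 1)
                 for ap in aps], axis=1) + 2 * rng.standard_normal((80, 3))

# One GP per access point: RSSI as a function of 2-D location
gps = [GaussianProcessRegressor(RBF(2.0) + WhiteKernel(4.0)).fit(locs, rssi[:, k])
       for k in range(3)]

# Candidate grid and an observed fingerprint from an unknown position
grid = np.array([[gx, gy] for gx in np.linspace(0, 10, 21)
                          for gy in np.linspace(0, 10, 21)])
observed = np.array([-62.0, -55.0, -70.0])                   # toy measurement

# Location-dependent predictive std enters the likelihood of each candidate
logp = np.zeros(len(grid))
for k, gp in enumerate(gps):
    mu, sd = gp.predict(grid, return_std=True)
    logp += norm.logpdf(observed[k], loc=mu, scale=sd)
print("estimated position:", grid[np.argmax(logp)])
```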
Linear models are regularly used for mapping cores to tiles in a chip. System-on-Chip (SoC) design requires the integration of functional units of varying sizes, but conventional models only account for identically sized cores. Linear models cannot express the varying areas of cores in SoCs directly and must rely on approximations. We propose using non-linear models: semidefinite programming (SDP) allows easy model definitions and achieves approximately 20% reduced area and up to 80% reduced white space. As the computational time is similar to that of linear models, these models can be applied in practice.
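As a hedged illustration of how a semidefinite program can encode cores of differing areas (a toy formulation written with CVXPY, not the paper's actual model), the sketch below lets each core trade width against height through a 2x2 PSD block enforcing w*h >= area, places the cores side by side, and minimizes the half-perimeter of the bounding box.

```python
import numpy as np
import cvxpy as cp

areas = [4.0, 9.0, 6.0]                 # required area of each core (toy values)
n = len(areas)

# One 2x2 PSD matrix per core: [[w, s], [s, h]] >> 0 with s = sqrt(area)
# implies w * h >= area, so each core can trade width against height.
M = [cp.Variable((2, 2), PSD=True) for _ in range(n)]
w = [M[i][0, 0] for i in range(n)]      # core widths
h = [M[i][1, 1] for i in range(n)]      # core heights
x = cp.Variable(n)                      # lower-left x coordinates
W, H = cp.Variable(), cp.Variable()     # bounding-box dimensions

cons = []
for i, A in enumerate(areas):
    cons += [M[i][0, 1] == np.sqrt(A),  # fix the off-diagonal to sqrt(area)
             x[i] >= 0,
             x[i] + w[i] <= W,
             h[i] <= H]

# Fixed left-to-right ordering rules out overlaps; a real mapper would
# also search over relative placements.
for i in range(n - 1):
    cons.append(x[i] + w[i] <= x[i + 1])

# Minimizing the half-perimeter keeps the problem convex (minimizing W*H would not)
prob = cp.Problem(cp.Minimize(W + H), cons)
prob.solve(solver=cp.SCS)
print("bounding box:", W.value, "x", H.value)
for i in range(n):
    print(f"core {i}: {w[i].value:.2f} x {h[i].value:.2f} at x={x[i].value:.2f}")
```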
