Widely discredited ideas nevertheless persist. Why do people fail to 'unlearn'? We study one explanation: beliefs are resistant to retractions (the revoking of earlier information). Our experimental design identifies unlearning -- i.e., updating from retractions -- and enables its comparison with learning from equivalent new information. Across different kinds of retractions -- for instance, those consistent or contradictory with the prior, or those occurring when prior beliefs are either extreme or moderate -- subjects do not fully unlearn from retractions and update less from them than from equivalent new information. This phenomenon is not explained by most of the well-studied violations of Bayesian updating, which yield differing predictions in our design. However, it is consistent with difficulties in conditional reasoning, which have been documented in other domains and circumstances.
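To make the Bayesian benchmark concrete, here is a minimal sketch (our illustration, not the paper's experimental design) of what full unlearning requires: a retraction should return the belief exactly to its pre-signal value, and an "equivalent" opposite signal targets that same posterior.

```python
# Toy benchmark for full Bayesian unlearning (our illustration,
# not the paper's design).

def posterior(prior, lik_if_true, lik_if_false):
    """Bayes' rule for a binary state."""
    num = lik_if_true * prior
    return num / (num + lik_if_false * (1 - prior))

prior = 0.5
# A signal asserting "state is true" with 80% reliability.
p_signal = posterior(prior, 0.8, 0.2)            # 0.80

# Retraction: the signal is revoked, so a Bayesian conditions on it
# being uninformative and returns exactly to the prior.
p_retraction = prior                             # 0.50

# Equivalent new information: an opposite signal whose likelihood
# ratio (0.2/0.8) exactly offsets the first; same target posterior.
p_new_info = posterior(p_signal, 0.2, 0.8)       # 0.50

print(p_signal, p_retraction, p_new_info)
```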
The coupling of large telescopes to astronomical instruments has historically been challenging due to the tension between instrument throughput and stability. Light from the telescope can either be injected wholesale into the instrument, maintaining high throughput at the cost of point-spread function (PSF) stability, or the time-varying components of the light can be filtered out with single-mode fibers (SMFs), maintaining instrument stability at the cost of light loss. Today, the field of astrophotonics provides a potential resolution to the throughput-stability tension in the form of the photonic lantern (PL): a tapered waveguide which can couple a time-varying and aberrated PSF into multiple diffraction-limited beams at an efficiency that greatly surpasses direct SMF injection. As a result, lantern-fed instruments retain the stability of SMF-fed instruments while increasing their throughput. To this end, we present a series of numerical simulations characterizing PL performance as a function of lantern geometry, wavelength, and wavefront error (WFE), aimed at guiding the design of future diffraction-limited spectrometers. These characterizations include a first look at the interaction between PLs and phase-induced amplitude apodization (PIAA) optics.
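As a rough illustration of the throughput side of this tension, the sketch below (a toy model with assumed grid, aperture, and mode sizes, not the paper's simulations) estimates direct SMF injection efficiency as the overlap integral between an aberrated focal-plane field and a Gaussian fiber mode. Coupling degrades quickly with wavefront error, which is the loss a photonic lantern is designed to recover.

```python
import numpy as np

# Toy estimate of direct SMF coupling under wavefront error (WFE);
# all sizes below are illustrative guesses.
N, a = 256, 0.25                                # grid size, pupil radius
x = (np.arange(N) - N // 2) / (N // 2)
X, Y = np.meshgrid(x, x)
pupil = (np.hypot(X, Y) <= a).astype(float)     # circular aperture

u = np.arange(N) - N // 2
U, V = np.meshgrid(u, u)
mode = np.exp(-(U**2 + V**2) / (2 * 2.0**2))    # Gaussian SMF mode, ~core-sized

def coupling(wfe):
    """Overlap of the aberrated focal-plane field with the fiber mode."""
    phase = wfe * 2 * X * Y / a**2              # astigmatism-like, peak ~wfe rad
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(
        pupil * np.exp(1j * phase))))
    num = abs((np.conj(field) * mode).sum())**2
    den = (abs(field)**2).sum() * (abs(mode)**2).sum()
    return num / den

for wfe in (0.0, 0.5, 1.0, 2.0):                # radians of aberration
    print(f"WFE {wfe:.1f} rad -> coupling {coupling(wfe):.2f}")
```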
Jonathan Libgober (2021)
After observing the outcome of a Blackwell experiment, a Bayesian decisionmaker can form (a) posterior beliefs over the state, as well as (b) posterior beliefs that she would observe any given signal (assuming an independent draw from the same experiment). I call the latter her contingent hypothetical beliefs. I show geometrically how contingent hypothetical beliefs relate to information structures. Specifically, the information structure can (generically) be derived by regressing contingent hypothetical beliefs on posterior beliefs over the state. Her prior is the unit eigenvector of a matrix determined from her posterior beliefs over the state and her contingent hypothetical beliefs. Thus, all aspects of a decisionmaker's information acquisition problem can be determined using ex-post data (i.e., beliefs after having received signals). I compare my results to similar ones obtained in cases where information is modeled deterministically; the focus on single-agent stochastic information distinguishes my work.
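A minimal numerical sketch of this logic in a finite example (our notation and construction, assuming a binary state and two signals; the paper's general treatment is geometric):

```python
import numpy as np

# Experiment Pi[s, t] = P(signal s | state t); prior over two states.
Pi = np.array([[0.7, 0.2],
               [0.3, 0.8]])
prior = np.array([0.4, 0.6])

P_s = Pi @ prior                         # signal marginals
Q = (Pi * prior) / P_s[:, None]          # rows: posterior over states given s
H = Q @ Pi.T                             # contingent hypothetical beliefs:
                                         # H[s, s'] = P(next signal s' | saw s)

# "Regressing" H on Q recovers the experiment (Q is square and
# invertible here, the generic case):
Pi_rec = np.linalg.solve(Q, H).T
print(np.allclose(Pi_rec, Pi))           # True

# The signal marginal is the unit left eigenvector of H (eigenvalue 1);
# mixing the posteriors with those weights recovers the prior.
w, V = np.linalg.eig(H.T)
p_sig = np.real(V[:, np.argmin(np.abs(w - 1))])
p_sig = p_sig / p_sig.sum()
prior_rec = Q.T @ p_sig
print(np.allclose(prior_rec, prior))     # True
```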
Suspended microparticles subjected to AC electrical fields collectively organize into band patterns perpendicular to the field direction. The bands further develop into zigzag-shaped patterns, in which the particles are observed to circulate. We demonstrate that this phenomenon can be observed quite generically by generating such patterns with a wide range of particles: silica spheres, fatty acid, oil, and coacervate droplets, bacteria, and ground coffee. We show that the phenomenon can be well understood in terms of second-order electrokinetic flow, which correctly predicts the hydrodynamic interactions required for the pattern formation process. Brownian particle simulations based on these interactions accurately recapitulate all of the observed pattern formation and symmetry-breaking events, starting from a homogeneous particle suspension. The emergence of the formed patterns can be predicted quantitatively within a parameter-free theory.
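The sketch below shows the generic shape of such a Brownian (overdamped Langevin) particle simulation; the pairwise drift here is a placeholder, not the second-order electrokinetic interaction derived in the paper, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt, D = 200, 2000, 1e-3, 0.05     # particles, steps, step, diffusivity
pos = rng.uniform(0, 1, size=(n, 2))         # unit box, periodic boundaries

def drift(pos):
    """Placeholder pairwise drift standing in for the electrokinetic
    flow field; NOT the interaction derived in the paper."""
    d = pos[None, :, :] - pos[:, None, :]    # d[i, j] points from i to j
    d -= np.round(d)                          # periodic minimum image
    r2 = (d**2).sum(-1) + 1e-4 + np.eye(n)    # softened; no self-interaction
    f = (d[..., 0] / r2) * np.exp(-r2 / 0.01) # short-ranged, along the field (x)
    return np.stack([f.sum(1), np.zeros(n)], axis=-1)

for _ in range(steps):                        # Euler-Maruyama integration
    pos += drift(pos) * dt + np.sqrt(2 * D * dt) * rng.normal(size=pos.shape)
    pos %= 1.0
```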
Non-equilibrium conditions must have been crucial for the assembly of the first informational polymers of early life, by supporting their formation and continuous enrichment in a long-lasting environment. Here, we explore how gas bubbles in water subjected to a thermal gradient, a likely scenario within crustal mafic rocks on the early Earth, drive a complex, continuous enrichment of prebiotic molecules. RNA precursors, monomers, active ribozymes, oligonucleotides and lipids are shown to (1) cycle between dry and wet states, enabling the central step of RNA phosphorylation, (2) accumulate at the gas-water interface to drastically increase ribozymatic activity, (3) condense into hydrogels, (4) form pure crystals and (5) encapsulate into protecting vesicle aggregates that subsequently undergo fission. These effects occur within less than 30 min. The findings unite, in one location, the physical conditions that were crucial for the chemical emergence of biopolymers. They suggest that heated microbubbles could have hosted the first cycles of molecular evolution.
Middle echoes, each covering one or a few corresponding points, form a specific type of 3D point cloud acquired by a multi-echo laser scanner. In this paper, we propose a novel approach for automatic segmentation of trees that leverages middle-echo information from LiDAR point clouds. First, a convolutional classification method identifies the points reflected by middle echoes among all point clouds, distinguishing them from the first and last echoes; this allows the crown positions of the trees to be detected quickly within the huge number of points. Second, to accurately extract trees from all point clouds, we propose a 3D deep learning network, PointNLM, to semantically segment tree crowns. PointNLM captures the long-range relationships between points via a non-local branch and extracts high-level features via max-pooling applied to unordered points. The whole framework is evaluated using the Semantic3D reduced test set, where the IoU of tree point-cloud segmentation reaches 0.864. In addition, the semantic segmentation network is tested on the Paris-Lille-3D dataset, where its average IoU outperforms those of several other popular methods. The experimental results indicate that the proposed algorithm provides an excellent solution for vegetation segmentation from LiDAR point clouds.
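As a generic sketch of the two ingredients named here, a non-local (self-attention) block over per-point features followed by order-invariant max-pooling might look as follows; the actual PointNLM architecture is not reproduced, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Generic non-local (self-attention) block over per-point features;
    a stand-in for the paper's non-local branch, details assumed."""
    def __init__(self, c):
        super().__init__()
        self.q = nn.Conv1d(c, c // 2, 1)   # query projection
        self.k = nn.Conv1d(c, c // 2, 1)   # key projection
        self.v = nn.Conv1d(c, c, 1)        # value projection

    def forward(self, x):                  # x: (B, C, N) point features
        attn = torch.softmax(self.q(x).transpose(1, 2) @ self.k(x), dim=-1)
        return x + self.v(x) @ attn.transpose(1, 2)  # residual long-range mixing

B, C, N = 2, 64, 1024
feats = torch.randn(B, C, N)               # toy per-point features
out = NonLocalBlock(C)(feats)
global_feat = out.max(dim=2).values        # max-pooling over unordered points
print(global_feat.shape)                   # torch.Size([2, 64])
```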
In this paper, we address the hyperspectral image (HSI) classification task with a generative adversarial network and conditional random field (GAN-CRF)-based framework, which integrates semi-supervised deep learning and a probabilistic graphical model, and we make three contributions. First, we design four types of convolutional and transposed convolutional layers that consider the characteristics of HSIs to help extract discriminative features from limited numbers of labeled HSI samples. Second, we construct semi-supervised GANs to alleviate the shortage of training samples by assigning labels to unlabeled samples and implicitly reconstructing the real HSI data distribution through adversarial training. Third, we build dense conditional random fields (CRFs) on top of random variables that are initialized to the softmax predictions of the trained GANs and conditioned on the HSIs to refine the classification maps. This semi-supervised framework leverages the merits of discriminative and generative models through a game-theoretical approach. Moreover, even though we used very small numbers of labeled training samples from the two most challenging and extensively studied datasets, the experimental results demonstrate that spectral-spatial GAN-CRF (SS-GAN-CRF) models achieved top-ranking accuracy for semi-supervised HSI classification.
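The second contribution follows the standard semi-supervised GAN recipe in which the discriminator has K real classes plus one "fake" class; below is a generic sketch of that discriminator loss (a Salimans-style simplification, not necessarily the paper's exact objective).

```python
import torch
import torch.nn.functional as F

def discriminator_loss(logits_lab, labels, logits_unl, logits_fake):
    """Semi-supervised (K+1)-class GAN discriminator loss; a generic
    sketch, not the paper's exact objective. The last logit column
    is the 'fake' class."""
    sup = F.cross_entropy(logits_lab[:, :-1], labels)     # K real classes
    p_fake_unl = F.softmax(logits_unl, dim=1)[:, -1]      # unlabeled real
    p_fake_gen = F.softmax(logits_fake, dim=1)[:, -1]     # generated
    unsup = (-torch.log(1 - p_fake_unl + 1e-8).mean()     # real -> not fake
             - torch.log(p_fake_gen + 1e-8).mean())       # fake -> fake
    return sup + unsup

K, B = 9, 8                                # e.g., 9 land-cover classes
loss = discriminator_loss(torch.randn(B, K + 1),
                          torch.randint(0, K, (B,)),
                          torch.randn(B, K + 1),
                          torch.randn(B, K + 1))
```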
This paper deals with geometric multi-model fitting from noisy, unstructured point set data (e.g., laser-scanned point clouds). We formulate the multi-model fitting problem as a sequential decision-making process and then use a deep reinforcement learning algorithm to learn the optimal decisions towards the best fitting result. We compare our method against the state-of-the-art on simulated data; the results demonstrate that our approach significantly reduces the number of fitting iterations.
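One possible way to frame fitting as a sequential decision process is sketched below (line fitting as an example): each action selects a candidate model, and the reward is the resulting drop in total point-to-model residual. All details here are our assumptions, not the paper's formulation.

```python
import numpy as np

class FittingEnv:
    """Hypothetical MDP framing of sequential multi-model fitting."""
    def __init__(self, points, candidates):
        self.pts = points                        # (N, 2) point set
        self.cand = candidates                   # (M, 3) lines ax + by + c = 0
        self.selected = []

    def _residuals(self):
        if not self.selected:
            return np.full(len(self.pts), 10.0)  # capped "no model" cost
        L = self.cand[self.selected]             # (k, 3) selected lines
        d = np.abs(self.pts @ L[:, :2].T + L[:, 2]) / \
            np.linalg.norm(L[:, :2], axis=1)
        return d.min(axis=1)                     # distance to nearest model

    def step(self, action):
        before = self._residuals().sum()
        self.selected.append(action)
        after = self._residuals().sum()
        return self._residuals(), before - after  # observation, reward

env = FittingEnv(np.random.rand(100, 2),
                 np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -0.5]]))
state, reward = env.step(0)
```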
High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify land-cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GANs. Experimental results obtained on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy using a small amount of training data.
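A toy version of the CRF refinement step: mean-field updates in which each class-probability map is spatially smoothed and fed back as a pairwise message. This is a simplification of dense-CRF inference with a spatial kernel only; the kernel width and weight are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mean_field_refine(prob, iters=5, sigma=2.0, w=1.0):
    """Refine per-pixel class probabilities prob (H, W, K) with a toy
    mean-field CRF: a Gaussian spatial kernel stands in for the dense
    pairwise potentials; sigma and w are assumed values."""
    unary = -np.log(prob + 1e-8)
    q = prob.copy()
    for _ in range(iters):
        # message passing: smooth each class map spatially
        msg = np.stack([gaussian_filter(q[..., k], sigma)
                        for k in range(q.shape[-1])], axis=-1)
        logits = -unary + w * msg          # agree with smoothed neighbors
        e = np.exp(logits - logits.max(-1, keepdims=True))
        q = e / e.sum(-1, keepdims=True)   # renormalize per pixel
    return q

# usage: refine noisy softmax maps from a classifier
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(6), size=(64, 64))   # (H, W, K) toy predictions
refined = mean_field_refine(p)
```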
Geometric model fitting is a fundamental task in computer graphics and computer vision. However, most geometric model fitting methods are unable to fit an arbitrary geometric model (e.g., a surface with holes) to incomplete data, because the similarity metrics used in these methods cannot measure the rigid partial similarity between arbitrary models. This paper hence proposes a novel rigid geometric similarity metric, which is able to measure both the full similarity and the partial similarity between arbitrary geometric models. The proposed metric enables us to perform partial procedural geometric model fitting (PPGMF). The task of PPGMF is to search a procedural geometric model space for the model rigidly similar to a query non-complete point set. Models in the procedural model space are generated according to a set of parametric modeling rules, and a typical query is a point cloud. PPGMF is very useful because it can fit arbitrary geometric models to non-complete (incomplete, over-complete or hybrid-complete) point cloud data; most laser scanning data, for example, is non-complete due to occlusion. Our PPGMF method uses a Markov chain Monte Carlo technique to optimize the proposed similarity metric over the model space. To accelerate the optimization process, the method also employs a novel coarse-to-fine model dividing strategy to reject dissimilar models in advance. Our method has been demonstrated on a variety of geometric models and non-complete data. Experimental results show that the PPGMF method based on the proposed metric is able to fit non-complete data, while methods based on other metrics are not, and that our method can be accelerated several-fold via early rejection.
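The core optimization loop of such an approach can be sketched as a Metropolis-Hastings search over the model parameters, maximizing a user-supplied similarity score; this is a generic sketch of the MCMC step only (the paper's metric and its coarse-to-fine early-rejection strategy are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_fit(score, theta0, steps=5000, sigma=0.1, temp=0.05):
    """Metropolis-Hastings search over a procedural-model parameter
    vector theta, maximizing score(theta); a generic sketch of the
    optimization loop, not the paper's metric."""
    theta, s = np.asarray(theta0, float), score(theta0)
    best = (theta.copy(), s)
    for _ in range(steps):
        prop = theta + rng.normal(scale=sigma, size=theta.shape)
        s_prop = score(prop)
        # accept uphill moves always, downhill with Boltzmann probability
        if s_prop >= s or rng.random() < np.exp((s_prop - s) / temp):
            theta, s = prop, s_prop
            if s > best[1]:
                best = (theta.copy(), s)
    return best

# toy usage: recover the radius of a circle from noisy samples
pts = 2.0 * np.stack([np.cos(t := rng.uniform(0, 2 * np.pi, 200)),
                      np.sin(t)], 1) + rng.normal(0, 0.05, (200, 2))
score = lambda th: -np.mean(np.abs(np.linalg.norm(pts, axis=1) - th[0]))
print(mh_fit(score, [1.0])[0])             # ~[2.0]
```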