
RockGPT: Reconstructing three-dimensional digital rocks from single two-dimensional slice from the perspective of video generation

Added by Dongxiao Zhang
Publication date: 2021
Language: English





Random reconstruction of three-dimensional (3D) digital rocks from two-dimensional (2D) slices is crucial for elucidating the microstructure of rocks and its effects on pore-scale flow in terms of numerical modeling, since massive samples are usually required to handle intrinsic uncertainties. Despite remarkable advances achieved by traditional process-based methods, statistical approaches, and recently popular deep-learning-based models, few works have focused on producing several kinds of rocks with one trained model while allowing the reconstructed samples to satisfy given properties, such as porosity. To fill this gap, we propose a new framework, named RockGPT, which is composed of VQ-VAE and conditional GPT, to synthesize 3D samples based on a single 2D slice from the perspective of video generation. The VQ-VAE is utilized to compress the high-dimensional input video, i.e., the sequence of continuous rock slices, to discrete latent codes and reconstruct them. In order to obtain diverse reconstructions, the discrete latent codes are modeled using conditional GPT in an autoregressive manner, while incorporating conditional information from a given slice, rock type, and porosity. We conduct two experiments on five kinds of rocks, and the results demonstrate that RockGPT can produce different kinds of rocks with the same model, and that the reconstructed samples can successfully meet specified porosities. In a broader sense, through leveraging the proposed conditioning scheme, RockGPT constitutes an effective way to build a general model that produces multiple kinds of rocks simultaneously while also satisfying user-defined properties.
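The vector-quantization step described above can be sketched in a few lines: each continuous latent vector from the encoder is replaced by the index of its nearest codebook entry, and those indices are the discrete tokens the conditional GPT models autoregressively. The codebook size and dimensions below are arbitrary toy values, not those used in the paper.

```python
import numpy as np

# Toy illustration of the VQ (vector-quantization) step: continuous
# encoder latents are mapped to discrete codebook indices.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 codes, 4-dim embeddings (arbitrary)
latents = rng.normal(size=(5, 4))    # 5 latent vectors from a hypothetical encoder

# Nearest-neighbour assignment: argmin of squared Euclidean distance.
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
tokens = d2.argmin(axis=1)           # discrete latent codes for the GPT

# Reconstruction path used by the decoder: look the codes back up.
quantized = codebook[tokens]
print(tokens.shape, quantized.shape)
```

In the full framework, sequences of such tokens (one grid per slice) are generated autoregressively, conditioned on the given slice, rock type, and target porosity, then decoded back to a 3D volume.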



Related research

Star formation has long been known to be an inefficient process, in the sense that only a small fraction $\epsilon_{\rm ff}$ of the mass of any given gas cloud is converted to stars per cloud free-fall time. However, developing a successful theory of star formation will require measurements of both the mean value of $\epsilon_{\rm ff}$ and its scatter from one molecular cloud to another. Because $\epsilon_{\rm ff}$ is measured relative to the free-fall time, such measurements require accurate determinations of cloud volume densities. Efforts to measure the volume density from two-dimensional projected data, however, have thus far relied on treating molecular clouds as simple uniform spheres, while their real shapes are likely filamentary and their density distributions far from uniform. The resulting uncertainty in the true volume density is likely one of the major sources of error in observational estimates of $\epsilon_{\rm ff}$. In this paper, we use a suite of simulations of turbulent, magnetized, radiative, self-gravitating star-forming clouds to examine whether it is possible to obtain more accurate volume density estimates and thereby reduce this error. We create mock observations from simulations, and show that current analysis methods relying on the spherical assumption likely yield ~ 0.26 dex underestimations and ~ 0.51 dex errors in volume density estimates, corresponding to a ~ 0.13 dex overestimation and a ~ 0.25 dex scatter in $\epsilon_{\rm ff}$, comparable to the scatter in observed cloud samples. We build a predictive model that uses information accessible in two-dimensional measurements -- most significantly the Gini coefficient of the surface density distribution -- to estimate volume density with ~ 0.3 dex less scatter. We test our method on a recent observation of the Ophiuchus cloud, and show that it successfully reduces the $\epsilon_{\rm ff}$ scatter.
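The key 2D statistic named above, the Gini coefficient of the surface density distribution, measures how unevenly the projected mass is distributed across pixels (0 for a uniform map, approaching 1 when mass is concentrated in few pixels). The following is a standard Gini implementation, not the authors' exact pipeline.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a flattened array of non-negative values."""
    x = np.sort(np.asarray(x, dtype=float).ravel())
    n = x.size
    # Equivalent closed form: G = sum_i (2i - n - 1) x_i / (n * sum_i x_i)
    i = np.arange(1, n + 1)
    return ((2 * i - n - 1) * x).sum() / (n * x.sum())

uniform = np.ones(1000)                      # perfectly even "map"
concentrated = np.zeros(1000)
concentrated[0] = 1.0                        # all mass in one pixel
print(gini(uniform), gini(concentrated))     # ~0 and ~1
```

Applied to a surface density map, a higher Gini value indicates stronger central concentration, which is the information the predictive model exploits to correct the spherical-assumption bias.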
Inspired by speech recognition, recent state-of-the-art algorithms mostly consider scene text recognition as a sequence prediction problem. Though achieving excellent performance, these methods usually neglect an important fact: text in images is actually distributed in two-dimensional space. This nature is quite different from that of speech, which is essentially a one-dimensional signal. In principle, directly compressing features of text into a one-dimensional form may lose useful information and introduce extra noise. In this paper, we approach scene text recognition from a two-dimensional perspective. A simple yet effective model, called Character Attention Fully Convolutional Network (CA-FCN), is devised for recognizing text of arbitrary shapes. Scene text recognition is realized with a semantic segmentation network, where an attention mechanism for characters is adopted. Combined with a word formation module, CA-FCN can simultaneously recognize the script and predict the position of each character. Experiments demonstrate that the proposed algorithm outperforms previous methods on both regular and irregular text datasets. Moreover, it is proven to be more robust to imprecise localizations in the text detection phase, which are very common in practice.
Passive non-line-of-sight imaging methods are often faster and stealthier than their active counterparts, requiring less complex and costly equipment. However, many of these methods exploit motion of an occluder or the hidden scene, or require knowledge or calibration of complicated occluders. The edge of a wall is a known and ubiquitous occluding structure that may be used as an aperture to image the region hidden behind it. Light from around the corner is cast onto the floor, forming a fan-like penumbra rather than a sharp shadow. Subtle variations in the penumbra contain a remarkable amount of information about the hidden scene. Previous work has leveraged the vertical nature of the edge to demonstrate 1D (in angle measured around the corner) reconstructions of moving and stationary hidden scenery from as little as a single photograph of the penumbra. In this work, we introduce a second reconstruction dimension: range measured from the edge. We derive a new forward model, accounting for radial falloff, and propose two inversion algorithms to form 2D reconstructions from a single photograph of the penumbra. The performance of both algorithms is demonstrated on experimental data corresponding to several different hidden scene configurations. A Cramér-Rao bound analysis further demonstrates the feasibility (and utility) of the 2D corner camera.
The continuous ferromagnetic-paramagnetic phase transition in the two-dimensional Ising model has already been extensively studied by conventional canonical statistical analysis in the past. We use the recently developed generalized microcanonical inflection-point analysis method to investigate the least-sensitive inflection points of the microcanonical entropy and its derivatives to identify transition signals. Surprisingly, this method reveals that there are potentially two additional transitions for the Ising system besides the critical transition.
Liming Wu, Shuo Han, Alain Chen (2021)
Robust and accurate nuclei centroid detection is important for the understanding of biological structures in fluorescence microscopy images. Existing automated nuclei localization methods face three main challenges: (1) most object detection methods work only on 2D images and are difficult to extend to 3D volumes; (2) segmentation-based models can be used on 3D volumes, but they are computationally expensive for large microscopy volumes and have difficulty distinguishing different instances of objects; (3) hand-annotated ground truth is limited for 3D microscopy volumes. To address these issues, we present a scalable approach for nuclei centroid detection in 3D microscopy volumes. We describe RCNN-SliceNet, which detects 2D nuclei centroids for each slice of the volume from different directions, and 3D agglomerative hierarchical clustering (AHC) is used to estimate the 3D centroids of nuclei in a volume. The model was trained with synthetic microscopy data generated using Spatially Constrained Cycle-Consistent Adversarial Networks (SpCycleGAN) and tested on different types of real 3D microscopy data. Extensive experimental results demonstrate that our proposed method can accurately count and detect the nuclei centroids in a 3D microscopy volume.
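The aggregation step described above, merging per-slice 2D detections into 3D nuclei centroids, can be sketched as follows. The paper uses 3D agglomerative hierarchical clustering (AHC); here a simple single-linkage merge with a fixed distance threshold stands in for it, with made-up detection coordinates.

```python
import numpy as np

def merge_detections(points, threshold=2.0):
    """Group (x, y, z) detections closer than `threshold` (single linkage)
    and return one centroid per group (a stand-in for full AHC)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    parent = list(range(n))

    def find(a):  # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Union every pair of detections closer than the threshold.
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(points[i])
    # Each nucleus centroid is the mean of its cluster's detections.
    return np.array([np.mean(c, axis=0) for c in clusters.values()])

# Two detections of the same nucleus on adjacent slices, plus one far away.
dets = [(10.0, 10.0, 1.0), (10.5, 10.2, 2.0), (30.0, 30.0, 5.0)]
cents = merge_detections(dets)
print(len(cents))  # 2 nuclei
```

A full AHC implementation would build the complete linkage hierarchy and cut it at a chosen level, but the threshold merge above captures the core idea: nearby slice-wise detections collapse into a single 3D centroid.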
