Automatic 3D neuron reconstruction is critical for analysing the morphology and functionality of neurons in brain circuit activities. However, the performance of existing tracing algorithms is hindered by low image quality. Recently, a series of deep-learning-based segmentation methods have been proposed to improve the quality of raw 3D optical image stacks by removing noise and restoring neuronal structures from low-contrast backgrounds. Owing to the variety of neuron morphologies and the lack of large neuron datasets, most current neuron segmentation models rely on adding complex, specially designed submodules to a base architecture with the aim of encoding better feature representations. Though successful, this places an extra computational burden on inference. Therefore, rather than modifying the base network, we shift our focus to the dataset itself. The encoder-decoder backbone used in most neuron segmentation models attends only to intra-volume voxels to learn structural features of neurons, and neglects the intrinsic semantic features shared by voxels of the same category across different volumes, which are also important for expressive representation learning. Hence, to better utilise the scarce dataset, we propose to explicitly exploit such intrinsic voxel features through a novel voxel-level cross-volume representation learning paradigm built on an encoder-decoder segmentation model. Our method introduces no extra cost during inference. Evaluated on 42 3D neuron images from the BigNeuron project, the proposed method is shown to improve the learning ability of the original segmentation model and further enhance reconstruction performance.
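The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of one plausible form of a voxel-level cross-volume objective: embeddings of same-class voxels drawn from two different volumes are pulled together, and embeddings of different classes are pushed apart. The function name cross_volume_loss, the InfoNCE-style formulation, and all hyperparameters are illustrative assumptions rather than the authors' exact method; because such a term would only enter training, inference would stay unchanged, consistent with the stated "no extra cost during inference".

    import torch
    import torch.nn.functional as F

    def cross_volume_loss(feats_a, labels_a, feats_b, labels_b,
                          num_samples=256, temperature=0.1):
        # feats_*:  (N, C) per-voxel decoder features from two different volumes
        # labels_*: (N,)   voxel class labels (e.g. 0 = background, 1 = neuron)
        # Subsample voxels from each volume to keep the pairwise term cheap.
        idx_a = torch.randperm(feats_a.size(0))[:num_samples]
        idx_b = torch.randperm(feats_b.size(0))[:num_samples]
        za, ya = F.normalize(feats_a[idx_a], dim=1), labels_a[idx_a]
        zb, yb = F.normalize(feats_b[idx_b], dim=1), labels_b[idx_b]

        # Cosine similarity between every cross-volume voxel pair.
        sim = za @ zb.t() / temperature                      # (Na, Nb)
        pos = (ya.unsqueeze(1) == yb.unsqueeze(0)).float()   # same-class mask

        # InfoNCE-style term: for each anchor voxel in volume A, same-class
        # voxels in volume B act as positives, all others as negatives.
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
        return loss.mean()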
Scene understanding is a critical problem in computer vision. In this paper, we propose a 3D point-based scene graph generation ($\mathbf{SGG_{point}}$) framework to effectively bridge perception and reasoning and achieve scene understanding via three sequential stages, namely scene graph construction, reasoning, and inference. Within the reasoning stage, an EDGE-oriented Graph Convolutional Network ($\texttt{EdgeGCN}$) is created to exploit multi-dimensional edge features for explicit relationship modeling, together with two associated twinning interaction mechanisms between nodes and edges for the independent evolution of scene graph representations. Overall, our integrated $\mathbf{SGG_{point}}$ framework is established to seek and infer scene structures of interest from both real-world and synthetic 3D point-based scenes. Our experimental results show promising edge-oriented reasoning effects on scene graph generation. We also demonstrate the advantage of our method on several traditional graph representation learning benchmark datasets, including node-wise classification on citation networks and whole-graph recognition problems for molecular analysis.
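For illustration only, the sketch below shows a generic message-passing layer in which node and edge representations evolve jointly, which is the broad idea behind edge-oriented reasoning; it is not the paper's actual EdgeGCN or its twinning interaction mechanisms, and the class name NodeEdgeLayer, the update rules, and the dimensions are hypothetical.

    import torch
    import torch.nn as nn

    class NodeEdgeLayer(nn.Module):
        def __init__(self, node_dim, edge_dim):
            super().__init__()
            self.node_update = nn.Linear(node_dim + edge_dim, node_dim)
            self.edge_update = nn.Linear(edge_dim + 2 * node_dim, edge_dim)

        def forward(self, x, e, edge_index):
            # x: (N, node_dim) node features, e: (E, edge_dim) edge features,
            # edge_index: (2, E) source/target node indices for each edge.
            src, dst = edge_index
            # Node evolution: aggregate edge-conditioned messages from neighbours.
            msg = torch.cat([x[src], e], dim=1)
            agg = torch.zeros_like(x).index_add_(0, dst, self.node_update(msg))
            x = torch.relu(x + agg)
            # Edge evolution: refresh each edge from its two endpoint nodes.
            e = torch.relu(e + self.edge_update(torch.cat([e, x[src], x[dst]], dim=1)))
            return x, e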
Micro-expressions are reflections of people's true feelings and motives, which has attracted an increasing number of researchers to the study of automatic facial micro-expression recognition. The short detection window, the subtle facial muscle movements, and the limited training samples make micro-expression recognition challenging. To this end, we propose a novel Identity-aware and Capsule-Enhanced Generative Adversarial Network with graph-based reasoning (ICE-GAN), which introduces micro-expression synthesis as an auxiliary task to assist recognition. The generator produces synthetic faces with controllable micro-expressions and identity-aware features, whose long-range dependencies are captured by the graph reasoning module (GRM), while the discriminator detects image authenticity and expression classes. Our ICE-GAN was evaluated on the Micro-Expression Grand Challenge 2019 (MEGC2019) benchmark, achieving a significant improvement (12.9%) over the challenge winner and surpassing other state-of-the-art methods.
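As a rough illustration of the discriminator's dual role described above (judging authenticity while classifying the micro-expression), here is a minimal two-headed discriminator sketch; the layer sizes, class count, and name DualHeadDiscriminator are hypothetical and do not reproduce the paper's capsule-enhanced design or its graph reasoning module.

    import torch.nn as nn

    class DualHeadDiscriminator(nn.Module):
        def __init__(self, in_channels=1, num_classes=3, width=64):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(in_channels, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.real_fake = nn.Linear(width * 2, 1)           # image authenticity
            self.expr_cls = nn.Linear(width * 2, num_classes)  # micro-expression class

        def forward(self, img):
            h = self.backbone(img)
            return self.real_fake(h), self.expr_cls(h)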
In this work, thermal and vacuum fluctuations are predicted to be capable of generating a Casimir thrust force on a rotating chiral particle, which pushes or pulls the particle along the rotation axis. The Casimir thrust force arises from two origins: i) the rotation-induced symmetry breaking in the vacuum and thermal fluctuations, and ii) the chiral cross-coupling between electric and magnetic fields and dipoles, which can convert the vacuum spin angular momentum (SAM) into a vacuum force. Using the fluctuation-dissipation theorem (FDT), we derive analytical expressions for the vacuum thrust force in the dipolar approximation, and investigate the dependence of the force on rotation frequency, temperature, and the optical properties of the material. This work reveals a new mechanism for generating a vacuum force, opening a new way to exploit the zero-point energy of the vacuum.
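For orientation only: a commonly quoted symmetrized form of the FDT for the dipole fluctuations of a particle with polarizability tensor $\alpha_{ij}(\omega)$ at temperature $T$ is shown below. Normalization conventions (factors of $2\pi$ and the sign inside the delta function) vary across the literature, and this is not necessarily the form used by the authors; it is included only to indicate how dissipation and temperature enter such derivations.

$$\langle \hat{p}_i(\omega)\,\hat{p}_j(\omega') \rangle_{\mathrm{sym}} = \hbar\,\mathrm{Im}\!\left[\alpha_{ij}(\omega)\right]\coth\!\left(\frac{\hbar\omega}{2 k_B T}\right) 2\pi\,\delta(\omega+\omega')$$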
There are two different proposals for the momentum of light in a transparent dielectric of refractive index n: Minkowski's version nE/c and Abraham's version E/(nc), where E and c are the energy and the vacuum speed of light, respectively. Despite many tests and debates over nearly a century, the momentum of light in a transparent dielectric remains controversial. In this Letter, we report a direct observation of the inward push force exerted by the outgoing light on the end face of a free nanofiber taper. Our results clearly support the Abraham momentum. Our experiment also indicates an inward surface pressure on a dielectric exerted by the incident light, different from the commonly recognized pressure due to specular reflection. Such an inward surface pressure from the incident light may be useful for the precise design of laser-induced inertial-confinement fusion.
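In the notation of the abstract, the two candidate momenta and their ratio are

$$p_{\mathrm{M}} = \frac{nE}{c}, \qquad p_{\mathrm{A}} = \frac{E}{nc}, \qquad \frac{p_{\mathrm{M}}}{p_{\mathrm{A}}} = n^{2},$$

so for a typical glass with $n \approx 1.5$ the two predictions differ by more than a factor of two, which is what makes a direct mechanical measurement of the force on the fiber end face decisive.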