
Machine Vision System for 3D Plant Phenotyping

Published by Ayan Chaudhury
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





Machine vision for plant phenotyping is an emerging research area for high-throughput agriculture and crop science applications. Since 2D-based approaches have inherent limitations, 3D plant analysis is becoming the state of the art for current phenotyping technologies. We present an automated system for analyzing plant growth in indoor conditions. A gantry robot system is used to perform scanning tasks in an automated manner throughout the lifetime of the plant. A 3D laser scanner mounted as the robot's payload captures the surface point cloud data of the plant from multiple views. The plant is monitored from the vegetative to the reproductive stage in light/dark cycles inside a controllable growth chamber. An efficient 3D reconstruction algorithm is used, by which multiple scans are aligned together to obtain a 3D mesh of the plant, followed by surface area and volume computations. The whole system, including the programmable growth chamber, robot, scanner, data transfer and analysis, is fully automated in such a way that a naive user can, in theory, start the system with a mouse click and get back the growth analysis results at the end of the lifetime of the plant with no intermediate intervention. As evidence of its functionality, we show and analyze quantitative results of the rhythmic growth patterns of the dicot Arabidopsis thaliana (L.) and the monocot barley (Hordeum vulgare L.) under their diurnal light/dark cycles.
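The pipeline's measurement steps (rigid alignment of scans, then surface area and volume from the resulting mesh) can be illustrated compactly. The sketch below is a minimal illustration with NumPy, not the authors' implementation: it assumes known point correspondences for the alignment (a Kabsch fit, whereas a full pipeline would use something like ICP) and a closed, outward-oriented triangle mesh for the volume.

```python
import numpy as np

def align_rigid(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst,
    assuming row i of src corresponds to row i of dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs                    # x -> R @ x + t

def mesh_surface_area(verts, faces):
    """Sum of triangle areas: 0.5 * |(v1 - v0) x (v2 - v0)| per face."""
    v0, v1, v2 = (verts[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()

def mesh_volume(verts, faces):
    """Enclosed volume via the divergence theorem: sum of signed tetrahedra
    spanned by the origin and each outward-oriented triangle."""
    v0, v1, v2 = (verts[faces[:, i]] for i in range(3))
    return np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0
```

On a unit cube triangulated with outward-facing triangles, `mesh_surface_area` returns 6 and `mesh_volume` returns 1, which makes the functions easy to sanity-check before applying them to plant meshes.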


Read also

New imaging techniques are in great demand for investigating underground plant root systems, which play an important role in crop production. Compared with other non-destructive imaging modalities, PET can image plant roots in natural soil and produce dynamic 3D functional images which reveal the temporal dynamics of plant-environment interactions. In this study, we combined PET with optical projection tomography (OPT) to evaluate its potential for plant root phenotyping. We used a dedicated high-resolution plant PET imager that has a 14 cm transaxial and 10 cm axial field of view, and multi-bed imaging capability. The image resolution is around 1.25 mm using an ML-EM reconstruction algorithm. B73 inbred maize seeds were germinated and then grown in a sealed jar with transparent gel-based media. PET scanning started on the day the first green leaf appeared, and was carried out once a day for 5 days. Each morning, around 10 mCi of ¹¹CO₂ was administered into a custom-built plant labeling chamber. After 10 minutes, residual activity was flushed out with fresh air before a 2-h PET scan started. For the OPT imaging, the jar was placed inside an acrylic cubic container filled with water, illuminated with a uniform surface light source, and imaged by a DSLR camera from 72 angles to acquire optical images for OPT reconstruction. The same plant was imaged 3 times a day by the OPT system. Plant root growth was measured from the optical images. Co-registered PET and optical images indicate that most of the hot spots appearing at later time points in the PET images correspond to the most actively growing root tips. The strong linear correlation between ¹¹C allocation at root tips measured by PET and eventual root growth measured by OPT suggests that PET can be used as a phenotyping tool to measure how a plant makes subterranean carbon allocation decisions in different environmental scenarios.
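The ML-EM reconstruction mentioned above has a compact multiplicative update. The following is a toy sketch assuming a small dense random system matrix rather than the imager's actual geometry, and NumPy in place of an optimized reconstruction library:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """ML-EM for emission tomography: x <- x * (A^T (y / Ax)) / (A^T 1).
    A maps image voxels to detector-bin expectations, y holds measured
    counts; the multiplicative update preserves non-negativity."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # data vs. forward projection
        x *= (A.T @ ratio) / sens
    return x
```

With noiseless, consistent data (y exactly equal to A applied to a true image) the iterates approach that image; with real Poisson-noisy counts, early stopping or regularization is used to control noise amplification.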
Quantification of physiological changes in plants can capture different drought mechanisms and assist in the selection of tolerant varieties in a high-throughput manner. In this context, an accurate 3D model of the plant canopy provides a more reliable representation for drought stress characterization than 2D images. In this paper, we propose a novel end-to-end pipeline, including 3D reconstruction, segmentation and feature extraction, leveraging deep neural networks at various stages, for drought stress study. To overcome the high degree of self-similarity and self-occlusion in the plant canopy, prior knowledge of leaf shape based on features from a deep siamese network is used to construct an accurate 3D model using structure from motion on wheat plants. The drought stress is characterized with deep-network-based feature aggregation. We compare the proposed methodology against several descriptors, and show that the network outperforms conventional methods.
This paper presents a vision system and a depth processing algorithm for DRC-HUBO+, the winner of the DRC Finals 2015. Our system is designed to reliably capture 3D information of a scene and objects, robust to challenging environmental conditions. We also propose a depth-map upsampling method that produces an outlier-free depth map by explicitly handling depth outliers. Our system is suitable for an interactive robot operating in the real world that requires accurate object detection and pose estimation. We evaluate our depth processing algorithm against state-of-the-art algorithms on several synthetic and real-world datasets.
Standard 3D convolution operations require much larger amounts of memory and computation than 2D convolution operations. This fact has hindered the development of deep neural nets for many 3D vision tasks. In this paper, we investigate the possibility of applying depthwise separable convolutions in the 3D scenario and introduce the use of 3D depthwise convolution. A 3D depthwise convolution splits a single standard 3D convolution into two separate steps, which drastically reduces the number of parameters in 3D convolutions by more than an order of magnitude. We experiment with 3D depthwise convolution on popular CNN architectures and also compare it with a similar structure called pseudo-3D convolution. The results demonstrate that, with 3D depthwise convolutions, 3D vision tasks like classification and reconstruction can be carried out with lighter-weight neural networks while still delivering comparable performance.
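The claimed parameter saving follows from simple channel/kernel arithmetic. The sketch below uses illustrative sizes (64 input channels, 128 output channels, a 3×3×3 kernel; these particular numbers are an assumption, not taken from the paper):

```python
def conv3d_params(c_in, c_out, k):
    """Standard 3D conv: every output channel mixes all input
    channels over a k*k*k window."""
    return c_in * c_out * k ** 3

def depthwise_sep3d_params(c_in, c_out, k):
    """Depthwise step (one k*k*k filter per input channel) followed
    by a 1x1x1 pointwise conv that mixes channels (biases omitted)."""
    return c_in * k ** 3 + c_in * c_out

standard = conv3d_params(64, 128, 3)            # 221,184 parameters
separable = depthwise_sep3d_params(64, 128, 3)  # 9,920 parameters
print(standard / separable)                     # ~22x reduction
```

Even at these modest channel counts the reduction exceeds an order of magnitude, consistent with the abstract's claim, and the ratio grows with the number of output channels.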
The Digital Michelangelo Project was a seminal computer vision project in the early 2000s that pushed the capabilities of acquisition systems and involved many people from diverse fields, many of whom are now leaders in industry and academia. Reviewing this project with modern eyes provides us with the opportunity to reflect on several issues, relevant now as then to the field of computer vision and research in general, that go beyond the technical aspects of the work. This article was written in the context of a reading group competition at the week-long International Computer Vision Summer School 2017 (ICVSS) in Sicily, Italy. To deepen the participants' understanding of computer vision and to foster a sense of community, various reading groups were tasked to highlight important lessons which may be learned from the provided literature, going beyond the contents of the papers. This report is the winning entry of this guided discourse (Fig. 1). The authors closely examined the origins, fruits, and most importantly the lessons about research in general which may be distilled from the Digital Michelangelo Project. Discussions leading to this report were held within the group as well as with Hao Li, the group mentor.