
Using vis-NIRS and Machine Learning methods to diagnose sugarcane soil chemical properties

Published by: Diego A. Delgadillo-Duran
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Knowledge of soil chemical properties can be decisive for crop management and total yield. Traditional approaches to estimating soil properties are time-consuming and require complex lab setups, which discourages farmers from promptly adopting optimal practices in their crops. Estimating soil properties from their spectral signatures in the visible and near-infrared range (vis-NIRS) has emerged as a low-cost, non-invasive, and non-destructive alternative. Current approaches rely on mathematical and statistical techniques and largely avoid machine learning frameworks. This work applies vis-NIRS to sugarcane soils together with machine learning techniques, namely three regression and six classification methods. The aim is to assess performance in predicting, and inferring categories of, common soil properties (pH, soil organic matter (OM), Ca, Na, K, and Mg), evaluated with the most common metrics. We use regression to estimate properties and classification to assess soil property status. In both cases we achieve performance comparable to similar setups reported in the literature, with validation-set property estimates of pH ($R^2$=0.8, $\rho$=0.89), OM ($R^2$=0.37, $\rho$=0.63), Ca ($R^2$=0.54, $\rho$=0.74), and Mg ($R^2$=0.44, $\rho$=0.66).
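The paper does not include code; the sketch below is only a minimal, hypothetical illustration of the kind of regression setup described above: spectral reflectance values as features, a soil property such as pH as the target, scored with $R^2$ and Pearson's $\rho$. The synthetic data and the choice of a PLS regressor are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from scipy.stats import pearsonr

# Hypothetical data: rows are soil samples, columns are vis-NIR reflectance
# values on a fixed wavelength grid; y holds a lab-measured property (e.g. pH).
rng = np.random.default_rng(0)
X = rng.random((200, 500))                           # 200 samples x 500 spectral bands (synthetic)
y = X[:, 100] * 3 + 4 + rng.normal(0, 0.1, 200)      # synthetic pH-like target

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Latent-variable regression is a common choice for high-dimensional spectra.
model = PLSRegression(n_components=10)
model.fit(X_train, y_train)
y_pred = model.predict(X_val).ravel()

print("R2 :", r2_score(y_val, y_pred))
print("rho:", pearsonr(y_val, y_pred)[0])
```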


Read also

The IoT vision of ubiquitous and pervasive computing gives rise to future smart irrigation systems spanning the physical and digital worlds. A smart irrigation ecosystem combined with machine learning can provide solutions that solve the soil-humidity sensing task and ensure optimal water usage. Existing solutions rely on data from power-hungry, expensive sensors that transmit the sensed data over a wireless channel. Over time, such systems become difficult to maintain, especially in remote areas, because of battery-replacement issues across a large number of devices. A novel solution must therefore provide an alternative, cost- and energy-effective device with a clear advantage over existing solutions. This work explores the concept of a novel, low-power, cost-effective, LoRa-based system that senses soil humidity with high accuracy using deep learning, simply by measuring the signal strength of an underground beacon device.
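As a rough, hypothetical illustration of the idea of inferring soil humidity from received signal strength (not the authors' actual system), a small classifier on synthetic RSSI windows might look like this.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical setup: each sample is a short window of RSSI readings (dBm) from a
# buried LoRa beacon; the label is a coarse soil-humidity class (0=dry, 1=moist, 2=wet).
rng = np.random.default_rng(1)
n, window = 600, 20
humidity = rng.integers(0, 3, n)                                        # synthetic class labels
rssi = -80 - 5 * humidity[:, None] + rng.normal(0, 2, (n, window))      # toy model: wetter soil attenuates more

X_train, X_test, y_train, y_test = train_test_split(rssi, humidity, test_size=0.25, random_state=1)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```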
Emerging wireless technologies, such as 5G and beyond, are bringing new use cases to the forefront, one of the most prominent being machine learning empowered health care. Respiratory infections are among the notable modern medical concerns that impose an immense worldwide health burden. Since cough is an essential symptom of many respiratory infections, an automated system to screen for respiratory diseases based on raw cough data would have a multitude of beneficial research and medical applications. In the literature, machine learning has already been successfully used to detect cough events in controlled environments. In this paper, we present a low-complexity, automated recognition and diagnostic tool for screening respiratory infections that utilizes Convolutional Neural Networks (CNNs) to detect cough within environmental audio and to diagnose three potential illnesses (i.e., bronchitis, bronchiolitis, and pertussis) based on their unique cough audio features. Both the proposed detection and diagnosis models achieve an accuracy of over 89% while remaining computationally efficient. Results show that the proposed system successfully detects and separates cough events from background noise. Moreover, the proposed single diagnosis model is capable of distinguishing between different illnesses without the need for separate models.
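A minimal sketch of a small CNN operating on spectrogram-like audio features for cough-versus-noise detection is shown below; the architecture and synthetic data are assumptions for illustration, not the model proposed in the paper.

```python
import numpy as np
import tensorflow as tf

# Hypothetical inputs: log-mel spectrogram patches of audio windows
# (64 mel bands x 64 frames), labelled 1 = cough, 0 = background noise.
rng = np.random.default_rng(2)
X = rng.random((256, 64, 64, 1)).astype("float32")   # synthetic spectrograms
y = rng.integers(0, 2, 256)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # cough / not-cough
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```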
As machine learning becomes an important part of many real-world applications affecting human lives, new requirements beyond high predictive accuracy become important. One such requirement is transparency, which has been associated with model interpretability. Many machine learning algorithms induce models that are difficult to interpret, known as black boxes, and people find it hard to trust models that cannot be explained. In machine learning in particular, many groups are investigating new methods able to explain black-box models. These methods usually look inside black-box models to explain their inner workings, allowing the interpretation of their decision-making process. Among recently proposed model-interpretation methods is a group, named local estimators, designed to explain how the label of a particular instance is predicted. To do so, they induce interpretable models on the neighborhood of the instance to be explained. Local estimators have been successfully used to explain specific predictions, and although they provide some degree of model interpretability, it is still not clear how best to implement and apply them. Open questions include: how to best define the neighborhood of an instance? How to control the trade-off between the accuracy of the interpretation method and its interpretability? How to make the obtained solution robust to small variations of the instance to be explained? To answer these questions, we propose and investigate two strategies: (i) using data-instance properties to provide improved explanations, and (ii) making sure that the neighborhood of an instance is properly defined by taking the geometry of the domain of the feature space into account. We evaluate these strategies in a regression task and present experimental results showing that they can improve local explanations.
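To make the idea of a local estimator concrete, the sketch below fits a distance-weighted linear surrogate on a sampled neighborhood of a single instance to approximate a black-box model's local behavior. All data, the perturbation scale, and the choice of models are hypothetical and do not reproduce the strategies proposed in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Black-box model trained on synthetic data (stand-in for any opaque regressor).
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)
black_box = RandomForestRegressor(random_state=3).fit(X, y)

# Instance to explain and a sampled neighborhood around it.
x0 = X[0]
neighbors = x0 + rng.normal(0, 0.5, size=(200, 5))         # perturbations near x0
weights = np.exp(-np.sum((neighbors - x0) ** 2, axis=1))    # closer points count more

# Local surrogate: a weighted linear model fitted to the black-box predictions.
surrogate = Ridge(alpha=1.0)
surrogate.fit(neighbors, black_box.predict(neighbors), sample_weight=weights)
print("local feature attributions:", surrogate.coef_)
```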
Machine learning (ML) is increasingly being adopted in a wide variety of application domains. Usually, a well-performing ML model, especially an emerging deep neural network model, relies on a large volume of training data and high-powered computational resources. The need for a vast volume of available data raises serious privacy concerns because of the risk of leakage of highly privacy-sensitive information and the evolving regulatory environments that increasingly restrict access to and use of privacy-sensitive data. Furthermore, a trained ML model may also be vulnerable to adversarial attacks such as membership/property inference attacks and model inversion attacks. Hence, well-designed privacy-preserving ML (PPML) solutions are crucial and have attracted increasing research interest from academia and industry. More and more PPML efforts have been proposed, integrating privacy-preserving techniques into ML algorithms, fusing privacy-preserving approaches into the ML pipeline, or designing various privacy-preserving architectures for existing ML systems. In particular, existing PPML work cuts across ML, systems, security, and privacy; hence, there is a critical need to understand state-of-the-art studies, related challenges, and a roadmap for future research. This paper systematically reviews and summarizes existing privacy-preserving approaches and proposes a PGU model to guide the evaluation of various PPML solutions by carefully decomposing their privacy-preserving functionalities. The PGU model is designed as the triad of Phase, Guarantee, and technical Utility. Furthermore, we also discuss the unique characteristics and challenges of PPML and outline possible directions for future work that would benefit a wide range of research communities across the ML, distributed systems, security, and privacy areas.
Machine learning models have achieved discernible success in a myriad of applications. However, most of these models are black boxes, and it is obscure how they make their decisions, which makes them unreliable and untrustworthy. To provide insights into the decision-making processes of these models, a variety of traditional interpretable models have been proposed. Moreover, to generate more human-friendly explanations, recent work on interpretability tries to answer questions related to causality, such as "Why does this model make such decisions?" or "Was it a specific feature that caused the decision made by the model?". In this work, models that aim to answer causal questions are referred to as causal interpretable models. Existing surveys have covered the concepts and methodologies of traditional interpretability. In this work, we present a comprehensive survey of causal interpretable models from the perspective of problems and methods. In addition, this survey provides in-depth insights into existing evaluation metrics for measuring interpretability, which can help practitioners understand the scenarios for which each evaluation metric is suitable.
