
A Smart, Efficient, and Reliable Parking Surveillance System with Edge Artificial Intelligence on IoT Devices

Posted by Ruimin Ke
Publication date: 2020
Research field: Electronic engineering
Paper language: English





Cloud computing has been a mainstream computing service for years. Recently, with the rapid development of urbanization, massive video surveillance data are produced at an unprecedented speed. A traditional solution to dealing with this big data would require a large amount of computing and storage resources. With advances in the Internet of Things (IoT), artificial intelligence, and communication technologies, edge computing offers a new solution to the problem by processing the data partially or wholly on the edge of a surveillance system. In this study, we investigate the feasibility of using edge computing for smart parking surveillance tasks, a key component of the Smart City. The system processing pipeline is carefully designed with consideration of flexibility, online surveillance, data transmission, detection accuracy, and system reliability. It enables artificial intelligence at the edge by implementing an enhanced single shot multibox detector (SSD). A few more algorithms are developed on both the edge and the server, targeting optimal system efficiency and accuracy. Thorough field tests were conducted in the Angle Lake parking garage for three months. The experimental results are promising: the final detection method achieves over 95% accuracy in real-world scenarios with high efficiency and reliability. The proposed smart parking surveillance system can be a solid foundation for future applications of intelligent transportation systems.
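To make the edge-processing idea concrete, the following is a minimal sketch (not the paper's actual enhanced-SSD pipeline) of an edge device that runs an off-the-shelf SSD vehicle detector on camera frames and transmits only a compact occupancy summary to a server. The model files, class IDs, and the SERVER_URL endpoint are illustrative assumptions.

```python
# Minimal sketch of an edge-side parking surveillance loop (illustrative only).
# Assumptions: a pretrained SSD in Caffe format (model paths are hypothetical),
# camera index 0, and a hypothetical server endpoint SERVER_URL. The paper's
# enhanced SSD and pipeline details are not reproduced here.
import time
import cv2
import requests

SERVER_URL = "http://example.org/api/occupancy"           # hypothetical endpoint
PROTO, WEIGHTS = "ssd_deploy.prototxt", "ssd.caffemodel"  # hypothetical files
VEHICLE_CLASS_IDS = {6, 7}       # e.g. bus, car in a VOC-style label map (assumed)
CONF_THRESHOLD = 0.5

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)
cap = cv2.VideoCapture(0)

def count_vehicles(frame):
    """Run one SSD forward pass and count confident vehicle detections."""
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 scalefactor=0.007843, size=(300, 300),
                                 mean=127.5)
    net.setInput(blob)
    detections = net.forward()          # SSD detection output: (1, 1, N, 7)
    count = 0
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        class_id = int(detections[0, 0, i, 1])
        if confidence >= CONF_THRESHOLD and class_id in VEHICLE_CLASS_IDS:
            count += 1
    return count

while True:
    ok, frame = cap.read()
    if not ok:
        break
    occupied = count_vehicles(frame)
    # Transmit only the compact occupancy summary rather than raw video,
    # which is the bandwidth-saving idea behind edge processing.
    try:
        requests.post(SERVER_URL, json={"occupied": occupied,
                                        "timestamp": time.time()}, timeout=2)
    except requests.RequestException:
        pass                            # tolerate transient network failures
    time.sleep(5)                       # sample every few seconds
```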




Read also

102 - Chenhao Xu, Yong Li, Yao Deng (2021)
Federated learning (FL) utilizes edge computing devices to collaboratively train a shared model while each device fully controls its local data access. Generally, FL techniques focus on learning a model on independent and identically distributed (iid) data and cannot achieve satisfactory performance on non-iid datasets (e.g., learning a multi-class classifier when each client only has a single-class dataset). Some personalized approaches have been proposed to mitigate non-iid issues. However, such approaches cannot handle the underlying data distribution shift, namely data distribution skew, which is quite common in real scenarios (e.g., recommendation systems learn user behaviors that change over time). In this work, we provide a solution to this challenge by leveraging smart contracts with federated learning to build optimized, personalized deep learning models. Specifically, our approach utilizes smart contracts to reach consensus among distributed trainers on the optimal weights of personalized models. We conduct experiments across multiple models (CNN and MLP) and multiple datasets (MNIST and CIFAR-10). The experimental results demonstrate that our personalized learning models can achieve better accuracy and faster convergence compared to classic federated and personalized learning. Compared with the model given by the baseline FedAvg algorithm, the average accuracy of our personalized learning models is improved by 2% to 20%, and the convergence rate is about 2$\times$ faster. Moreover, we also illustrate that our approach is secure against a recent attack on distributed learning.
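For context, the FedAvg baseline mentioned above aggregates client updates by a dataset-size-weighted average of model weights; the sketch below illustrates only that averaging step (the paper's smart-contract consensus and personalization are not reproduced, and all names are illustrative).

```python
# Minimal sketch of FedAvg-style weight averaging (illustrative only; the
# smart-contract consensus layer described in the abstract is not reproduced).
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average per-layer weights, weighting each client by its dataset size."""
    total = float(sum(client_sizes))
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(w[layer_idx] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
        averaged.append(layer)
    return averaged

# Toy example: three clients, each holding two "layers" of parameters.
clients = [[np.ones((2, 2)) * k, np.ones(2) * k] for k in (1.0, 2.0, 3.0)]
sizes = [100, 200, 700]                      # local dataset sizes
global_model = fedavg(clients, sizes)
print(global_model[0])                       # weighted average of layer 0
```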
Artificial intelligence (AI) has witnessed a substantial breakthrough in a variety of Internet of Things (IoT) applications and services, spanning from recommendation systems to robotics control and military surveillance. This is driven by easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams. Designing accurate models using such data streams, to predict future insights and revolutionize the decision-making process, inaugurates pervasive systems as a worthy paradigm for a better quality of life. The confluence of pervasive computing and artificial intelligence, Pervasive AI, has expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations, offering a promising alternative to centralized learning while presenting various challenges. In this context, wise cooperation and resource scheduling should be envisaged among IoT devices (e.g., smartphones, smart vehicles) and infrastructure (e.g., edge nodes and base stations) to avoid communication and computation overheads and ensure maximum performance. In this paper, we conduct a comprehensive survey of the recent techniques developed to overcome these resource challenges in pervasive AI systems. Specifically, we first present an overview of pervasive computing, its architecture, and its intersection with artificial intelligence. We then review the background, applications, and performance metrics of AI, particularly Deep Learning (DL) and online learning, running in a ubiquitous system. Next, we provide a deep literature review of communication-efficient techniques, from both algorithmic and system perspectives, for distributed inference, training, and online learning tasks across the combination of IoT devices, edge devices, and cloud servers. Finally, we discuss our future vision and research challenges.
Power grid data are going big with the deployment of various sensors. The big data in power grids creates huge opportunities for applying artificial intelligence technologies to improve resilience and reliability. This paper introduces multiple real-world applications based on artificial intelligence to improve power grid situational awareness and resilience. These applications include event identification, inertia estimation, event location and magnitude estimation, data authentication, control, and stability assessment. These applications operate on a real-world system called FNET-GridEye, which is a wide-area measurement network and arguably the world's largest cyber-physical system that collects power grid big data. These applications showed much better performance compared with conventional approaches and accomplished new tasks that are impossible to realize using conventional technologies. These encouraging results demonstrate that combining power grid big data and artificial intelligence can uncover and capture the non-linear correlation between power grid data and its stability indices and will potentially enable many advanced applications that can significantly improve power grid resilience.
Optical and optoelectronic approaches to performing matrix-vector multiplication (MVM) operations have shown great promise for accelerating machine learning (ML) algorithms with unprecedented performance. The incorporation of nanomaterials into the system can further improve the performance thanks to their extraordinary properties, but the non-uniformity and variation of nanostructures at the macroscopic scale pose severe limitations for large-scale hardware deployment. Here, we report a new optoelectronic architecture consisting of spatial light modulators and photodetector arrays made from graphene to perform MVM. The ultrahigh carrier mobility of graphene, nearly-zero-power-consumption electro-optic control, and extreme parallelism suggest ultrahigh data throughput and ultralow power consumption. Moreover, we develop a methodology for performing accurate calculations with imperfect components, laying the foundation for scalable systems. Finally, we perform a few representative ML algorithms, including singular value decomposition, support vector machine, and deep neural networks, to show the versatility and generality of our platform.
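As a rough illustration of the general idea of accurate computation with imperfect analog MVM hardware (a toy software sketch, not the paper's graphene-based methodology), the code below simulates a mis-programmed matrix, characterizes it by probing with basis vectors, and applies a digital correction to the outputs.

```python
# Toy calibration of an imperfect analog matrix-vector multiply (illustrative
# only; the paper's optoelectronic hardware and calibration are not reproduced).
import numpy as np

rng = np.random.default_rng(0)
W_ideal = rng.standard_normal((8, 8))            # weights we intend to program
# Static fabrication mismatch: the hardware realizes a slightly wrong matrix.
W_actual = W_ideal * (1.0 + 0.05 * rng.standard_normal((8, 8)))

def analog_mvm(x):
    """Stand-in for an imperfect analog MVM core."""
    return W_actual @ x

# Characterize the effective matrix by probing with unit basis vectors, then
# compute a digital output correction C such that C @ W_actual = W_ideal.
W_measured = np.column_stack([analog_mvm(e) for e in np.eye(8)])
C = W_ideal @ np.linalg.inv(W_measured)

x = rng.standard_normal(8)
raw = analog_mvm(x)                 # imperfect result
corrected = C @ raw                 # software-corrected result
print("raw error:      ", np.linalg.norm(raw - W_ideal @ x))
print("corrected error:", np.linalg.norm(corrected - W_ideal @ x))
```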
Current radio frequency (RF) sensors at the Edge lack the computational resources to support practical, in-situ training for intelligent spectrum monitoring, and sensor data classification in general. We propose a solution via Deep Delay Loop Reservoir Computing (DLR), a processing architecture that supports general machine learning algorithms on compact mobile devices by leveraging delay-loop reservoir computing in combination with innovative electro-optical hardware. With both digital and photonic realizations of our design of the loops, DLR delivers reductions in form factor, hardware complexity, and latency compared to the state of the art (SoA). The main impact of the reservoir is to project the input data into a higher-dimensional space of reservoir state vectors in order to linearly separate the input classes. Once the classes are well separated, traditionally complex, power-hungry classification models are no longer needed for the learning process. Yet, even with simple classifiers based on Ridge regression (RR), the complexity grows at least quadratically with the input size. Hence, the hardware reduction required for training on compact devices is in contradiction with the large dimension of the state vectors. DLR employs an RR-based classifier to exceed the SoA accuracy, while further reducing power consumption by leveraging the architecture of parallel (split) loops. We present DLR architectures composed of multiple smaller loops whose state vectors are linearly combined to create a lower-dimensional input into Ridge regression. We demonstrate the advantages of using DLR for two distinct applications: RF Specific Emitter Identification (SEI) for IoT authentication, and wireless protocol recognition for IoT situational awareness.
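To illustrate the general pattern of a delay-loop reservoir followed by a Ridge-regression readout described above (a toy software sketch, not the paper's DLR hardware or split-loop design; all parameters below are assumed), consider:

```python
# Toy delay-loop reservoir with a closed-form Ridge-regression readout
# (illustrative only; not the photonic/digital DLR hardware of the paper).
import numpy as np

rng = np.random.default_rng(0)
N_VIRTUAL = 50        # virtual nodes per loop pass (assumed)
FEEDBACK = 0.5        # loop feedback strength (assumed)
RIDGE_LAMBDA = 1e-2   # Ridge regularization (assumed)

mask = rng.uniform(-1.0, 1.0, size=N_VIRTUAL)   # fixed input mask

def reservoir_state(sample):
    """Project one input sample into a high-dimensional reservoir state
    by time-multiplexing it through a single nonlinear delay loop."""
    state = np.zeros(N_VIRTUAL)
    for value in np.atleast_1d(sample):
        for i in range(N_VIRTUAL):
            # each virtual node mixes the masked input with the delayed state
            state[i] = np.tanh(mask[i] * value + FEEDBACK * state[i - 1])
    return state

def train_ridge(states, labels):
    """Closed-form Ridge readout: W = (S^T S + lambda I)^-1 S^T y."""
    S = np.asarray(states)
    y = np.asarray(labels)
    return np.linalg.solve(S.T @ S + RIDGE_LAMBDA * np.eye(S.shape[1]), S.T @ y)

# Toy task: classify the sign of the mean of a short signal.
X = rng.standard_normal((200, 16)) + rng.choice([-1.0, 1.0], size=(200, 1))
y = (X.mean(axis=1) > 0).astype(float)
S = [reservoir_state(x) for x in X]
W = train_ridge(S, y)
preds = (np.asarray(S) @ W > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
```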