
Towards Vision-Based Smart Hospitals: A System for Tracking and Monitoring Hand Hygiene Compliance

 Added by Albert Haque
Publication date: 2017
Language: English





One in twenty-five patients admitted to a hospital will suffer from a hospital-acquired infection. If we can intelligently track healthcare staff, patients, and visitors, we can better understand the sources of such infections. We envision a smart hospital capable of increasing operational efficiency and improving patient care with less spending. In this paper, we propose a non-intrusive vision-based system for tracking people's activity in hospitals. We evaluate our method on the problem of measuring hand hygiene compliance. Empirically, our method outperforms existing solutions such as proximity-based techniques and covert in-person observational studies. We present intuitive, qualitative results that analyze human movement patterns and spatial analytics, conveying our method's interpretability. This work is a step towards a computer-vision-based smart hospital and demonstrates promising results for reducing hospital-acquired infections.
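As a rough illustration of how tracked positions can be turned into a compliance signal, the sketch below flags a room entry as compliant when the same track previously passed close to a hand-sanitizer dispenser. The coordinates, radii, and track format are hypothetical and not taken from the paper, which uses depth-sensor-based tracking rather than this simplified rule.

```python
from dataclasses import dataclass
from math import hypot

# Hypothetical dispenser and door locations on a floor plan (meters);
# illustrative values only, not taken from the paper.
DISPENSERS = [(2.0, 5.0), (8.5, 5.0)]
DOORS = [(3.0, 6.0), (9.0, 6.0)]
USE_RADIUS_M = 0.5     # assumed distance that counts as "used a dispenser"
ENTRY_RADIUS_M = 0.75  # assumed distance that counts as "entered a room"

@dataclass
class TrackPoint:
    person_id: int
    t: float  # seconds
    x: float  # meters
    y: float

def near(p, landmarks, radius):
    """True if the track point lies within `radius` of any landmark."""
    return any(hypot(p.x - lx, p.y - ly) <= radius for lx, ly in landmarks)

def compliance_events(track):
    """Yield (person_id, entry_time, compliant) for each room entry.

    Simplified rule: an entry is compliant if the person passed within
    USE_RADIUS_M of a dispenser at some earlier point on the track.
    """
    used_dispenser = False
    for p in sorted(track, key=lambda q: q.t):
        if near(p, DISPENSERS, USE_RADIUS_M):
            used_dispenser = True
        if near(p, DOORS, ENTRY_RADIUS_M):
            yield (p.person_id, p.t, used_dispenser)
            used_dispenser = False  # require a new use before the next entry

# Toy track: person 1 walks past a dispenser and then through a door.
track = [TrackPoint(1, 0.0, 0.0, 5.0), TrackPoint(1, 2.0, 2.1, 5.0),
         TrackPoint(1, 4.0, 3.0, 6.0)]
print(list(compliance_events(track)))  # [(1, 4.0, True)]
```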



Related research

Traffic management systems capture tremendous video data and leverage advances in video processing to detect and monitor traffic incidents. The collected data are traditionally forwarded to the traffic management center (TMC) for in-depth analysis and may thus congest the network paths to the TMC. To alleviate such bottlenecks, we propose to utilize edge computing by equipping edge nodes close to the cameras with computing resources (e.g., cloudlets). A cloudlet, with limited computing resources compared to the TMC, provides limited video processing capabilities. In this paper, we focus on two common traffic monitoring tasks, congestion detection and speed detection, and propose a two-tier edge-computing-based model that takes into account both the limited computing capability of cloudlets and the unstable network condition to the TMC. Our solution utilizes two algorithms for each task, one implemented at the edge and the other at the TMC, each designed for its available computing resources. While the TMC provides strong computation power, the video quality it receives depends on the underlying network conditions. The edge, on the other hand, processes very high-quality video but with limited computing resources. Our model captures this trade-off. We evaluate the performance of the proposed two-tier model and the traffic monitoring algorithms via test-bed experiments under different weather and network conditions, and show that our proposed hybrid edge-cloud solution outperforms both the cloud-only and edge-only solutions.
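The dispatch logic behind such a two-tier design can be sketched as a small decision rule: forward the stream to the TMC only when the network can sustain a video quality whose expected accuracy beats the lightweight edge detector, and process locally otherwise. The accuracy and bitrate figures below are illustrative assumptions, not measurements from the paper.

```python
# Illustrative accuracy and bitrate figures (assumptions, not paper results).
EDGE_ACCURACY = 0.80                                          # lightweight detector on the cloudlet
TMC_ACCURACY = {"1080p": 0.95, "480p": 0.85, "240p": 0.70}    # TMC detector per stream quality
BITRATE_MBPS = {"1080p": 8.0, "480p": 2.5, "240p": 0.8}       # bandwidth each quality needs

def choose_tier(available_bandwidth_mbps: float) -> str:
    """Decide where to run congestion/speed detection for one camera stream.

    Forward to the TMC only if the network can carry a quality level whose
    expected accuracy beats the edge model; otherwise process at the edge.
    """
    best = ("edge", EDGE_ACCURACY)
    for quality, acc in TMC_ACCURACY.items():
        if BITRATE_MBPS[quality] <= available_bandwidth_mbps and acc > best[1]:
            best = (f"tmc@{quality}", acc)
    return best[0]

print(choose_tier(10.0))  # tmc@1080p  (good network: send full-quality video)
print(choose_tier(3.0))   # tmc@480p   (degraded network: lower quality still wins)
print(choose_tier(1.0))   # edge       (poor network: process locally)
```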
Different from shopping in physical stores, where people have the opportunity to closely inspect a product (e.g., touching the surface of a T-shirt or smelling the scent of a perfume) before making a purchase decision, online shoppers rely greatly on the uploaded product images to make any purchase decision. The decision-making is especially challenging when selling or purchasing second-hand items online, since estimating the items' prices is not trivial. In this work, we present a vision-based price suggestion system for online second-hand item shopping platforms. The goal of vision-based price suggestion is to help sellers set effective prices for their second-hand listings using the images uploaded to the online platforms. First, we propose to better extract representative visual features from the images with the aid of other image-based item information (e.g., category, brand). Then, we design a vision-based price suggestion module which takes the extracted visual features, along with some statistical item features from the shopping platform, as inputs to determine whether an uploaded item image is qualified for price suggestion using a binary classification model, and provides price suggestions for items with qualified images using a regression model. Based on two demands from the platform, two different objective functions are proposed to jointly optimize the classification model and the regression model. For better model training, we also propose a warm-up training strategy for the joint optimization. Extensive experiments on a large real-world dataset demonstrate the effectiveness of our vision-based price prediction system.
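A hedged PyTorch sketch of the joint objective described above: one shared feature extractor feeds a qualification classifier and a price regressor, the regression loss is applied only to qualified items, and a warm-up flag trains the classifier alone first. Layer sizes, the loss weight, and the warm-up scheme are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriceSuggestion(nn.Module):
    """Shared features feeding a qualification classifier and a price regressor."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.classifier = nn.Linear(64, 1)  # is the image qualified for a suggestion?
        self.regressor = nn.Linear(64, 1)   # suggested (log-)price

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h).squeeze(-1), self.regressor(h).squeeze(-1)

def joint_loss(model, feats, qualified, log_price, lam=1.0, warmup=False):
    logits, pred = model(feats)
    cls_loss = F.binary_cross_entropy_with_logits(logits, qualified)
    mask = qualified > 0.5  # regress only on items labelled as qualified
    reg_loss = ((pred[mask] - log_price[mask]) ** 2).mean() if mask.any() else feats.new_zeros(())
    # Warm-up phase: fit the classifier alone, then optimize both losses jointly.
    return cls_loss if warmup else cls_loss + lam * reg_loss

# One toy batch of pre-extracted visual + statistical features.
feats = torch.randn(8, 128)
qualified = torch.randint(0, 2, (8,)).float()
log_price = torch.randn(8)
print(joint_loss(PriceSuggestion(), feats, qualified, log_price, warmup=True))
```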
Nicolas Bus (2019)
Manually checking models for compliance against building regulations is a time-consuming task for architects and construction engineers. There is thus a need for algorithms that process information from construction projects and report non-compliant elements. Still, automated code-compliance checking raises several obstacles. Building regulations are usually published as human-readable texts, and their content is often ambiguous or incomplete. Also, the vocabulary used for expressing such regulations is very different from the vocabularies used to express Building Information Models (BIM). Furthermore, the high level of detail associated with BIM-contained geometries induces complex calculations. Finally, the level of complexity of the IFC standard also hinders the automation of IFC processing tasks. Our approach, based on model charts, formal rules, and pre-processors, allows translating construction regulations into semantic queries. We further demonstrate the usefulness of this approach through several use cases. We argue our approach is a step forward in bridging the gap between regulation texts and automated checking algorithms. Finally, with the recent building ontology BOT recommended by the W3C Linked Building Data Community Group, we identify perspectives for standardizing and extending our approach.
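To make the regulation-as-query idea concrete, the sketch below encodes one hypothetical clause ("doors must provide at least 80 cm of clear width") as a SPARQL query over a tiny in-memory RDF graph using rdflib. The example namespace, property names, and threshold are invented for illustration and are not the paper's rule set; only the BOT namespace URI is the published one.

```python
from rdflib import Graph, Literal, Namespace, RDF

BOT = Namespace("https://w3id.org/bot#")        # W3C LBD building topology ontology
EX = Namespace("http://example.org/building#")  # hypothetical project namespace

# A tiny in-memory "model": one door that is deliberately too narrow.
g = Graph()
door = EX.Door_01
g.add((door, RDF.type, BOT.Element))
g.add((door, RDF.type, EX.Door))
g.add((door, EX.clearWidthCm, Literal(75)))

# The clause "doors must provide at least 80 cm of clear width" expressed
# as a query that returns the non-compliant elements.
violations = g.query("""
    PREFIX ex: <http://example.org/building#>
    SELECT ?door ?width WHERE {
        ?door a ex:Door ;
              ex:clearWidthCm ?width .
        FILTER (?width < 80)
    }
""")
for row in violations:
    print(f"Non-compliant: {row.door} (clear width {row.width} cm)")
```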
Computer-vision hospital systems can greatly assist healthcare workers and improve medical facility treatment, but often face patient resistance due to the perceived intrusiveness and violation of privacy associated with visual surveillance. We downsample video frames to extremely low resolutions to degrade private information from surveillance videos. We measure the amount of activity-recognition information retained in low resolution depth images, and also apply a privately-trained DCSCN super-resolution model to enhance the utility of our images. We implement our techniques with two actual healthcare-surveillance scenarios, hand-hygiene compliance and ICU activity-logging, and show that our privacy-preserving techniques preserve enough information for realistic healthcare tasks.
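The privacy-by-downsampling step can be illustrated with a few lines of NumPy: block-average a depth frame down to an extremely low resolution so that identities are hard to recover while coarse motion cues remain. The frame size and the 16x12 target are assumed values, not the resolutions evaluated in the paper.

```python
import numpy as np

def downsample_depth(frame: np.ndarray, target_hw=(16, 12)) -> np.ndarray:
    """Block-average a single-channel depth frame to a very low resolution."""
    h, w = frame.shape
    th, tw = target_hw
    # Crop so the frame divides evenly into (th, tw) blocks, then average each block.
    frame = frame[: h - h % th, : w - w % tw]
    blocks = frame.reshape(th, frame.shape[0] // th, tw, frame.shape[1] // tw)
    return blocks.mean(axis=(1, 3))

# Fake depth frame in millimetres standing in for a sensor capture.
depth = np.random.randint(500, 4000, size=(240, 320)).astype(np.float32)
tiny = downsample_depth(depth)
print(tiny.shape)  # (16, 12)
```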
Computer-vision-based methods have been explored in the past for the detection of railway track defects, but full automation has always been a challenge, because both traditional image processing methods and deep learning classifiers trained from scratch fail to generalize well to the endless variety of novel scenarios seen in the real world, given a limited amount of labeled data. Advancements have been made recently to let machine learning models utilize knowledge from a different but related domain. In this paper, we show that even when similar domain data is not available, transfer learning provides the model with an understanding of other real-world objects and enables training production-scale deep learning classifiers on uncontrolled real-world data. Our models efficiently detect both track defects, such as sun kinks and loose ballast, and railway assets, such as switches and signals. The models were validated with hours of track videos recorded on different continents, covering different weather conditions, ambience, and surroundings. A track health index concept has also been proposed to monitor the complete rail network.
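As a minimal illustration of the transfer-learning setup described above, the sketch below takes an ImageNet-pretrained ResNet-18 from torchvision, freezes its features, and trains only a new classification head for track-defect classes. The class list, freezing policy, and hyperparameters are illustrative assumptions rather than the paper's production configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed label set for illustration only.
CLASSES = ["normal", "sun_kink", "loose_ballast", "switch", "signal"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():  # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random tensors standing in for video frames.
frames = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (4,))
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(float(loss))
```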