
Detecting Suspicious Behavior: How to Deal with Visual Similarity through Neural Networks

Publication date: 2020
Language: English





Suspicious behavior is likely to threaten security, assets, life, or freedom. It follows no particular pattern, which complicates the tasks of detecting and defining it; even for human observers, spotting suspicious behavior in surveillance videos is difficult. Several proposals for tackling abnormal- and suspicious-behavior problems are available in the literature, but they usually suffer from high false-positive rates caused by distinct classes with high visual similarity. The Pre-Crime Behavior method removes information related to the commission of a crime in order to focus on the suspicious behavior that precedes it. The resulting samples, drawn from different types of crime, are highly visually similar to normal-behavior samples. To address this problem, we implemented 3D Convolutional Neural Networks and trained them under different approaches. We also tested different values of the number-of-filters parameter to optimize computational resources. Finally, a comparison of the performance obtained with the different training approaches shows the best option for improving suspicious-behavior detection in surveillance videos.
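
To illustrate the filter-count trade-off described above, the following sketch builds a small 3D convolutional video classifier whose capacity is controlled by a single parameter. It assumes PyTorch; the layer layout, the base_filters name, and the clip size are illustrative assumptions, not the authors' exact architecture:

    # Minimal 3D-CNN sketch (assumed PyTorch); base_filters is a hypothetical
    # knob standing in for the paper's number-of-filters parameter.
    import torch
    import torch.nn as nn

    class Suspicious3DCNN(nn.Module):
        def __init__(self, base_filters=16, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                # Input: (batch, channels=3, frames, height, width)
                nn.Conv3d(3, base_filters, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(base_filters, base_filters * 2, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),  # collapse the spatio-temporal dimensions
            )
            self.classifier = nn.Linear(base_filters * 2, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Halving base_filters shrinks every layer at once, which is the
    # computational trade-off the abstract refers to.
    model = Suspicious3DCNN(base_filters=8)
    logits = model(torch.randn(1, 3, 16, 112, 112))  # one 16-frame RGB clip

Because base_filters scales all layers together, it serves as one convenient knob for trading detection accuracy against computational cost.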



Related research

Visual design is associated with the use of basic design elements and principles, which designers across disciplines apply for aesthetic purposes through an intuitive and subjective process. Numerical analysis of design visuals, and disclosure of the aesthetic value embedded in them, has therefore been considered hard; with emerging artificial intelligence technologies, however, it has become possible. This research develops a neural network model that recognizes and classifies design principles across different domains: artwork produced since the late 20th century, professional photos, and facade pictures of contemporary buildings. The data collection and curation process, including the production of a computationally based synthetic dataset, is original to this work. The proposed model learns from myriads of original designs by capturing the underlying patterns they share, and is expected to consolidate design processes by providing an objective aesthetic evaluation of visual compositions.
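
As a rough illustration of such a model, a common pattern is to fine-tune a pretrained image backbone on the pooled domains. The label set and the torchvision backbone below are assumptions for illustration, not the study's actual setup:

    # Hypothetical design-principle classifier (assumed torchvision backbone).
    import torch.nn as nn
    from torchvision import models

    PRINCIPLES = ["balance", "emphasis", "rhythm", "proportion"]  # assumed labels

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(PRINCIPLES))
    # Fine-tuning on artworks, photos, and building facades pooled together
    # lets the network pick up compositional patterns shared across domains.
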
Imperfect labels limit the quality of predictions learned by deep neural networks. This is particularly relevant in medical image segmentation, where reference annotations are difficult to collect and vary significantly even across expert annotators. Prior work on mitigating label noise has focused on simple models of mostly uniform noise. In this work, we explore biased and unbiased errors artificially introduced into brain tumour annotations on MRI data. We found that supervised and semi-supervised segmentation methods are robust, or fairly robust, to unbiased errors but sensitive to biased errors. It is therefore important to identify the sorts of errors expected in medical image labels and, especially, to mitigate the biased ones.
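
To make the distinction concrete, here is a small sketch of the two error types applied to a binary tumour mask, assuming NumPy and SciPy; the particular perturbations (random voxel flips versus systematic dilation) are illustrative stand-ins for the paper's artificial errors:

    import numpy as np
    from scipy import ndimage

    def unbiased_noise(mask, flip_rate=0.05, seed=0):
        # Flip a random fraction of voxels: errors with no systematic direction.
        rng = np.random.default_rng(seed)
        flips = rng.random(mask.shape) < flip_rate
        return np.where(flips, 1 - mask, mask)

    def biased_noise(mask, iterations=2):
        # Dilate the mask everywhere: every error over-segments the tumour,
        # so the noise pushes the labels in one consistent direction.
        return ndimage.binary_dilation(mask, iterations=iterations).astype(mask.dtype)
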
Current technology for autonomous cars primarily focuses on getting the passenger from point A to B. Nevertheless, it has been shown that passengers are afraid of taking a ride in self-driving cars. One way to alleviate this problem is to allow the passenger to give natural language commands to the car. However, the car can misunderstand the issued command or the visual surroundings, which could lead to uncertain situations. It is desirable that the self-driving car detect these situations and interact with the passenger to resolve them. This paper proposes a model that detects uncertain situations when a command is given and finds the visual objects causing them. Optionally, the system generates a question describing the uncertain objects. We argue that if the car could explain the objects in a human-like way, passengers would gain more confidence in the car's abilities. Thus, we investigate (1) how to detect uncertain situations and their underlying causes, and (2) how to generate clarifying questions for the passenger. When evaluated on the Talk2Car dataset, the proposed model, \acrfull{pipeline}, improves by \gls{m:ambiguous-absolute-increase} in terms of $IoU_{.5}$ compared to not using \gls{pipeline}. Furthermore, we designed a referring expression generator (REG), \acrfull{reg_model}, tailored to the self-driving-car setting; it yields relative improvements of \gls{m:meteor-relative} METEOR and \gls{m:rouge-relative} ROUGE-L over state-of-the-art REG models and is three times faster.
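
For reference, the $IoU_{.5}$ criterion used above counts a predicted bounding box as correct when its intersection-over-union with the ground-truth box exceeds 0.5; a minimal, self-contained sketch:

    def iou(a, b):
        # Boxes given as (x1, y1, x2, y2).
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union

    def accuracy_at_iou50(predictions, ground_truths):
        # Fraction of predicted boxes whose overlap with the truth exceeds 0.5.
        hits = sum(iou(p, g) > 0.5 for p, g in zip(predictions, ground_truths))
        return hits / len(ground_truths)
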
The current public sense of anxiety about disinformation, as manifested by so-called fake news, is acutely displayed by the reaction to recent events prompted by a belief in conspiracies among certain groups. A model for dealing with disinformation is proposed, based on a demonstration that disinformation behaves analogously to wave phenomena. Two criteria form the basis for combating its deleterious effects: a refractive medium of skepticism as the default mode, and polarization as a filter mechanism for analyzing its merits based on evidence. Critical thinking is enhanced because the first tackles the pernicious effect of confirmation bias and the second the tendency toward attribution, both of which undermine our efforts to think and act rationally. The benefits of such a strategy include an epistemic reformulation of disinformation as an independently existing phenomenon, which removes the negative connotations it carries when perceived as being possessed by groups or individuals.
Interpretation and explanation of deep models is critical for the wide adoption of systems that rely on them. In this paper, we propose a novel scheme for both interpretation and explanation in which, given a pretrained model, we automatically identify the internal features relevant to the set of classes considered by the model, without relying on additional annotations. We interpret the model through average visualizations of this reduced set of features. Then, at test time, we explain the network prediction by accompanying the predicted class label with supporting visualizations derived from the identified features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvNet-based visualizations. Moreover, we introduce an8Flower, a dataset specifically designed for objective quantitative evaluation of visual-explanation methods. Experiments on the MNIST, ILSVRC12, Fashion144k, and an8Flower datasets show that our method produces detailed explanations with good coverage of the relevant features of the classes of interest.
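
As a loose sketch of the feature-identification step, one simple proxy is to rank a convolutional layer's channels by their mean activation over samples of a class and visualize the top ones. This is an illustrative stand-in assuming PyTorch, not the paper's exact identification procedure:

    import torch

    @torch.no_grad()
    def top_features_for_class(model, layer, class_images, k=8):
        # Capture the layer's activations with a forward hook.
        acts = []
        hook = layer.register_forward_hook(lambda m, i, o: acts.append(o))
        model(class_images)  # class_images: a batch of samples from one class
        hook.remove()
        # Rank channels by mean activation over the batch and spatial dims.
        scores = acts[0].mean(dim=(0, 2, 3))
        return scores.topk(k).indices  # channels to visualize for this class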
