
How hard can it be? Estimating the difficulty of visual search in an image

Added by Radu Tudor Ionescu
Publication date: 2017
Language: English





We address the problem of estimating image difficulty, defined as the human response time for solving a visual search task. We collect human annotations of image difficulty for the PASCAL VOC 2012 data set through a crowd-sourcing platform. We then analyze which human-interpretable image properties can have an impact on visual search difficulty, and how accurately those properties can predict it. Next, we build a regression model based on deep features learned with state-of-the-art convolutional neural networks and show that it yields better results for predicting the ground-truth visual search difficulty scores produced by human annotators. Our model is able to correctly rank about 75% of image pairs according to their difficulty score. We also show that our difficulty predictor generalizes well to new classes not seen during training. Finally, we demonstrate that our predicted difficulty scores are useful for weakly supervised object localization (8% improvement) and semi-supervised object classification (1% improvement).
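
A minimal sketch of the kind of pipeline described above, in Python: a pretrained CNN used as a fixed feature extractor, a support vector regressor fit on the crowd-sourced difficulty scores, and a Kendall-tau check of how well pairs of images are ranked. The specific backbone (VGG-16), regressor (NuSVR), and the names `train_paths`/`train_scores` are illustrative assumptions, not necessarily the authors' exact setup.

```python
# Sketch: predict visual search difficulty from deep CNN features.
# train_paths/train_scores and test_paths/test_scores are hypothetical
# variables holding image file paths and human difficulty scores.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import NuSVR
from scipy.stats import kendalltau

# Pretrained CNN used as a fixed (frozen) feature extractor.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(path):
    """Return a 4096-d penultimate-layer feature vector for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        h = cnn.features(x)
        h = cnn.avgpool(h).flatten(1)
        h = cnn.classifier[:5](h)  # stop before the final class layer
    return h.squeeze(0).numpy()

X_train = np.stack([deep_features(p) for p in train_paths])
X_test = np.stack([deep_features(p) for p in test_paths])

reg = NuSVR(kernel="rbf").fit(X_train, train_scores)
pred = reg.predict(X_test)

# Kendall's tau relates to the fraction of correctly ordered pairs
# (roughly (tau + 1) / 2 when there are no ties), matching the
# "~75% of pairs ranked correctly" style of evaluation above.
tau, _ = kendalltau(test_scores, pred)
print(f"Kendall tau: {tau:.3f}")
```
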



Related research

G. Ghisellini (2008)
Recent Cherenkov observations of the two BL Lac objects PKS 2155-304 and Mkn 501 revealed TeV flux variability by a factor of ~2 in just 3-5 minutes. Even accounting for the effects of relativistic beaming, such short timescales challenge simple, conventional emission models and call for alternative ideas. We explore the possibility that extremely fast variable emission might be produced by particles streaming at ultra-relativistic speeds along magnetic field lines and inverse-Compton scattering any radiation field already present. This would produce extremely collimated beams of TeV photons. While the probability for the line of sight to lie within such a narrow emission cone would be negligibly small, one would expect the process not to be confined to a single site, but to take place in many very localised regions along almost straight magnetic field lines. A possible astrophysical setting realising these conditions is magneto-centrifugal acceleration of beams of particles. In this scenario, the variability timescale would not be related to the physical dimension of the emitting volume, but might be determined either by the typical duration of the process responsible for producing these high-energy particle beams or by the coherence length of the magnetic field. It is predicted that even faster TeV variability - with no X-ray counterpart - should be observed by the foreseen, more sensitive Cherenkov telescopes.
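
The tension mentioned above comes from the standard light-crossing-time argument; as a reference point (a textbook relation, not taken from this paper), a source varying coherently on a timescale $t_{\rm var}$ cannot be much larger than:

```latex
% Causality bound on the emitting-region size, for Doppler factor \delta
% and redshift z (standard relation, not from this paper):
R \;\lesssim\; \frac{c \, t_{\rm var} \, \delta}{1 + z}
% For t_var of a few minutes and \delta ~ 10, R is of order 10^{14} cm,
% comparable to the gravitational radius of the supermassive black holes
% thought to power these blazars; hence the difficulty for conventional
% one-zone models that the abstract describes.
```
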
Sofia Wechsler (2009)
The concept of realism in quantum mechanics means that results of measurement are caused by physical variables, hidden or observable. Local hidden variables were proved unable to explain the results of measurements on entangled particles tested far away from one another. Some physicists then embraced the idea of nonlocal hidden variables. The present article proves that this idea is problematic: it runs into an impasse vis-à-vis special relativity.
Detection of entangled states is essential in both fundamental and applied quantum physics. However, this task proves challenging, especially for general quantum states. One can perform full state tomography, but this method is time-demanding, especially in complex systems. Other approaches use entanglement witnesses; these methods tend to be less demanding but lack reliability. Here, we demonstrate that artificial neural networks (ANNs) provide a balance between the two approaches. In this paper, we compare ANN performance against witness-based methods for random general 2-qubit quantum states without any prior information on the states. Furthermore, we apply our approach to a real experimental data set.
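
A minimal sketch of this idea in Python: for two qubits the Peres-Horodecki (PPT) criterion is an exact entanglement label, so one can generate random density matrices, label them by partial transposition, and train a small network to reproduce the label. The sampling ensemble, network size, and feature encoding below are illustrative assumptions, not the authors' setup.

```python
# Sketch: train a small ANN to detect 2-qubit entanglement, with labels
# from the PPT criterion (necessary and sufficient for two qubits).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def random_density_matrix(dim=4):
    """Ginibre-ensemble random density matrix (illustrative choice)."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def is_entangled(rho):
    """Peres-Horodecki: a negative partial transpose <=> entangled (2 qubits)."""
    # Reshape to (i, k, j, l) and swap the second subsystem's indices k <-> l.
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.min(np.linalg.eigvalsh(r)) < 0

def features(rho):
    """Real feature vector: real and imaginary parts of rho, flattened."""
    return np.concatenate([rho.real.flatten(), rho.imag.flatten()])

states = [random_density_matrix() for _ in range(20000)]
X = np.stack([features(r) for r in states])
y = np.array([is_entangled(r) for r in states])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```
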
The angular momentum of the Kerr singularity should not be larger than a threshold value if it is to be enclosed by an event horizon: a Kerr singularity whose angular momentum exceeds the threshold value is naked. This fact suggests that if cosmic censorship holds in our Universe, an over-spinning body cannot collapse to a spacetime singularity without releasing its angular momentum. A simple kinematical estimate of two particles approaching each other supports this expectation and suggests the existence of a minimum size for an over-spinning body. But this does not imply that the geometry near the naked singularity cannot appear. By analyzing initial data, i.e., a snapshot of a spinning body, we see that an over-spinning body may produce a geometry close to the Kerr naked singularity around itself, at least as a transient configuration.
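
The threshold referred to above is the standard Kerr horizon condition (textbook general relativity, not specific to this paper):

```latex
% Kerr metric: horizons sit at the real roots of \Delta = r^2 - 2Mr + a^2,
% in geometric units G = c = 1, with spin parameter a = J/M:
r_{\pm} = M \pm \sqrt{M^{2} - a^{2}}
% An event horizon exists iff a \le M, i.e. J \le M^2 (J \le G M^2 / c in
% physical units); a body with larger J is "over-spinning" as used above.
```
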
Fei Ma, Ping Wang, Bing Yao (2019)
The boom in the study of complex networks, in particular scale-free ones, has considerably spurred research on scale-free graphs themselves, and a great number of interesting results have been reported in the past, including bounds on the diameter. In this paper, we focus mainly on the problem of analytically estimating the lower bound on the diameter of a scale-free graph, i.e., how small a scale-free graph can be. Unlike some pre-existing methods for determining the lower bound of the diameter, we proceed in a constructive manner in which a candidate model $\mathcal{G}^{*}(\mathcal{V}^{*},\mathcal{E}^{*})$ with ultra-small diameter is generated. In addition, with a rigorous proof, we demonstrate that the diameter of the graph $\mathcal{G}^{*}(\mathcal{V}^{*},\mathcal{E}^{*})$ must be the smallest in comparison with that of any scale-free graph. This should be regarded as the tight lower bound.
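
As a quick empirical illustration of how small scale-free graphs tend to be (this only shows the phenomenon; the paper's bound is analytic, and its constructive model $\mathcal{G}^{*}$ is not reproduced here), one can measure the diameter of Barabási-Albert preferential-attachment graphs in Python with networkx:

```python
# Diameters of scale-free (Barabasi-Albert) graphs stay very small even
# as the number of nodes grows by an order of magnitude.
import networkx as nx

for n in (1000, 5000):
    g = nx.barabasi_albert_graph(n, m=3, seed=0)  # m edges per new node
    print(n, "nodes -> diameter", nx.diameter(g))
```
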