
COCO-Stuff: Thing and Stuff Classes in Context

 Added by Holger Caesar
 Publication date 2016
Language: English


Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While many classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow us to explain key aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.
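As an illustrative sketch of the surface-cover analysis described above, the snippet below measures what fraction of labeled pixels a toy annotation map assigns to things vs. stuff. The ID convention used here (0 = unlabeled, 1-91 = things, 92-182 = stuff) is a simplifying assumption for this example; consult the COCO-Stuff release for the dataset's actual label mapping.

```python
import numpy as np

# Toy label map standing in for one pixel-wise COCO-Stuff annotation.
# Assumed (hypothetical) ID convention for this sketch:
#   0       -> unlabeled
#   1..91   -> thing classes
#   92..182 -> stuff classes
labels = np.array([
    [105, 105, 105,  17],   # 105 = a stuff class, 17 = a thing class
    [105,  17,  17,  17],
    [105, 105,   0,  17],
], dtype=np.uint8)

def surface_cover(label_map):
    """Fraction of labeled pixels covered by things vs. stuff."""
    labeled = label_map[label_map != 0]
    thing = np.count_nonzero((labeled >= 1) & (labeled <= 91))
    stuff = np.count_nonzero(labeled >= 92)
    return thing / labeled.size, stuff / labeled.size

thing_frac, stuff_frac = surface_cover(labels)
print(f"things: {thing_frac:.2f}, stuff: {stuff_frac:.2f}")
```

Run over a whole split, per-class tallies of the same counts give the surface-cover statistics the paper reports.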




Related research

We classify condensed matter systems in terms of the spacetime symmetries they spontaneously break. In particular, we characterize condensed matter itself as any state in a Poincaré-invariant theory that spontaneously breaks Lorentz boosts while preserving at large distances some form of spatial translations, time translations, and possibly spatial rotations. Surprisingly, the simplest, most minimal system achieving this symmetry breaking pattern---the framid---does not seem to be realized in Nature. Instead, Nature usually adopts a more cumbersome strategy: that of introducing internal translational symmetries---and possibly rotational ones---and of spontaneously breaking them along with their spacetime counterparts, while preserving unbroken diagonal subgroups. This symmetry breaking pattern describes the infrared dynamics of ordinary solids, fluids, superfluids, and---if they exist---supersolids. A third, extra-ordinary, possibility involves replacing these internal symmetries with other symmetries that do not commute with the Poincaré group, for instance the galileon symmetry, supersymmetry or gauge symmetries. Among these options, we pick the systems based on the galileon symmetry, the galileids, for a more detailed study. Despite some similarity, all different patterns produce truly distinct physical systems with different observable properties. For instance, the low-energy $2\to 2$ scattering amplitudes for the Goldstone excitations in the cases of framids, solids and galileids scale respectively as $E^2$, $E^4$, and $E^6$. Similarly the energy-momentum tensor in the ground state is trivial for framids ($\rho+p=0$), normal for solids ($\rho+p>0$) and even inhomogeneous for galileids.
We propose to help weakly supervised object localization for classes where location annotations are not available, by transferring things and stuff knowledge from a source set with available annotations. The source and target classes might share similar appearance (e.g. bear fur is similar to cat fur) or appear against similar background (e.g. horse and sheep appear against grass). To exploit this, we acquire three types of knowledge from the source set: a segmentation model trained on both thing and stuff classes; similarity relations between target and source classes; and co-occurrence relations between thing and stuff classes in the source. The segmentation model is used to generate thing and stuff segmentation maps on a target image, while the class similarity and co-occurrence knowledge help refine them. We then incorporate these maps as new cues into a multiple instance learning (MIL) framework, propagating the transferred knowledge from the pixel level to the object proposal level. In extensive experiments, we conduct our transfer from the PASCAL Context dataset (source) to the ILSVRC, COCO and PASCAL VOC 2007 datasets (targets). We evaluate our transfer across widely different thing classes, including some that are not similar in appearance, but appear against similar background. The results demonstrate significant improvement over standard MIL, and we outperform the state-of-the-art in the transfer setting.
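A minimal sketch of how transferred segmentation maps could score object proposals as an extra MIL cue: a proposal is rewarded for covering pixels of the transferred thing class and for being surrounded by a co-occurring stuff class. The class IDs, the co-occurrence weight, and the scoring formula are assumptions for illustration, not the paper's actual cues.

```python
import numpy as np

# Hypothetical class IDs for this sketch only.
THING_ID, STUFF_ID = 1, 2

def proposal_score(seg, box, cooccurrence=0.8):
    """Score a proposal box (x0, y0, x1, y1) on a per-pixel class map.
    Thing coverage inside the box, boosted by co-occurring stuff context
    outside it -- a stand-in for the paper's pixel-to-proposal cues."""
    x0, y0, x1, y1 = box
    inside = seg[y0:y1, x0:x1]
    outside = np.ones_like(seg, dtype=bool)
    outside[y0:y1, x0:x1] = False
    thing_cov = np.mean(inside == THING_ID)
    stuff_ctx = np.mean(seg[outside] == STUFF_ID)
    return thing_cov * (1 + cooccurrence * stuff_ctx)

seg = np.full((6, 6), STUFF_ID)   # stuff background (e.g. grass)
seg[2:5, 2:5] = THING_ID          # a thing region (e.g. sheep)
print(proposal_score(seg, (2, 2, 5, 5)))   # tight box scores highest
```

In an MIL framework, such scores would be added to the proposal features so the transferred pixel-level knowledge biases bag-label inference toward well-supported proposals.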
We propose PanopticFusion, a novel online volumetric semantic mapping system at the level of stuff and things. In contrast to previous semantic mapping systems, PanopticFusion is able to densely predict class labels of a background region (stuff) and individually segment arbitrary foreground objects (things). In addition, our system has the capability to reconstruct a large-scale scene and extract a labeled mesh thanks to its use of a spatially hashed volumetric map representation. Our system first predicts pixel-wise panoptic labels (class labels for stuff regions and instance IDs for thing regions) for incoming RGB frames by fusing 2D semantic and instance segmentation outputs. The predicted panoptic labels are integrated into the volumetric map together with depth measurements while keeping the consistency of the instance IDs, which could vary frame to frame, by referring to the 3D map at that moment. In addition, we construct a fully connected conditional random field (CRF) model with respect to panoptic labels for map regularization. For online CRF inference, we propose a novel unary potential approximation and a map division strategy. We evaluated the performance of our system on the ScanNet (v2) dataset. PanopticFusion outperformed or was comparable to state-of-the-art offline 3D DNN methods in both semantic and instance segmentation benchmarks. Also, we demonstrate a promising augmented reality application using a 3D panoptic map generated by the proposed system.
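The per-frame fusion step above can be sketched in a few lines: stuff pixels keep their semantic class, while thing pixels are re-encoded to carry both class and instance identity. The `class_id * 1000 + instance_id` encoding and the thing-class set are assumptions for this sketch, not PanopticFusion's actual representation.

```python
import numpy as np

THING_CLASSES = {1}   # hypothetical: class 1 is a "thing", others are stuff

def fuse_panoptic(semantic, instances):
    """Fuse a semantic map and an instance-ID map into panoptic labels.
    Stuff pixels keep their class ID; thing pixels are encoded as
    class_id * 1000 + instance_id (an assumed encoding)."""
    panoptic = semantic.astype(np.int32).copy()
    for cls in THING_CLASSES:
        mask = semantic == cls
        panoptic[mask] = cls * 1000 + instances[mask]
    return panoptic

semantic = np.array([[2, 2, 1],
                     [2, 1, 1]])      # class 2 = stuff, class 1 = thing
instances = np.array([[0, 0, 7],
                      [0, 7, 7]])     # instance 7 where the thing sits
print(fuse_panoptic(semantic, instances))
```

The full system would then integrate such labels into the volumetric map, remapping instance IDs against the current 3D map to keep them consistent across frames.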
We present a scheduler that improves cluster utilization and job completion times by packing tasks having multi-resource requirements and inter-dependencies. While the problem is algorithmically very hard, we achieve near-optimality on the job DAGs that appear in production clusters at a large enterprise and in benchmarks such as TPC-DS. A key insight is that carefully handling the long-running tasks and those with tough-to-pack resource needs will produce good-enough schedules. However, which subset of tasks to treat carefully is not clear (and intractable to discover). Hence, we offer a search procedure that evaluates various possibilities and outputs a preferred schedule order over tasks. An online component enforces the schedule orders desired by the various jobs running on the cluster. In addition, it packs tasks, overbooks the fungible resources and guarantees bounded unfairness for a variety of desirable fairness schemes. Relative to state-of-the-art schedulers, we speed up 50% of the jobs by over 30% each.
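The key insight above (treat long-running and tough-to-pack tasks carefully) can be illustrated with a deliberately simple greedy stand-in: rank tasks by a combined duration-plus-demand score and schedule the hardest first. The `Task` fields and the scoring rule are illustrative assumptions; the paper's actual system uses a search procedure over candidate orders, not this heuristic.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float   # expected runtime
    demand: float     # dominant resource share (fraction of a machine)

def schedule_order(tasks):
    """Greedy stand-in: prioritize tasks that are long-running or hard
    to pack, so they do not straggle at the end of the schedule."""
    return sorted(tasks, key=lambda t: t.duration + t.demand, reverse=True)

tasks = [
    Task("map",  1.0, 0.2),
    Task("join", 5.0, 0.7),   # long-running: schedule first
    Task("agg",  2.0, 0.9),   # tough to pack: schedule early
]
print([t.name for t in schedule_order(tasks)])
```

An online enforcement component, as in the paper, would then admit tasks in this preferred order while packing them against per-machine resource capacities.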
Coral Wheeler, 2015
We present FIRE/Gizmo hydrodynamic zoom-in simulations of isolated dark matter halos, two each at the mass of classical dwarf galaxies ($M_{\rm vir} \simeq 10^{10}\,M_\odot$) and ultra-faint galaxies ($M_{\rm vir} \simeq 10^9\,M_\odot$), and with two feedback implementations. The resultant central galaxies lie on an extrapolated abundance matching relation from $M_\star \simeq 10^6$ to $10^4\,M_\odot$ without a break. Every host is filled with subhalos, many of which form stars. Our dwarfs with $M_\star \simeq 10^6\,M_\odot$ each have 1-2 well-resolved satellites with $M_\star = 3-200 \times 10^3\,M_\odot$. Even our isolated ultra-faint galaxies have star-forming subhalos. If this is representative, dwarf galaxies throughout the universe should commonly host tiny satellite galaxies of their own. We combine our results with the ELVIS simulations to show that targeting $\sim 50~{\rm kpc}$ regions around nearby isolated dwarfs could increase the chances of discovering ultra-faint galaxies by $\sim 35\%$ compared to random halo pointings, and specifically identify the region around the Phoenix dwarf galaxy as a good potential target. The well-resolved ultra-faint galaxies in our simulations ($M_\star \simeq 3-30 \times 10^3\,M_\odot$) form within $M_{\rm peak} \simeq 0.5-3 \times 10^9\,M_\odot$ halos. Each has a uniformly ancient stellar population ($>10~{\rm Gyr}$) owing to reionization-related quenching. More massive systems, in contrast, all have late-time star formation. Our results suggest that $M_{\rm halo} \simeq 5 \times 10^9\,M_\odot$ is a probable dividing line between halos hosting reionization fossils and those hosting dwarfs that can continue to form stars in isolation after reionization.