Scale variation remains a challenging problem for object detection. Common paradigms adopt multi-scale training and testing (image pyramids) or FPN (feature pyramid network) to process objects across a wide scale range. However, multi-scale methods further aggravate scale variation, to a degree that even deep convolutional neural networks with FPN cannot handle well. In this work, we propose an innovative paradigm called Instance Scale Normalization (ISN) to resolve this problem. ISN compresses the scale space of objects into a consistent range (the ISN range) in both the training and testing phases. This relieves the problem of scale variation at its root and reduces the difficulty of network optimization. Experiments show that ISN surpasses its multi-scale counterpart significantly on object detection, instance segmentation, and multi-task human pose estimation, across several architectures. On COCO test-dev, our single model based on ISN achieves 46.5 mAP with a ResNet-101 backbone, which is among the state-of-the-art (SOTA) results for object detection.
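A minimal sketch of the ISN idea, assuming a SNIP-style image-pyramid setup: at each pyramid scale, only instances whose rescaled size falls inside a fixed ISN range are kept as training targets. The function name `select_isn_instances` and the range values are illustrative assumptions, not taken from the paper.

```python
import math

def select_isn_instances(boxes, image_scale, isn_range=(64.0, 256.0)):
    """Keep boxes whose sqrt-area, after rescaling, lies in the ISN range.

    boxes: list of (x1, y1, x2, y2) in original-image coordinates.
    image_scale: resize factor applied to the image at this pyramid level.
    isn_range: (min_size, max_size) in pixels on the rescaled image.
    """
    lo, hi = isn_range
    kept = []
    for (x1, y1, x2, y2) in boxes:
        size = math.sqrt((x2 - x1) * (y2 - y1)) * image_scale
        if lo <= size <= hi:
            kept.append((x1, y1, x2, y2))
    return kept

# At a 2x upsampled pyramid level, only the medium-sized original box
# lands inside the normalized range and is used as a training target.
print(select_isn_instances([(0, 0, 20, 20), (0, 0, 100, 100)], image_scale=2.0))
```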
Image composition plays a common but important role in photo editing. To acquire photo-realistic composite images, one must adjust the appearance and visual style of the foreground to be compatible with the background. Existing deep learning methods …
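The abstract above is cut off, but its stated goal, making the foreground's appearance compatible with the background, can be illustrated with a classic per-channel color-statistics baseline. This is a generic illustration, not the paper's method, and the function name is hypothetical.

```python
import numpy as np

def match_foreground_stats(image, mask, eps=1e-6):
    """Retarget the foreground's channel statistics to the background's.

    image: (H, W, 3) float array; mask: (H, W) bool, True = foreground.
    """
    out = image.copy()
    for c in range(3):
        fg, bg = image[mask, c], image[~mask, c]
        scale = bg.std() / (fg.std() + eps)
        # Shift foreground mean/std toward the background's statistics.
        out[mask, c] = (fg - fg.mean()) * scale + bg.mean()
    return out

img = np.random.rand(64, 64, 3)
m = np.zeros((64, 64), dtype=bool)
m[16:48, 16:48] = True
print(match_foreground_stats(img, m).shape)  # (64, 64, 3)
```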
In this paper, we explore the role of Instance Normalization in low-level vision tasks. Specifically, we present a novel block: Half Instance Normalization Block (HIN Block), to boost the performance of image restoration networks. Based on HIN Block, …
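A hedged PyTorch sketch of the half-instance-normalization idea as the abstract describes it: Instance Normalization is applied to half of the channels while the other half passes through unchanged, and the two halves are concatenated. The class name is an assumption, and the paper's full block may include additional convolutions and activations not shown here.

```python
import torch
import torch.nn as nn

class HalfInstanceNorm(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.half = channels // 2
        self.norm = nn.InstanceNorm2d(self.half, affine=True)

    def forward(self, x):
        # Split channels: normalize one half, keep the other as identity.
        a, b = torch.split(x, [self.half, x.shape[1] - self.half], dim=1)
        return torch.cat([self.norm(a), b], dim=1)

x = torch.randn(2, 64, 32, 32)
print(HalfInstanceNorm(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```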
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions …
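One plausible form of such a learnable normalization, sketched below, blends Instance Normalization (per-sample, per-channel statistics) with Layer Normalization (per-sample statistics over all channels and positions) via a learnable per-channel gate. The exact parameterization in the paper may differ; this is an illustrative assumption.

```python
import torch
import torch.nn as nn

class LearnableInstanceLayerNorm(nn.Module):
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, channels, 1, 1), 0.5))
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        # Instance-norm statistics: over H, W per channel.
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        # Layer-norm statistics: over C, H, W per sample.
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        # Learnable gate decides, per channel, which statistics dominate.
        rho = self.rho.clamp(0.0, 1.0)
        return self.gamma * (rho * x_in + (1 - rho) * x_ln) + self.beta

print(LearnableInstanceLayerNorm(8)(torch.randn(2, 8, 16, 16)).shape)
```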
Feature Normalization (FN) is an important technique to help neural network training, which typically normalizes features across spatial dimensions. Most previous image inpainting methods apply FN in their networks without considering the impact of the …
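To illustrate why pooling statistics over all spatial positions is problematic in inpainting, here is a minimal sketch of mask-aware normalization that computes means and variances separately inside and outside a hole mask. This is a generic illustration under that assumption, not necessarily the paper's exact formulation.

```python
import torch

def masked_instance_norm(x, mask, eps=1e-5):
    """Normalize each region (mask == 1 and mask == 0) with its own statistics.

    x:    (N, C, H, W) features.
    mask: (N, 1, H, W) binary mask, 1 = valid pixel, 0 = corrupted pixel.
    """
    out = torch.zeros_like(x)
    for m in (mask, 1 - mask):
        # Per-channel statistics restricted to this region only.
        count = m.sum(dim=(2, 3), keepdim=True).clamp(min=1.0)
        mean = (x * m).sum(dim=(2, 3), keepdim=True) / count
        var = ((x - mean) ** 2 * m).sum(dim=(2, 3), keepdim=True) / count
        out = out + m * (x - mean) / torch.sqrt(var + eps)
    return out

x = torch.randn(1, 4, 8, 8)
mask = (torch.rand(1, 1, 8, 8) > 0.3).float()
print(masked_instance_norm(x, mask).shape)  # torch.Size([1, 4, 8, 8])
```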
This paper addresses the problem of model compression via knowledge distillation. To this end, we propose a new knowledge distillation method based on transferring feature statistics, specifically the channel-wise mean and variance, from the teacher …
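A minimal sketch of the statistics-transfer idea described above: the student is penalized for mismatching the channel-wise mean and variance of a paired teacher feature map. The layer pairing and the squared-error loss form are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def channel_stats(feat):
    """Channel-wise mean and variance over batch and spatial dimensions."""
    mean = feat.mean(dim=(0, 2, 3))
    var = feat.var(dim=(0, 2, 3), unbiased=False)
    return mean, var

def stats_distillation_loss(student_feat, teacher_feat):
    # Match first- and second-order channel statistics of the teacher.
    s_mean, s_var = channel_stats(student_feat)
    t_mean, t_var = channel_stats(teacher_feat)
    return ((s_mean - t_mean) ** 2).mean() + ((s_var - t_var) ** 2).mean()

s = torch.randn(4, 64, 16, 16, requires_grad=True)  # student features
t = torch.randn(4, 64, 16, 16)                      # teacher features
loss = stats_distillation_loss(s, t.detach())
loss.backward()
print(float(loss))
```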