
Need for context-aware computing in astrophysics

Published by Dr. Dilip G. Banhatti
Publication date: 2008
Research field: Physics
Paper language: English





Disk galaxy rotation curves provide an example of dark matter being inferred from a redundant computational procedure, because proper care was not taken of the astrophysical and computational context. At least three attempts that do take the context into account have not found an adequate voice, because even experts were hasty in wrongly concluding that dark matter exists. This firmly entrenched view, prevalent for about three quarters of a century, has now become difficult to correct. The right context must be borne in mind at every step to avoid such a situation. Other examples may well exist. Keywords: dark matter; disk galaxy; rotation curve; context-awareness. Topics: Algorithms; Applications.
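To make the rotation-curve example concrete: if essentially all of a galaxy's mass were the luminous matter inside radius r, the circular velocity would fall off in the Keplerian way, v = sqrt(GM/r), whereas observed curves stay roughly flat at large r. The sketch below (with a purely illustrative luminous mass; the function name and numbers are assumptions, not from the paper) shows the expected Keplerian fall-off that the flat observations contradict:

```python
import math

# Gravitational constant in units of kpc * (km/s)^2 / solar mass.
G = 4.30091e-6

def keplerian_velocity(r_kpc, enclosed_mass_msun):
    """Circular velocity expected if essentially all mass lies inside r."""
    return math.sqrt(G * enclosed_mass_msun / r_kpc)

# Hypothetical luminous mass of a disk galaxy (order of magnitude only).
M_lum = 1.0e11  # solar masses

for r in (5.0, 10.0, 20.0, 40.0):  # radii in kpc
    v = keplerian_velocity(r, M_lum)
    print(f"r = {r:4.0f} kpc  ->  v ~ {v:6.1f} km/s")
# Keplerian v falls as 1/sqrt(r); observed rotation curves stay roughly
# flat, which is the discrepancy conventionally attributed to dark matter.
```

Quadrupling the radius halves the Keplerian velocity; it is this predicted fall-off, absent from the data, that drives the conventional dark-matter inference the paper questions.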


Read also

We present ConXsense, the first framework for context-aware access control on mobile devices based on context classification. Previous context-aware access control systems often require users to laboriously specify detailed policies or they rely on pre-defined policies not adequately reflecting the true preferences of users. We present the design and implementation of a context-aware framework that uses a probabilistic approach to overcome these deficiencies. The framework utilizes context sensing and machine learning to automatically classify contexts according to their security and privacy-related properties. We apply the framework to two important smartphone-related use cases: protection against device misuse using a dynamic device lock and protection against sensory malware. We ground our analysis on a sociological survey examining the perceptions and concerns of users related to contextual smartphone security and analyze the effectiveness of our approach with real-world context data. We also demonstrate the integration of our framework with the FlaskDroid architecture for fine-grained access control enforcement on the Android platform.
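The core idea of the abstract above can be sketched in a few lines: a probabilistic classifier scores the sensed context, and the dynamic device lock acts on that score rather than on fixed user-authored rules. Everything below (feature names, weights, threshold) is hypothetical illustration, not ConXsense's actual model:

```python
import math

def classify_context(features, weights, bias=0.0):
    """Toy logistic model mapping sensed context features to P(context unsafe).
    ConXsense learns such a classifier from labeled context data; the
    weights here are made up purely for illustration."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def device_lock_policy(p_unsafe, threshold=0.5):
    """Enable the lock screen whenever the context looks risky enough."""
    return "lock" if p_unsafe >= threshold else "unlock"

# Hypothetical features: (unfamiliar location, strangers nearby, device unattended)
context = (1.0, 1.0, 0.0)
weights = (1.2, 0.8, 1.5)
p = classify_context(context, weights, bias=-1.0)
print(device_lock_policy(p))
```

The point of the probabilistic approach is exactly this decoupling: users never write per-context policies; the classifier's output drives the enforcement decision.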
H. Li, 2009
In this White Paper, we emphasize the need for and the important role of plasma astrophysics in the studies of formation, evolution of, and feedback by Active Galaxies. We make three specific recommendations: 1) We need to significantly increase the resolution of VLA, perhaps by building an EVLA-II at a modest cost. This will provide the angular resolution to study jets at kpc scales, where, for example, detailed Faraday rotation diagnosis can be done at 1GHz transverse to jets; 2) We need to build coordinated programs among NSF, NASA, and DOE to support laboratory plasma experiments (including liquid metal) that are designed to study key astrophysical processes, such as magneto-rotational instability (origin of angular momentum transport), dynamo (origin of magnetic fields), jet launching and stability. Experiments allowing access to relativistic plasma regime (perhaps by intense lasers and magnetic fields) will be very helpful for understanding the stability and dissipation physics of jets from Supermassive Black Holes; 3) Again through the coordinated support among the three Agencies, we need to invest in developing comprehensive theory and advanced simulation tools to study the accretion disks and jets in relativistic plasma physics regime, especially in connecting large scale fluid scale phenomena with relativistic kinetic dissipation physics through which multi-wavelength radiation is produced.
Zero padding is widely used in convolutional neural networks to prevent the size of feature maps diminishing too fast. However, it has been claimed to disturb the statistics at the border. As an alternative, we propose a context-aware (CA) padding approach to extend the image. We reformulate the padding problem as an image extrapolation problem and illustrate the effects on the semantic segmentation task. Using context-aware padding, the ResNet-based segmentation model achieves higher mean Intersection-Over-Union than the traditional zero padding on the Cityscapes and the dataset of DeepGlobe satellite imaging challenge. Furthermore, our padding does not bring noticeable overhead during training and testing.
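The "padding as extrapolation" reformulation can be illustrated with a crude stand-in: the paper learns the extrapolation, but even simple edge replication shows how extending the image avoids the zero-valued border that disturbs feature statistics. A minimal sketch, assuming NumPy only:

```python
import numpy as np

def zero_pad(x, p):
    """Standard zero padding: borders filled with 0, disturbing border stats."""
    return np.pad(x, p, mode="constant", constant_values=0)

def extrapolation_pad(x, p):
    """Crude stand-in for context-aware padding: extend the image by
    replicating edge values instead of inserting zeros. (The paper learns
    this extrapolation from context; edge replication is illustration only.)"""
    return np.pad(x, p, mode="edge")

img = np.array([[5.0, 6.0],
                [7.0, 8.0]])
print(zero_pad(img, 1))
print(extrapolation_pad(img, 1))
# The border mean of the zero-padded map is pulled toward 0, while the
# extrapolated map keeps border statistics close to the image interior.
```

A convolution sliding over the extrapolated borders sees values distributed like the interior, which is the property the zero-padding critique is about.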
The amount of CO$_2$ emitted per kilowatt-hour on an electricity grid varies by time of day and substantially varies by location due to the types of generation. Networked collections of warehouse scale computers, sometimes called Hyperscale Computing, emit more carbon than needed if operated without regard to these variations in carbon intensity. This paper introduces Google's system for Carbon-Intelligent Compute Management, which actively minimizes electricity-based carbon footprint and power infrastructure costs by delaying temporally flexible workloads. The core component of the system is a suite of analytical pipelines used to gather the next day's carbon intensity forecasts, train day-ahead demand prediction models, and use risk-aware optimization to generate the next day's carbon-aware Virtual Capacity Curves (VCCs) for all datacenter clusters across Google's fleet. VCCs impose hourly limits on resources available to temporally flexible workloads while preserving overall daily capacity, enabling all such workloads to complete within a day. Data from operation shows that VCCs effectively limit hourly capacity when the grid's energy supply mix is carbon intensive and delay the execution of temporally flexible workloads to greener times.
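The essential VCC property stated above, hourly limits that shift flexible work to greener hours while preserving total daily capacity, can be sketched with a toy allocation rule. Google's system uses risk-aware optimization; the inverse-proportional rule and all numbers below are hypothetical, chosen only to exhibit the invariant:

```python
def virtual_capacity_curve(carbon_forecast, daily_flexible_capacity):
    """Toy VCC: cap flexible compute in carbon-intensive hours and raise it
    in green hours, allocating hourly limits inversely proportional to the
    forecast carbon intensity. (The real system solves a risk-aware
    optimization; this rule is illustration only.)"""
    inverse = [1.0 / c for c in carbon_forecast]
    total = sum(inverse)
    return [daily_flexible_capacity * w / total for w in inverse]

# Hypothetical four-hour forecast in gCO2/kWh.
forecast = [300.0, 500.0, 500.0, 200.0]
vcc = virtual_capacity_curve(forecast, daily_flexible_capacity=100.0)
print([round(v, 1) for v in vcc])
# The limits sum to the daily capacity, so all flexible work still
# completes within the day -- it is merely shifted toward cleaner hours.
```

Note the invariant: the hourly limits redistribute capacity but never shrink the daily total, which is what lets delayed workloads still finish within a day.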
State-of-the-art methods for counting people in crowded scenes rely on deep networks to estimate crowd density. They typically use the same filters over the whole image or over large image patches. Only then do they estimate local scale to compensate for perspective distortion. This is typically achieved by training an auxiliary classifier to select, for predefined image patches, the best kernel size among a limited set of choices. As such, these methods are not end-to-end trainable and restricted in the scope of context they can leverage. In this paper, we introduce an end-to-end trainable deep architecture that combines features obtained using multiple receptive field sizes and learns the importance of each such feature at each image location. In other words, our approach adaptively encodes the scale of the contextual information required to accurately predict crowd density. This yields an algorithm that outperforms state-of-the-art crowd counting methods, especially when perspective effects are strong.
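The scale-adaptive combination described above, multiple receptive-field branches merged with a learned per-location importance, amounts to a softmax-weighted sum over scales at each pixel. A minimal NumPy sketch (random features and logits stand in for the learned network outputs; in the paper both come from trainable layers):

```python
import numpy as np

def scale_adaptive_combine(branch_features, weight_logits):
    """Combine per-scale feature maps with per-pixel importance weights.

    branch_features: (S, H, W) responses from S receptive-field sizes.
    weight_logits:   (S, H, W) unnormalized importances (learned end-to-end
                     in the paper; random here purely for illustration).
    """
    w = np.exp(weight_logits - weight_logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)          # softmax over the scale axis
    return (w * branch_features).sum(axis=0)   # weighted sum per location

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4, 4))    # 3 scales, 4x4 feature map
logits = rng.normal(size=(3, 4, 4))
out = scale_adaptive_combine(feats, logits)
print(out.shape)
```

Because the weights are a differentiable function of the input, the whole pipeline stays end-to-end trainable, in contrast to the auxiliary patch-wise kernel-size classifiers the abstract criticizes.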