
The Right Tools for the Job: The Case for Spatial Science Tool-Building

Posted by Geoff Boeing
Publication date: 2020
Language: English
Author: Geoff Boeing





This paper was presented as the 8th annual Transactions in GIS plenary address at the American Association of Geographers annual meeting in Washington, DC. The spatial sciences have recently seen growing calls for more accessible software and tools that better embody geographic science and theory. Urban spatial network science offers one clear opportunity: from multiple perspectives, tools to model and analyze nonplanar urban spatial networks have traditionally been inaccessible, atheoretical, or otherwise limiting. This paper reflects on this state of the field. It then discusses the motivation, experience, and outcomes of developing OSMnx, a tool intended to help address this need. Next, it reviews this tool's use in the recent multidisciplinary spatial network science literature to highlight the upstream and downstream benefits of open-source software development. Tool-building is an essential but poorly incentivized component of academic geography and of social science more broadly. To conduct better science, we need to build better tools. The paper concludes with paths forward, emphasizing open-source software and reusable computational data science beyond mere reproducibility and replicability.
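The abstract describes OSMnx in prose only; as a rough illustration of the kind of workflow the tool enables, the sketch below downloads and models one city's drivable street network as a nonplanar graph. It is a minimal example, not taken from the paper; the place name is arbitrary and exact behavior may differ slightly across OSMnx versions.

    # Minimal illustrative OSMnx sketch (not from the paper).
    # Assumes OSMnx is installed, e.g. via `pip install osmnx`.
    import osmnx as ox

    # Download and model the drivable street network of a named place as a
    # nonplanar directed graph (a networkx MultiDiGraph).
    G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")

    # Inspect the size of the modeled network, then plot it.
    print(len(G.nodes), "nodes and", len(G.edges), "edges")
    ox.plot_graph(G)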




Read also

P.A. Kienzle, 2002
Tcl/Tk provides fast and flexible interface design but slow and cumbersome vector processing. Octave provides fast and flexible vector processing but slow and cumbersome interface design. Calling Octave from Tcl gives you the flexibility to perform a broad range of fast numerical manipulations as part of an embedded GUI. We present a way to communicate between the two.
In the past few decades, constitution-making processes have shifted from closed elite writing to incorporating democratic mechanisms. Yet little is known about democratic participation in deliberative constitution-making processes. Here, we study a deliberative constituent process held by the Chilean government between 2015 and 2016. The Chilean process had the highest level of citizen participation in the world for such a process (204,402 people, i.e., 1.3% of the population) and covered 98% of the national territory. In its participatory phase, people gathered in self-convoked groups of 10 to 30 members, and they collectively selected social rights, deliberated on them, and wrote down an argument on why the new constitution should include those rights. To understand the drivers of citizen participation in this volunteer process, we first identify the determinants at the municipality level. We find that educational level, engagement in politics, support for the (left-wing) government, and Internet access increased participation. In contrast, population density and the share of evangelical Christians decreased participation. Moreover, we do not find evidence of political manipulation of citizen participation. In light of those determinants, we analyze the collective selection of social rights and the content produced during the deliberative phase. The findings suggest that the knowledge embedded in cities, proxied using education levels and main economic activity, facilitates deliberation about themes, concepts, and ideas. These results can inform the organization of new deliberative processes that involve voluntary citizen participation, from citizen consultations to constitution-making processes.
Hierarchical model fitting has become commonplace for case-control studies of cognition and behaviour in mental health. However, these techniques require us to formalise assumptions about the data-generating process at the group level, which may not be known. Specifically, researchers typically must choose whether to assume all subjects are drawn from a common population, or to model them as deriving from separate populations. These assumptions have profound implications for computational psychiatry, as they affect the resulting inference (latent parameter recovery) and may conflate or mask true group-level differences. To test these assumptions we ran systematic simulations on synthetic multi-group behavioural data from a commonly used multi-armed bandit task (a reinforcement learning task). We then examined recovery of group differences in latent parameter space under the two commonly used generative modelling assumptions: (1) modelling groups under a common shared group-level prior (assuming all participants are generated from a common distribution and are likely to share common characteristics); (2) modelling separate groups based on symptomatology or diagnostic labels, resulting in separate group-level priors. We evaluated the robustness of these approaches to variations in data quality and prior specifications on a variety of metrics. We found that fitting groups separately (assumption 2) provided the most accurate and robust inference across all conditions. Our results suggest that when dealing with data from multiple clinical groups, researchers should analyse patient and control groups separately, as this provides the most accurate and robust recovery of the parameters of interest.
Gammapy is a Python package for high-level gamma-ray data analysis built on Numpy, Scipy, and Astropy. It enables us to analyze gamma-ray data and to create sky images, spectra, and light curves from event lists and instrument response information, and to determine the position, morphology, and spectra of gamma-ray sources. So far Gammapy has mostly been used to analyze data from H.E.S.S. and Fermi-LAT, and it is now being used for the simulation and analysis of observations from the Cherenkov Telescope Array (CTA). We have proposed Gammapy as a prototype for the CTA science tools. This contribution gives an overview of the Gammapy package and project and shows an analysis application example with simulated CTA data.
The ability to accurately estimate job runtime properties allows a scheduler to schedule jobs effectively. State-of-the-art online cluster job schedulers use history-based learning, which uses past job execution information to estimate the runtime properties of newly arrived jobs. However, with fast-paced development in cluster technology (in both hardware and software) and changing user inputs, job runtime properties can change over time, which leads to inaccurate predictions. In this paper, we explore the potential and limitations of real-time learning of job runtime properties by proactively sampling and scheduling a small fraction of the tasks of each job. Such a task-sampling-based approach exploits the similarity among runtime properties of the tasks of the same job and is inherently immune to changing job behavior. Our study focuses on two key questions in comparing task-sampling-based learning (learning in space) and history-based learning (learning in time): (1) Can learning in space be more accurate than learning in time? (2) If so, can delaying the scheduling of a job's remaining tasks until the sampled tasks complete be more than compensated for by the improved accuracy, resulting in improved job performance? Our analytical and experimental analysis of 3 production traces with different skew and job distributions shows that learning in space can be substantially more accurate. Our simulation and testbed evaluation on Azure of the two learning approaches, anchored in a generic job scheduler and using 3 production cluster job traces, shows that despite its online overhead, learning in space reduces the average Job Completion Time (JCT) by 1.28x, 1.56x, and 1.32x compared to the prior-art history-based predictor.
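The contrast this abstract draws between learning in time and learning in space can be illustrated with a toy sketch. The snippet below is not the paper's scheduler or data; the function names and numbers are invented solely to show how a small sample of a job's own tasks can yield a runtime estimate that tracks drifting workload behavior better than averages over past jobs.

    # Toy illustration of history-based vs. task-sampling-based runtime
    # estimation; all values are synthetic and hypothetical.
    import random

    def history_based_estimate(past_job_avg_runtimes):
        # "Learning in time": predict from past jobs' average task runtimes.
        return sum(past_job_avg_runtimes) / len(past_job_avg_runtimes)

    def sampling_based_estimate(task_runtimes, sample_fraction=0.05):
        # "Learning in space": run a small sample of this job's own tasks
        # first and use their mean runtime to estimate the rest.
        k = max(1, int(len(task_runtimes) * sample_fraction))
        sample = random.sample(task_runtimes, k)
        return sum(sample) / k

    # Synthetic drift: past jobs averaged ~10 s per task, but the new job's
    # tasks run ~20 s, so the historical estimate is badly off.
    past_job_avg_runtimes = [10.0, 11.0, 9.5]
    new_job_tasks = [random.gauss(20.0, 2.0) for _ in range(200)]

    print("history-based estimate:", history_based_estimate(past_job_avg_runtimes))
    print("sampling-based estimate:", sampling_based_estimate(new_job_tasks))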