VOStat is a Web service providing interactive statistical analysis of astronomical tabular datasets. It is integrated into the suite of analysis and visualization tools associated with the international Virtual Observatory (VO) through the SAMP communication system. A user supplies VOStat with a dataset extracted from the VO, or otherwise acquired, and chooses among $\sim 60$ statistical functions. These include data transformations, plots and summaries, density estimation, one- and two-sample hypothesis tests, global and local regressions, multivariate analysis and clustering, spatial analysis, directional statistics, survival analysis (for censored data like upper limits), and time series analysis. The statistical operations are performed using the public domain {\bf R} statistical software environment, including a small fraction of its $>4000$ {\bf CRAN} add-on packages. The purpose of VOStat is to facilitate a wider range of statistical analyses than are commonly used in astronomy, and to promote use of more advanced methodology in {\bf R} and {\bf CRAN}.
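To give a concrete flavor of the kind of operations VOStat dispatches to R, the following sketch reproduces two of them (density estimation and a two-sample test) in Python with SciPy; the data, column meanings, and function choices are illustrative assumptions, not VOStat code.

```python
# Illustrative only: the kind of statistical operation VOStat exposes
# (density estimation, two-sample test), written in Python/SciPy rather than R.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical magnitude columns for two source classes from a VO table.
mag_a = rng.normal(18.0, 0.5, 300)
mag_b = rng.normal(18.3, 0.6, 250)

# Kernel density estimate (analogous to R's density()).
kde = stats.gaussian_kde(mag_a)
grid = np.linspace(mag_a.min(), mag_a.max(), 200)
density = kde(grid)

# Two-sample Kolmogorov-Smirnov test (analogous to R's ks.test()).
statistic, p_value = stats.ks_2samp(mag_a, mag_b)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3g}")
```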
NASA's Kepler, K2 and TESS missions employ Simple Aperture Photometry (SAP) to derive time-series photometry, where an aperture is estimated for each star, and the pixels containing each star are summed to create a single light curve. This method is simple, but in crowded fields the derived time series can be highly contaminated. The alternative method of fitting a Point Spread Function (PSF) to the data is able to account for crowding, but is computationally expensive. In this paper, we present a new approach to extracting photometry from these time-series missions, which fits the PSF directly but makes simplifying assumptions in order to greatly reduce the computational expense. Our method fixes the scene of the field in each image, estimates the PSF shape of the instrument with a linear model, and allows only source flux and position to vary. We demonstrate that our method is able to separate the photometry of blended targets in the Kepler dataset that are separated by less than a pixel. Our method is fast to compute, and fully accounts for uncertainties from degeneracies due to crowded fields. We name the method described in this work Linearized Field Deblending (LFD). We demonstrate our method on a false-positive Kepler Object of Interest (KOI) target. We are able to separate the photometry of the two sources in the data, and demonstrate that the contaminating transiting signal is consistent with a small, sub-stellar companion with a radius of $2.67R_{jup}$ ($0.27R_{sol}$). Our method is equally applicable to extracting photometry from NASA's TESS mission.
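As an illustration of the linear-model idea behind this kind of fit (not the authors' LFD implementation), the sketch below holds the source positions fixed, builds a design matrix of per-source PSF templates, and solves for the fluxes by least squares; the Gaussian PSF and the toy source list are assumptions made for the example.

```python
# Minimal sketch of a linearized PSF-photometry fit: with the scene fixed,
# each image is a linear combination of per-source PSF templates, so the
# fluxes follow from ordinary least squares.
import numpy as np

def gaussian_psf(dx, dy, sigma=1.0):
    """Toy stand-in for the instrument PSF model."""
    return np.exp(-(dx**2 + dy**2) / (2 * sigma**2))

def fit_fluxes(image, source_xy, sigma=1.0):
    """Solve for per-source fluxes in one cadence with source positions held fixed."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Design matrix: one column per source, each the PSF evaluated on the pixel grid.
    A = np.column_stack([
        gaussian_psf(xx - x0, yy - y0, sigma).ravel() for x0, y0 in source_xy
    ])
    fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return fluxes

# Two blended sources separated by less than a pixel.
sources = [(5.2, 5.0), (5.9, 5.3)]
truth = np.array([100.0, 40.0])
yy, xx = np.mgrid[0:11, 0:11]
img = sum(f * gaussian_psf(xx - x0, yy - y0) for f, (x0, y0) in zip(truth, sources))
print(fit_fluxes(img, sources))  # recovers roughly [100, 40] despite the blend
```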
Web service choreographies specify conditions on observable interactions among the services. An important question in this regard is realizability: given a choreography C, does there exist a set of service implementations I that conform to C? Further, if C is realizable, is there an algorithm to construct implementations in I? We propose a local temporal logic in which choreographies can be specified, and for specifications in the logic, we solve the realizability problem by constructing service implementations (when they exist) as communicating automata. These are nondeterministic finite state automata with a coupling relation. We also report on an implementation of the realizability algorithm and discuss experimental results.
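To make the notion of communicating automata concrete, here is a minimal Python sketch (not the paper's construction, and omitting the coupling relation) that represents two services as finite-state machines with send (!) and receive (?) transitions and enumerates their synchronous executions; the states, peers, and message names are hypothetical.

```python
# Hedged illustration: two services as finite-state automata whose transitions
# carry send ("peer!msg") and receive ("peer?msg") actions.
client = {"c0": [("s!req", "c1")], "c1": [("s?resp", "c2")], "c2": []}
server = {"s0": [("c?req", "s1")], "s1": [("c!resp", "s2")], "s2": []}

def moves(a, b, sa, sb):
    """Joint steps in which one automaton sends and the other receives the same message."""
    out = []
    for act1, n1 in a[sa]:
        for act2, n2 in b[sb]:
            if "!" in act1 and "?" in act2 and act1.split("!")[1] == act2.split("?")[1]:
                out.append((act1.split("!")[1], n1, n2))
            elif "?" in act1 and "!" in act2 and act2.split("!")[1] == act1.split("?")[1]:
                out.append((act2.split("!")[1], n1, n2))
    return out

def traces(a, b, sa, sb, prefix=()):
    """Enumerate the message sequences of all synchronous executions."""
    step = moves(a, b, sa, sb)
    if not step:
        yield prefix
        return
    for msg, n1, n2 in step:
        yield from traces(a, b, n1, n2, prefix + (msg,))

print(list(traces(client, server, "c0", "s0")))   # [('req', 'resp')]
```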
While both society and astronomy have evolved greatly over the past fifty years, the academic institutions and incentives that shape our field have remained largely stagnant. As a result, the astronomical community is faced with several major challenges, including: (1) the training that we provide does not align with the skills that future astronomers will need, (2) the postdoctoral phase is becoming increasingly demanding and demoralizing, and (3) our jobs are increasingly unfriendly to families with children. Solving these problems will require conscious engineering of our profession. Fortunately, this Decadal Review offers the opportunity to revise outmoded practices to be more effective and equitable. The highest priority of the Subcommittee on the State of the Profession should be to recommend specific, funded activities that will ensure the field meets the challenges we describe.
The Astronomer's Telegram (ATel; http://fire.berkeley.edu:8080/) is a web-based short-notice (<4000 characters) publication system for reporting and commenting on new astronomical observations, offering for the first time in astronomy effectively instantaneous distribution of time-critical information to the entire professional community. It is designed to take advantage of the World Wide Web's simple user interface and the ability of computer programs to provide nearly all the necessary functions. One may post a Telegram, which is instantly (<1 second) available at the Web site, and distributed by email within 24 hours through the Daily Email Digest, which is tailored to the subject selections of each reader. Optionally, urgent Telegrams may be distributed through Instant Email Notices. While ATel will be of particular use to observers of transient objects (such as gamma-ray bursts, microlenses, supernovae, novae, or X-ray transients) or in fields which are rapidly evolving observationally, there are no restrictions on subject matter.
The maximum entropy principle from statistical mechanics states that a closed system attains an equilibrium distribution that maximizes its entropy. We first show that for graphs with a fixed number of edges one can define a stochastic edge dynamic that can serve as an effective thermalization scheme, and hence the underlying graphs are expected to attain their maximum-entropy states, which turn out to be Erdős-Rényi (ER) random graphs. We next show that (i) a rate-equation-based analysis of the node degree distribution does indeed confirm the maximum-entropy principle, and (ii) the edge dynamic can be effectively implemented using short random walks on the underlying graphs, leading to a local algorithm for the generation of ER random graphs. The resulting statistical mechanical system can be adapted to provide a distributed and local (i.e., without any centralized monitoring) mechanism for load balancing, which can have a significant impact in increasing the efficiency and utilization of both the Internet (e.g., efficient web mirroring) and large-scale computing infrastructure (e.g., cluster and grid computing).
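A minimal sketch of what such a local, walk-based edge dynamic can look like (an illustration under assumed parameters, not the paper's exact rule): each step picks a random edge, takes a short random walk from one endpoint, and rewires the edge to the walk's endpoint, keeping the edge count fixed; starting from a ring lattice, the degree distribution spreads out toward that of an ER graph.

```python
# Illustrative local edge-rewiring dynamic with a fixed number of edges.
# Walk length, starting graph, and step count are assumptions for the example.
import random

def rewire_step(adj, walk_len=3):
    """One local move: pick an edge (u, v), walk from u, reconnect u to the walk endpoint."""
    u = random.choice([n for n in adj if adj[n]])
    v = random.choice(list(adj[u]))
    w = u
    for _ in range(walk_len):                 # short random walk started at u
        w = random.choice(list(adj[w]))
    if w != u and w not in adj[u]:            # avoid self-loops and multi-edges
        adj[u].remove(v); adj[v].remove(u)
        adj[u].add(w);    adj[w].add(u)

# Start from a ring lattice (each node linked to its two nearest neighbors).
n = 200
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
for _ in range(20000):
    rewire_step(adj)
degrees = sorted(len(nbrs) for nbrs in adj.values())
print(degrees[0], degrees[-1])   # degree spread after thermalization
```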