
Sparsely Sampling the Sky: Regular vs Random Sampling

Posted by: Paniez Paykari
Publication date: 2013
Research field: Physics
Language: English

The next generation of galaxy surveys, aiming to observe millions of galaxies, is expensive in both time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In previous work, it was shown that a sparse sampling strategy could be a powerful substitute for contiguous observations. However, that work investigated a regular sparse sampling, in which the observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. In this paper, we use Bayesian experimental design to investigate a random sparse sampling, where the observed patches are randomly distributed over the total sparsely sampled area. We find that, as there is no preferred scale in the window function, the induced correlation is evenly distributed amongst all scales. This could be desirable if we are interested in specific scales in the galaxy power spectrum, such as the Baryonic Acoustic Oscillation (BAO) scales. However, for constraining the overall galaxy power spectrum and the cosmological parameters, there is no preference between regular and random sampling. Hence whichever approach is more practical can be chosen, and the regular-grid condition on the distribution of the observed patches can be relaxed.
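As a rough illustration of the window-function argument (not the paper's Bayesian experimental design calculation), the toy sketch below builds a regular and a random sparse mask in one dimension and compares how the mask power |W(k)|^2 is distributed over scales; the pixel counts and patch sizes are arbitrary placeholders.

import numpy as np

# Toy 1-D illustration: compare the window-function power of a regular
# and a random sparse mask.  All sizes are arbitrary placeholders.
n_pix = 4096      # pixels along the survey direction
n_patch = 64      # number of observed patches
patch_len = 16    # pixels per patch -> 25% observed fraction

def regular_mask():
    mask = np.zeros(n_pix)
    for s in np.arange(n_patch) * (n_pix // n_patch):   # evenly spaced patches
        mask[s:s + patch_len] = 1.0
    return mask

def random_mask(rng):
    mask = np.zeros(n_pix)
    for s in rng.choice(n_pix - patch_len, size=n_patch, replace=False):
        mask[s:s + patch_len] = 1.0                      # patches may overlap
    return mask

rng = np.random.default_rng(0)
for name, mask in [("regular", regular_mask()), ("random", random_mask(rng))]:
    w = np.abs(np.fft.rfft(mask))**2     # window-function power |W(k)|^2
    w[0] = 0.0                           # ignore the mean (k = 0) mode
    print(f"{name}: peak off-zero mode at k index {np.argmax(w)}, "
          f"fraction of power in that mode {w.max() / w.sum():.3f}")

For the regular mask the off-zero power concentrates at harmonics of the patch spacing, while for the random mask it is spread over all k, which is the contrast described in the abstract.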

Read also

We study derangements of $\{1,2,\ldots,n\}$ under the Ewens distribution with parameter $\theta$. We give the moments and marginal distributions of the cycle counts, the number of cycles, and asymptotic distributions for large $n$. We develop a $\{0,1\}$-valued non-homogeneous Markov chain with the property that the counts of lengths of spacings between the 1s have the derangement distribution. This chain, an analog of the so-called Feller Coupling, provides a simple way to simulate derangements in time independent of $\theta$ for a given $n$ and linear in the size of the derangement.
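The paper's Feller-coupling-style chain is not reproduced here, but a naive sketch of the target distribution is easy to write down: draw an Ewens($\theta$) permutation via the standard Chinese-restaurant construction and reject any draw with a fixed point. Unlike the chain described above, the cost of this rejection sampler grows with $\theta$; it is only meant to make the object being studied concrete.

import random

def ewens_permutation(n, theta, rng):
    # Chinese-restaurant construction of an Ewens(theta) permutation:
    # element i starts a new cycle with probability theta / (i + theta),
    # otherwise it is inserted after a uniformly chosen earlier element.
    nxt = {}
    for i in range(n):
        if rng.random() < theta / (i + theta):
            nxt[i] = i
        else:
            j = rng.randrange(i)
            nxt[i], nxt[j] = nxt[j], i
    return [nxt[i] for i in range(n)]

def ewens_derangement(n, theta, seed=0):
    # Naive rejection: redraw until the permutation has no fixed point.
    # Unlike the paper's chain, the expected cost here depends on theta.
    rng = random.Random(seed)
    while True:
        p = ewens_permutation(n, theta, rng)
        if all(p[i] != i for i in range(n)):
            return p

print(ewens_derangement(10, theta=2.0))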
We show that there exists a family of groups $G_n$ and nontrivial irreducible representations $\rho_n$ such that, for any constant $t$, the average of $\rho_n$ over $t$ uniformly random elements $g_1, \ldots, g_t \in G_n$ has operator norm $1$ with probability approaching 1 as $n \rightarrow \infty$. More quantitatively, we show that there exist families of finite groups for which $\Omega(\log \log |G|)$ random elements are required to bound the norm of a typical representation below $1$. This settles a conjecture of A. Wigderson.
Differential privacy is among the most prominent techniques for preserving the privacy of sensitive data, owing to its robust mathematical guarantees and general applicability to a vast array of computations on data, including statistical analysis and machine learning. Previous work demonstrated that concrete implementations of differential privacy mechanisms are vulnerable to statistical attacks. This vulnerability is caused by the approximation of real values by floating-point numbers: the inverse transform sampling of the Laplace distribution can itself be inverted, enabling an attack in which the original value is retrieved with non-negligible advantage. This paper presents a practical solution to this finite-precision floating-point vulnerability. The proposed solution has the advantages of being (i) mathematically sound, (ii) generalisable to any infinitely divisible probability distribution, and (iii) simple to implement on modern architectures. Finally, the solution is designed to make side-channel attacks infeasible, because brute-force attacks are inherently exponential in the size of the domain.
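For orientation, here is a minimal sketch of the textbook inverse-transform Laplace sampler that such attacks target (the attack itself and the paper's hardened sampler are not reproduced). The function and parameter names are placeholders; the point is that the released value is a deterministic double-precision function of a single uniform draw, so the set of reachable outputs is sparse and can leak information about the input.

import math
import random

def laplace_noise(scale, rng):
    # Textbook inverse-transform sampling of a Laplace(0, scale) variate.
    # The output is a deterministic double-precision function of one uniform
    # draw, which is the root of the vulnerability discussed in the abstract.
    u = rng.random() - 0.5                      # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log1p(-2.0 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    # Naive floating-point Laplace mechanism (names are placeholders).
    return true_value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
print(laplace_mechanism(true_value=123.0, sensitivity=1.0, epsilon=0.5, rng=rng))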
In this paper we further investigate the well-studied problem of finding a perfect matching in a regular bipartite graph. The first non-trivial algorithm, with running time $O(mn)$, dates back to König's work in 1916 (here $m=nd$ is the number of edges in the graph, $2n$ is the number of vertices, and $d$ is the degree of each node). The currently most efficient algorithm takes time $O(m)$, and is due to Cole, Ost, and Schirra. We improve this running time to $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$; this minimum can never be larger than $O(n^{1.75}\sqrt{\ln n})$. We obtain this improvement by proving a uniform sampling theorem: if we sample each edge in a $d$-regular bipartite graph independently with a probability $p = O(\frac{n\ln n}{d^2})$ then the resulting graph has a perfect matching with high probability. The proof involves a decomposition of the graph into pieces which are guaranteed to have many perfect matchings but do not have any small cuts. We then establish a correspondence between potential witnesses to non-existence of a matching (after sampling) in any piece and cuts of comparable size in that same piece. Karger's sampling theorem for preserving cuts in a graph can now be adapted to prove our uniform sampling theorem for preserving perfect matchings. Using the $O(m\sqrt{n})$ algorithm (due to Hopcroft and Karp) for finding maximum matchings in bipartite graphs on the sampled graph then yields the stated running time. We also provide an infinite family of instances to show that our uniform sampling result is tight up to poly-logarithmic factors (in fact, up to $\ln^2 n$).
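A rough sketch of the sampling idea is given below, assuming networkx is available for the Hopcroft-Karp step; the constant used in $p$ is a placeholder rather than the one established in the paper, and no retry logic is included if the sampled graph happens to lose every perfect matching.

import math
import random

import networkx as nx
from networkx.algorithms import bipartite

def sampled_matching(G, left_nodes, d, seed=0):
    # Keep each edge of a d-regular bipartite graph independently with
    # p ~ (n ln n) / d^2, then run Hopcroft-Karp on the sparser graph.
    # The constant 4 is a placeholder, not the one proved in the paper,
    # and no retry is attempted if the sampled graph loses the matching.
    rng = random.Random(seed)
    n = len(left_nodes)
    p = min(1.0, 4.0 * n * math.log(n) / d**2)
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))
    H.add_edges_from(e for e in G.edges() if rng.random() < p)
    return bipartite.hopcroft_karp_matching(H, top_nodes=left_nodes)

# Tiny demo on the 3-regular complete bipartite graph K_{3,3}.
left = [f"u{i}" for i in range(3)]
right = [f"v{i}" for i in range(3)]
G = nx.relabel_nodes(nx.complete_bipartite_graph(3, 3),
                     {i: (left[i] if i < 3 else right[i - 3]) for i in range(6)})
print(sampled_matching(G, left, d=3))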
Johannes Buchner, 2017
The data torrent unleashed by current and upcoming astronomical surveys demands scalable analysis methods. Many machine learning approaches scale well, but separating the instrument measurement from the physical effects of interest, dealing with variable errors, and deriving parameter uncertainties is often an afterthought. Classic forward-folding analyses with Markov Chain Monte Carlo or Nested Sampling enable parameter estimation and model comparison, even for complex and slow-to-evaluate physical models. However, these approaches require independent runs for each data set, implying an unfeasible number of model evaluations in the Big Data regime. Here I present a new algorithm, collaborative nested sampling, for deriving parameter probability distributions for each observation. Importantly, the number of physical model evaluations scales sub-linearly with the number of data sets, and no assumptions about homogeneous errors, Gaussianity, the form of the model or heterogeneity/completeness of the observations need to be made. Collaborative nested sampling has immediate application in speeding up analyses of large surveys, integral-field-unit observations, and Monte Carlo simulations.
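For context, the sketch below is plain single-dataset nested sampling (not the collaborative algorithm introduced in the paper), using naive rejection for the constrained-prior draw; it only illustrates the per-observation cost that collaborative nested sampling aims to amortise across many data sets. All names and settings are placeholders.

import math
import random

def nested_sampling(loglike, prior_draw, n_live=100, n_iter=400, seed=0):
    # Minimal single-dataset nested sampling: repeatedly replace the worst
    # live point with a new prior draw above the current likelihood
    # threshold, shrinking the prior volume by ~exp(-1/n_live) per step.
    rng = random.Random(seed)
    live = [prior_draw(rng) for _ in range(n_live)]
    logl = [loglike(x) for x in live]
    log_z, log_x = -math.inf, 0.0          # evidence accumulator, log prior volume
    for _ in range(n_iter):
        i = min(range(n_live), key=lambda k: logl[k])
        log_w = logl[i] + log_x + math.log(1.0 - math.exp(-1.0 / n_live))
        log_z = max(log_z, log_w) + math.log1p(math.exp(-abs(log_z - log_w)))
        log_x -= 1.0 / n_live
        while True:                        # naive constrained-prior rejection step
            x = prior_draw(rng)
            if loglike(x) > logl[i]:
                live[i], logl[i] = x, loglike(x)
                break
    for k in range(n_live):                # add the remaining live-point weight
        log_w = logl[k] + log_x - math.log(n_live)
        log_z = max(log_z, log_w) + math.log1p(math.exp(-abs(log_z - log_w)))
    return log_z

# Toy example: standard-normal likelihood, uniform prior on [-5, 5].
def toy_loglike(x):
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def toy_prior_draw(rng):
    return rng.uniform(-5.0, 5.0)

print("log evidence ~", nested_sampling(toy_loglike, toy_prior_draw))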