In this paper we review the matched filter (MF) technique for detecting weak, deterministic, smooth signals in stationary, random, Gaussian noise. This is particularly suitable in astronomy for detecting emission lines in spectra and point sources in two-dimensional maps. A detailed theoretical development is already available in many books (e.g. Kay 1998; Poor 1994; McNicol 2005; Hippenstiel 2002; Macmillan & Creelman 2005; Wickens 2002; Barkat 2005; Tuzlukov 2001; Levy 2008). Our aim is to examine some practical issues that are typically ignored in textbooks and even in the specialized literature, for example the effects of the discretization of the signals and of the non-Gaussian nature of the noise. To this end we present each item in the form of an answer to a specific question. The relevant mathematics and its derivation are kept to a bare minimum, in the hope of conveying a better understanding of the real performance of the MF in practical applications. For ease of formalism, the arguments will be developed for one-dimensional signals. The extension to two-dimensional signals is trivial and will be highlighted in dedicated sections.
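As a minimal illustration of the MF idea sketched above, the following Python fragment detects a weak Gaussian emission line in white Gaussian noise by correlating the data with a unit-energy template of the expected line profile. All parameters (line centre, width, noise level) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-dimensional "spectrum": a weak Gaussian emission line
# of known shape buried in white Gaussian noise (illustrative numbers).
n = 512
x = np.arange(n)
center, width, amplitude = 200, 5.0, 1.0
signal = amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)
noise_sigma = 0.5
data = signal + rng.normal(0.0, noise_sigma, n)

# MF for white noise: correlate the data with the unit-energy template
# of the expected profile; the statistic peaks at the line position.
t = np.arange(-25, 26)
template = np.exp(-0.5 * (t / width) ** 2)
h = template / np.sqrt(np.sum(template ** 2))   # unit-energy template
mf = np.correlate(data, h, mode="same")

peak = int(np.argmax(mf))
snr = mf[peak] / noise_sigma    # detection statistic in noise-sigma units
print(peak, round(snr, 1))      # peak lands near the true line centre
```

For white noise the MF reduces to this cross-correlation with the template; coloured noise additionally requires a whitening step before the correlation.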
In this paper we study the subset of generalized quantum measurements on finite-dimensional systems known as local operations and classical communication (LOCC). While LOCC emerges as the natural class of operations in many important quantum information tasks, its mathematical structure is complex and difficult to characterize. Here we provide a precise description of LOCC and related operational classes in terms of quantum instruments. Our formalism captures both finite-round protocols and those that utilize an unbounded number of communication rounds. While the set of LOCC is not topologically closed, we show that finite-round LOCC constitutes a compact subset of quantum operations. Additionally, we show the existence of an open ball around the completely depolarizing map that consists entirely of LOCC-implementable maps. Finally, we demonstrate a two-qubit map whose action can be approximated arbitrarily closely using LOCC, but which nevertheless cannot be implemented perfectly.
These notes discuss, in a style intended for physicists, how to average data and fit it to some functional form. I try to make clear what is being calculated and what assumptions are being made, and to derive the results rather than just quote them. The aim is to put a lot of useful pedagogical material together in a convenient place. This manuscript is a substantial enlargement of lecture notes I prepared for the Bad Honnef School on Efficient Algorithms in Computational Physics, September 10-14, 2012.
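As a small worked example of the kind of material these notes cover, the sketch below computes the inverse-variance weighted average of several measurements of the same quantity, its standard error, and a chi-squared goodness-of-fit check. The numbers are made up purely for illustration:

```python
import numpy as np

# Hypothetical data: N measurements y_i of one quantity, each with a
# known standard error sigma_i (values invented for this example).
y = np.array([10.2, 9.8, 10.5, 10.1])
sigma = np.array([0.3, 0.2, 0.5, 0.25])

# Inverse-variance weighting gives the minimum-variance unbiased
# estimate under the assumption of independent Gaussian errors.
w = 1.0 / sigma**2
ybar = np.sum(w * y) / np.sum(w)      # weighted mean
err = 1.0 / np.sqrt(np.sum(w))        # standard error of the mean

# Chi-squared against the constant model; expect chi2 ~ dof if the
# quoted errors are realistic.
chi2 = np.sum(w * (y - ybar) ** 2)
dof = len(y) - 1

print(f"{ybar:.3f} +/- {err:.3f}, chi2/dof = {chi2:.2f}/{dof}")
```

If chi2 is much larger than the number of degrees of freedom, either the quoted errors are underestimated or the data are inconsistent with a single underlying value.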
We present an overview of all the observations (radio: VLA, MERLIN, VLBA, EVN; optical: WFPC2 and NICMOS) that were initially used to confirm the gravitational-lens nature of the double JVAS system B1030+074. Since the 1.56 arcsec system showed a first indication of variability, it has been monitored with the VLA and MERLIN to confirm its variable nature. We also present new VLBA observations of the lens system at 1.7 GHz that have unveiled detailed structure of the jet in the strong component and provided the first detection of the jet in the faint component.
Disassembly of binary code is hard, but necessary for improving the security of binary software. Over the past few decades, research in binary disassembly has produced many tools and frameworks, which have been made available to researchers and security professionals. These tools employ a variety of strategies that grant them different characteristics. The lack of systematization, however, impedes new research in the area and makes selecting the right tool hard, as we do not understand the strengths and weaknesses of existing tools. In this paper, we systematize binary disassembly through the study of nine popular, open-source tools. We couple a manual examination of their code bases with the most comprehensive experimental evaluation to date, using 3,788 binaries. Our study yields a comprehensive description and organization of the strategies used for disassembly, classifying them as either algorithms or heuristics. We also measure and report the impact of individual algorithms on the results of each tool. We find that while principled algorithms are used by all tools, the tools still rely heavily on heuristics to increase code coverage. Depending on the heuristics used, different coverage-versus-correctness trade-offs come into play, leading to tools with different strengths and weaknesses. We envision that these findings will help users pick the right tool and assist researchers in improving binary disassembly.
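The coverage-versus-correctness tension mentioned above can be illustrated with a toy example. The two-byte instruction set below is entirely hypothetical (not taken from the paper): linear sweep decodes every offset in order and mislabels inline data as code, while recursive traversal follows control flow and stays correct on the data but only reaches code connected to the entry point:

```python
# Toy fixed-width ISA (hypothetical, for illustration only):
# every instruction is 2 bytes: opcode, operand.
#   0x01 nn -> MOV (falls through)
#   0x02 nn -> JMP nn (unconditional jump to byte offset nn)
#   0x03 -- -> RET (ends the path)
OPS = {0x01: "MOV", 0x02: "JMP", 0x03: "RET"}

# Program layout: code at offsets 0-3, inline data at 4-5, code at 6-7.
prog = bytes([0x01, 0x07,   # 0: MOV
              0x02, 0x06,   # 2: JMP 6 (jumps over the data)
              0xFF, 0xFF,   # 4: inline data, not code
              0x03, 0x00])  # 6: RET

def linear_sweep(code):
    """Decode every offset in order; data bytes become bogus instructions."""
    out, pc = {}, 0
    while pc + 1 < len(code):
        out[pc] = OPS.get(code[pc], "BAD")
        pc += 2
    return out

def recursive_traversal(code, entry=0):
    """Follow control flow from the entry point only."""
    out, work = {}, [entry]
    while work:
        pc = work.pop()
        if pc in out or pc + 1 >= len(code) or code[pc] not in OPS:
            continue
        op = OPS[code[pc]]
        out[pc] = op
        if op == "JMP":
            work.append(code[pc + 1])   # follow the jump target
        elif op != "RET":
            work.append(pc + 2)         # fall through
    return out

ls = linear_sweep(prog)
rt = recursive_traversal(prog)
# Linear sweep mislabels the data at offset 4; recursive traversal skips it.
```

Real tools combine both strategies with heuristics (e.g. function-start patterns) precisely because neither pure algorithm wins on both coverage and correctness.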
Summarising data as text helps people make sense of it. It also improves data discovery, as search algorithms can match this text against keyword queries. In this paper, we explore the characteristics of text summaries of data in order to understand what meaningful summaries look like. We present two complementary studies: a data-search diary study with 69 students, which offers insight into the information needs of people searching for data; and a summarisation study, with a lab and a crowdsourcing component involving 80 data-literate participants overall, which produced summaries for 25 datasets. In each study we carried out a qualitative analysis to identify key themes and commonly mentioned dataset attributes that people consider when searching for and making sense of data. The results helped us design a template for creating more meaningful textual representations of data, alongside guidelines for improving the data-search experience overall.