Possible inaccuracies in the determination of periods from short time series, caused by disregarding the real shape of light curves and instrumental trends, are documented through a period analysis of a simulated TESS-like light curve with the well-known Lomb-Scargle method.
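As a minimal illustration of the effect this abstract describes, the following sketch shows how an unmodeled linear instrumental trend can distort a Lomb-Scargle period search. It uses astropy's LombScargle on synthetic data; all numbers (period, drift amplitude, noise level) are illustrative, not the paper's setup.

```python
# Minimal sketch: a linear instrumental trend biasing a Lomb-Scargle period
# search on a simulated TESS-like light curve (all numbers are illustrative).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 27.0, 1000))        # ~one TESS sector, in days
true_period = 3.7                                 # days, assumed for the demo
flux = 1.0 + 0.01 * np.sin(2 * np.pi * t / true_period)
flux += 0.005 * t / t.max()                       # slow instrumental drift
flux += rng.normal(0.0, 0.002, t.size)            # photometric noise

freq, power = LombScargle(t, flux).autopower(maximum_frequency=5.0)
print("raw peak period:", 1.0 / freq[np.argmax(power)])

# Removing a fitted straight line first suppresses the low-frequency leakage
# that can distort or displace the true peak.
flux_dt = flux - np.polyval(np.polyfit(t, flux, 1), t)
freq, power = LombScargle(t, flux_dt).autopower(maximum_frequency=5.0)
print("detrended peak period:", 1.0 / freq[np.argmax(power)])
```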
The creation of a 3D map of the bulge using RR Lyrae (RRL) stars is one of the main goals of the VVV(X) surveys. The overwhelming number of sources under analysis requires the use of automatic procedures. In this context, previous works introduced the use of Machine Learning (ML) methods for variable star classification. Our goal is the development and analysis of an automatic procedure, based on ML, for the identification of RRLs in the VVV Survey. This procedure will be used to generate reliable catalogs integrated over several tiles in the survey. After the reconstruction of light curves, we extract a set of period- and intensity-based features, and we use for the first time a new subset of pseudo-color features. We discuss all the steps needed to define our automatic pipeline: selection of quality measures, sampling procedures, classifier setup, and model selection. As a final result, we construct an ensemble classifier with an average Recall of 0.48 and an average Precision of 0.86 over 15 tiles. We also make available our processed datasets and a catalog of candidate RRLs. Perhaps most interestingly, from the perspective of classification based on broad-band photometric data, our results indicate that color is an informative feature type for RRLs that should be considered in automatic ML classification methods. We also argue that Recall and Precision, in both tables and curves, are high-quality metrics for this highly imbalanced problem. Furthermore, we show for our VVV dataset that good estimates require using the original class distribution rather than reduced, artificially balanced samples. Finally, we show that the use of ensemble classifiers helps resolve the crucial model-selection step, and that most errors in the identification of RRLs are related to low-quality observations of some sources or to the difficulty of resolving the RRL-C type given the data.
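To make the evaluation idea concrete, here is a hedged sketch of a soft-voting ensemble scored with Recall and Precision on a synthetic, highly imbalanced set. It uses scikit-learn; the feature set, base models, and class ratio are illustrative stand-ins, not the VVV pipeline itself.

```python
# Minimal sketch of ensemble classification evaluated with Recall/Precision
# on an imbalanced problem (synthetic data; not the VVV pipeline).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

# Highly imbalanced two-class problem: ~1% "RRL" vs. 99% "non-variable".
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Soft-voting ensemble over heterogeneous base models, as a stand-in for
# the model-selection step the abstract argues ensembles help resolve.
clf = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

# On imbalanced data, report Recall and Precision rather than accuracy.
print("Recall:    %.2f" % recall_score(y_te, pred))
print("Precision: %.2f" % precision_score(y_te, pred))
```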
How far can we use multi-wavelength cross-identifications to deconvolve far-infrared images? In this short research note I explore a test case of CLEAN deconvolutions of simulated confused 850 micron SCUBA-2 data, and investigate the possible scientific applications of combining these data with the ostensibly deeper TolTEC Large Scale Structure (LSS) survey 1.1mm-2mm data. I show that the SCUBA-2 map can be reconstructed at the 1.1mm LMT resolution, achieving an 850 micron deconvolved sensitivity of 0.7 mJy RMS, an improvement of at least ~1.5x over naive point-source-filtered images. The TolTEC/SCUBA-2 combination can constrain cold (<10 K) observed-frame colour temperatures, where TolTEC alone cannot.
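For readers unfamiliar with CLEAN, the following is a minimal Hogbom-style sketch of the algorithm on a 1-D toy "map": find the residual peak, subtract a scaled, shifted copy of the PSF, and accumulate point-source components. It is purely illustrative; real far-IR deconvolution works on 2-D maps, and the note's method additionally uses cross-identification priors.

```python
# Minimal Hogbom-style CLEAN sketch on a 1-D toy map (illustrative only).
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=500):
    """Iteratively subtract scaled, shifted copies of the PSF at the map
    peak, accumulating components until the residual drops below threshold."""
    residual = dirty.copy()
    components = np.zeros_like(dirty)
    center = np.argmax(psf)                      # PSF peak index
    for _ in range(max_iter):
        peak = np.argmax(np.abs(residual))
        if np.abs(residual[peak]) < threshold:
            break
        amp = gain * residual[peak]
        components[peak] += amp
        shift = peak - center
        lo, hi = max(0, shift), min(len(dirty), len(psf) + shift)
        residual[lo:hi] -= amp * psf[lo - shift:hi - shift]
    return components, residual

# Toy test: two blended sources convolved with a broad Gaussian beam.
x = np.arange(256)
psf = np.exp(-0.5 * ((x - 128) / 8.0) ** 2)
sky = np.zeros(256); sky[120], sky[140] = 1.0, 0.6
dirty = np.convolve(sky, psf, mode="same")
comps, resid = hogbom_clean(dirty, psf)
print("recovered peaks near:", np.nonzero(comps > 0.1)[0])
```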
We address the problem that dynamical masses of high-redshift massive galaxies, derived using virial scaling, often come out lower than the stellar masses inferred from population fitting to multi-band photometry. We compare dynamical and stellar masses for various samples spanning ranges of mass, compactness, and redshift, including the SDSS. The discrepancy between dynamical and stellar masses occurs at both low and high redshifts, and it systematically increases with galaxy compactness. Because it is unlikely that stellar-mass estimates carry systematic errors that correlate with galaxy compactness, the correlation of the mass discrepancy with compactness points to errors in the dynamical mass estimates, which assume homology with massive, nearby ellipticals. We quantify the deviations from homology and propose a specific non-virial scaling of dynamical mass with effective radius and velocity dispersion.
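For concreteness, the homology-based (virial) estimator referred to here, and the form of a generalized non-virial scaling, can be written as below. The coefficient K ~ 5 is a commonly used calibration for nearby ellipticals, and the exponents alpha and beta are symbolic placeholders, not the values fitted in the paper.

```latex
% Homology-based (virial) mass estimator; K ~ 5 is a common calibration
% for nearby ellipticals:
\begin{equation}
  M_{\rm dyn} = K \, \frac{\sigma_e^{2} R_e}{G}
\end{equation}
% Non-virial generalization, with exponents allowed to deviate from the
% homologous values (alpha, beta) = (2, 1); placeholders, not the paper's fit:
\begin{equation}
  M_{\rm dyn} \propto \sigma_e^{\alpha} \, R_e^{\beta}
\end{equation}
```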
How would observers differentiate Beacons from pulsars or other exotic sources, in light of likely Beacon observables? Bandwidth, pulse width, and frequency may be distinguishing features. Such transients could be evidence of civilizations slightly above our own on the Kardashev scale.
Recently, speech enhancement (SE) based on deep speech priors has attracted much attention, for example the variational auto-encoder with non-negative matrix factorization (VAE-NMF) architecture. Compared to conventional approaches that represent clean speech with shallow models, such as Gaussians with a low-rank covariance, the new approach employs deep generative models to represent the clean speech, which often provides a better prior. Despite this clear theoretical advantage, we argue that deep priors must be used with caution, since the likelihood produced by a deep generative model does not always coincide with speech quality. We designed a comprehensive study of this issue and demonstrated that deep speech priors can achieve reasonable SE performance, but the results may be suboptimal. A careful analysis showed that this problem is deeply rooted in the mismatch between the flexibility of deep generative models and the nature of maximum-likelihood (ML) training.
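To make the VAE-NMF setup concrete, here is a hedged sketch of the test-time estimation it typically involves: a pretrained VAE decoder supplies the clean-speech variance, an NMF model supplies the noise variance, and latents plus NMF factors are fit by maximizing the (Gaussian) likelihood of the noisy spectrogram. The decoder here is a random placeholder network, and all shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of VAE-NMF-style speech enhancement (hypothetical shapes;
# the decoder is a random stand-in for a pretrained VAE, not a real model).
import torch

F_BINS, T_FRAMES, Z_DIM, K_NMF = 257, 100, 16, 8

# Placeholder decoder: maps latent z -> clean-speech power spectrum per frame.
decoder = torch.nn.Sequential(
    torch.nn.Linear(Z_DIM, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, F_BINS), torch.nn.Softplus(),
)

X = torch.rand(F_BINS, T_FRAMES) + 0.1    # observed noisy power spectrogram

# Estimated at test time: per-frame latents and NMF noise factors.
z = torch.zeros(T_FRAMES, Z_DIM, requires_grad=True)
W = torch.rand(F_BINS, K_NMF, requires_grad=True)
H = torch.rand(K_NMF, T_FRAMES, requires_grad=True)
opt = torch.optim.Adam([z, W, H], lr=1e-2)

for step in range(200):
    S = decoder(z).T                            # (F, T) speech variance
    N = W.clamp_min(1e-6) @ H.clamp_min(1e-6)   # (F, T) NMF noise variance
    V = S + N                                   # noisy-mixture variance model
    # Gaussian negative log-likelihood of the observation plus a standard-
    # normal prior on z: the ML-style objective that, as argued above, does
    # not always align with perceptual speech quality.
    nll = (X / V + V.log()).sum() + 0.5 * (z ** 2).sum()
    opt.zero_grad(); nll.backward(); opt.step()

# A Wiener-style mask from the fitted variances yields the speech estimate.
with torch.no_grad():
    S = decoder(z).T
    mask = S / (S + W.clamp_min(1e-6) @ H.clamp_min(1e-6))
print(mask.shape)  # torch.Size([257, 100])
```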