We have developed a method that maps large astronomical images onto a two-dimensional map and clusters them. A combination of various state-of-the-art machine learning (ML) algorithms is used to build a fully unsupervised image quality assessment and clustering system. Our pipeline begins with a pre-processing step in which individual objects are identified in a large astronomical image and converted to smaller pixel images. These data are then fed to a deep convolutional autoencoder jointly trained with a self-organizing map (SOM); this stage can also serve as a recommendation system. The resulting output is finally mapped onto a two-dimensional grid using a second, deep SOM. We use data taken from ground-based telescopes and, as a case study, compare the system's ability and performance with the results obtained by the supervised methods presented by Teimoorinia et al. (2020). The availability of target labels in these data allowed a comprehensive performance comparison between our unsupervised and supervised methods. Beyond the image-quality assessments performed in this project, our method has various other applications. For example, it can help experts label images in considerably less time and with minimal human intervention. It can also be used as a content-based recommendation system capable of filtering images based on the desired content.
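As a rough illustration of the joint training step described above, the sketch below couples a small convolutional autoencoder with a SOM-style quantization loss in PyTorch. The layer sizes, the 8x8 prototype grid, the 64x64 cutout size, and the loss weighting are all illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch: autoencoder trained jointly with a SOM-style
# quantization term, in the spirit of the pipeline described above.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# SOM prototypes: an assumed 8x8 grid of vectors in the latent space.
som = torch.randn(64, 32, requires_grad=True)
model = ConvAutoencoder()
opt = torch.optim.Adam(list(model.parameters()) + [som], lr=1e-3)

def joint_loss(x, alpha=0.1):
    z, recon = model(x)
    # Reconstruction term keeps the latent space informative.
    rec = nn.functional.mse_loss(recon, x)
    # Quantization term pulls each embedding toward its best-matching
    # SOM prototype (neighbourhood smoothing omitted for brevity).
    d = torch.cdist(z, som)             # (batch, 64) distances
    quant = d.min(dim=1).values.mean()
    return rec + alpha * quant

batch = torch.rand(8, 1, 64, 64)        # stand-in for 64x64 cutouts
loss = joint_loss(batch)
loss.backward()
opt.step()
```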
We present a machine learning based information retrieval system for astronomical observatories that addresses user-defined queries related to an instrument. In the modern instrumentation scenario, where heterogeneous systems and talents are simultaneously at work, the ability to supply the right information helps speed up detector maintenance operations. Enhancing detector uptime leads to increased coincident observation and improves the likelihood of detecting astrophysical signals. Such efforts also disseminate technical knowledge efficiently to a wider audience and will help ongoing efforts to build upcoming detectors, such as LIGO-India, by foreseeing possible challenges even at the design phase. The proposed method analyses existing documented efforts at the site to intelligently group together information related to a query and to present it on-line to the user. The user can then follow links of interest and find already developed solutions or probable ways to address the present situation optimally. A web application that incorporates the above idea has been implemented and tested for the LIGO Livingston, LIGO Hanford and Virgo observatories.
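The abstract does not specify the retrieval model, so the following sketch uses scikit-learn's TfidfVectorizer with cosine similarity purely as an illustrative stand-in for the core step of grouping site documents by relevance to a free-text query. The example documents are invented.

```python
# Minimal sketch: rank site documents against a maintenance query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Procedure for realigning the output mode cleaner after an earthquake.",
    "Log entry: seismic isolation platform trips during high microseism.",
    "Notes on photodiode replacement in the transmission monitor.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

def related_documents(query, top_k=2):
    """Return the top_k documents most similar to the query."""
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in ranked]

print(related_documents("seismic platform trip"))
```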
In this study, we address the challenges of developing a deep learning-based automatic patent citation recommendation system. Although deep learning-based recommendation systems have exhibited outstanding performance in various domains (such as movies, products, and paper citations), their validity for patent citations has not been investigated, owing to the lack of a freely available high-quality dataset and relevant benchmark models. To solve these problems, we present a novel dataset called PatentNet that includes textual information and metadata for approximately 110,000 patents from the Google BigQuery service. Further, we propose strong benchmark models that consider the similarity of textual information and metadata (such as Cooperative Patent Classification codes). The proposed benchmark method achieved a mean reciprocal rank of 0.2377 on the test set, whereas the existing state-of-the-art recommendation method achieved 0.2073.
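For reference, the sketch below computes mean reciprocal rank (MRR), the metric quoted above: each query scores 1/rank of the first relevant item, averaged over all queries. The example ranks are made up.

```python
# Minimal sketch of the MRR metric used to evaluate the benchmark.
def mean_reciprocal_rank(first_relevant_ranks):
    """first_relevant_ranks: 1-based rank of the first correct
    citation for each test patent; None if it was never retrieved."""
    scores = [0.0 if r is None else 1.0 / r for r in first_relevant_ranks]
    return sum(scores) / len(scores)

# Three queries whose first correct citation appeared at
# ranks 2, 5, and not at all:
print(mean_reciprocal_rank([2, 5, None]))  # (0.5 + 0.2 + 0.0) / 3 = 0.2333...
```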
We present AstroVaDEr, a variational autoencoder designed to perform unsupervised clustering and synthetic image generation using astronomical imaging catalogues. The model is a convolutional neural network that learns to embed images into a low-dimensional latent space and simultaneously optimises a Gaussian Mixture Model (GMM) on the embedded vectors to cluster the training data. By utilising variational inference, we are able to use the learned GMM as a statistical prior on the latent space to facilitate random sampling and generation of synthetic images. We demonstrate AstroVaDEr's capabilities by training it on gray-scaled \textit{gri} images from the Sloan Digital Sky Survey, using a sample of galaxies classified by Galaxy Zoo 2. An unsupervised clustering model is found which separates galaxies based on learned morphological features such as axis ratio, surface brightness profile, orientation and the presence of companions. We use the learned mixture model to generate synthetic images of galaxies based on the morphological profiles of the Gaussian components. AstroVaDEr succeeds in producing a morphological classification scheme from unlabelled data, but unexpectedly places high importance on the presence of companion objects, demonstrating the importance of human interpretation. The network is scalable and flexible, allowing larger datasets or different kinds of imaging data to be classified. We also demonstrate the generative properties of the model, which allow realistic synthetic images of galaxies to be sampled from the learned classification scheme. These can be used to create synthetic image catalogues or to perform image processing tasks such as deblending.
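A minimal sketch of the generative step described above, assuming a GMM fitted to latent embeddings: sample latent vectors from a chosen Gaussian component and decode them into synthetic images. The 16-dimensional latent space, the 10-component mixture, and the stand-in embeddings are illustrative placeholders, not AstroVaDEr's actual architecture.

```python
# Minimal sketch: sample synthetic-image latents from a GMM prior.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
latents = rng.normal(size=(5000, 16))   # stand-in for encoded galaxies

# Fit a mixture model on the embedded vectors; each component plays
# the role of one morphological cluster.
gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0)
gmm.fit(latents)

def sample_from_component(k, n=4):
    """Draw n latent vectors from Gaussian component k of the prior."""
    mean = gmm.means_[k]
    cov = gmm.covariances_[k]
    return rng.multivariate_normal(mean, cov, size=n)

z = sample_from_component(k=3)
# synthetic_images = decoder(z)  # decode with the trained VAE decoder
print(z.shape)  # (4, 16)
```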
While significant attention has recently been focused on designing supervised deep semantic segmentation algorithms for vision tasks, in many domains sufficient supervised pixel-level labels are difficult to obtain. In this paper, we revisit the problem of purely unsupervised image segmentation and propose a novel deep architecture for it. We borrow recent ideas from supervised semantic segmentation methods, in particular by concatenating two fully convolutional networks into an autoencoder: one for encoding and one for decoding. The encoding layer produces a k-way pixelwise prediction, and both the reconstruction error of the autoencoder and the normalized cut produced by the encoder are jointly minimized during training. When combined with suitable postprocessing involving conditional random field smoothing and hierarchical segmentation, the resulting algorithm achieves impressive results on the benchmark Berkeley Segmentation Data Set, outperforming a number of competing methods.
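The sketch below illustrates the joint objective in simplified form: reconstruction error plus a soft normalized-cut term on the encoder's k-way pixelwise predictions. The dense pixel-affinity matrix W is a toy stand-in (practical implementations restrict affinities to local windows), and the tensor shapes are assumptions for illustration.

```python
# Simplified PyTorch sketch of the joint reconstruction + soft
# normalized-cut objective described above.
import torch

def soft_ncut_loss(probs, W):
    """probs: (n_pixels, k) soft assignments; W: (n_pixels, n_pixels)
    non-negative pixel affinities. Returns k minus the summed
    association ratios (lower means a better cut)."""
    k = probs.shape[1]
    d = W.sum(dim=1)                           # degree of each pixel
    assoc = torch.einsum("ik,ij,jk->k", probs, W, probs)
    denom = torch.einsum("ik,i->k", probs, d)
    return k - (assoc / denom).sum()

def joint_loss(x, recon, probs, W, lam=1.0):
    rec = torch.nn.functional.mse_loss(recon, x)
    return rec + lam * soft_ncut_loss(probs, W)

# Toy example: 100 pixels, 4 segments, random symmetric affinities.
n, k = 100, 4
probs = torch.softmax(torch.randn(n, k), dim=1)
W = torch.rand(n, n)
W = (W + W.T) / 2
x = torch.rand(n)
recon = torch.rand(n)
print(joint_loss(x, recon, probs, W))
```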
Optical intensity interferometry, developed in the 1950s, is a simple and inexpensive method for achieving angular resolutions on microarcsecond scales. Its low sensitivity has so far limited intensity interferometric observations to bright stars. Substantial improvements are possible by using avalanche photodiodes (APDs) as light detectors. Several recent experiments used APDs in single-photon detection mode; however, these either provide low electronic bandwidths (a few MHz) or require very narrow optical bandpasses. We present here the results of laboratory measurements with a prototype astronomical intensity interferometer using two APDs observing an artificial star in continuous (linear) detection mode with an electronic bandwidth of 100~MHz. We find a photon--photon correlation of about $10^{-6}$, as expected from the ratio of the coherence times of the light source and the detectors. In a configuration where both detectors are on the optical axis (zero baseline), we achieve a signal-to-noise ratio of $\sim$2700 after 10 minutes of integration. When measuring the correlation as a function of baseline, we find a Gaussian correlation profile with a standard deviation corresponding to an angular half-width of the artificial star of $0.55$, in agreement with the estimate by the manufacturer. Our results demonstrate the possibility of constructing large astronomical intensity interferometers using linear-mode APDs.
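As a rough illustration of the quantity being measured, the sketch below estimates the normalized correlation of intensity fluctuations recorded by two detectors, $c = \langle \Delta I_1 \Delta I_2 \rangle / (\langle I_1 \rangle \langle I_2 \rangle)$, from synthetic data. The signal model and all numbers are invented stand-ins for the two APD streams; they are tuned only to reproduce a correlation near the $10^{-6}$ scale quoted above.

```python
# Minimal sketch: photon-photon correlation from two intensity streams.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
common = rng.normal(size=n)              # shared (bunching) fluctuation
i1 = 100 + 0.1 * common + rng.normal(size=n)   # detector 1 stream
i2 = 100 + 0.1 * common + rng.normal(size=n)   # detector 2 stream

d1 = i1 - i1.mean()
d2 = i2 - i2.mean()
correlation = (d1 * d2).mean() / (i1.mean() * i2.mean())
print(f"photon-photon correlation: {correlation:.2e}")  # ~1e-6
```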