A simple numeric algorithm for ancient coin dies identification

Added by: Luca Lista
Publication date: 2016
Field: Physics
Language: English
Authors: Luca Lista





A simple computer-based algorithm has been developed to identify pre-modern coins minted from the same dies, targeting mainly coins struck with hand-made dies, and designed to be applicable to images taken from auction websites or catalogs. The method is not intended to perform a complete automatic classification, which would require more complex and computationally intensive algorithms accessible to experts in computer vision; nonetheless, its simplicity of use and its lack of specific requirements on picture quality can provide help and complementary information to visual inspection, adding quantitative measurements of the distance between pairs of different coins. The distance metric is based on a number of pre-defined reference points that mark key features of the coins, and is used to identify sets of coins minted from the same dies.
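The abstract does not spell out how the reference points enter the distance, so the sketch below is only one plausible reading: it assumes the reference points are 2D image coordinates marked in the same order on both coins and compared after a least-squares similarity alignment (translation, rotation, scale). The function names align_similarity and coin_distance and the example coordinates are hypothetical, not taken from the paper.

    import numpy as np

    def align_similarity(a, b):
        # Map point set a onto b with the best-fit translation, rotation and
        # scale (least-squares 2D similarity transform, no reflection).
        a0, b0 = a - a.mean(axis=0), b - b.mean(axis=0)
        za = a0[:, 0] + 1j * a0[:, 1]
        zb = b0[:, 0] + 1j * b0[:, 1]
        s = np.vdot(za, zb) / np.vdot(za, za)   # complex factor = scale * rotation
        za = s * za
        return np.column_stack([za.real, za.imag]) + b.mean(axis=0)

    def coin_distance(points_a, points_b):
        # Mean residual distance between matched reference points after alignment.
        a, b = np.asarray(points_a, float), np.asarray(points_b, float)
        return np.linalg.norm(align_similarity(a, b) - b, axis=1).mean()

    # Example: five reference points marked on photographs of two coins.
    coin1 = [(120, 85), (200, 90), (160, 150), (110, 210), (205, 215)]
    coin2 = [(125, 80), (207, 88), (163, 148), (112, 213), (210, 218)]
    print(coin_distance(coin1, coin2))          # small values suggest the same die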



Related research

This paper describes a new, efficient conjugate subgradient algorithm which minimizes a convex function containing a least-squares fidelity term and an absolute-value regularization term. The method is successfully applied to the inversion of ill-conditioned linear problems, in particular for computed tomography with the dictionary learning method. A comparison with other state-of-the-art methods shows a significant reduction in the number of iterations, which makes this algorithm appealing for practical use.
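The paper's conjugate subgradient method is not reproduced here; as a point of reference only, the sketch below shows a plain subgradient iteration for the same type of objective, f(x) = ||A x - b||^2 + lam * ||x||_1. The step-size schedule, the regularization weight lam, and the synthetic data are illustrative assumptions.

    import numpy as np

    def subgradient_descent(A, b, lam=0.1, steps=500, step0=1e-2):
        x = np.zeros(A.shape[1])
        for k in range(1, steps + 1):
            g = 2 * A.T @ (A @ x - b) + lam * np.sign(x)  # a subgradient of f at x
            x = x - step0 / np.sqrt(k) * g                # diminishing step size
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 20))
    x_true = np.zeros(20)
    x_true[:3] = [1.0, -2.0, 0.5]                         # sparse ground truth
    b = A @ x_true + 0.01 * rng.normal(size=50)
    print(np.round(subgradient_descent(A, b), 2))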
A principal component analysis (PCA) of clean microcalorimeter pulse records can be a first step beyond statistically optimal linear filtering of pulses towards a fully non-linear analysis. For PCA to be practical on spectrometers with hundreds of sensors, an automated identification of clean pulses is required. Robust forms of PCA are the subject of active research in machine learning. We examine a version known as coherence pursuit that is simple, fast, and well matched to the automatic identification of outlier records, as needed for microcalorimeter pulse analysis.
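Coherence pursuit details vary between implementations; the sketch below reflects one common formulation, not necessarily the version examined in that work. The idea it illustrates: clean records lie in a shared low-dimensional pulse subspace and are therefore mutually coherent, so records whose total coherence with the rest of the data is low are flagged as outliers. The function names and the 10% outlier fraction are arbitrary illustrative choices.

    import numpy as np

    def coherence_scores(records):
        # records: (n_records, n_samples) array of pulse records.
        X = records - records.mean(axis=1, keepdims=True)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm rows
        G = X @ X.T                                       # pairwise coherences
        np.fill_diagonal(G, 0.0)
        return np.linalg.norm(G, axis=1)                  # high score = "clean" record

    def clean_pulse_mask(records, outlier_fraction=0.1):
        scores = coherence_scores(records)
        return scores > np.quantile(scores, outlier_fraction)

    # Toy usage: 200 similar "clean" pulses plus 20 random outlier records.
    t = np.linspace(0, 1, 500)
    pulse = np.exp(-t / 0.2) - np.exp(-t / 0.05)
    clean = pulse + 0.01 * np.random.randn(200, 500)
    outliers = np.random.randn(20, 500)
    mask = clean_pulse_mask(np.vstack([clean, outliers]))
    print(mask.sum(), "records kept out of", mask.size)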
B. Alpert, E. Ferri, D. Bennett (2015)
For experiments with high arrival rates, reliable identification of nearly coincident events can be crucial. For calorimetric experiments that measure the neutrino mass directly, such as HOLMES, unidentified pulse pile-ups are expected to be a leading source of experimental error. Although Wiener filtering can be used to recognize pile-up, it suffers errors due to pulse-shape variation from detector nonlinearity, readout dependence on sub-sample arrival times, and stability issues from the ill-posed deconvolution problem of recovering Dirac delta functions from smooth data. For these reasons, we have developed a processing method that exploits singular value decomposition to (1) separate single-pulse records from piled-up records in training data and (2) construct a model of single-pulse records that accounts for varying pulse shape with amplitude, arrival time, and baseline level, suitable for detecting nearly coincident events. We show that the resulting processing advances can reduce the required performance specifications of the detectors and readout system or, equivalently, enable larger sensor arrays and better constraints on the neutrino mass.
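As an illustration only (not the HOLMES processing chain), the sketch below shows the generic SVD idea: fit a low-rank model to clean training records and score new records by the fraction of the record left unexplained by the model, which tends to be large for piled-up pulses. The rank, function names, and scoring convention are assumptions for this sketch.

    import numpy as np

    def fit_pulse_model(training_records, rank=3):
        # Low-rank model of single-pulse records: mean pulse + top SVD components.
        X = np.asarray(training_records, float)
        mean_pulse = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean_pulse, full_matrices=False)
        return mean_pulse, Vt[:rank]

    def pileup_score(record, mean_pulse, components):
        # Fraction of the record not explained by the single-pulse model;
        # large values indicate a shape the model cannot describe, e.g. pile-up.
        r = np.asarray(record, float) - mean_pulse
        residual = r - components.T @ (components @ r)
        return np.linalg.norm(residual) / np.linalg.norm(r)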
Deep learning is a rapidly evolving technology with the potential to significantly improve the physics reach of collider experiments. In this study we developed a novel vertex-finding algorithm for future lepton colliders such as the International Linear Collider. We deploy two networks: a simple fully connected network that looks for vertex seeds from track pairs, and a customized recurrent neural network with an attention mechanism and an encoder-decoder structure that associates tracks to the vertex seeds. The performance of the vertex finder is compared with the standard ILC reconstruction algorithm.
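The first of the two networks can be pictured as a small binary classifier over track-pair features. The sketch below (PyTorch assumed) uses illustrative guesses for the layer sizes and the eight input features per pair; it is not the configuration used in the study.

    import torch
    import torch.nn as nn

    class SeedFinder(nn.Module):
        # Fully connected classifier scoring whether a track pair forms a vertex seed.
        def __init__(self, n_features=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, 1), nn.Sigmoid(),   # probability the pair is a seed
            )

        def forward(self, x):
            return self.net(x)

    pairs = torch.randn(32, 8)            # batch of 32 track-pair feature vectors
    print(SeedFinder()(pairs).shape)      # torch.Size([32, 1])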
The models and weights of previously trained convolutional neural networks (CNNs), created to perform automated isotopic classification of time-sequenced gamma-ray spectra, were utilized to provide source-domain knowledge for training on new domains of potential interest. The previous results were achieved solely using modeled spectral data. In this work we attempt to transfer the knowledge gained to the new, if similar, domain of solely measured data. The ability to train on modeled data and predict on measured data will be crucial in any successful data-driven approach to this problem space.
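A generic transfer-learning recipe consistent with this description, but not the authors' actual architecture, is sketched below in PyTorch: freeze the convolutional feature extractor trained on modeled spectra and retrain only a new classification head on measured spectra. The layer shapes, spectrum length, and number of isotope classes are placeholders.

    import torch
    import torch.nn as nn

    def build_transfer_model(pretrained: nn.Sequential, n_isotopes: int):
        for p in pretrained.parameters():
            p.requires_grad = False           # keep source-domain features fixed
        return nn.Sequential(pretrained, nn.Flatten(), nn.LazyLinear(n_isotopes))

    # Stand-in for a pretrained 1D-CNN feature extractor on spectra of length 1024.
    pretrained = nn.Sequential(nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(),
                               nn.Conv1d(16, 32, 9, stride=4), nn.ReLU())
    model = build_transfer_model(pretrained, n_isotopes=8)
    spectra = torch.randn(4, 1, 1024)         # batch of 4 measured spectra
    print(model(spectra).shape)               # torch.Size([4, 8])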