Machine learning is increasingly recognized as a promising technology in the biological, biomedical, and behavioral sciences. There can be no argument that this technique is incredibly successful in image recognition, with immediate applications in diagnostics including electrophysiology, radiology, and pathology, where we have access to massive amounts of annotated data. However, machine learning often performs poorly in prognosis, especially when dealing with sparse data. This is a field where classical physics-based simulation seems to remain irreplaceable. In this review, we identify areas in the biomedical sciences where machine learning and multiscale modeling can mutually benefit from one another: Machine learning can integrate physics-based knowledge in the form of governing equations, boundary conditions, or constraints to manage ill-posed problems and robustly handle sparse and noisy data; multiscale modeling can integrate machine learning to create surrogate models, identify system dynamics and parameters, analyze sensitivities, and quantify uncertainty to bridge the scales and understand the emergence of function. With a view towards applications in the life sciences, we discuss the state of the art of combining machine learning and multiscale modeling, identify applications and opportunities, raise open questions, and address potential challenges and limitations. We anticipate that this review will stimulate discussion within the community of computational mechanics and reach out to other disciplines including mathematics, statistics, computer science, artificial intelligence, biomedicine, systems biology, and precision medicine to join forces towards creating robust and efficient models for biological systems.
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms.
We investigate the effects of multi-task learning using the recently introduced task of semantic tagging. We employ semantic tagging as an auxiliary task for three different NLP tasks: part-of-speech tagging, Universal Dependency parsing, and Natural Language Inference. We compare full neural network sharing, partial neural network sharing, and what we term the learning what to share setting where negative transfer between tasks is less likely. Our findings show considerable improvements for all tasks, particularly in the learning what to share setting, which shows consistent gains across all tasks.
Unlike other multiple top-quark production processes, triple top-quark production requires the presence of both a flavor-violating neutral interaction and a flavor-conserving neutral interaction. We describe the interaction of three top-quarks and an up-quark in terms of two dimension-6 operators: one can be induced by a new heavy vector resonance, the other by a scalar resonance. Combining same-sign top-quark pair production and four top-quark production, we explore the potential of the 13 TeV LHC in searching for triple top-quark production.
We discuss the features of instabilities in binary systems, in particular for asymmetric nuclear matter. We show their relevance for the interpretation of results obtained in experiments and in ab initio simulations of the reaction $^{124}$Sn+$^{124}$Sn at 50 AMeV.
We analyze a data set comprising 370 GW band structures composed of 61716 quasiparticle (QP) energies of two-dimensional (2D) materials spanning 14 crystal structures and 52 elements. The data results from PAW plane wave based one-shot G$_0$W$_0$@PBE calculations with full frequency integration. We investigate the distribution of key quantities, like the QP self-energy corrections and the renormalization factor $Z$, and explore their dependence on chemical composition and magnetic state. We identify the linear QP approximation as a significant error source and propose schemes for controlling and drastically reducing this error at low computational cost. We analyze the reliability of the $1/N_\text{PW}$ basis set extrapolation and find that it is well-founded, with narrow distributions of $r^2$ peaked very close to 1. Finally, we explore the validity of the scissors operator approximation and conclude that it is generally not valid for reasonable error tolerances. Our work represents a step towards the development of automatized workflows for high-throughput G$_0$W$_0$ band structure calculations for solids.