
Coarse-graining and fluctuations: Two birds with one stone

Posted by: Mark A. Peletier
Publication date: 2014
Research field:
Paper language: English





We show how the mathematical structure of large-deviation principles matches well with the concept of coarse-graining. For those systems with a large-deviation principle, this may lead to a general approach to coarse-graining through the variational form of the large-deviation functional.
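A rough illustration of the structure referred to here, in the spirit of the Feng-Kurtz formalism (the notation below is illustrative and not taken from the paper): for the empirical process $\rho^n$ of $n$ particles, a path-space large-deviation principle states that $\mathrm{Prob}(\rho^n \approx \rho) \sim \exp(-n\, I(\rho))$ as $n \to \infty$, with a rate functional in the variational (Legendre-dual) form
$$ I(\rho) \;=\; I_0(\rho_0) \;+\; \int_0^T \sup_{f}\Bigl\{ \langle f, \dot\rho_t\rangle \;-\; \mathcal{H}(\rho_t, f) \Bigr\}\, dt, $$
where $\mathcal{H}$ is a Hamiltonian built from the generator of the underlying particle system. Coarse-graining a variable then interacts with $I$ through this variational structure.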


Read also

This work is concerned with model reduction of stochastic differential equations and builds on the idea of replacing the drift and noise coefficients of preselected relevant (e.g. slow) variables by their conditional expectations. We extend recent results by Legoll & Lelièvre [Nonlinearity 23, 2131, 2010] and Duong et al. [Nonlinearity 31, 4517, 2018] on effective reversible dynamics obtained by conditional expectations to the setting of general non-reversible processes with non-constant diffusion coefficient. We prove relative entropy and Wasserstein error estimates for the difference between the time marginals of the effective and original dynamics, as well as an entropy error bound for the corresponding path-space measures. A comparison with the averaging principle for systems with time-scale separation reveals that, unlike in the reversible setting, the effective dynamics for a non-reversible system need not agree with the averaged equations. We present a thorough comparison for the Ornstein-Uhlenbeck process and make a conjecture about necessary and sufficient conditions for when averaged and effective dynamics agree for nonlinear non-reversible processes. The theoretical results are illustrated with suitable numerical examples.
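For concreteness, a minimal sketch of the conditional-expectation closure this abstract builds on (generic notation, which may differ from the paper's): for an SDE $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ with invariant measure $\mu$, generator $L$, and a scalar coarse-graining map $\xi$, the effective dynamics for $\xi(X_t)$ is
$$ d\hat Z_t = \hat b(\hat Z_t)\,dt + \hat\sigma(\hat Z_t)\,d\hat W_t, \qquad \hat b(z) = \mathbb{E}_\mu\bigl[(L\xi)(X) \mid \xi(X)=z\bigr], \qquad \hat\sigma^2(z) = \mathbb{E}_\mu\bigl[\,|\sigma^{\mathsf T}\nabla\xi|^2(X) \mid \xi(X)=z\bigr]. $$
In the reversible, constant-diffusion case this reduces to the Legoll & Lelièvre construction; the point of the abstract is that for non-reversible dynamics this closure need not coincide with the averaged equations.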
Inference-based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct or indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique: it should be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it is a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis; it is effective but computationally expensive. As a solution, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor negative impact on detection accuracy.
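Since the abstract pairs its association scheme with generic (loopy) belief propagation, the following is a compact, self-contained sketch of sum-product belief propagation over a toy domain-association graph; the edges, priors, and the 0.9/0.1 homophily potential are illustrative assumptions, not data or parameters from the paper.

from collections import defaultdict

# Toy association graph (hypothetical): an edge means two domains are believed to be
# run by the same entity, e.g. because they resolve to a shared dedicated IP.
edges = [("seed-bad.com", "b.net"), ("b.net", "c.org"), ("c.org", "d.io")]

# Prior probability of being malicious: high for a known-bad seed, 0.5 for unknowns.
prior = {"seed-bad.com": 0.95, "b.net": 0.5, "c.org": 0.5, "d.io": 0.5}

neighbors = defaultdict(set)
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def node_potential(i, x):            # x = 1 means "malicious"
    return prior[i] if x == 1 else 1.0 - prior[i]

def edge_potential(xi, xj):          # associated domains tend to share a label
    return 0.9 if xi == xj else 0.1  # homophily strength (illustrative choice)

# Messages msg[(i, j)][x_j]: node i's current opinion about node j's label.
msg = {(i, j): [1.0, 1.0] for i in neighbors for j in neighbors[i]}

for _ in range(30):                  # sum-product iterations until (near) convergence
    new = {}
    for (i, j) in msg:
        out = []
        for xj in (0, 1):
            s = 0.0
            for xi in (0, 1):
                incoming = 1.0
                for k in neighbors[i] - {j}:
                    incoming *= msg[(k, i)][xi]
                s += node_potential(i, xi) * edge_potential(xi, xj) * incoming
            out.append(s)
        z = sum(out)
        new[(i, j)] = [v / z for v in out]
    msg = new

# Posterior belief that each domain is malicious.
for i in sorted(neighbors):
    scores = []
    for x in (0, 1):
        s = node_potential(i, x)
        for k in neighbors[i]:
            s *= msg[(k, i)][x]
        scores.append(s)
    print(i, "P(malicious) =", round(scores[1] / sum(scores), 3))

Running this propagates the known-bad seed's prior along the association edges, so domains close to it in the graph receive elevated maliciousness scores; swapping in the paper's dedicated-IP associations would only change how the edge list is built.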
Advances in pre-trained models (e.g., BERT, XLNet, etc.) have largely revolutionized the predictive performance of various modern natural language processing tasks. This allows corporations to provide machine learning as a service (MLaaS) by encapsulating fine-tuned BERT-based models as commercial APIs. However, previous works have discovered a series of vulnerabilities in BERT-based APIs. For example, BERT-based APIs are vulnerable to both model extraction attacks and adversarial example transferability attacks. Moreover, due to the high capacity of BERT-based models, a fine-tuned model easily overlearns its training data, yet what kind of information can be leaked from the extracted model remains unknown and understudied. To bridge this gap, in this work we first present an effective model extraction attack, where the adversary can practically steal a BERT-based API (the target/victim model) by issuing only a limited number of queries. We further develop an effective attribute inference attack to expose sensitive attributes of the training data used by the BERT-based APIs. Our extensive experiments on benchmark datasets under various realistic settings demonstrate the potential vulnerabilities of BERT-based APIs.
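As a hedged illustration of the model-extraction step described above (a sketch, not the authors' exact procedure): the adversary sends unlabeled texts to the victim API, records its predicted labels, and fine-tunes a local BERT surrogate on those query-label pairs. Here query_victim_api is a hypothetical placeholder for the commercial endpoint, and the example queries stand in for texts sampled from some public corpus.

import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def query_victim_api(texts):
    """Hypothetical placeholder for the commercial BERT-based API (the victim)."""
    raise NotImplementedError("replace with real API calls returning one label per text")

# Unlabeled queries, e.g. sampled from a public corpus (illustrative examples only).
queries = ["the movie was surprisingly good", "terrible service, never again"]
labels = query_victim_api(queries)          # the victim's predictions become training labels

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok(queries, truncation=True, padding=True, return_tensors="pt")
surrogate = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

data = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
loader = DataLoader(data, batch_size=8, shuffle=True)
optim = torch.optim.AdamW(surrogate.parameters(), lr=2e-5)

surrogate.train()
for _ in range(3):                          # a few fine-tuning epochs
    for input_ids, attention_mask, y in loader:
        out = surrogate(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()                 # cross-entropy against the victim's labels
        optim.step()
        optim.zero_grad()

# The surrogate now approximates the victim; the attribute-inference probes described
# in the abstract would then be run against this local copy.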
In this paper we present a variational technique that handles coarse-graining and passing to a limit in a unified manner. The technique is based on a duality structure, which is present in many gradient flows and other variational evolutions, and which often arises from a large-deviations principle. It has three main features: (A) a natural interaction between the duality structure and the coarse-graining, (B) application to systems with non-dissipative effects, and (C) application to coarse-graining of approximate solutions which solve the equation only up to some error. As examples, we use this technique to solve three limit problems: the overdamped limit of the Vlasov-Fokker-Planck equation and the small-noise limit of randomly perturbed Hamiltonian systems with one and with many degrees of freedom.
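As a worked instance of the first limit problem listed above, in generic notation (the nonlocal interaction term usually present in the Vlasov-Fokker-Planck equation is omitted here, and the scaling conventions may differ from the paper's): the kinetic (Kramers) equation
$$ \partial_t \rho = -\,\mathrm{div}_q\Bigl(\rho\,\frac{p}{m}\Bigr) + \mathrm{div}_p\bigl(\rho\,\nabla_q V\bigr) + \gamma\,\mathrm{div}_p\Bigl(\rho\,\frac{p}{m}\Bigr) + \gamma\theta\,\Delta_p \rho, $$
after speeding up time by a factor $\gamma$, has spatial marginals that converge as $\gamma \to \infty$ to solutions of the overdamped Fokker-Planck (Smoluchowski) equation
$$ \partial_t \sigma = \mathrm{div}_q\bigl(\sigma\,\nabla_q V\bigr) + \theta\,\Delta_q \sigma. $$
The variational technique of the paper is designed to pass to such limits while keeping track of the underlying duality structure.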
A classic theorem in the theory of connections on principal fiber bundles states that the evaluation of all holonomy functions gives enough information to characterize the bundle structure (among those sharing the same structure group and base manifold) and the connection up to a bundle equivalence map. This result and other important properties of holonomy functions have encouraged their use as the primary ingredient for the construction of families of quantum gauge theories. However, in these applications the set of holonomy functions used is often a discrete proper subset of the set of holonomy functions needed for the characterization theorem to hold. We show that the evaluation of a discrete set of holonomy functions does not characterize the bundle and does not constrain the connection modulo gauge appropriately. We exhibit a discrete set of functions of the connection and prove that in the abelian case their evaluation characterizes the bundle structure (up to equivalence) and constrains the connection modulo gauge up to local details ignored when working at a given scale. The main ingredient is the Lie algebra valued curvature function $F_S(A)$ defined below. It covers the holonomy function in the sense that $\exp{F_S(A)} = \mathrm{Hol}(l = \partial S, A)$.
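To make the final statement concrete (conventions are generic here, e.g. factors of $i$ for a $U(1)$ structure group are suppressed): in the abelian case, with connection 1-form $A$ and curvature $F = dA$, Stokes' theorem gives
$$ \mathrm{Hol}(\partial S, A) = \exp\Bigl(\oint_{\partial S} A\Bigr) = \exp\Bigl(\int_S F\Bigr), $$
which is the sense in which the curvature function $F_S(A) := \int_S F$ covers the holonomy along the loop $l = \partial S$.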