
Can We Prove Time Protection?

Published by: Gernot Heiser
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Timing channels are a significant and growing security threat in computer systems, with no established solution. We have recently argued that the OS must provide time protection, in analogy to the established memory protection, to protect applications from information leakage through timing channels. Based on a recently-proposed implementation of time protection in the seL4 microkernel, we investigate how such an implementation could be formally proved to prevent timing channels. We postulate that this should be possible by reasoning about a highly abstracted representation of the shared hardware resources that cause timing channels.
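To make the proof goal concrete, here is a minimal Python sketch, under invented assumptions, of the kind of highly abstracted hardware model the abstract alludes to: shared state is split into a flushed component (reset on every domain switch) and a partitioned component (keyed by security domain), and the property to establish is that a domain's observable timing is independent of every other domain's state. The names and structure are hypothetical, not seL4's actual formal model.

    # Hypothetical, highly abstracted hardware model for time protection.
    # NOT seL4's actual formalisation; invented for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class HardwareState:
        flushed: dict = field(default_factory=dict)      # time-shared state (e.g. L1, TLB): reset on switch
        partitioned: dict = field(default_factory=dict)  # spatially partitioned state (e.g. coloured LLC), keyed by domain

    def switch_domain(state: HardwareState) -> None:
        """A time-protected domain switch resets all time-shared state
        to a fixed value, erasing any history left by the previous domain."""
        state.flushed = {}

    def observable_timing(state: HardwareState, domain: str):
        """What a domain can infer from timing: only the flushed state
        (constant after a switch) and its own partition."""
        return (tuple(sorted(state.flushed.items())),
                tuple(sorted(state.partitioned.get(domain, {}).items())))

    # Noninterference-style proof obligation: two states that differ only
    # in another domain's partition are indistinguishable to the observer.
    s1, s2 = HardwareState(), HardwareState()
    s1.partitioned = {"victim": {"lines": 64},  "spy": {"lines": 8}}
    s2.partitioned = {"victim": {"lines": 512}, "spy": {"lines": 8}}
    switch_domain(s1); switch_domain(s2)
    assert observable_timing(s1, "spy") == observable_timing(s2, "spy")

The assertion is exactly the shape of statement the paper proposes to prove: after a protected switch, varying another domain's state cannot change anything the observing domain can time.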


Read also

Despite the attempts of well-designed anonymous communication tools to protect users from tracking or identification, flaws in surrounding software (such as web browsers) and mistakes in configuration may leak the user's identity. We introduce Nymix, an anonymity-centric operating system architecture designed top-to-bottom to strengthen identity- and tracking-protection. Nymix's core contribution is OS support for nym-browsing: independent, parallel, and ephemeral web sessions. Each web session, or pseudonym, runs in a unique virtual machine (VM) instance evolving from a common base state, with support for long-lived sessions which can be anonymously stored to the cloud, avoiding de-anonymization despite potential confiscation or theft. Nymix allows a user to browse the Web safely using several transports simultaneously through a pluggable communication model that supports Tor, Dissent, and a private browsing mode. In evaluations, Nymix consumes 600 MB per nymbox and loads within 15 to 25 seconds.
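As a toy illustration of the session model just described (not Nymix's actual VM machinery; all names are invented), each pseudonym below forks an independent copy of a common base state and is simply discarded on close, so no state links one session to another:

    # Toy sketch of nym-browsing's isolation model: every pseudonym is an
    # ephemeral, independent copy of one common base state. Nymix realises
    # this with per-session VM instances; dicts stand in for VM images here.
    import copy

    BASE_STATE = {"cookies": {}, "history": [], "transport": "tor"}

    class Pseudonym:
        def __init__(self, transport: str = "tor"):
            # Fork from the common base; mutations never reach the base
            # image or any sibling session.
            self.state = copy.deepcopy(BASE_STATE)
            self.state["transport"] = transport

        def close(self) -> None:
            # Ephemeral by default: session state is simply dropped.
            self.state = None

    shopping = Pseudonym(transport="tor")
    forum = Pseudonym(transport="dissent")
    shopping.state["cookies"]["shop.example"] = "session-abc"
    assert "shop.example" not in forum.state["cookies"]  # sessions stay unlinkable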
Michael Smithson (2019)
It is possible to obtain a large Bayes Factor (BF) favouring the null hypothesis when both the null and alternative hypotheses have low likelihoods, and there are other hypotheses being ignored that are much more strongly supported by the data. As sample sizes become large it becomes increasingly probable that a strong BF favouring a point null against a conventional Bayesian vague alternative co-occurs with a BF favouring various specific alternatives against the null. For any BF threshold q and sample mean, there is a value n such that sample sizes larger than n guarantee that although the BF comparing H0 against a conventional (vague) alternative exceeds q, nevertheless for some range of hypothetical μ, a BF comparing H0 against μ in that range falls below 1/q. This paper discusses the conditions under which this conundrum occurs and investigates methods for resolving it.
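A hedged numerical illustration of the conundrum, using the standard normal-mean Bayes factor with known variance (all parameter values invented for illustration): at n = 1,000,000 with sample mean 0.0025, H0 beats a vague N(0, 1) alternative by a factor above q = 10, yet loses to the point alternative μ = x̄ by more than a factor of 10.

    # Hedged worked example of the conundrum above. Model: sample mean
    # xbar | mu ~ N(mu, 1/n) with unit variance known; H0: mu = 0;
    # vague alternative: mu ~ N(0, 1). Parameter values are invented.
    from math import exp, pi, sqrt

    def normal_pdf(x, mean, sd):
        return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

    n, xbar, q = 1_000_000, 0.0025, 10
    se = 1 / sqrt(n)                                 # standard error of the mean

    p_h0 = normal_pdf(xbar, 0.0, se)                 # marginal likelihood under H0
    p_vague = normal_pdf(xbar, 0.0, sqrt(1 + 1/n))   # marginal under the vague alternative
    p_point = normal_pdf(xbar, xbar, se)             # point alternative mu = xbar

    bf_vs_vague = p_h0 / p_vague   # ~ 44: "strong evidence" for H0
    bf_vs_point = p_h0 / p_point   # ~ 0.04: H0 loses badly to mu = xbar

    assert bf_vs_vague > q and bf_vs_point < 1 / q
    print(f"BF(H0 vs vague) = {bf_vs_vague:.1f}, BF(H0 vs mu=xbar) = {bf_vs_point:.3f}")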
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms.
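As a concrete instance of the local model mentioned at the end of the abstract, here is a short sketch of classic randomized response, the textbook local mechanism (this is the canonical construction, not the paper's specific private PAC learner):

    # Classic randomized response: each user perturbs their own bit, so the
    # curator never sees raw data; this is the textbook local mechanism.
    import math, random

    EPS = math.log(3)                              # epsilon-local DP with P_TRUTH = 3/4
    P_TRUTH = math.exp(EPS) / (1 + math.exp(EPS))  # probability of reporting truthfully

    def local_randomize(bit: int) -> int:
        """Report the true bit with probability P_TRUTH, else flip it."""
        return bit if random.random() < P_TRUTH else 1 - bit

    def debias(reports) -> float:
        """Unbiased estimate of the true mean from the noisy reports:
        E[report] = (2p - 1) * mean + (1 - p), solved for the mean."""
        noisy_mean = sum(reports) / len(reports)
        return (noisy_mean - (1 - P_TRUTH)) / (2 * P_TRUTH - 1)

    random.seed(0)
    secret_bits = [1] * 300 + [0] * 700            # true mean is 0.3
    reports = [local_randomize(b) for b in secret_bits]
    print(f"estimated mean: {debias(reports):.3f}")  # close to 0.3, no raw bit revealed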
Learning to solve diagrammatic reasoning (DR) can be a challenging but interesting problem for the computer vision research community. It is believed that next-generation pattern recognition applications should be able to simulate the human brain to understand and analyze reasoning in images. However, due to the lack of diagrammatic reasoning benchmarks, present research primarily focuses on visual reasoning that can be applied to real-world objects. In this paper, we present a diagrammatic reasoning dataset that provides a large variety of DR problems. In addition, we also propose a Knowledge-based Long Short Term Memory (KLSTM) to solve diagrammatic reasoning problems. Our proposed analysis is arguably the first work in this research area. Several state-of-the-art learning frameworks have been used to compare with the proposed KLSTM framework in the present context. Preliminary results indicate that the domain is highly related to computer vision and pattern recognition research with several challenging avenues.
Consensus methods are widely used for combining phylogenetic trees into a single estimate of the evolutionary tree for a group of species. As more taxa are added, the new source trees may begin to tell a different evolutionary story when restricted to the original set of taxa. However, if the new trees, restricted to the original set of taxa, were to agree exactly with the earlier trees, then we might hope that their consensus would either agree with or resolve the original consensus tree. In this paper, we ask under what conditions consensus methods exist that are future-proof in this sense. While we show that some methods (e.g. Adams consensus) have this property for specific types of input, we also establish a rather surprising 'no-go' theorem: there is no reasonable consensus method that satisfies the future-proofing property in general. We then investigate a second notion of future-proofing for consensus methods, in which trees (rather than taxa) are added, and establish some positive and negative results. We end with some questions for future work.
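For concreteness, here is a hedged sketch of one of the simplest consensus methods, strict consensus, which keeps exactly the clusters (clades) present in every input tree; the paper's results concern richer methods such as Adams consensus, which this sketch does not implement.

    # Hedged sketch of strict consensus: a rooted tree is encoded as the set
    # of its clusters (the leaf set below each internal edge), and the
    # consensus keeps the clusters common to every input tree. Illustrative
    # only; the Adams consensus discussed in the paper is more involved.

    def strict_consensus(trees):
        """Intersect the cluster sets of all input trees."""
        return set.intersection(*trees)

    # Two trees on taxa {A, B, C, D} that agree on the clade {A, B} but
    # disagree on the rest of the structure.
    t1 = {frozenset("ABCD"), frozenset("ABC"), frozenset("AB")}
    t2 = {frozenset("ABCD"), frozenset("CD"),  frozenset("AB")}

    print(sorted("".join(sorted(c)) for c in strict_consensus([t1, t2])))
    # -> ['AB', 'ABCD']: only the shared clusters survive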