Information sharing is vital in resisting cyberattacks, and the volume and severity of these attacks are increasing rapidly. Responders must therefore triage incoming warnings when deciding how to act. This study asked a specific question: how can the addition of confidence information to alerts and warnings improve overall resistance to cyberattacks? We sought, in particular, to identify current practices and, where possible, best practices. The research involved a literature review and interviews with subject matter experts at every level, from system administrators to persons who develop broad principles of policy. An innovative Modified Online Delphi Panel technique was used to elicit judgments and recommendations from experts, who were able to speak with one another and vote anonymously to rank proposed practices.
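The abstract does not describe how the panel's anonymous votes were combined into a ranking, so the following is only a minimal sketch of one plausible aggregation (a Borda-style count); the practice names and the scoring rule are illustrative assumptions, not the study's actual procedure.

```python
# Illustrative sketch only: assumes a simple Borda-style aggregation of
# anonymous expert rankings; the study's actual Delphi scoring is not given.
from collections import defaultdict

def aggregate_rankings(rankings):
    """Combine anonymous expert rankings of proposed practices.

    rankings: list of lists, each an expert's ordering of practice names,
              best practice first.
    Returns practices sorted by total score (higher is better).
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, practice in enumerate(ranking):
            scores[practice] += n - position  # top rank earns the most points
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical round of votes from three panelists.
votes = [
    ["attach confidence scores", "share raw indicators", "tiered alert levels"],
    ["tiered alert levels", "attach confidence scores", "share raw indicators"],
    ["attach confidence scores", "tiered alert levels", "share raw indicators"],
]
print(aggregate_rankings(votes))
```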
What makes a task relatively more or less difficult for a machine than for a human? Much AI/ML research has focused on expanding the range of tasks that machines can do, with an emphasis on whether machines can beat humans. Allowing for differences in scale, we can seek interesting (anomalous) pairs of tasks T, T′. We define interesting in this way: the "harder to learn" relation is reversed when comparing human intelligence (HI) to AI. While humans seem to understand problems by formulating rules, ML using neural networks does not rely on constructing rules. We discuss a novel approach in which the challenge is to perform well under rules that have been created by human beings. We suggest that this provides a rigorous and precise pathway for understanding the difference between the two kinds of learning. Specifically, we propose a large and extensible class of learning tasks, formulated as learning under rules, with which both AI and HI can be studied with rigor and precision. The immediate goal is to find interesting ground-truth rule pairs. In the long term, the goal is to understand, in a generalizable way, what distinguishes interesting pairs from ordinary pairs, and to characterize the saliency behind interesting pairs. This may open new ways of thinking about AI and provide unexpected insights into human learning.
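The abstract does not specify any concrete "learning under rules" tasks, so the sketch below invents one hypothetical task pair for illustration: a parity rule that is trivial for a human to state yet is a classic hard case for a small feed-forward network, paired with a threshold rule that is easy for both. The rules, data sizes, and network settings are all assumptions.

```python
# Illustrative sketch only: a hypothetical pair of rule-defined tasks whose
# relative difficulty may differ for humans and a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_task(n_samples, n_bits, rule):
    """Generate bit-string inputs labeled by a human-stated rule."""
    X = rng.integers(0, 2, size=(n_samples, n_bits))
    y = np.array([rule(x) for x in X])
    return X, y

parity_rule = lambda x: int(x.sum() % 2 == 0)         # simple to state, hard for a small net
threshold_rule = lambda x: int(x.sum() > len(x) / 2)  # easy for both

for name, rule in [("parity", parity_rule), ("threshold", threshold_rule)]:
    X_train, y_train = make_task(2000, 12, rule)
    X_test, y_test = make_task(500, 12, rule)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_train)
    print(name, "test accuracy:", clf.score(X_test, y_test))
```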
Paul B. Kantor, 2019
We consider the problem of eliciting expert assessments of an uncertain parameter. The context is risk control, where there are, in fact, three uncertain parameters to be estimated. Two of these are probabilities, requiring that the experts be guided in the concept of uncertainty about uncertainty. We propose a novel formulation for expert estimates that relies on the range and the median, rather than the variance and the mean. We discuss the process of elicitation and provide precise formulas for these new distributions.
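The abstract states that the formulation uses the elicited range and median but does not give its formulas, so the following is only a minimal sketch under stated assumptions: the expert's [low, high] range is treated as the full support, a scaled Beta distribution is fit so its median matches the elicited median, and the concentration k = a + b is an analyst-chosen weight. None of this is the paper's actual construction.

```python
# Illustrative sketch only: fit a distribution from an elicited range and median.
# Assumptions: [low, high] is the full support; the family is a scaled Beta;
# k = a + b is a fixed analyst-chosen confidence weight.
from scipy import stats, optimize

def fit_from_range_and_median(low, high, median, k=6.0):
    """Return a frozen distribution on [low, high] with the requested median."""
    u = (median - low) / (high - low)  # median rescaled to the unit interval
    # Find the Beta shape a (with b = k - a) whose median equals u.
    a = optimize.brentq(lambda a: stats.beta.ppf(0.5, a, k - a) - u, 1e-6, k - 1e-6)
    return stats.beta(a, k - a, loc=low, scale=high - low)

# Hypothetical elicited judgment about an uncertain probability.
dist = fit_from_range_and_median(low=0.05, high=0.60, median=0.20)
print("median:", dist.ppf(0.5))            # ~0.20 by construction
print("90% interval:", dist.ppf([0.05, 0.95]))
```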