
Modeling the resilience of large and evolving systems

Published by: Mohamed Kaaniche
Publication date: 2012
Research field: Informatics Engineering
Language: English
Author: Mohamed Kaaniche





This paper summarizes the state of knowledge and ongoing research on methods and techniques for resilience evaluation, taking into account the resilience-scaling challenges and properties of ubiquitous computerized systems. We mainly focus on quantitative evaluation approaches and, in particular, on model-based evaluation techniques that are commonly used to evaluate and compare, from the dependability point of view, different architecture alternatives at the design stage. We outline some of the main modeling techniques aimed at mastering the largeness of analytical dependability models at the construction level. Indeed, addressing the model largeness problem is key to investigating whether current techniques scale to the complexity challenges of ubiquitous systems. Finally, we present two case studies in which some of the presented techniques are applied to model web services and General Packet Radio Service (GPRS) mobile telephone networks, as prominent examples of large and evolving systems.
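To make the model-based evaluation approach concrete, the sketch below (not from the paper) shows the kind of analytical dependability model such techniques manipulate: a two-component repairable system expressed as a continuous-time Markov chain whose steady-state solution yields availability. All rates, the state space, and the code are illustrative assumptions; the largeness problem discussed in the paper arises because such state spaces grow combinatorially with the number of components.

```python
# Minimal sketch of an analytical dependability model: a two-component
# repairable system modeled as a continuous-time Markov chain (CTMC).
# All rates and the state space are illustrative assumptions, not values
# from the paper.
import numpy as np

lam = 1e-3   # per-component failure rate (1/h), assumed
mu = 1e-1    # repair rate (1/h), assumed (single repair facility)

# States: 0, 1, 2 failed components; the system is down only in state 2.
Q = np.array([
    [-2 * lam,        2 * lam,  0.0],
    [      mu, -(mu + lam),     lam],
    [     0.0,          mu,     -mu],
])

# Steady-state distribution: solve pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]   # probability the system is up
print(f"steady-state availability ~ {availability:.6f}")
```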


Read also

The paper refers to CRUTIAL, CRitical UTility InfrastructurAL Resilience, a European project within the research area of Critical Information Infrastructure Protection, with a specific focus on the infrastructures operated by power utilities, widely recognized as fundamental to national and international economy, security and quality of life. Faced with recent market deregulations and multiple interdependencies with other infrastructures, such infrastructures are becoming increasingly vulnerable to various threats, including accidental failures, deliberate sabotage and malicious attacks. The subject of CRUTIAL research is small-scale networked ICT systems used to control and manage the electric power grid, in which artifacts controlling the physical process of electricity transportation need to be connected with corporate and societal applications performing management and maintenance functionality. The peculiarity of such ICT-supported systems is that they are tied to the power system dynamics and its emergency conditions. Specific effort needs to be devoted by the Electric Power community and by the Information Technology community to influence technological progress so that commercial intelligent electronic devices can be effectively deployed to protect citizens against cyber threats to electric power management and control systems. Well-founded know-how needs to be built inside the industrial power sector to allow all the involved stakeholders to achieve their service objectives without compromising the resilience properties of the logical and physical assets that support electric power provision.
Cities are complex systems comprised of socioeconomic systems relying on critical services delivered by multiple physical infrastructure networks. Due to interdependencies between social and physical systems, disruptions caused by natural hazards may cascade across systems, amplifying the impact of disasters. Despite the increasing threat posed by climate change and rapid urban growth, how to design interdependencies between social and physical systems to achieve resilient cities has remained largely unexplored. Here, we study the socio-physical interdependencies in urban systems and their effects on disaster recovery and resilience, using large-scale mobility data collected from Puerto Rico during Hurricane Maria. We find that as cities grow in scale and expand their centralized infrastructure systems, the recovery efficiency of critical services improves; however, this curtails the self-reliance of socio-economic systems during crises. Results show that maintaining self-reliance among social systems could be key to developing resilient urban socio-physical systems for cities facing rapid urban growth.
We consider the load balancing problem in large-scale heterogeneous systems with multiple dispatchers. We introduce a general framework called Local-Estimation-Driven (LED). Under this framework, each dispatcher keeps local (possibly outdated) estimates of queue lengths for all the servers, and the dispatching decision is made purely based on these local estimates. The local estimates are updated via infrequent communications between dispatchers and servers. We derive sufficient conditions for LED policies to achieve throughput optimality and delay optimality in heavy traffic, respectively. These conditions directly imply delay optimality for many previous local-memory based policies in heavy traffic. Moreover, the results enable us to design new delay-optimal policies for heterogeneous systems with multiple dispatchers. Finally, the heavy-traffic delay optimality of the LED framework directly resolves a recent open problem on how to design optimal load balancing schemes using delayed information.
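As a rough illustration of the LED idea described above, the sketch below dispatches each job to the server with the smallest locally held queue-length estimate and only occasionally synchronizes those estimates with the servers. The class names, the optimistic local update, and the fixed refresh period are assumptions for illustration, not the exact policies analyzed in the paper.

```python
# Illustrative Local-Estimation-Driven (LED) style dispatcher: decisions use
# only locally held (possibly stale) queue-length estimates, refreshed by
# infrequent communication with the servers. Names and the refresh rule are
# assumptions, not the paper's exact policy.
import random

class Server:
    def __init__(self):
        self._queue = 0
    def enqueue(self):
        self._queue += 1
    def serve_one(self):
        if self._queue:
            self._queue -= 1
    def queue_length(self):
        return self._queue

class LEDDispatcher:
    def __init__(self, num_servers, refresh_period=100):
        self.estimates = [0] * num_servers   # local, possibly outdated
        self.refresh_period = refresh_period
        self.jobs_since_refresh = 0

    def dispatch(self, servers):
        # Pick the server with the smallest *local* estimate; break ties randomly.
        best = min(range(len(self.estimates)),
                   key=lambda i: (self.estimates[i], random.random()))
        servers[best].enqueue()
        self.estimates[best] += 1            # optimistic local update
        self.jobs_since_refresh += 1
        if self.jobs_since_refresh >= self.refresh_period:
            self.refresh(servers)

    def refresh(self, servers):
        # Infrequent synchronization: pull true queue lengths from servers.
        self.estimates = [s.queue_length() for s in servers]
        self.jobs_since_refresh = 0

# Small usage example with a crude service process.
servers = [Server() for _ in range(4)]
dispatcher = LEDDispatcher(num_servers=4, refresh_period=10)
for _ in range(1000):
    dispatcher.dispatch(servers)
    random.choice(servers).serve_one()
print([s.queue_length() for s in servers])
```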
The A64FX CPU is arguably the most powerful Arm-based processor design to date. Although it is a traditional cache-based multicore processor, its peak performance and memory bandwidth rival accelerator devices. A good understanding of its performance features is of paramount importance for developers who wish to leverage its full potential. We present an architectural analysis of the A64FX used in the Fujitsu FX1000 supercomputer at a level of detail that allows for the construction of Execution-Cache-Memory (ECM) performance models for steady-state loops. In the process we identify architectural peculiarities that point to viable generic optimization strategies. After validating the model using simple streaming loops we apply the insight gained to sparse matrix-vector multiplication (SpMV) and the domain wall (DW) kernel from quantum chromodynamics (QCD). For SpMV we show why the CRS matrix storage format is not a good practical choice on this architecture and how the SELL-C-sigma format can achieve bandwidth saturation. For the DW kernel we provide a cache-reuse analysis and show how an appropriate choice of data layout for complex arrays can realize memory-bandwidth saturation in this case as well. A comparison with state-of-the-art high-end Intel Cascade Lake AP and Nvidia V100 systems puts the capabilities of the A64FX into perspective. We also explore the potential for power optimizations using the tuning knobs provided by the Fugaku system, achieving energy savings of about 31% for SpMV and 18% for DW.
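For context on the SpMV discussion above, the sketch below is a generic CRS (compressed row storage) sparse matrix-vector multiply, not the paper's optimized kernels, and the matrix values are assumed. It shows the short, indirectly indexed inner loops that make CRS hard to vectorize and to drive toward memory-bandwidth saturation on wide-SIMD CPUs such as the A64FX; SELL-C-sigma instead packs rows into SIMD-friendly chunks sorted by length, which is how the paper reports achieving bandwidth saturation.

```python
# Minimal CRS (compressed row storage) sparse matrix-vector multiply.
# The matrix data is assumed for illustration only.
import numpy as np

# 3x3 example matrix:
# [[4, 0, 1],
#  [0, 3, 0],
#  [2, 0, 5]]
val     = np.array([4.0, 1.0, 3.0, 2.0, 5.0])  # nonzeros, stored row by row
col_idx = np.array([0, 2, 1, 0, 2])            # column index of each nonzero
row_ptr = np.array([0, 2, 3, 5])               # start of each row in val

x = np.array([1.0, 2.0, 3.0])
y = np.zeros(3)

for row in range(len(row_ptr) - 1):
    s = 0.0
    for k in range(row_ptr[row], row_ptr[row + 1]):
        s += val[k] * x[col_idx[k]]            # indirect access into x
    y[row] = s

print(y)   # expected: [ 7.  6. 17.]
```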
Mohamed Kaaniche, 2007
Honeypots are more and more used to collect data on malicious activities on the Internet and to better understand the strategies and techniques used by attackers to compromise target systems. Analysis and modeling methodologies are needed to support the characterization of attack processes based on the data collected from the honeypots. This paper presents some empirical analyses based on the data collected from the Leurré.com honeypot platforms deployed on the Internet and presents some preliminary modeling studies aimed at fulfilling such objectives.