
6G White Paper: Research Challenges for Trust, Security and Privacy

Added by Mika Ylianttila
Publication date: 2020
Language: English





The roles of trust, security and privacy are somewhat interconnected, but different, facets of next-generation networks. The challenges in creating a trustworthy 6G are multidisciplinary, spanning technology, regulation, techno-economics, politics and ethics. This white paper addresses the fundamental research challenges in three key areas.

Trust: Under the current open internet regulation, the telco cloud can be used for trust services only equally for all users. The 6G network must support embedded trust for an increased level of information security. Trust modeling, trust policies and trust mechanisms need to be defined. 6G interlinks the physical and digital worlds, making safety dependent on information security. Therefore, we need a trustworthy 6G.

Security: In the 6G era, the dependence of economies and societies on IT and networks will deepen. The role of IT and networks in national security keeps rising, a continuation of what we see in 5G. The development towards cloud- and edge-native infrastructures is expected to continue in 6G networks, and we need holistic 6G network security architecture planning. Security automation opens new questions: machine learning can be used to build safer systems, but also to mount more dangerous attacks. Physical layer security techniques can also represent efficient solutions for securing less investigated network segments as a first line of defense.

Privacy: There is currently no way to unambiguously determine when linked, deidentified datasets cross the threshold to become personally identifiable. Courts in different parts of the world are making decisions about whether privacy is being infringed, while companies are seeking new ways to exploit private data to create new business revenues. As solution alternatives, we may consider blockchain, distributed ledger technologies and differential privacy approaches.
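The differential privacy approach named among the solution alternatives can be illustrated with the classic Laplace mechanism. The sketch below is a minimal, self-contained example (not from the white paper; the function names and toy data are illustrative): a counting query has sensitivity 1, so adding Laplace noise of scale 1/epsilon to its result gives epsilon-differential privacy.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling of a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true count by at most 1, so Laplace noise of scale
    # 1/epsilon suffices for epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 38, 27]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; the analyst sees only the perturbed count, never the raw records.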



Related research

Sixth-generation (6G) mobile networks will have to cope with diverse threats in a space-air-ground integrated network environment, novel technologies, and an accessible user information explosion. However, for now, security and privacy issues for 6G remain largely conceptual. This survey provides a systematic overview of security and privacy issues based on prospective technologies for 6G in the physical, connection, and service layers, as well as through lessons learned from the failures of existing security architectures and state-of-the-art defenses. Two key lessons learned are as follows. First, beyond inheriting vulnerabilities from the previous generations, 6G has new threat vectors from new radio technologies, such as the exposed location of radio stripes in ultra-massive MIMO systems at Terahertz bands and attacks against pervasive intelligence. Second, physical layer protection, deep network slicing, quantum-safe communications, artificial intelligence (AI) security, platform-agnostic security, real-time adaptive security, and novel data protection mechanisms such as distributed ledgers and differential privacy are the top promising techniques to substantially mitigate the attack magnitude and personal data breaches.
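The distributed ledgers listed above among the promising data-protection mechanisms rest on hash chaining: each block commits to the previous block's hash, so any alteration of an earlier record invalidates every later hash. A minimal single-node sketch, assuming illustrative block fields and helper names (not from the survey):

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    # A block commits to its payload and to the previous block's hash.
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def build_chain(records):
    chain = [make_block(0, "genesis", "0" * 64)]
    for i, rec in enumerate(records, start=1):
        chain.append(make_block(i, rec, chain[-1]["hash"]))
    return chain

def verify_chain(chain):
    # Recompute each block's hash and check the back-links; tampering
    # with any earlier record breaks every subsequent check.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
        payload = json.dumps(
            {k: cur[k] for k in ("index", "data", "prev_hash")},
            sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != cur["hash"]:
            return False
    return True

chain = build_chain(["reading-1", "reading-2", "reading-3"])
```

A real distributed ledger adds consensus and replication across nodes; the hash chain above is only the tamper-evidence core.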
The advent of miniature biosensors has generated numerous opportunities for deploying wireless sensor networks in healthcare. However, an important barrier is that acceptance by healthcare stakeholders is influenced by the effectiveness of privacy safeguards for personal and intimate information which is collected and transmitted over the air, within and beyond these networks. In particular, these networks are progressing beyond traditional sensors, towards also using multimedia sensors, which raise further privacy concerns. Paradoxically, less research has addressed privacy protection than security. Nevertheless, privacy protection has gradually evolved from being assumed an implicit by-product of security measures, and it is maturing into a research concern in its own right. However, further technical and socio-technical advances are needed. As a contribution towards galvanising further research, the hallmarks of this paper include: (i) a literature survey explicitly anchored on privacy preservation, underpinned by untangling privacy goals from security goals, to avoid mixing privacy and security concerns, as is often the case in other papers; (ii) a critical survey of privacy preservation services for wireless sensor networks in healthcare, including threat analysis and assessment methodologies; it also offers classification trees for the multifaceted challenge of privacy protection in healthcare, and for privacy threats, attacks and countermeasures; (iii) a discussion of technical advances complemented by reflection over the implications of regulatory frameworks; (iv) a discussion of open research challenges, leading onto directions for future research towards unlocking the door onto privacy protection which is appropriate for healthcare in the twenty-first century.
Federated learning (FL) allows a server to learn a machine learning (ML) model across multiple decentralized clients that privately store their own training data. In contrast with centralized ML approaches, FL saves computation at the server and does not require the clients to outsource their private data to the server. However, FL is not free of issues. On the one hand, the model updates sent by the clients at each training epoch might leak information on the clients' private data. On the other hand, the model learnt by the server may be subjected to attacks by malicious clients; these security attacks might poison the model or prevent it from converging. In this paper, we first examine security and privacy attacks on FL and critically survey solutions proposed in the literature to mitigate each attack. Afterwards, we discuss the difficulty of simultaneously achieving security and privacy protection. Finally, we sketch ways to tackle this open problem and attain both security and privacy.
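The exchange the abstract describes, where clients send model updates and the server aggregates them without seeing the raw data, can be sketched with the standard federated-averaging (FedAvg) rule: a weighted mean of client parameters, weighted by local dataset size. The toy numbers below are illustrative, not from the paper.

```python
def fedavg(updates, weights):
    # Weighted average of client model vectors; weights are
    # proportional to each client's local dataset size.
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[i] for u, w in zip(updates, weights)) / total
            for i in range(dim)]

# Three clients send local model parameters; the server aggregates
# them without ever seeing the underlying training records.
client_models = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
client_sizes = [10, 10, 20]
global_model = fedavg(client_models, client_sizes)  # -> [3.5, 4.5]
```

Even this simple scheme exhibits the tension the paper studies: the per-client vectors can leak training-data information, and a single malicious vector shifts the weighted mean (model poisoning).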
The increased adoption of Artificial Intelligence (AI) presents an opportunity to solve many socio-economic and environmental challenges; however, this cannot happen without securing AI-enabled technologies. In recent years, many AI models have proven vulnerable to advanced and sophisticated hacking techniques. This challenge has motivated concerted research efforts into adversarial AI, with the aim of developing robust machine and deep learning models that are resilient to different types of adversarial scenarios. In this paper, we present a holistic cyber security review that demonstrates adversarial attacks against AI applications, including aspects such as adversarial knowledge and capabilities, as well as existing methods for generating adversarial examples and existing cyber defence models. We explain mathematical AI models, especially new variants of reinforcement and federated learning, to demonstrate how attack vectors would exploit vulnerabilities of AI models. We also propose a systematic framework for demonstrating attack techniques against AI applications and review several cyber defences that would protect AI applications against those attacks. We also highlight the importance of understanding the adversarial goals and their capabilities, especially the recent attacks against industry applications, to develop adaptive defences that secure AI applications. Finally, we describe the main challenges and future research directions in the domain of security and privacy of AI technologies.
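One widely known method for generating the adversarial examples discussed above is the Fast Gradient Sign Method (FGSM): perturb each input feature by a small step in the direction that increases the model's loss. The sketch below applies it to a toy logistic-regression model (the weights and inputs are hypothetical, not from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, y, eps):
    # FGSM: x_adv = x + eps * sign(d(loss)/dx).
    # For logistic regression with cross-entropy loss and label y in
    # {0, 1}, the input gradient is (p - y) * w.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [2.0, -1.0]   # toy trained weights (hypothetical)
x = [1.0, 0.5]    # clean input whose true label is y = 1
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
x_adv = fgsm_perturb(x, w, y=1, eps=0.8)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
```

Here a bounded perturbation of 0.8 per feature flips the model's confident class-1 prediction to class 0, which is exactly the brittleness that motivates adversarially robust training.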
For many decades, research in speech technologies has focused upon improving reliability. With this now meeting user expectations for a range of diverse applications, speech technology is today omni-present. As a result, a focus on security and privacy has now come to the fore. Here, the research effort is in its relative infancy and progress calls for greater, multidisciplinary collaboration with security, privacy, legal and ethical experts among others. Such collaboration is now underway. To help catalyse the efforts, this paper provides a high-level overview of some related research. It targets the non-speech audience and describes the benchmarking methodology that has spearheaded progress in traditional research and which now drives recent security and privacy initiatives related to voice biometrics. We describe the ASVspoof challenge, relating to the development of spoofing countermeasures, and the VoicePrivacy initiative, which promotes research in anonymisation for privacy preservation.
