
Security and privacy for 6G: A survey on prospective technologies and challenges

Added by Van Linh Nguyen
Publication date: 2021
Research language: English





Sixth-generation (6G) mobile networks will have to cope with diverse threats in a space-air-ground integrated network environment, novel technologies, and an explosion of accessible user information. For now, however, security and privacy issues for 6G remain largely conceptual. This survey provides a systematic overview of security and privacy issues based on prospective technologies for 6G in the physical, connection, and service layers, as well as through lessons learned from the failures of existing security architectures and state-of-the-art defenses. Two key lessons emerge. First, besides inheriting vulnerabilities from previous generations, 6G faces new threat vectors from new radio technologies, such as the exposed location of radio stripes in ultra-massive MIMO systems at Terahertz bands and attacks against pervasive intelligence. Second, physical layer protection, deep network slicing, quantum-safe communications, artificial intelligence (AI) security, platform-agnostic security, real-time adaptive security, and novel data protection mechanisms such as distributed ledgers and differential privacy are the most promising techniques for substantially mitigating attack magnitude and personal data breaches.
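The distributed ledgers mentioned above protect data integrity by chaining records with cryptographic hashes, so tampering with any stored record invalidates every later block. A minimal, illustrative Python sketch (the function names and record format are our own, not from the survey):

```python
import hashlib
import json

def add_block(chain, record):
    # Append a record whose hash commits to the previous block's hash,
    # so any later tampering with stored data becomes detectable.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    # Recompute every hash and check each block links to its predecessor.
    for i, blk in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"record": blk["record"], "prev_hash": blk["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["prev_hash"] != prev or blk["hash"] != digest:
            return False
    return True
```

Real distributed ledgers add consensus across untrusted nodes; this sketch shows only the tamper-evidence property that makes them attractive for data protection.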



Related research

The roles of trust, security and privacy are somewhat interconnected, but they are different facets of next-generation networks. The challenges in creating a trustworthy 6G are multidisciplinary, spanning technology, regulation, techno-economics, politics and ethics. This white paper addresses the fundamental research challenges in three key areas. Trust: Under the current open internet regulation, the telco cloud can be used for trust services only equally for all users. The 6G network must support embedded trust for an increased level of information security. Trust modeling, trust policies and trust mechanisms need to be defined. 6G interlinks the physical and digital worlds, making safety dependent on information security; therefore, we need a trustworthy 6G. Security: In the 6G era, the dependence of economies and societies on IT and networks will deepen. The role of IT and networks in national security keeps rising, a continuation of what we see in 5G. The development towards cloud- and edge-native infrastructures is expected to continue in 6G networks, and we need holistic 6G network security architecture planning. Security automation opens new questions: machine learning can be used to build safer systems, but also to mount more dangerous attacks. Physical layer security techniques can also represent efficient solutions for securing less investigated network segments as a first line of defense. Privacy: There is currently no way to unambiguously determine when linked, de-identified datasets cross the threshold to become personally identifiable. Courts in different parts of the world are making decisions about whether privacy is being infringed, while companies are seeking new ways to exploit private data to create new business revenues. As solution alternatives, we may consider blockchain, distributed ledger technologies and differential privacy approaches.
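As a concrete illustration of the differential privacy approaches mentioned above, the classic Laplace mechanism adds noise calibrated to a query's sensitivity before releasing the answer. A minimal sketch (the function names are illustrative, not from the white paper):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true count by at most 1, so Laplace(1/epsilon) noise
    # yields epsilon-differential privacy for the released answer.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; for example, `dp_count(ages, lambda a: a >= 65, epsilon=0.5)` would release a noisy count of seniors that no longer pins down any individual.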
The increased adoption of Artificial Intelligence (AI) presents an opportunity to solve many socio-economic and environmental challenges; however, this cannot happen without securing AI-enabled technologies. In recent years, most AI models have proven vulnerable to advanced and sophisticated hacking techniques. This challenge has motivated concerted research efforts into adversarial AI, with the aim of developing robust machine and deep learning models that are resilient to different types of adversarial scenarios. In this paper, we present a holistic cyber security review that demonstrates adversarial attacks against AI applications, including aspects such as adversarial knowledge and capabilities, as well as existing methods for generating adversarial examples and existing cyber defence models. We explain mathematical AI models, especially new variants of reinforcement and federated learning, to demonstrate how attack vectors would exploit vulnerabilities of AI models. We also propose a systematic framework for demonstrating attack techniques against AI applications and review several cyber defences that would protect AI applications against those attacks. We also highlight the importance of understanding adversarial goals and capabilities, especially in recent attacks against industry applications, for developing adaptive defences that secure AI applications. Finally, we describe the main challenges and future research directions in the domain of security and privacy of AI technologies.
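A standard method for generating the adversarial examples discussed above is the fast gradient sign method (FGSM): perturb the input by a small step in the direction of the sign of the loss gradient. Below is a minimal, self-contained sketch against a logistic-regression classifier (the toy model and function names are our own illustration, not from the paper):

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    # One-step FGSM on a logistic model p = sigmoid(w . x + b).
    # The gradient of the cross-entropy loss w.r.t. the input x is
    # (p - y) * w, so stepping eps along its sign maximally increases
    # the loss within an L-infinity ball of radius eps.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

For example, with `w=[2.0, -1.0]` and `b=0.0`, the clean point `x=[1.0, 1.0]` is classified positive, yet the FGSM-perturbed point with `eps=0.4` is classified negative, even though each coordinate moved by only 0.4.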
For many decades, research in speech technologies focused on improving reliability. With reliability now meeting user expectations for a range of diverse applications, speech technology is today omnipresent. As a result, a focus on security and privacy has come to the fore. Here, the research effort is in its relative infancy, and progress calls for greater, multidisciplinary collaboration with security, privacy, legal and ethical experts, among others. Such collaboration is now underway. To help catalyse these efforts, this paper provides a high-level overview of some related research. It targets the non-speech audience and describes the benchmarking methodology that has spearheaded progress in traditional research and that now drives recent security and privacy initiatives related to voice biometrics. We describe the ASVspoof challenge, which relates to the development of spoofing countermeasures, and the VoicePrivacy initiative, which promotes research in anonymisation for privacy preservation.
This paper examines the use of Big Data in healthcare. Big data architectures and implementations already assist, and will continue to assist, the continuous growth of the healthcare field. The main aspects of this study are the general importance of big data in healthcare, the improvements big data can bring to the field, and the considerable downsides big data has for healthcare, which still require improvement and extensive research. We believe there is still a long way to go before institutions and individuals understand the hidden truths about big data. We highlight the various ways in which big data can be confidently relied upon and, on the other hand, the major problems of big data and their expected solutions.
Federated learning (FL) allows a server to learn a machine learning (ML) model across multiple decentralized clients that privately store their own training data. In contrast with centralized ML approaches, FL reduces computation at the server and does not require the clients to outsource their private data to it. However, FL is not free of issues. On the one hand, the model updates sent by the clients at each training epoch might leak information on the clients' private data. On the other hand, the model learnt by the server may be subjected to attacks by malicious clients; these security attacks might poison the model or prevent it from converging. In this paper, we first examine security and privacy attacks on FL and critically survey the solutions proposed in the literature to mitigate each attack. Afterwards, we discuss the difficulty of simultaneously achieving security and privacy protection. Finally, we sketch ways to tackle this open problem and attain both security and privacy.
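The FL training loop described above can be sketched in a few lines: each client runs gradient descent on its own data and sends back only model parameters, which the server averages (federated averaging). A toy example for a one-dimensional linear model (the model choice and names are illustrative assumptions, not from the paper):

```python
def local_update(w: float, data, lr: float = 0.1) -> float:
    # One local epoch of gradient descent on a client's private data
    # for a 1-D linear model y = w * x with squared-error loss.
    for x, y in data:
        grad = 2.0 * (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(global_w: float, client_datasets, lr: float = 0.1) -> float:
    # Each client trains locally and sends back only its updated weight;
    # the server averages the updates without ever seeing raw data.
    updates = [local_update(global_w, d, lr) for d in client_datasets]
    return sum(updates) / len(updates)
```

Note that a malicious client could poison training simply by returning an arbitrary weight from its local update, which is exactly the attack surface that motivates the robust-aggregation defences surveyed in the paper.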