
Enabling Millimeter-Wave 5G Networks for Massive IoT Applications

Published by Biswa P. S. Sahoo
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





The Internet of Things is one of the most promising technologies of fifth-generation (5G) mobile broadband systems. Data-driven wireless services of 5G systems require unprecedented capacity and availability, and millimeter-wave wireless communication technologies are expected to play an essential role in future 5G systems. In this article, we describe the three broad categories of fifth-generation services, viz., enhanced mobile broadband, ultra-reliable and low-latency communications, and massive machine-type communications. Furthermore, we introduce the potential issues facing consumer devices under a unifying 5G framework. We provide a state-of-the-art overview with an emphasis on the technical challenges of applying millimeter-wave (mmWave) technology to support massive Internet of Things applications. Our discussion highlights the challenges and solutions, particularly for the communication and computation requirements of consumer devices under the millimeter-wave 5G framework.
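For concreteness, the three service categories can be pinned to their headline ITU-R IMT-2020 performance targets. The sketch below is not part of the article; it simply encodes those commonly cited targets as data.

# Minimal sketch (not from the article): the three 5G service categories
# with their headline ITU-R IMT-2020 performance targets.
from dataclasses import dataclass

@dataclass
class ServiceCategory:
    name: str
    key_requirement: str
    imt2020_target: str

CATEGORIES = [
    ServiceCategory("eMBB",  "peak data rate",     "20 Gbit/s downlink"),
    ServiceCategory("URLLC", "user-plane latency", "1 ms"),
    ServiceCategory("mMTC",  "connection density", "1,000,000 devices per km^2"),
]

for c in CATEGORIES:
    print(f"{c.name:5s} | {c.key_requirement:18s} | {c.imt2020_target}")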




Read also

Millimeter-wave (mmWave) frequency bands offer a new frontier for next-generation wireless networks, popularly known as 5G, to enable multi-gigabit communication; however, the availability and reliability of mmWave signals are significantly limited due to their unfavorable propagation characteristics. Thus, mmWave networks rely on directional narrow-beam transmissions to overcome severe path loss. To mitigate the impact of transmission-reception directionality and provide uninterrupted network services, ensuring the availability of mmWave transmission links is important. In this paper, we propose a new flexible network architecture that provides efficient resource coordination among serving basestations during user mobility. The key idea of this holistic architecture is to exploit software-defined networking (SDN) technology together with mmWave communication to provide a flexible and resilient network architecture. In addition, this paper presents an efficient and seamless uncoordinated network operation to support reliable communication in highly dynamic environments characterized by high density and mobility of wireless devices. To warrant high reliability and guard against potential radio link failure, we introduce a new transmission framework to ensure that at least one basestation is connected to the UE at all times. We validate the proposed transmission scheme through simulations.
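The "at least one basestation at all times" guarantee can be read as a make-before-break rule. The following sketch is purely illustrative and assumed (the paper does not publish code); serving_set, candidate_bs, and link_ok are hypothetical names, with link_ok standing for whatever link-quality test the SDN controller applies.

# Illustrative make-before-break sketch: never release a serving link
# until a replacement is active, so the UE always keeps >= 1 basestation.
# All names (serving_set, candidate_bs, link_ok) are hypothetical.

def update_links(serving_set, candidate_bs, link_ok):
    # Add the candidate first ("make") ...
    if candidate_bs is not None and link_ok(candidate_bs):
        serving_set.add(candidate_bs)
    # ... then prune failed links ("break"), keeping at least one.
    for bs in [bs for bs in serving_set if not link_ok(bs)]:
        if len(serving_set) > 1:
            serving_set.remove(bs)
    return serving_set

# Toy usage: basestation "A" degrades while candidate "B" becomes available.
quality = {"A": False, "B": True}
print(update_links({"A"}, "B", lambda bs: quality[bs]))  # -> {'B'}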
Next-generation Wi-Fi networks are expected to support real-time applications that impose strict requirements on packet transmission delay and packet loss ratio. Such applications form an essential target for the future Wi-Fi standard, namely IEEE 802.11be, whose development process started in 2019. A promising way to provide efficient real-time communications in 802.11be networks requires some modification of the uplink OFDMA feature originally introduced in the IEEE 802.11ax amendment to the Wi-Fi standard. This feature allows the access point to reserve channel resources for upcoming urgent transmissions. The paper explains why the uplink OFDMA random access of 802.11ax does not perfectly fit the requirements of real-time applications and proposes an easy-to-implement modification of the channel access rules for future 802.11be networks. Extensive simulations show that this modification, together with a new resource allocation algorithm, outperforms the existing ways to support real-time applications, especially under heavy load and a high number of users. In particular, they provide extremely low delays for real-time traffic, while the throughput for non-real-time traffic is reduced only insignificantly.
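For background, the baseline 802.11ax uplink OFDMA random access (UORA) that the paper proposes to modify works roughly as follows: each station draws an OFDMA backoff (OBO) counter, decrements it by the number of eligible random-access resource units (RUs) announced in each trigger frame, and transmits on a randomly chosen RU once the counter is exhausted. A minimal sketch, with illustrative contention-window values:

import random

# Background sketch of the baseline 802.11ax UORA backoff that the paper
# proposes to modify. OCW_MIN/OCW_MAX are illustrative values, not
# spec-mandated defaults for any particular deployment.
OCW_MIN, OCW_MAX = 7, 31

class Station:
    def __init__(self):
        self.ocw = OCW_MIN
        self.obo = random.randint(0, self.ocw)   # OFDMA backoff counter

    def on_trigger(self, n_ra_rus):
        """Called for each trigger frame advertising n_ra_rus random-access
        RUs; returns the chosen RU index if the station transmits."""
        self.obo -= n_ra_rus
        if self.obo <= 0:
            return random.randrange(n_ra_rus)    # transmit on a random RU
        return None

    def on_collision(self):
        self.ocw = min(2 * self.ocw + 1, OCW_MAX)   # double the window
        self.obo = random.randint(0, self.ocw)

    def on_success(self):
        self.ocw = OCW_MIN
        self.obo = random.randint(0, self.ocw)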
In this paper, a novel framework is proposed for channel charting (CC)-aided localization in millimeter-wave networks. In particular, a convolutional autoencoder model is proposed to estimate the three-dimensional location of wireless user equipment (UE), based on multipath channel state information (CSI) received by different base stations. In order to learn the radio-geometry map and capture the relative position of each UE, an autoencoder-based channel chart is constructed in an unsupervised manner, such that UEs that are neighbors in physical space remain close in the channel chart. Next, the channel charting model is extended to a semi-supervised framework, where the autoencoder is divided into two components, an encoder and a decoder, and each component is optimized individually using the labeled CSI dataset with associated location information to further improve positioning accuracy. Simulation results show that the proposed CC-aided semi-supervised localization yields higher accuracy compared with existing supervised positioning and conventional unsupervised CC approaches.
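A minimal sketch of that two-stage idea, assuming a small 1-D convolutional autoencoder in PyTorch; the CSI tensor shape and layer sizes below are assumptions, not the paper's architecture. The unsupervised stage trains encoder and decoder on CSI reconstruction; the semi-supervised stage would fine-tune the encoder alone so that chart coordinates match known UE locations.

import torch
import torch.nn as nn

# Sketch (assumed architecture) of a convolutional autoencoder for channel
# charting: the encoder maps a CSI tensor to a 3-D chart coordinate; the
# decoder reconstructs the CSI. Dimensions (N_FEAT x N_SUB) are assumptions.
N_FEAT, N_SUB, LATENT = 8, 64, 3

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_FEAT, 16, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, stride=2, padding=1),      # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16, LATENT),   # 3-D chart coordinate
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 32 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1),      # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose1d(16, N_FEAT, kernel_size=4, stride=2, padding=1),  # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 16)
        return self.net(h)

# Unsupervised stage: train on CSI reconstruction loss.
enc, dec = Encoder(), Decoder()
csi = torch.randn(4, N_FEAT, N_SUB)          # toy batch of CSI samples
recon_loss = nn.functional.mse_loss(dec(enc(csi)), csi)
# Semi-supervised stage (not shown): fine-tune enc with an MSE loss between
# enc(csi) and known UE positions for the labeled subset.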
Virtual Reality (VR) has shown great potential to revolutionize the market by providing users with immersive experiences and freedom of movement. Compared to traditional video streaming, VR content is ultra-high-definition and changes dynamically with the user's head and eye movements, which poses significant challenges to realizing that potential. In this paper, we provide a detailed and systematic survey of the enabling technologies of virtual reality and its applications in the Internet of Things (IoT). We identify the major challenges of virtual reality in system design, view prediction, computation, streaming, and quality-of-experience evaluation, and discuss each by extensively surveying and reviewing related papers from recent years. We also introduce several use cases of VR for IoT. Finally, open issues and future research directions are identified and discussed.
In this paper, we aim at interference mitigation in 5G millimeter-wave (mm-Wave) communications by employing beamforming and non-orthogonal multiple access (NOMA) techniques to improve the network's aggregate rate. Despite the potential capacity gains of mm-Wave and NOMA, many technical challenges might hinder that performance gain. In particular, the performance of successive interference cancellation (SIC) diminishes rapidly as the number of users per beam increases, which leads to higher intra-beam interference. Furthermore, intersection regions between adjacent cells give rise to inter-beam, inter-cell interference. To mitigate both interference levels, optimal selection of the number of beams, in addition to optimal allocation of users to those beams, is essential. In this paper, we address the problem of joint user-cell association and selection of the number of beams for the purpose of maximizing the aggregate network capacity. We propose three machine learning-based algorithms: transfer Q-learning (TQL), Q-learning, and best-SINR association with density-based spatial clustering of applications with noise (BSDC), and compare their performance under different scenarios. Under mobility, TQL and Q-learning demonstrate a 12% rate improvement over BSDC at the highest offered traffic load. For stationary scenarios, Q-learning and BSDC outperform TQL; however, TQL achieves about 29% convergence speedup compared to Q-learning.
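As a rough illustration of the Q-learning component, the tabular sketch below treats the number of beams and the association policy as a joint action; the state encoding, action set, and reward signal are placeholders, not the paper's MDP formulation. TQL would simply initialize the Q-table from one learned in a related (source) scenario instead of from zero.

import random
from collections import defaultdict

# Toy tabular Q-learning for the joint beam-count / user-association action.
# States, actions, and reward are illustrative placeholders.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [(n_beams, policy)
           for n_beams in (2, 4, 8)
           for policy in ("max_sinr", "load_balance")]
Q = defaultdict(float)   # Q[(state, action)] -> value

def choose_action(state):
    if random.random() < EPSILON:                       # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])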