
Real Entropy Can Also Predict Daily Voice Traffic for Wireless Network Users

Submitted by: Junyao Guo
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Voice traffic prediction is significant for network deployment optimization and thus for improving network efficiency. The real-entropy-based theoretical bound and the corresponding prediction models have demonstrated their success in mobility prediction. In this paper, real-entropy-based predictability analysis and prediction models are introduced into voice traffic prediction. For this adoption, a traffic quantification method is proposed and discussed. Based on real-world voice traffic data, the prediction accuracies of N-order Markov models, a diffusion-based model and the MF model are presented; among them, the 25-order Markov model performs best and approaches the maximum predictability. This work demonstrates that real entropy can also predict voice traffic well, which broadens the understanding of real-entropy-based prediction theory.
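
To make the approach above concrete, the following minimal sketch (not the paper's implementation) estimates the real entropy of a quantized voice-traffic sequence with the widely used Lempel-Ziv estimator and then solves Fano's inequality for the corresponding maximum predictability. The quantization into N = 4 traffic levels and the example sequence are illustrative assumptions.

```python
import math

def _contains(haystack, needle):
    """Check whether `needle` occurs as a contiguous subsequence of `haystack`."""
    m = len(needle)
    return any(haystack[j:j + m] == needle for j in range(len(haystack) - m + 1))

def real_entropy(seq):
    """Lempel-Ziv estimator of the real entropy (in bits) of a symbol sequence.

    S ~= (n / sum(Lambda_i)) * log2(n), where Lambda_i is the length of the
    shortest substring starting at position i that has not appeared before.
    """
    n = len(seq)
    lambdas = []
    for i in range(n):
        k = 1
        # grow the substring until it no longer appears in the preceding history
        while i + k <= n and _contains(seq[:i], seq[i:i + k]):
            k += 1
        lambdas.append(k)
    return (n / sum(lambdas)) * math.log2(n)

def max_predictability(S, N, tol=1e-6):
    """Solve Fano's inequality S = H(P) + (1 - P) * log2(N - 1) for the
    maximum predictability P, using bisection on P in (1/N, 1)."""
    def fano(p):
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p) if 0 < p < 1 else 0.0
        return h + (1 - p) * math.log2(N - 1)
    lo, hi = 1.0 / N, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # fano(p) is decreasing on this interval; move toward the root of fano(p) = S
        if fano(mid) > S:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# hypothetical hourly voice-traffic volumes quantized into N = 4 discrete levels
traffic_levels = [0, 1, 1, 2, 3, 2, 1, 0, 0, 1, 2, 3, 3, 2, 1, 0]
S = real_entropy(traffic_levels)
P_max = max_predictability(S, N=4)
print(f"real entropy ~ {S:.3f} bits, maximum predictability ~ {P_max:.3f}")
```

An N-order Markov predictor would then be benchmarked against this P_max, which is how the abstract's statement that the 25-order model "approaches the maximum predictability" is to be read.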


Read also

Thanks to its capability of classifying complex phenomena without explicit modeling, deep learning (DL) has been demonstrated to be a key enabler of Wireless Signal Classification (WSC). Although DL can achieve very high accuracy under certain conditions, recent research has unveiled that the wireless channel can disrupt the features learned by the DL model during training, thus drastically reducing the classification performance in real-world live settings. Since retraining classifiers is cumbersome after deployment, existing work has leveraged carefully tailored Finite Impulse Response (FIR) filters that, when applied at the transmitter's side, can restore the features that are lost because of the channel action, i.e., waveform synthesis. However, these approaches compute FIRs using offline optimization strategies, which limits their efficacy in highly dynamic channel settings. In this paper, we improve the state of the art by proposing Chares, a Deep Reinforcement Learning (DRL)-based framework for channel-resilient adaptive waveform synthesis. Chares adapts to new and unseen channel conditions by optimally computing the FIRs in real time through DRL. Chares is a DRL agent whose architecture is based upon Twin Delayed Deep Deterministic Policy Gradients (TD3), which requires minimal feedback from the receiver and explores a continuous action space. Chares has been extensively evaluated on two well-known datasets. We have also evaluated the real-time latency of Chares with an implementation on a field-programmable gate array (FPGA). Results show that Chares increases the accuracy by up to 4.1x with respect to the case where no waveform synthesis is performed and by 1.9x with respect to existing work, and can compute new actions within 41 us.
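
As a rough illustration of the waveform-synthesis step described above, the sketch below applies a short complex-valued FIR filter to an IQ burst; in a Chares-like system the DRL agent would output the taps, but the agent itself is omitted here, and all tap values and signal parameters are illustrative assumptions rather than part of the actual framework.

```python
import numpy as np

def synthesize(iq_samples, fir_taps):
    """Pre-distort a complex baseband waveform with a short FIR filter (waveform synthesis)."""
    return np.convolve(iq_samples, fir_taps, mode="same")

# hypothetical QPSK burst and a 5-tap filter: roughly an identity tap plus small corrections
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=256) / np.sqrt(2)
taps = np.array([0.02 - 0.01j, -0.05j, 1.0 + 0j, 0.03 + 0.02j, -0.01 + 0j])
tx_waveform = synthesize(symbols, taps)  # what would actually be transmitted
```
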
This paper presents the design, implementation and evaluation of In-N-Out, a software-hardware solution for far-field wireless power transfer. In-N-Out can continuously charge a medical implant residing in deep tissues at near-optimal beamforming power, even when the implant moves around inside the human body. To accomplish this, we exploit the unique energy-ball pattern of a distributed antenna array and devise a backscatter-assisted beamforming algorithm that can concentrate RF energy on a tiny spot surrounding the medical implant. Meanwhile, the power levels on other body parts stay at a low level, reducing the risk of overheating. We prototype In-N-Out on 21 software-defined radios and a printed circuit board (PCB). Extensive experiments demonstrate that In-N-Out achieves 0.37 mW average charging power inside a 10 cm-thick pork belly, which is sufficient to wirelessly power a range of commercial medical devices. Our head-to-head comparison with the state-of-the-art approach shows that In-N-Out achieves a 5.4x-18.1x power gain when the implant is stationary, and a 5.3x-7.4x power gain when the implant is in motion.
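
The sketch below only illustrates the basic phase-alignment idea behind focusing RF energy with a distributed antenna array (conjugate weighting of estimated channel gains); it does not reproduce In-N-Out's backscatter-assisted algorithm or its energy-ball pattern, and the channel values are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_antennas = 21  # matches the prototype's radio count; otherwise purely illustrative

# hypothetical complex channel gains from each antenna to the implant
h = (rng.normal(size=n_antennas) + 1j * rng.normal(size=n_antennas)) / np.sqrt(2)

# conjugate (phase-aligned) weights: per-antenna contributions add coherently at the implant
w = np.conj(h) / np.abs(h)

p_focused = np.abs(np.sum(w * h)) ** 2      # coherent combining at the target spot
p_unaligned = np.abs(np.sum(h)) ** 2        # same antennas, no phase alignment
print(f"coherent focusing gain ~ {p_focused / p_unaligned:.1f}x")
```
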
In this paper, we consider the problem of modelling the average delay in an IEEE 802.11 DCF wireless mesh network with a single root node under light traffic. We derive an expression for the mean delay in a co-located wireless mesh network when packet generation is a homogeneous Poisson process with rate lambda. We also show how our analysis can be extended to non-homogeneous Poisson packet generation. We model the mean delay by decoupling the queues into independent M/M/1 queues. Extensive simulations are conducted to verify the analytical results.
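
Since the analysis decouples the network into independent M/M/1 queues, the basic building block is the well-known M/M/1 mean-delay formula W = 1/(mu - lambda); the sketch below states it and cross-checks it with a short Lindley-recursion simulation. The traffic parameters are illustrative assumptions, and this is not the paper's mesh-specific derivation.

```python
import random

def mm1_mean_delay(lam, mu):
    """Mean sojourn time (waiting + service) of an M/M/1 queue: W = 1 / (mu - lam)."""
    assert lam < mu, "queue is unstable unless lambda < mu"
    return 1.0 / (mu - lam)

def simulate_mm1(lam, mu, n_packets=200_000, seed=0):
    """Monte-Carlo check via Lindley's recursion: W_{n+1} = max(0, W_n + S_n - A_{n+1})."""
    rng = random.Random(seed)
    wait, prev_service, total = 0.0, 0.0, 0.0
    for _ in range(n_packets):
        inter_arrival = rng.expovariate(lam)
        wait = max(0.0, wait + prev_service - inter_arrival)  # waiting time of this packet
        prev_service = rng.expovariate(mu)
        total += wait + prev_service                          # sojourn = waiting + service
    return total / n_packets

lam, mu = 0.6, 1.0   # illustrative arrival and service rates
print("analytical mean delay:", mm1_mean_delay(lam, mu))   # 2.5
print("simulated mean delay :", simulate_mm1(lam, mu))
```
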
Ultra-dense deployments in 5G, the next generation of cellular networks, are an alternative to provide ultra-high throughput by bringing the users closer to the base stations. On the other hand, 5G deployments must not incur a large increase in energy consumption in order to keep them cost-effective and, most importantly, to reduce the carbon footprint of cellular networks. We propose a reinforcement learning cell switching algorithm to minimize the energy consumption in ultra-dense deployments without compromising the quality of service (QoS) experienced by the users. In this regard, the proposed algorithm can intelligently learn which small cells (SCs) to turn off at any given time based on the traffic load of the SCs and the macro cell. To validate the idea, we used the open call detail record (CDR) data set from the city of Milan, Italy, and tested our algorithm against typical operational benchmark solutions. With the obtained results, we demonstrate exactly when and how the proposed algorithm can provide energy savings, and moreover how this happens without reducing the QoS of users. Most importantly, we show that our solution has a performance very similar to that of exhaustive search, with the advantage of being scalable and less complex.
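
For intuition only, the sketch below shows a toy tabular Q-learning loop that learns when to switch a single small cell off as a function of a coarse traffic-load level; the actual algorithm operates on CDR-based loads of several SCs and the macro cell, so the states, reward terms and transition model here are illustrative assumptions, not the paper's design.

```python
import random
from collections import defaultdict

# State: coarse traffic-load level (0 = low ... 3 = high); actions: 0 = SC off, 1 = SC on.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(lambda: [0.0, 0.0])

def reward(load_level, action):
    """Illustrative reward: save energy when off, pay a QoS penalty if traffic is high."""
    energy_cost = 1.0 if action == 1 else 0.0
    qos_penalty = 5.0 if (action == 0 and load_level >= 2) else 0.0
    return -(energy_cost + qos_penalty)

rng = random.Random(0)
state = rng.randrange(4)
for _ in range(50_000):
    action = rng.randrange(2) if rng.random() < EPS else max((0, 1), key=lambda a: Q[state][a])
    r = reward(state, action)
    next_state = rng.randrange(4)  # stand-in for the next observed traffic level
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

for s in range(4):
    print(f"load level {s}: learned action = {'on' if Q[s][1] > Q[s][0] else 'off'}")
```
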
Wireless sensors and actuators offer benefits to large industrial control systems. The absence of wires for communication reduces the deployment cost and maintenance effort, and provides greater flexibility for sensor and actuator location and system architecture. These benefits come at the cost of a high probability of communication delay or message loss due to the unreliability of radio-based communication. This unreliability poses a challenge to contemporary control systems that are designed with the assumption of instantaneous and reliable communication. Wireless sensors and actuators create a paradigm shift towards engineering energy-efficient control schemes coupled with robust communication schemes that can maintain system stability in the face of unreliable communication. This paper investigates the feasibility of using the low-power wide-area communication protocol LoRaWAN with an event-triggered control scheme through modelling in Matlab. We show that LoRaWAN is capable of meeting the maximum delay and message loss requirements of an event-triggered controller for certain classes of applications. We also expose the limitations in the use of LoRaWAN when the message size or communication range requirements increase or the underlying physical system is exposed to significant external disturbances.
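
As a minimal sketch of the event-triggered idea (not the paper's Matlab model), the code below lets a sensor transmit its state over a lossy link only when the state has drifted beyond a threshold; the plant dynamics, controller gain, threshold and loss probability are all illustrative assumptions.

```python
import random

def run(steps=200, threshold=0.5, loss_prob=0.1, seed=0):
    rng = random.Random(seed)
    x = 0.0        # scalar plant state
    x_ctrl = 0.0   # last state value successfully delivered to the controller
    transmissions = 0
    for _ in range(steps):
        u = -0.8 * x_ctrl                        # controller acts on its last received value
        x = 0.9 * x + u + rng.gauss(0.0, 0.1)    # plant update with a small disturbance
        if abs(x - x_ctrl) > threshold:          # event trigger: send only when drift is large
            transmissions += 1
            if rng.random() > loss_prob:         # the message may be lost on the wireless link
                x_ctrl = x
    return transmissions

print("messages sent over the link:", run())
```
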