
Data-driven Clustering in Ad-hoc Networks based on Community Detection

Posted by Shufan Huang
Publication date: 2021
Paper language: English





Growing demands on industrial networks lead to increasingly large sensor networks, and the complexity of these networks, together with the need for accurate data, calls for better stability and communication quality. Conventional clustering methods for ad-hoc networks are based on topology and connectivity, which leads to unstable clustering results and low communication quality. In this paper, we focus on two settings: time-evolving networks and multi-channel ad-hoc networks. We model ad-hoc networks as graphs and introduce community detection methods in both settings. In particular, in time-evolving networks, our method uses the results of community detection to ensure stability: using similarity or human-in-the-loop measures, we construct a new weighted graph for the final clustering. In multi-channel networks, we perform channel allocation based on the results of multiplex community detection. Experiments on real-world datasets show that our method outperforms baselines in both stability and quality.
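As a minimal sketch of the core idea, the snippet below models an ad-hoc network as a weighted graph and clusters it with community detection. The Louvain method, the toy topology, and the edge weights are illustrative assumptions, not the paper's exact pipeline.

```python
# A sketch of community-detection-based clustering, assuming Louvain
# as the detector; the paper's similarity/human-in-the-loop weighting
# and multiplex variants are not reproduced here.
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Nodes are sensors; edge weights stand in for link quality
# (e.g., an RSSI-derived similarity).
G = nx.Graph()
G.add_weighted_edges_from([
    ("s1", "s2", 0.9), ("s2", "s3", 0.8), ("s1", "s3", 0.7),
    ("s4", "s5", 0.9), ("s5", "s6", 0.8), ("s4", "s6", 0.7),
    ("s3", "s4", 0.1),  # weak inter-cluster link
])

# Louvain maximizes modularity; each detected community becomes a cluster.
for i, community in enumerate(louvain_communities(G, weight="weight", seed=42)):
    print(f"cluster {i}: {sorted(community)}")
```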


Read also

The design of symbol detectors in digital communication systems has traditionally relied on statistical channel models that describe the relation between the transmitted symbols and the observed signal at the receiver. Here we review a data-driven framework for symbol detection design which combines machine learning (ML) and model-based algorithms. In this hybrid approach, well-known channel-model-based algorithms such as the Viterbi method, BCJR detection, and multiple-input multiple-output (MIMO) soft interference cancellation (SIC) are augmented with ML-based algorithms to remove their dependence on the channel model, allowing the receiver to learn to implement these algorithms solely from data. The resulting data-driven receivers are best suited to systems where the underlying channel models are poorly understood, highly complex, or do not capture the underlying physics well. Our approach is unique in that it replaces only the channel-model-based computations with dedicated neural networks that can be trained from a small amount of data, while keeping the general algorithm intact. Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowledge of the exact channel input-output statistical relationship and in the presence of channel state information uncertainty.
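To make the hybrid idea concrete, here is a minimal numpy sketch in the spirit of such receivers: the Viterbi recursion is kept intact, while per-state log-likelihoods come from a pluggable learned function. The memory-1 ISI channel and the stand-in `learned_loglik` (a trained neural network in practice) are assumptions made only to keep the sketch runnable.

```python
import numpy as np

SYMBOLS = (-1.0, 1.0)                                  # BPSK alphabet
STATES = [(p, c) for p in SYMBOLS for c in SYMBOLS]    # (prev, curr) symbol pairs

def learned_loglik(y_t, state):
    # Stand-in for a neural network trained from data. To keep the
    # sketch self-contained it cheats with the true memory-1 model
    # y_t = x_t + 0.5 * x_{t-1} + noise; a real hybrid receiver would
    # learn this mapping from data instead of assuming it.
    prev, curr = state
    return -0.5 * (y_t - (curr + 0.5 * prev)) ** 2

def viterbi(y):
    # Standard Viterbi recursion; only the likelihood computation is
    # delegated to the (learned) function above.
    score = np.zeros(len(STATES))
    back = []
    for t, y_t in enumerate(y):
        new = np.full(len(STATES), -np.inf)
        ptr = np.zeros(len(STATES), dtype=int)
        for j, (prev_j, _) in enumerate(STATES):
            ll = learned_loglik(y_t, STATES[j])
            for i, (_, curr_i) in enumerate(STATES):
                if t > 0 and curr_i != prev_j:         # keep transitions consistent
                    continue
                if score[i] + ll > new[j]:
                    new[j], ptr[j] = score[i] + ll, i
        score = new
        back.append(ptr)
    j = int(np.argmax(score))                          # trace back the best path
    path = []
    for ptr in reversed(back):
        path.append(STATES[j][1])
        j = ptr[j]
    return np.array(path[::-1])

rng = np.random.default_rng(0)
x = rng.choice(SYMBOLS, size=20)
y = x + 0.5 * np.concatenate(([0.0], x[:-1])) + 0.1 * rng.standard_normal(20)
print("symbol errors:", int(np.sum(viterbi(y) != x)))
```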
In this paper, we investigate the sequence estimation problem of faster-than-Nyquist (FTN) signaling as a promising approach for increasing spectral efficiency (SE) in future communication systems. In doing so, we exploit the concept of Gaussian separability and propose two probabilistic data association (PDA) algorithms with polynomial time complexity to detect binary phase-shift keying (BPSK) FTN signaling. Simulation results show that the proposed PDA algorithm outperforms the recently proposed SSSSE and SSSgb$K$SE algorithms for all SE values with a modest increase in complexity. The PDA algorithm approaches the performance of the semidefinite relaxation (SDRSE) algorithm at an SE of $0.96$ bits/sec/Hz, and it is within a $0.5$ dB signal-to-noise ratio (SNR) penalty at an SE of $1.10$ bits/sec/Hz, for a fixed $\beta = 0.3$.
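As a rough illustration of the PDA idea, the sketch below detects BPSK over a generic linear model $y = Hx + n$, iteratively treating the residual interference on each symbol as Gaussian. The matched-filter formulation and the random matrix standing in for the FTN-induced ISI are assumptions, not the paper's algorithm.

```python
import numpy as np

def pda_bpsk(y, H, noise_var, n_iters=10):
    """PDA detection of BPSK x in y = Hx + n via matched-filter statistics."""
    n = H.shape[1]
    G = H.T @ H
    z = H.T @ y                          # z = Gx + coloured noise
    p = np.full(n, 0.5)                  # P(x_k = +1), flat initialization
    for _ in range(n_iters):
        for k in range(n):
            xb = 2 * p - 1               # soft symbol estimates
            var = 4 * p * (1 - p)        # per-symbol variances
            others = np.arange(n) != k
            # Gaussian-separability step: residual interference from
            # the other symbols is treated as a single Gaussian term.
            mean_k = z[k] - G[k, others] @ xb[others]
            var_k = (G[k, others] ** 2) @ var[others] + noise_var * G[k, k]
            llr = np.clip(2 * G[k, k] * mean_k / var_k, -30, 30)
            p[k] = 1.0 / (1.0 + np.exp(-llr))
    return np.where(p > 0.5, 1.0, -1.0)

rng = np.random.default_rng(1)
n = 16
H = np.eye(n) + 0.2 * rng.standard_normal((n, n))  # stand-in for FTN ISI
x = rng.choice([-1.0, 1.0], size=n)
y = H @ x + 0.1 * rng.standard_normal(n)
print("symbol errors:", int(np.sum(pda_bpsk(y, H, noise_var=0.01) != x)))
```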
Cognitive ad-hoc networks allow users to access an unlicensed/shared spectrum without any coordination by a central controller and are envisioned for futuristic ultra-dense wireless networks. The ad-hoc nature of these networks requires each user to learn and regularly update various network parameters, such as channel quality and the number of users, and to use the learned information to improve spectrum utilization and minimize collisions. For this learning and coordination task, we propose a distributed algorithm based on a multi-player multi-armed bandit approach and a novel signaling scheme. The proposed algorithm needs no prior knowledge of network parameters (users, channels) and can detect and adapt to changes in them, making it suitable for static as well as dynamic networks. Theoretical analysis and extensive simulation results validate the superiority of the proposed algorithm over existing state-of-the-art algorithms.
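The single-user building block behind such schemes is a multi-armed bandit over channels. The sketch below runs plain UCB1 with a Bernoulli reward per transmission; the paper's distributed multi-player coordination and signaling scheme are not reproduced, and the reward model is an assumption.

```python
import math
import random

def ucb1_channel_selection(channel_quality, horizon=5000):
    """Single-user UCB1 over channels; reward is 1/0 per transmission."""
    n = len(channel_quality)
    counts = [0] * n                # plays per channel
    means = [0.0] * n               # empirical success rates
    for t in range(1, horizon + 1):
        if t <= n:
            ch = t - 1              # play each channel once to initialize
        else:
            ch = max(range(n), key=lambda k:
                     means[k] + math.sqrt(2 * math.log(t) / counts[k]))
        reward = 1 if random.random() < channel_quality[ch] else 0
        counts[ch] += 1
        means[ch] += (reward - means[ch]) / counts[ch]
    return counts

random.seed(0)
# Three channels with different success probabilities; the best
# channel should accumulate most of the plays.
print(ucb1_channel_selection([0.3, 0.8, 0.5]))
```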
Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology, and lack of a centralized monitoring point. It is therefore important to search for new architectures and mechanisms to protect wireless networks and mobile computing applications. Intrusion detection systems (IDS) analyze network activities by means of audit data and use patterns of well-known attacks or normal profiles to detect potential attacks. There are two analysis methods: misuse detection and anomaly detection. Misuse detection is not effective against unknown attacks, so the anomaly detection method is used. In this approach, audit data is collected from each mobile node after simulating the attack and compared with the normal behavior of the system; any deviation from normal behavior is considered an attack. Some features of the collected audit data may be redundant or contribute little to the detection process, so it is essential to select the important features to increase the detection rate. This paper focuses on implementing two feature selection methods, namely Markov blanket discovery and a genetic algorithm. In the genetic algorithm, a Bayesian network is constructed over the collected features and a fitness function is calculated; features are selected based on the fitness value. Markov blanket discovery also uses a Bayesian network, and features are selected according to the minimum description length. During the evaluation phase, the performance of both approaches is compared based on detection rate and false alarm rate.
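As a concrete illustration of the genetic-algorithm side, the sketch below evolves binary feature masks; cross-validated k-NN accuracy is a stand-in for the Bayesian-network fitness described above, and the synthetic data is purely illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for the collected audit data.
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

def fitness(mask):
    # Stand-in fitness: cross-validated accuracy on the masked features
    # (the paper scores subsets with a Bayesian network instead).
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

def ga_select(pop_size=20, gens=15, p_mut=0.05):
    pop = rng.random((pop_size, X.shape[1])) < 0.5   # random binary masks
    for _ in range(gens):
        scores = np.array([fitness(m) for m in pop])
        # Truncation selection, one-point crossover, bit-flip mutation.
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(X.shape[1]) < p_mut
            children.append(child)
        pop = np.array(children)
    best = max(pop, key=fitness)
    return np.flatnonzero(best)

print("selected features:", ga_select())
```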
Algorithms for Massive MIMO uplink detection typically rely on a centralized approach, in which baseband data from all antenna modules are routed to a central node for processing. In Massive MIMO, where hundreds or thousands of antennas are expected at the base station, this architecture leads to a bottleneck, with critical limitations in terms of interconnection bandwidth requirements. This paper presents a fully decentralized architecture and algorithms for the Massive MIMO uplink based on recursive methods, which do not require a central node for the detection process. Through a recursive approach and very low-complexity operations, the proposed algorithms provide a sequence of estimates that converges asymptotically to the zero-forcing solution, without the need for dedicated matrix-inversion hardware. The proposed solution achieves a significantly lower interconnection data rate than other architectures, enabling future scalability.
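As one concrete instance of such a recursion, the sketch below lets each antenna module refine a shared symbol estimate using only its own channel row, as if the estimate were daisy-chained across modules. The Kaczmarz-style update with a decaying step is an assumption; the paper's specific recursive algorithms are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_users = 128, 8
H = rng.standard_normal((n_ant, n_users)) / np.sqrt(n_ant)   # channel matrix
x = rng.choice([-1.0, 1.0], size=n_users)                    # BPSK user symbols
y = H @ x + 0.05 * rng.standard_normal(n_ant)                # per-antenna samples

x_hat = np.zeros(n_users)
for sweep in range(20):
    step = 1.0 / (1 + sweep)                     # decaying relaxation
    for a in range(n_ant):                       # one antenna module at a time
        h = H[a]                                 # module uses only its own row
        residual = y[a] - h @ x_hat              # local residual
        x_hat += step * residual * h / (h @ h)   # O(n_users) update, no inversion

x_zf = np.linalg.lstsq(H, y, rcond=None)[0]      # centralized ZF baseline
print("gap to ZF:", float(np.linalg.norm(x_hat - x_zf)))
print("symbol errors:", int(np.sum(np.sign(x_hat) != x)))
```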