
Robust Clock Synchronization via Low Rank Approximation in Wireless Networks

Published by: Dr. Osama Helaly
Publication date: 2020
Research field: Electronic Engineering
Paper language: English





Clock synchronization has become a key design objective in wireless networks because of its essential role in many applications. However, since the wireless link is prone to random network delays under unreliable channel conditions, accurate clock synchronization among wireless nodes is in general difficult to achieve. This letter proposes robust clock synchronization algorithms based on low rank matrix approximation, which are able to correct timestamps in the presence of random network delays. We design a low rank approximation based maximum likelihood estimator (MLE) to jointly estimate the clock offset and clock skew under the two-way message exchange mechanism, assuming a Gaussian delay distribution. By formulating timestamp correction as a low rank approximation problem, we can solve it in the singular value decomposition (SVD) domain as well as via nuclear norm minimization. Numerical results show that the proposed schemes can correct noisy timestamps and thus achieve more robust synchronization performance than the MLE.
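To make the low-rank idea concrete, here is a minimal sketch of the timestamp-correction step under a simple two-way exchange model with Gaussian delays. The matrix layout, the rank-2 choice, and the closing least-squares fit are my illustrative assumptions, not the letter's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-way exchange model (illustrative, not the letter's exact
# formulation): N rounds of timestamps (T1, T2, T3, T4) between nodes A
# and B, where B's clock reads beta * t + theta at reference time t.
N = 50
theta, beta = 3.0, 1.0 + 50e-6     # clock offset (s) and skew
prop = 1e-3                        # fixed propagation delay (s)
T1 = np.cumsum(rng.uniform(0.5, 1.5, N))        # A's send times
X = rng.normal(0, 2e-4, N)                      # random forward delays
Y = rng.normal(0, 2e-4, N)                      # random reverse delays
T2 = beta * (T1 + prop + X) + theta             # B receives
T3 = T2 + 1e-2                                  # B replies after a fixed gap
T4 = (T3 - theta) / beta + prop + Y             # A receives

# Without the random delays every column below is affine in T1, so the
# stacked timestamp matrix is (nearly) rank 2; truncating its SVD to
# rank 2 denoises the timestamps (the letter also solves this step via
# nuclear norm minimization).
M = np.column_stack([T1, T2, T3, T4])
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_hat = (U[:, :2] * s[:2]) @ Vt[:2]

# Classical two-way MLE-style fit on the corrected timestamps: with
# symmetric delays, (t1 + t4)/2 ≈ ((t2 + t3)/2 - theta) / beta.
t1, t2, t3, t4 = M_hat.T
A = np.column_stack([np.ones(N), (t2 + t3) / 2])
coef, *_ = np.linalg.lstsq(A, (t1 + t4) / 2, rcond=None)
beta_hat = 1 / coef[1]
theta_hat = -coef[0] * beta_hat
print(f"estimated skew {beta_hat:.6f}, offset {theta_hat:.4f} s")
```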




Read also

Clock synchronization and ranging over a wireless network with low communication overhead is a challenging goal with tremendous impact. In this paper, we study the use of time-to-digital converters in wireless sensors, which provide clock synchronization and ranging at negligible communication overhead through a sawtooth signal model for round-trip times between two nodes. In particular, we derive Cramér-Rao lower bounds for a linearization of the sawtooth signal model, and we thoroughly evaluate simple estimation techniques by simulation, giving clear and concise performance references for this technology.
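As a rough illustration of the signal model, the sketch below simulates a drifting sawtooth in the round-trip times and fits one linear segment; the parametrization (period T, delay rho, frequency offset eps) is my assumption, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sawtooth RTT model (illustrative parametrization): the
# responder's time-to-digital converter snaps the reply to its clock
# edges, so the measured round-trip time wraps as a sawtooth while the
# relative clock phase drifts linearly with a small frequency offset.
T, rho, eps, K = 1.0, 0.37, 2e-3, 400   # clock period, delay, skew, samples
k = np.arange(K)
phase = (rho + eps * k) % T             # drifting phase of the reply edge
rtt = 2 * rho + (T - phase) + rng.normal(0, 0.01, K)

# Between wrap-arounds the model is affine in k, so a simple linearized
# least-squares fit on the first segment recovers slope and intercept;
# the paper's CRLBs bound the variance of estimators of this kind.
wrap = np.flatnonzero(np.diff(rtt) > T / 2)     # detect sawtooth resets
end = int(wrap[0]) + 1 if wrap.size else K
A = np.column_stack([np.ones(end), k[:end].astype(float)])
coef, *_ = np.linalg.lstsq(A, rtt[:end], rcond=None)
print(f"slope ≈ {coef[1]:.5f} (true -eps = {-eps}), intercept ≈ {coef[0]:.3f}")
```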
Time synchronization is important for a variety of applications in wireless sensor networks including scheduling communication resources, coordinating sensor wake/sleep cycles, and aligning signals for distributed transmission/reception. This paper describes a non-hierarchical approach to time synchronization in wireless sensor networks that has low overhead and can be implemented at the physical and/or MAC layers. Unlike most of the prior approaches, the approach described in this paper allows all nodes to use exactly the same distributed algorithm and does not require local averaging of measurements from other nodes. Analytical results show that the non-hierarchical approach can provide monotonic expected convergence of both drifts and offsets under broad conditions on the network topology and local clock update stepsize. Numerical results are also presented verifying the analysis under two particular network topologies.
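For intuition, here is a toy sketch in the same non-hierarchical spirit. The specific rule (each hearer nudges its offset toward the sender's by a stepsize mu) is my assumption for illustration, not the paper's algorithm, and drift correction is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy non-hierarchical sketch (assumed rule, not the paper's algorithm):
# every node runs the same code and, on each overheard broadcast, nudges
# its logical offset toward the sender's by a fixed stepsize mu; there is
# no reference node and no averaging over multiple neighbors.
n, mu, rounds = 8, 0.3, 300
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring topology
offset = rng.normal(0.0, 1.0, n)        # initial logical-clock offsets

for _ in range(rounds):
    i = int(rng.integers(n))            # a random node broadcasts its clock
    for j in neighbors[i]:              # each hearer updates on its own
        offset[j] += mu * (offset[i] - offset[j])

print("max pairwise offset:", offset.max() - offset.min())
```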
A common technique for compressing a neural network is to compute the $k$-rank $\ell_2$ approximation $A_{k,2}$ of the matrix $A \in \mathbb{R}^{n \times d}$ that corresponds to a fully connected layer (or embedding layer). Here, $d$ is the number of neurons in the layer, $n$ is the number in the next one, and $A_{k,2}$ can be stored in $O((n+d)k)$ memory instead of $O(nd)$. This $\ell_2$-approximation minimizes the sum over every entry to the power of $p=2$ in the matrix $A - A_{k,2}$, among every matrix $A_{k,2} \in \mathbb{R}^{n \times d}$ whose rank is $k$. While it can be computed efficiently via SVD, the $\ell_2$-approximation is known to be very sensitive to outliers (far-away rows). Hence, machine learning uses, e.g., Lasso Regression, $\ell_1$-regularization, and $\ell_1$-SVM, which rely on the $\ell_1$-norm. This paper suggests replacing the $k$-rank $\ell_2$ approximation by $\ell_p$, for $p \in [1,2]$. We then provide practical and provable approximation algorithms to compute it for any $p \geq 1$, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage. For example, our approach achieves $28\%$ compression of RoBERTa's embedding layer with only a $0.63\%$ additive drop in accuracy (without fine-tuning) on average over all tasks in GLUE, compared to an $11\%$ drop using the existing $\ell_2$-approximation. Open code is provided for reproducing and extending our results.
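The baseline the paper improves on is easy to sketch: the snippet below computes the $k$-rank $\ell_2$ approximation via SVD, stores the two factors in $O((n+d)k)$ memory, and shows how a single outlier row inflates the entrywise error. The $\ell_p$ variant itself relies on the authors' computational-geometry machinery and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Baseline k-rank l2 approximation via SVD; the matrix A stands in for a
# fully connected layer's weights (sizes here are arbitrary).
n, d, k = 256, 64, 8
A = rng.normal(size=(n, d))
A[0] += 100.0                              # one far-away (outlier) row

U, s, Vt = np.linalg.svd(A, full_matrices=False)
L = U[:, :k] * s[:k]                       # n x k factor
R = Vt[:k]                                 # k x d factor
# Storing L and R takes O((n + d) k) memory instead of O(n d) for A.
A_k2 = L @ R

# The l2-optimal fit is dominated by the outlier row, which is exactly
# the sensitivity the l_p (p in [1, 2]) approximations mitigate.
err = A - A_k2
print("l2 error:", np.linalg.norm(err))
print("l1 error:", np.abs(err).sum())
print("outlier row share of l1 error:", np.abs(err[0]).sum() / np.abs(err).sum())
```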
Synchronization and ranging in internet of things (IoT) networks are challenging due to the narrowband nature of signals used for communication between IoT nodes. Recently, several estimators for range estimation using phase difference of arrival (PDoA) measurements of narrowband signals have been proposed. However, these estimators are based on data models which do not consider the impact of clock-skew on the range estimation. In this paper, clock-skew and range estimation are studied under a unified framework. We derive a novel and precise data model for PDoA measurements which incorporates the unknown clock-skew effects. We then formulate joint estimation of the clock-skew and range as a two-dimensional (2-D) frequency estimation problem of a single complex sinusoid. Furthermore, we propose: (i) a two-way communication protocol for collecting PDoA measurements and (ii) a weighted least squares (WLS) algorithm for joint estimation of clock-skew and range leveraging the shift invariance property of the measurement data. Finally, through numerical experiments, the performance of the proposed protocol and estimator is compared against the Cramér-Rao lower bound, demonstrating that the proposed estimator is asymptotically efficient.
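To illustrate the reduction, the sketch below generates a single noisy 2-D complex sinusoid and locates its frequencies with a zero-padded 2-D FFT peak. The grid sizes and the FFT-peak estimator are my simplified stand-ins for the paper's protocol and WLS estimator:

```python
import numpy as np

rng = np.random.default_rng(4)

# A single 2-D complex sinusoid whose two normalized frequencies play the
# roles of range and clock-skew (the mapping is assumed for illustration).
M, N = 16, 32                      # assumed measurement grid dimensions
f1, f2 = 0.12, -0.31               # true normalized frequencies
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
x = np.exp(2j * np.pi * (f1 * m + f2 * n))
x += 0.05 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))

# Coarse 2-D frequency estimation: zero-pad and pick the 2-D FFT peak.
X = np.fft.fft2(x, s=(8 * M, 8 * N))
p, q = np.unravel_index(np.argmax(np.abs(X)), X.shape)
f1_hat = p / (8 * M)
f2_hat = q / (8 * N)
f1_hat -= (f1_hat > 0.5)           # map peaks above Nyquist to negatives
f2_hat -= (f2_hat > 0.5)
print(f"f1 ≈ {f1_hat:.3f}, f2 ≈ {f2_hat:.3f}")
```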
We study the problem of clock synchronization in highly dynamic networks, where communication links can appear or disappear at any time. The nodes in the network are equipped with hardware clocks, but the rate of the hardware clocks can vary arbitrarily within specific bounds, and the estimates that nodes can obtain about the clock values of other nodes are inherently inaccurate. Our goal in this setting is to output a logical clock at each node such that the logical clocks of any two nodes are not too far apart, and nodes that remain close to each other in the network for a long time are better synchronized than distant nodes. This property is called gradient clock synchronization. Gradient clock synchronization has been widely studied in the static setting, where the network topology does not change. We show that the asymptotically optimal bounds obtained for the static case also apply to our highly dynamic setting: if two nodes remain at distance $d$ from each other for sufficiently long, it is possible to upper bound the difference between their clock values by $O(d \log(D/d))$, where $D$ is the diameter of the network. This is known to be optimal even for static networks. Furthermore, we show that our algorithm has optimal stabilization time: when a path of length $d$ appears between two nodes, the time required until the clock skew between the two nodes is reduced to $O(d \log(D/d))$ is $O(D)$, which we prove to be optimal. Finally, the techniques employed for the more intricate analysis of the algorithm for dynamic graphs provide additional insights that are also of interest for the static setting. In particular, we establish self-stabilization of the gradient property within $O(D)$ time.
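As a toy illustration of the gradient idea (a deliberate simplification of my own, on a static ring rather than a dynamic graph, and not the algorithm analyzed in the paper), each node below runs its logical clock at its hardware rate and switches to a boosted rate when some neighbor appears sufficiently far ahead:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy gradient-style rule (my simplification, not the paper's algorithm):
# each node advances its logical clock at its hardware rate and adds a
# "fast mode" boost whenever some neighbor's clock appears sufficiently
# far ahead, pulling lagging nodes forward.
n, steps, dt = 10, 4000, 1e-3
boost, gap = 0.1, 0.01                      # fast-mode rate gain, trigger gap
rate = 1 + rng.uniform(-1e-2, 1e-2, n)      # bounded hardware clock rates
logical = rng.uniform(0.0, 0.05, n)         # initial logical clocks
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # static ring here

for _ in range(steps):
    ahead = np.array([max(logical[j] - logical[i] for j in nbrs[i])
                      for i in range(n)])
    logical += dt * rate * (1 + boost * (ahead > gap))    # boost if lagging

print("global skew:", logical.max() - logical.min())
```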