
This paper studies transmit beamforming in a downlink integrated sensing and communication (ISAC) system, where a base station (BS) equipped with a uniform linear array (ULA) sends combined information-bearing and dedicated radar signals to simultaneously perform downlink multiuser communication and radar target sensing. Under this setup, we maximize the radar sensing performance (in terms of minimizing the beampattern matching error or maximizing the minimum beampattern gain), subject to the communication users' minimum signal-to-interference-plus-noise ratio (SINR) requirements and the BS's transmit power constraints. In particular, we consider two types of communication receivers, namely Type-I and Type-II receivers, which do not have and do have the capability of canceling the interference from the \emph{a priori} known dedicated radar signals, respectively. Under both Type-I and Type-II receivers, the beampattern matching and minimum beampattern gain maximization problems are globally optimally solved by applying the semidefinite relaxation (SDR) technique, together with a rigorous proof of the tightness of SDR for both receiver types under the two design criteria. It is shown that, at the optimum, dedicated radar signals are not required with Type-I receivers under some specific conditions, while dedicated radar signals are always needed to enhance the performance with Type-II receivers. Numerical results show that the minimum beampattern gain maximization leads to significantly higher beampattern gains at the worst-case sensing angles with a much lower computational complexity than the beampattern matching design. It is also shown that, by exploiting the capability of canceling the interference caused by the radar signals, the case with Type-II receivers achieves better sensing performance than that with Type-I receivers and other conventional designs.
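As a toy illustration of the beampattern-gain quantity this abstract optimizes (not the paper's SDR design; the function names, antenna count, and steered angle below are illustrative choices of mine), the transmit beampattern gain at an angle is the quadratic form of the ULA steering vector with the transmit covariance matrix:

```python
import numpy as np

def steering_vector(n_antennas, theta_deg, spacing=0.5):
    # Steering vector of a half-wavelength-spaced ULA at angle theta (degrees).
    k = np.arange(n_antennas)
    phase = 2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def beampattern_gain(R, theta_deg):
    # Transmit beampattern gain a(theta)^H R a(theta) for covariance R.
    a = steering_vector(R.shape[0], theta_deg)
    return float(np.real(a.conj() @ R @ a))

# Toy rank-one covariance: unit transmit power steered toward 20 degrees.
N = 8
a0 = steering_vector(N, 20.0)
R = np.outer(a0, a0.conj()) / N
peak = beampattern_gain(R, 20.0)   # equals N for this steered covariance
```

A full design would instead make `R` a decision variable and maximize the worst-case gain subject to SINR constraints, which is where SDR enters.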
Representing a true label as a one-hot vector is a common practice in training text classification models. However, the one-hot representation may not adequately reflect the relation between instances and labels, as labels are often not completely independent and instances may relate to multiple labels in practice. Such inadequate one-hot representations tend to train the model to be over-confident, which may result in arbitrary predictions and model overfitting, especially for confused datasets (datasets with very similar labels) or noisy datasets (datasets with labeling errors). While training models with label smoothing (LS) can ease this problem to some degree, it still fails to capture the realistic relations among labels. In this paper, we propose a novel Label Confusion Model (LCM) as an enhancement component for current popular text classification models. During training, LCM learns label confusion to capture the semantic overlap among labels by calculating the similarity between instances and labels, and generates a better label distribution to replace the original one-hot label vector, thus improving the final classification performance. Extensive experiments on five text classification benchmark datasets reveal the effectiveness of LCM for several widely used deep learning classification models. Further experiments also verify that LCM is especially helpful for confused or noisy datasets and superior to the label smoothing method.
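A minimal numpy sketch of the idea of softening a one-hot target with instance-label similarity (my own simplified reading of the abstract, not the paper's exact architecture; the embeddings, `alpha`, and function names are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def simulated_label_distribution(instance_vec, label_vecs, one_hot, alpha=4.0):
    # Similarity between the instance and every label embedding gives a
    # "confusion" distribution over labels; mixing it with the scaled one-hot
    # target yields a softened label distribution for training.
    confusion = softmax(label_vecs @ instance_vec)
    return softmax(alpha * one_hot + confusion)

rng = np.random.default_rng(0)
labels = rng.normal(size=(4, 16))          # 4 hypothetical label embeddings
x = labels[2] + 0.1 * rng.normal(size=16)  # instance close to label 2
y = np.zeros(4); y[2] = 1.0
dist = simulated_label_distribution(x, labels, y)
```

The softened `dist` still peaks at the true label but assigns nonzero mass to semantically close labels, unlike the hard one-hot vector.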
With the emergence of 4K/8K video, the throughput requirement of video delivery will keep growing to tens of Gbps. Other new high-throughput and low-latency video applications, including augmented reality (AR), virtual reality (VR), and online gaming, are also proliferating. Due to the associated stringent requirements, supporting these applications over a wireless local area network (WLAN) is far beyond the capabilities of the new WLAN standard, IEEE 802.11ax. To meet these emerging demands, the IEEE 802.11 working group will release a new amendment standard, IEEE 802.11be Extremely High Throughput (EHT), also known as Wireless Fidelity (Wi-Fi) 7. This article provides a comprehensive survey of the key medium access control (MAC) layer and physical layer (PHY) techniques being discussed in the EHT task group, including the channelization and tone plan, multiple resource unit (multi-RU) support, 4096 quadrature amplitude modulation (4096-QAM), preamble designs, multi-link operations (e.g., multi-link aggregation and channel access), multiple-input multiple-output (MIMO) enhancements, multiple access point (multi-AP) coordination (e.g., multi-AP joint transmission), and enhanced link adaptation and retransmission protocols (e.g., hybrid automatic repeat request (HARQ)). This survey covers both the critical technologies being discussed in the EHT standard and the related latest progress in worldwide research. In addition, potential developments beyond EHT are discussed to provide possible future research directions for WLAN.
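A first-order sanity check on one headline feature above, the jump from 1024-QAM (802.11ax) to 4096-QAM (802.11be): the raw modulation gain is just the ratio of bits per symbol, ignoring coding rate, OFDM numerology, and MIMO (so this is a back-of-the-envelope sketch, not a PHY-rate calculation):

```python
import math

def bits_per_symbol(qam_order):
    # A square M-QAM constellation carries log2(M) bits per symbol.
    return math.log2(qam_order)

ax_bits = bits_per_symbol(1024)   # 802.11ax top constellation: 10 bits/symbol
be_bits = bits_per_symbol(4096)   # 802.11be top constellation: 12 bits/symbol
gain = be_bits / ax_bits          # 1.2x raw modulation gain
```

This 20% per-symbol gain is why 4096-QAM alone cannot reach "extremely high throughput"; the larger contributions come from wider channels, multi-RU, and multi-link operation.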
More and more emerging Internet of Things (IoT) applications involve status updates, where various IoT devices monitor certain physical processes and report their latest statuses to the relevant information fusion nodes. A new performance measure, termed the age of information (AoI), has recently been proposed to quantify information freshness in time-critical IoT applications. Due to the large number of devices in future IoT networks, decentralized channel access protocols (e.g., random access) are preferable thanks to their low network overhead. Built on the AoI concept, some recent efforts have developed several AoI-oriented ALOHA-like random access protocols for boosting network-wide information freshness. However, all relevant works have focused on theoretical designs and analysis; the development and implementation of a working prototype to evaluate and further improve these random access protocols in practice have been largely overlooked. Motivated by this, we build a software-defined radio (SDR) prototype for testing and comparing the performance of recently proposed AoI-oriented random access protocols. To this end, we implement a time-slotted wireless system by devising a simple yet effective over-the-air time synchronization scheme, in which beacons that serve as reference timing packets are broadcast by an access point from time to time. For a complete working prototype, we also design the frame structures of the various packets exchanged within the system. Finally, we design a set of experiments, implement them on our prototype, and test the considered algorithms in an office environment.
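To make the AoI metric concrete, a minimal simulation of plain slotted ALOHA over a collision channel can be sketched as follows (a generic textbook baseline of my own, not one of the paper's AoI-oriented protocols; node count and transmit probability are arbitrary):

```python
import numpy as np

def average_aoi_slotted_aloha(num_nodes, tx_prob, num_slots, seed=0):
    # Each node transmits a freshly generated update with probability tx_prob
    # per slot; a slot succeeds only if exactly one node transmits.
    rng = np.random.default_rng(seed)
    age = np.ones(num_nodes)          # current AoI of each node, in slots
    total = 0.0
    for _ in range(num_slots):
        tx = rng.random(num_nodes) < tx_prob
        if tx.sum() == 1:
            age[np.argmax(tx)] = 0    # successful delivery resets that node's age
        age += 1                      # everyone's information ages by one slot
        total += age.mean()
    return total / num_slots          # network-average AoI

aoi = average_aoi_slotted_aloha(num_nodes=10, tx_prob=0.1, num_slots=20000)
```

AoI-oriented protocols improve on this baseline by making the transmit decision depend on each node's instantaneous age rather than a fixed probability.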
We propose a general, very fast method to approximate the solution of a parabolic partial differential equation (PDE) with explicit formulas. Our method also provides equally fast approximations of the derivatives of the solution, which is a challenge for many other methods. Our approach is based on a computable series expansion in terms of a small parameter. As an example, we treat in detail the important case of the SABR PDE for $\beta = 1$, namely $\partial_{\tau}u = \sigma^2 \big[ \frac{1}{2} (\partial^2_x u - \partial_x u) + \nu \rho \, \partial_x \partial_\sigma u + \frac{1}{2} \nu^2 \partial^2_\sigma u \big] + \kappa (\theta - \sigma) \partial_\sigma u$, by choosing $\nu$ as the small parameter. This yields $u = u_0 + \nu u_1 + \nu^2 u_2 + \ldots$, with $u_j$ independent of $\nu$. The terms $u_j$ are explicitly computable, which is also a challenge for many other, related methods. Truncating this expansion leads to computable approximations of $u$ that are in closed form, and hence can be evaluated very quickly. Most of the other related methods use the time $\tau$ as the small parameter. The advantage of our method is that it leads to shorter, and hence easier to determine and to generalize, formulas. We also obtain an explicit expansion for the implied volatility in the SABR model in terms of $\nu$, similar to Hagan's formula, but also including the \emph{mean-reverting term}. We provide several numerical tests that show the performance of our method. In particular, we compare our formula to the one due to Hagan. Our results also behave well when used for actual market data and show the mean-reverting property of the volatility.
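The truncation idea can be illustrated on a toy function rather than the SABR terms themselves (which the paper derives): for $u(x;\nu) = e^{\nu x}$ the expansion coefficients are $u_j = x^j/j!$, and a second-order truncation is already accurate for small $\nu$. This is purely my illustrative example of the $\nu$-expansion mechanism:

```python
import math

def truncated_expansion(x, nu, order=2):
    # u_0 + nu*u_1 + nu^2*u_2 + ... for the toy case u(x; nu) = exp(nu*x),
    # where u_j = x^j / j! is independent of nu, mirroring the paper's setup.
    return sum(nu**j * x**j / math.factorial(j) for j in range(order + 1))

exact = math.exp(0.05 * 2.0)               # u(2; nu=0.05)
approx = truncated_expansion(2.0, 0.05)    # 1 + 0.1 + 0.005 = 1.105
err = abs(exact - approx)                  # O(nu^3) truncation error
```

The point is that each `order` adds one closed-form term, so the approximation is evaluated in constant time, unlike a PDE solve.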
Pei Zhou, Kaijun Cheng, Xiao Han (2018)
Millimeter-wave (mmWave), with its large available spectrum, is considered the most promising frequency band for future wireless communications. IEEE 802.11ad and IEEE 802.11ay, operating on 60 GHz mmWave, are the two most anticipated wireless local area network (WLAN) technologies for ultra-high-speed communications. For the IEEE 802.11ay standard, still under development, there are plenty of proposals from companies and researchers involved with the IEEE 802.11ay task group. In this survey, we conduct a comprehensive review of the medium access control (MAC) layer issues for IEEE 802.11ay; some cross-layer technologies spanning the physical layer (PHY) and MAC are also included. We start with MAC-related technologies in IEEE 802.11ad and discuss the design challenges of mmWave communications, leading to some MAC-related technologies for IEEE 802.11ay. We then elaborate on important design issues for IEEE 802.11ay. Specifically, we review channel bonding and aggregation for IEEE 802.11ay, and point out the major differences between the two technologies. Then, we describe channel access and channel allocation in IEEE 802.11ay, including spatial sharing and interference mitigation technologies. After that, we present an in-depth survey on beamforming training (BFT), beam tracking, single-user multiple-input multiple-output (SU-MIMO) beamforming and multi-user multiple-input multiple-output (MU-MIMO) beamforming. Finally, we discuss some open design issues and future research directions for mmWave WLANs. We hope that this paper provides a good introduction to this exciting research area for future wireless systems.
We study the asymptotic distributions of the spiked eigenvalues and the largest nonspiked eigenvalue of the sample covariance matrix under a general covariance model with divergent spiked eigenvalues, while the other eigenvalues are bounded but otherwise arbitrary. The limiting normal distribution for the spiked sample eigenvalues is established. It has the distinctive features that the asymptotic mean relies not only on the population spikes but also on the nonspikes, and that the asymptotic variance in general depends on the population eigenvectors. In addition, the limiting Tracy-Widom law for the largest nonspiked sample eigenvalue is obtained. Estimation of the number of spikes and the convergence of the leading eigenvectors are also considered. The results hold even when the number of spikes diverges. As a key technical tool, we develop a central limit theorem for a type of random quadratic form in which the random vectors and random matrices involved are dependent. This result may be of independent interest.
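The spiked/nonspiked separation described above is easy to see numerically. The following sketch (my own toy instance with a diagonal population covariance, not the paper's general model) shows the top sample eigenvalue tracking the spike while the rest stay near the Marchenko-Pastur bulk:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, spike = 100, 2000, 25.0
# Population covariance: one large spike, bulk eigenvalues all equal to 1.
pop = np.ones(p); pop[0] = spike
X = rng.normal(size=(n, p)) * np.sqrt(pop)    # rows ~ N(0, diag(pop))
S = X.T @ X / n                                # sample covariance
eigs = np.sort(np.linalg.eigvalsh(S))[::-1]

gamma = p / n
bulk_edge = (1 + np.sqrt(gamma))**2            # Marchenko-Pastur right edge
biased_spike = spike * (1 + gamma / (spike - 1))  # first-order spike bias
```

Here `eigs[0]` sits near `biased_spike` (above the population spike, illustrating the bias the limiting mean must account for), while `eigs[1]` and below cluster under `bulk_edge`, where Tracy-Widom fluctuations apply.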
Locating the sources of diffusion and spreading from minimal data is a significant problem in network science with great practical value to society. However, a general theoretical framework for optimal source localization has been lacking. Combining the controllability theory for complex networks with compressive sensing, we develop a framework with high efficiency and robustness for optimal source localization in arbitrary weighted networks with an arbitrary distribution of sources. We offer a minimum-output analysis to quantify source locatability through a minimal number of messenger nodes that produce sufficient measurements for fully locating the sources. Once the minimum set of messenger nodes is identified, the problem of optimal source localization becomes one of sparse signal reconstruction, which can be solved using compressive sensing. Application of our framework to model and empirical networks demonstrates that sources in homogeneous and denser networks are more readily located. A surprising finding is that, for a connected undirected network with random link weights and weak noise, a single messenger node is sufficient for locating any number of sources. The framework deepens our understanding of the network source localization problem and offers efficient tools with broad applications.
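The sparse-reconstruction step can be sketched with a generic greedy solver. Below, a random measurement matrix stands in for the messenger-node measurements and orthogonal matching pursuit recovers a 3-sparse source vector; this is a standard compressive-sensing illustration of mine, not the paper's controllability-derived construction:

```python
import numpy as np

def omp(A, y, sparsity):
    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the residual, then refit on the selected support by least squares.
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 30, 100, 3                  # 30 measurements, 100 nodes, 3 sources
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true                        # noiseless measurements
x_hat = omp(A, y, k)
```

With far fewer measurements than nodes (30 vs. 100), the sparse source vector is recovered exactly in the noiseless case, which is the leverage the framework exploits.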
Let $\mathbf{Z}_{M_1\times N}=\mathbf{T}^{\frac{1}{2}}\mathbf{X}$, where $(\mathbf{T}^{\frac{1}{2}})^2=\mathbf{T}$ is a positive definite matrix and $\mathbf{X}$ consists of independent random variables with mean zero and variance one. This paper proposes a unified matrix model $$\boldsymbol{\Omega}=(\mathbf{Z}\mathbf{U}_2\mathbf{U}_2^T\mathbf{Z}^T)^{-1}\mathbf{Z}\mathbf{U}_1\mathbf{U}_1^T\mathbf{Z}^T,$$ where $\mathbf{U}_1$ and $\mathbf{U}_2$ are isometric with dimensions $N\times N_1$ and $N\times (N-N_2)$ respectively, such that $\mathbf{U}_1^T\mathbf{U}_1=\mathbf{I}_{N_1}$, $\mathbf{U}_2^T\mathbf{U}_2=\mathbf{I}_{N-N_2}$ and $\mathbf{U}_1^T\mathbf{U}_2=0$. Moreover, $\mathbf{U}_1$ and $\mathbf{U}_2$ (random or non-random) are independent of $\mathbf{Z}_{M_1\times N}$, and with probability tending to one, $\mathrm{rank}(\mathbf{U}_1)=N_1$ and $\mathrm{rank}(\mathbf{U}_2)=N-N_2$. We establish the asymptotic Tracy-Widom distribution for its largest eigenvalue under moment assumptions on $\mathbf{X}$ when $N_1$, $N_2$ and $M_1$ are comparable. By selecting appropriate matrices $\mathbf{U}_1$ and $\mathbf{U}_2$, the asymptotic distributions of the maximum eigenvalues of the matrices used in Canonical Correlation Analysis (CCA) and of F matrices (including centered and non-center
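A small numerical instance of the matrix model can be built directly from its definition (my toy choice $\mathbf{T}=\mathbf{I}$, with $\mathbf{U}_1$, $\mathbf{U}_2$ taken as disjoint blocks of a random orthogonal matrix so that the isometry and orthogonality conditions hold by construction; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
M1, N, N1, N2 = 20, 200, 30, 40
Z = rng.normal(size=(M1, N))                   # T = I, so Z = X
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))   # random orthogonal basis
U1 = Q[:, :N1]                                 # N x N1, U1^T U1 = I
U2 = Q[:, N1:N1 + (N - N2)]                    # N x (N-N2), U1^T U2 = 0

A = Z @ U1 @ U1.T @ Z.T                        # numerator part
B = Z @ U2 @ U2.T @ Z.T                        # invertible M1 x M1 part
evals = np.linalg.eigvals(np.linalg.solve(B, A))  # eigenvalues of Omega
lam_max = float(np.max(evals.real))
```

Since `A` and `B` are both positive definite here, the eigenvalues of $\boldsymbol{\Omega}=\mathbf{B}^{-1}\mathbf{A}$ are real and positive; the paper's result concerns the fluctuation of `lam_max` as the dimensions grow proportionally.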
The idea that the success rate of a team increases when playing at home is broadly accepted and documented for a wide variety of sports. Investigations of the so-called home advantage phenomenon date back to the 1970s and have ever since attracted the attention of scholars and sports enthusiasts. These studies have mainly focused on identifying the phenomenon and trying to correlate it with external factors such as crowd noise and referee bias. Much less is known about the effects of home advantage on the microscopic dynamics of the game (within the game) or about possible team-specific and evolving features of this phenomenon. Here we present a detailed study of these features in the National Basketball Association (NBA). By analyzing play-by-play events of more than sixteen thousand games spanning thirteen NBA seasons, we have found that home advantage affects the microscopic dynamics of the game by increasing the scoring rates and decreasing the time intervals between scores of teams playing at home. We verified that these two features differ among NBA teams; for instance, the scoring rate of the Cleveland Cavaliers increases by 0.16 points per minute (averaged over the seasons 2004-05 to 2013-14) when playing at home, whereas for the New Jersey Nets (now the Brooklyn Nets) this rate increases by only 0.04 points per minute. We further observed that these microscopic features have evolved over time in a non-trivial manner when analyzed team by team. However, after averaging over all teams, some regularities emerge; in particular, we noticed that the average differences in the scoring rates and in the characteristic times (related to the time intervals between scores) have slightly decreased over time, suggesting a weakening of the phenomenon.
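The scoring-rate comparison above reduces to points divided by minutes played, split by venue. A sketch of that computation on hypothetical per-game totals (the numbers below are fabricated for illustration only, not NBA data):

```python
import numpy as np

def scoring_rate(points, minutes):
    # Aggregate points-per-minute rate over a set of games.
    return np.sum(points) / np.sum(minutes)

# Hypothetical (points, minutes) per game for one team; 53-minute row = overtime.
home = np.array([[104, 48], [98, 48], [110, 53]])
away = np.array([[95, 48], [101, 48], [92, 48]])

rate_home = scoring_rate(home[:, 0], home[:, 1])
rate_away = scoring_rate(away[:, 0], away[:, 1])
home_boost = rate_home - rate_away   # the per-team quantity studied above
```

Aggregating before dividing (rather than averaging per-game rates) correctly weights overtime games by their extra minutes.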
