One-time tables are a class of two-party correlations that can help achieve information-theoretically secure two-party (interactive) classical or quantum computation. In this work we propose a bipartite quantum protocol for generating a simple type of one-time tables (the correlation in the Popescu-Rohrlich nonlocal box) with partial security. We then show that by running many instances of the first protocol and performing checks on some of them, asymptotically information-theoretically secure generation of one-time tables can be achieved. The first protocol is adapted from a protocol for semi-honest oblivious transfer, with some changes so that no entangled state needs to be prepared and the communication involves only one qutrit in each direction. We show that some information tradeoffs in the first protocol are similar to those in the semi-honest oblivious transfer protocol. We also obtain two types of inequalities about guessing probabilities in some protocols for generating one-time tables, from a single type of inequality about guessing probabilities in semi-honest oblivious transfer protocols.
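As an illustrative aside, the Popescu-Rohrlich correlation underlying such a one-time table can be sampled by a hypothetical trusted dealer: Alice receives $(a, x)$ and Bob receives $(b, y)$ with $a$, $b$, $x$ uniformly random and $x \oplus y = a \wedge b$. A minimal Python sketch of that correlation (not the quantum protocol proposed above):

```python
import secrets

def deal_pr_box_table():
    """Trusted-dealer sketch of one PR-box one-time table.

    Alice receives (a, x) and Bob receives (b, y) such that
    x XOR y = a AND b, with a, b, x uniformly random bits.
    """
    a = secrets.randbelow(2)
    b = secrets.randbelow(2)
    x = secrets.randbelow(2)
    y = x ^ (a & b)
    return (a, x), (b, y)

if __name__ == "__main__":
    # Check that the PR correlation holds over many samples.
    for _ in range(1000):
        (a, x), (b, y) = deal_pr_box_table()
        assert x ^ y == a & b
    print("All sampled tables satisfy x XOR y = a AND b")
```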
We present a multi-camera visual-inertial odometry system based on factor graph optimization which estimates motion by using all cameras simultaneously while retaining a fixed overall feature budget. We focus on motion tracking in challenging environments such as narrow corridors and dark spaces with aggressive motions and abrupt lighting changes. These scenarios cause traditional monocular or stereo odometry to fail. While tracking motion across extra cameras should theoretically prevent failures, it adds complexity and computational burden. To overcome these challenges, we introduce two novel methods to improve multi-camera feature tracking. First, instead of tracking features separately in each camera, we track features continuously as they move from one camera to another. This increases accuracy and achieves a more compact factor graph representation. Second, we select a fixed budget of tracked features which are spread across the cameras to ensure that the limited computational budget is never exceeded. We have found that using a smaller set of informative features can maintain the same tracking accuracy while reducing back-end optimization time. Our proposed method was extensively tested using a hardware-synchronized device containing an IMU and four cameras (a front stereo pair and two lateral cameras) in scenarios including an underground mine, large open spaces, and building interiors with narrow stairs and corridors. Compared to stereo-only state-of-the-art VIO methods, our approach reduces the drift rate (RPE) by up to 80% in translation and 39% in rotation.
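A hedged sketch of the fixed feature budget idea (an illustration under assumed names and an assumed informativeness score, not the authors' actual selection criterion): give each camera an even share of the budget, rank candidate tracks per camera, and fill any leftover slots globally so the total never exceeds the budget.

```python
def allocate_feature_budget(per_camera_candidates, total_budget):
    """Hypothetical sketch: spread a fixed feature budget across cameras.

    per_camera_candidates: list (one entry per camera) of lists of
    (feature_id, score) tuples, where a higher score is assumed to mean
    a more informative track.  Returns selected feature ids, never
    exceeding total_budget.
    """
    n_cams = len(per_camera_candidates)
    base = total_budget // n_cams          # even share per camera
    selected, leftovers = [], []
    for cands in per_camera_candidates:
        ranked = sorted(cands, key=lambda c: c[1], reverse=True)
        selected += [fid for fid, _ in ranked[:base]]
        leftovers += ranked[base:]
    # Fill remaining slots with the best leftover features globally.
    remaining = total_budget - len(selected)
    leftovers.sort(key=lambda c: c[1], reverse=True)
    selected += [fid for fid, _ in leftovers[:remaining]]
    return selected
```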
From Oct. 2019 to Apr. 2020, LAMOST performed a time-domain spectroscopic survey of four $K2$ plates with both low- and medium-resolution observations. The low-resolution spectroscopic survey obtained 282 exposures ($\approx$46.6 hours) over 25 nights, yielding a total of about 767,000 spectra, and the medium-resolution survey took 177 exposures ($\approx$49.1 hours) over 27 nights, collecting about 478,000 spectra. More than 70%/50% of the low-resolution/medium-resolution spectra have a signal-to-noise ratio higher than 10. We determine stellar parameters (e.g., $T_{\rm eff}$, $\log g$, [Fe/H]) and radial velocity (RV) with different methods, including LASP, DD-Payne, and SLAM. In general, the parameter estimates from the different methods show good agreement, and the stellar parameter values are consistent with those of APOGEE. We use the $Gaia$ DR2 RV data to calculate a median RV zero point (RVZP) for each spectrograph, exposure by exposure, and the RVZP-corrected RVs agree well with the APOGEE data. The stellar evolutionary and spectroscopic masses are estimated based on the stellar parameters, multi-band magnitudes, distances, and extinction values. Finally, we construct a binary catalog including about 2700 candidates by analyzing their light curves, fitting the RV data, calculating binarity parameters from the medium-resolution spectra, and cross-matching with the spatially resolved binary catalog from $Gaia$ EDR3. The LAMOST TD survey is expected to achieve breakthroughs in various scientific topics, such as binary systems, stellar activity, and stellar pulsation.
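The per-spectrograph, per-exposure zero-point correction can be sketched roughly as follows (function and variable names are hypothetical; the pipeline's actual details are in the paper): for stars in common with $Gaia$ DR2, subtract the median LAMOST-minus-Gaia RV offset from all RVs measured with that spectrograph in that exposure.

```python
import numpy as np

def correct_rv_zero_point(lamost_rv, gaia_rv, spectrograph_id, exposure_id):
    """Hypothetical sketch of an RV zero-point (RVZP) correction.

    lamost_rv, gaia_rv, spectrograph_id, exposure_id: 1-D arrays of equal
    length; gaia_rv is NaN for stars without an external RV.  The median
    offset per (spectrograph, exposure) is subtracted from all LAMOST RVs
    of that spectrograph and exposure.
    """
    corrected = lamost_rv.copy()
    for sid in np.unique(spectrograph_id):
        for eid in np.unique(exposure_id):
            sel = (spectrograph_id == sid) & (exposure_id == eid)
            common = sel & np.isfinite(gaia_rv)
            if common.sum() == 0:
                continue  # no calibrators for this spectrograph/exposure
            rvzp = np.median(lamost_rv[common] - gaia_rv[common])
            corrected[sel] -= rvzp
    return corrected
```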
The convolutional neural network (CNN) is one of the most widely used and successful architectures in the era of deep learning. However, the high computational cost of CNNs still hampers their wider use on lightweight devices. Fortunately, applying the Fourier transform to convolution gives an elegant and promising way to dramatically reduce the computational cost. Recently, some studies have been devoted to this challenging problem, pursuing complete frequency-domain computation without any switching between the spatial and frequency domains. In this work, we revisit Fourier transform theory to derive feed-forward and back-propagation frequency-domain operations for typical network modules such as convolution, activation, and pooling. Due to the limitations of complex-number arithmetic on most computation tools, we further extend the Fourier transform to the Laplace transform for CNNs, which can run in the real domain under more relaxed constraints. This work focuses on a theoretical extension of and discussion about frequency-domain CNNs, and lays some theoretical groundwork for real applications.
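The frequency-domain computation rests on the convolution theorem: circular convolution in the spatial domain equals pointwise multiplication in the frequency domain. A minimal NumPy check of this identity (illustrative only; it does not reproduce the Laplace-domain operators derived in the work):

```python
import numpy as np

# Convolution theorem: circular convolution in the spatial domain
# equals pointwise multiplication in the frequency domain.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # input signal
k = rng.standard_normal(64)   # kernel (same length, circular convolution)

# Frequency-domain computation via the FFT.
freq_based = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

# Direct circular convolution for reference.
direct = np.array([sum(x[j] * k[(i - j) % 64] for j in range(64))
                   for i in range(64)])

assert np.allclose(freq_based, direct)
print("FFT-based and direct circular convolutions agree")
```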
Emphatic Temporal Difference (TD) methods are a class of off-policy Reinforcement Learning (RL) methods involving the use of followon traces. Despite the theoretical success of emphatic TD methods in addressing the notorious deadly triad (Sutton and Barto, 2018) of off-policy RL, there are still three open problems. First, the motivation for emphatic TD methods proposed by Sutton et al. (2016) does not align with the convergence analysis of Yu (2015). Namely, a quantity used by Sutton et al. (2016) that is expected to be essential for the convergence of emphatic TD methods is not used in the actual convergence analysis of Yu (2015). Second, followon traces typically suffer from large variance, making them hard to use in practice. Third, despite the seminal work of Yu (2015) confirming the asymptotic convergence of some emphatic TD methods for prediction problems, there is still no finite sample analysis for any emphatic TD method for prediction, much less control. In this paper, we address those three open problems simultaneously by using truncated followon traces in emphatic TD methods. Unlike the original followon traces, which depend on all previous history, truncated followon traces depend on only finite history, reducing variance and enabling the finite sample analysis of our proposed emphatic TD methods for both prediction and control.
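The truncation idea can be sketched as follows. With a constant discount, the full followon trace satisfies $F_t = i(S_t) + \gamma \rho_{t-1} F_{t-1}$ and therefore sums over the entire history; the truncated variant keeps only the last $n$ terms of that expanded sum. A hedged Python sketch (notation and names are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def truncated_followon_trace(interests, rhos, gamma, n):
    """Illustrative sketch of a truncated followon trace.

    interests[t] = i(S_t), the interest at step t.
    rhos[t]      = importance sampling ratio at step t.
    The full trace F_t = i(S_t) + gamma * rho_{t-1} * F_{t-1} depends on all
    previous history; here only the last n terms of its expansion are kept,
    so the trace depends on finite history and its variance stays bounded.
    """
    T = len(interests)
    F = np.zeros(T)
    for t in range(T):
        f, discount = 0.0, 1.0
        for k in range(min(n, t + 1)):       # only the last n terms
            f += discount * interests[t - k]
            if t - k - 1 >= 0:
                discount *= gamma * rhos[t - k - 1]
        F[t] = f
    return F
```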
We present a method to estimate depth of a dynamic scene, containing arbitrary moving objects, from an ordinary video captured with a moving camera. We seek a geometrically and temporally consistent solution to this underconstrained problem: the depth predictions of corresponding points across frames should induce plausible, smooth motion in 3D. We formulate this objective in a new test-time training framework where a depth-prediction CNN is trained in tandem with an auxiliary scene-flow prediction MLP over the entire input video. By recursively unrolling the scene-flow prediction MLP over varying time steps, we compute both short-range scene flow to impose local smooth motion priors directly in 3D, and long-range scene flow to impose multi-view consistency constraints with wide baselines. We demonstrate accurate and temporally coherent results on a variety of challenging videos containing diverse moving objects (pets, people, cars), as well as camera motion. Our depth maps give rise to a number of depth-and-motion aware video editing effects such as object and lighting insertion.
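The recursive unrolling can be pictured with a small sketch: long-range scene flow from frame $i$ to frame $j$ is obtained by chaining one-step predictions and advecting the points after each step. The placeholder `flow_fn` below stands in for the scene-flow MLP; the sketch is illustrative, not the paper's implementation.

```python
import numpy as np

def compose_scene_flow(points, flow_fn, frame_i, frame_j):
    """Long-range scene flow by recursively unrolling a one-step predictor.

    points:  (N, 3) array of 3D points at frame_i.
    flow_fn: callable (points, t) -> (N, 3) displacement from frame t to t+1
             (a stand-in for the scene-flow MLP).
    Returns the accumulated displacement from frame_i to frame_j.
    """
    p = points.copy()
    total = np.zeros_like(points)
    for t in range(frame_i, frame_j):   # chain one-step flows
        step = flow_fn(p, t)
        total += step
        p = p + step                    # advect points to the next frame
    return total
```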
A word is square-free if it does not contain any square (a word of the form $XX$), and is extremal square-free if it cannot be extended to a new square-free word by inserting a single letter at any position. Grytczuk, Kordulewski, and Niewiadomski proved that there exist infinitely many ternary extremal square-free words. We establish that there are no extremal square-free words over any alphabet of size at least 17.
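The definitions translate directly into a short brute-force check. The sketch below tests square-freeness and extremality by attempting every single-letter insertion; it is illustrative only and makes no attempt at efficiency.

```python
def is_square_free(w):
    """A word is square-free if no factor has the form XX."""
    n = len(w)
    for i in range(n):
        for L in range(1, (n - i) // 2 + 1):
            if w[i:i + L] == w[i + L:i + 2 * L]:
                return False
    return True

def is_extremal_square_free(w, alphabet):
    """w is extremal if it is square-free but every single-letter
    insertion, at any position, creates a square."""
    if not is_square_free(w):
        return False
    for i in range(len(w) + 1):
        for a in alphabet:
            if is_square_free(w[:i] + a + w[i:]):
                return False  # found a square-free extension
    return True

# Example: over the ternary alphabet, "aba" is square-free but not extremal,
# since inserting "c" gives the square-free word "abca".
print(is_square_free("aba"), is_extremal_square_free("aba", "abc"))
```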
Realizing edge intelligence consists of sensing, communication, training, and inference stages. Conventionally, the sensing and communication stages are executed sequentially, which results in an excessive amount of time for dataset generation and uploading. This paper proposes to accelerate edge intelligence via integrated sensing and communication (ISAC). As such, the sensing and communication stages are merged so as to make the best use of the wireless signals for the dual purpose of dataset generation and uploading. However, ISAC also introduces additional interference between the sensing and communication functionalities. To address this challenge, this paper proposes a classification error minimization formulation to design the ISAC beamforming and time allocation. The globally optimal solution is derived via a rank-1-guaranteed semidefinite relaxation, and performance analysis is carried out to quantify the ISAC gain. Simulation results are provided to verify the effectiveness of the proposed ISAC scheme. Interestingly, it is found that when the sensing time dominates the communication time, ISAC is always beneficial. However, when the communication time dominates, edge intelligence with the ISAC scheme may not be better than with the conventional scheme, since ISAC introduces harmful interference between the sensing and communication signals.
We report electron transport studies of a thin InAs-Al hybrid semiconductor-superconductor nanowire device using a four-terminal design. Compared to previous works, the thinner InAs nanowire (diameter less than 40 nm) is expected to reach a regime with fewer sub-bands. The four-terminal device design excludes the electrode contact resistance, an unknown value which has inevitably affected previously reported device conductances. Using tunneling spectroscopy, we find large zero-bias peaks (ZBPs) in the differential conductance on the order of $2e^2/h$. Investigating the ZBP evolution by sweeping various gate voltages and the magnetic field, we find a transition between a zero-bias peak and a zero-bias dip while the zero-bias conductance stays close to $2e^2/h$. We discuss a topologically trivial interpretation involving disorder, smooth potential variation, and quasi-Majorana zero modes.
LAMOST Data Release 5, covering $\sim$17,000 deg$^2$ from $-10^{\circ}$ to $80^{\circ}$ in declination, contains 9 million co-added low-resolution spectra of celestial objects, each combined from two to tens of repeat exposures taken between Oct 2011 and Jun 2017. In this paper, we present the spectra of the individual exposures for all objects in LAMOST Data Release 5. For each spectrum, the equivalent widths of 60 lines from 11 different elements are calculated with a new method combining the actual line core and fitted line wings. For stars earlier than F type, the Balmer lines are fitted with both emission and absorption profiles when two components are detected. The radial velocity of each individual exposure is measured by minimizing $\chi^2$ between the spectrum and its best-fitting template. Databases of the equivalent widths of spectral lines and the radial velocities of individual spectra are available online. Radial velocity uncertainties for different stellar types and signal-to-noise ratios are quantified by comparing different exposures of the same objects. We notice that the radial velocity uncertainty depends on the time lag between observations. For stars observed on the same day with a signal-to-noise ratio higher than 20, the radial velocity uncertainty is below 5 km/s, and it increases to 10 km/s for stars observed on different nights.
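The RV measurement step can be sketched as a $\chi^2$ minimization over a grid of trial velocities, Doppler-shifting the template and resampling it onto the observed wavelength grid (names and grid choices below are illustrative, not the pipeline's actual settings):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def measure_rv(wave, flux, flux_err, tmpl_wave, tmpl_flux,
               rv_grid=np.arange(-500.0, 500.0, 1.0)):
    """Illustrative sketch: radial velocity via chi^2 minimization.

    For each trial RV the template is Doppler-shifted, resampled onto the
    observed wavelength grid, and compared with the observed flux.
    Returns the RV (km/s) that minimizes chi^2.
    """
    chi2 = np.empty_like(rv_grid)
    for i, rv in enumerate(rv_grid):
        shifted = tmpl_wave * (1.0 + rv / C_KMS)     # Doppler-shift template
        model = np.interp(wave, shifted, tmpl_flux)  # resample onto data grid
        chi2[i] = np.sum(((flux - model) / flux_err) ** 2)
    return rv_grid[np.argmin(chi2)]
```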