How do supply and demand from informed traders drive the market prices of bitcoin options? Deribit tick-level options data support the limits-to-arbitrage hypothesis about market-maker supply. On the demand side, at-the-money option prices are largely driven by volatility traders, while out-of-the-money options are driven simultaneously by volatility traders and by traders with proprietary information about the direction of future bitcoin price movements. These demand-side results contrast with prior studies of established options markets in the US and Asia, but we also show that Deribit is rapidly evolving into a more efficient channel for aggregating information from informed traders.
As an emerging data modality with precise distance sensing, LiDAR point clouds carry great expectations for 3D scene understanding. However, point clouds are sparsely distributed in 3D space and stored without structure, which makes them difficult to represent for effective 3D object detection. To this end, in this work, we regard point clouds as hollow-3D data and propose a new architecture, namely Hallucinated Hollow-3D R-CNN ($\text{H}^2$3D R-CNN), to address the problem of 3D object detection. In our approach, we first extract multi-view features by sequentially projecting the point clouds into the perspective view and the bird's-eye view. Then, we hallucinate the 3D representation with a novel bilaterally guided multi-view fusion block. Finally, the 3D objects are detected via a box refinement module with a novel Hierarchical Voxel RoI Pooling operation. The proposed $\text{H}^2$3D R-CNN provides a new angle for exploiting the complementary information in the perspective view and the bird's-eye view within an efficient framework. We evaluate our approach on the public KITTI Dataset and the Waymo Open Dataset. Extensive experiments demonstrate the superiority of our method over state-of-the-art algorithms in both effectiveness and efficiency. The code will be made available at https://github.com/djiajunustc/H-23D_R-CNN.
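The bird's-eye-view projection step mentioned above can be illustrated with a minimal sketch. The grid ranges, cell size, and the count-based per-cell feature are illustrative assumptions, not the paper's actual settings (real detectors typically store richer per-cell features such as height and intensity):

```python
import numpy as np

def project_to_bev(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0), cell=0.5):
    """Scatter LiDAR points of shape (N, 3) into a bird's-eye-view grid.

    Each cell stores the number of points falling into it; the z coordinate
    is simply discarded by the projection.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((nx, ny), dtype=np.float32)
    # Map metric coordinates to integer cell indices.
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    # Keep only points inside the grid, then accumulate counts per cell.
    mask = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.add.at(bev, (ix[mask], iy[mask]), 1.0)
    return bev
```

`np.add.at` is used instead of fancy-index assignment so that multiple points landing in the same cell are all counted.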
In this paper, we are concerned with the gradient estimate of the electric field due to two nearly touching dielectric inclusions, a central topic in the theory of composite materials. We derive accurate quantitative characterisations of the gradient fields in the transverse electromagnetic case within the quasi-static regime, which clearly indicate the optimal blowup rate or non-blowup of the gradient fields in different scenarios. Our study has two main novelties. First, the sizes of the two material inclusions may be of different scales. Second, we work in the quasi-static regime, whereas most existing studies are concerned with the static case.
As cameras are increasingly deployed in new application domains such as autonomous driving, performing 3D object detection on monocular images becomes an important task for visual scene understanding. Recent advances in monocular 3D object detection mainly rely on the "pseudo-LiDAR" generation paradigm, which performs monocular depth estimation and lifts the 2D pixels to pseudo 3D points. However, depth estimation from monocular images, due to its poor accuracy, leads to an inevitable position shift of pseudo-LiDAR points within the object. Therefore, the predicted bounding boxes may suffer from inaccurate location and deformed shape. In this paper, we present a novel neighbor-voting method that incorporates neighbor predictions to ameliorate object detection from severely deformed pseudo-LiDAR point clouds. Specifically, each feature point around the object forms its own prediction, and a "consensus" is then reached through voting. In this way, we can effectively combine the neighbors' predictions with the local prediction and achieve more accurate 3D detection. To further enlarge the difference between the foreground region-of-interest (ROI) pseudo-LiDAR points and the background points, we also encode the ROI prediction scores of 2D foreground pixels into the corresponding pseudo-LiDAR points. We conduct extensive experiments on the KITTI benchmark to validate the merits of our proposed method. Our results on bird's-eye-view detection outperform the state of the art by a large margin, especially for the "hard" difficulty level.
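The voting idea can be sketched as a confidence-weighted average of per-point predictions; the weighting scheme below is a hypothetical simplification of the paper's voting mechanism, shown only to make the "consensus" step concrete:

```python
import numpy as np

def neighbor_vote(centers, scores):
    """Fuse per-point box-center predictions by confidence-weighted voting.

    centers: (N, 3) predicted object centers, one per neighboring feature point.
    scores:  (N,)   confidence of each prediction.
    Returns the consensus center; low-confidence outliers are down-weighted.
    """
    w = scores / scores.sum()            # normalize confidences into weights
    return (centers * w[:, None]).sum(axis=0)
```

With deformed pseudo-LiDAR points, individual predictions scatter; the weighted vote pulls the final estimate toward the high-confidence cluster rather than trusting any single point.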
Temporal language grounding (TLG) is a fundamental and challenging problem for vision and language understanding. Existing methods mainly focus on the fully supervised setting with temporal boundary labels for training, which, however, suffers from expensive annotation costs. In this work, we are dedicated to weakly supervised TLG, where multiple description sentences are given for an untrimmed video without temporal boundary labels. In this task, it is critical to learn a strong cross-modal semantic alignment between sentence semantics and visual content. To this end, we introduce a novel weakly supervised temporal adjacent network (WSTAN) for temporal language grounding. Specifically, WSTAN learns cross-modal semantic alignment by exploiting a temporal adjacent network in a multiple instance learning (MIL) paradigm, with a whole description paragraph as input. Moreover, we integrate a complementary branch into the framework, which explicitly refines the predictions with pseudo supervision from the MIL stage. An additional self-discriminating loss is devised on both the MIL branch and the complementary branch, aiming to enhance semantic discrimination through self-supervision. Extensive experiments are conducted on three widely used benchmark datasets, i.e., ActivityNet-Captions, Charades-STA, and DiDeMo, and the results demonstrate the effectiveness of our approach.
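The MIL paradigm described above can be sketched with max pooling over per-proposal alignment scores; the pooling choice and binary cross-entropy loss below are the standard MIL assumptions, not necessarily WSTAN's exact formulation:

```python
import numpy as np

def mil_video_score(proposal_scores):
    """Aggregate per-proposal alignment scores into a video-level score via
    max pooling: a positive bag (video described by the sentence) is assumed
    to contain at least one positive instance (temporal proposal)."""
    return proposal_scores.max(axis=-1)

def mil_loss(proposal_scores, labels):
    """Binary cross-entropy on the pooled score against the video-level label,
    so only weak (video-level) supervision is needed for training."""
    p = np.clip(mil_video_score(proposal_scores), 1e-7, 1 - 1e-7)
    return -(labels * np.log(p) + (1 - labels) * np.log(1 - p)).mean()
```

Training with this loss pushes up the best-matching proposal for a described video without ever seeing temporal boundary labels.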
Ou Wu, Weiyao Zhu, Yingjun Deng (2021)
A common assumption in machine learning is that samples are independently and identically distributed (i.i.d.). However, the contributions of different samples to training are not identical: some samples are difficult to learn and some are noisy. These unequal contributions have a considerable effect on training performance. Learning with unequal sample contributions (e.g., easy, hard, and noisy samples) is usually studied under the heading of robust machine learning (RML), where weighting and regularization are two common techniques. Numerous learning algorithms have been proposed, but their strategies for dealing with easy/hard/noisy samples differ or even contradict each other. For example, some strategies prioritize hard samples, whereas others prioritize easy ones. Clearly comparing how existing RML algorithms handle different samples is difficult due to the lack of a unified theoretical framework for RML. This study attempts to construct a mathematical foundation for RML based on bias-variance trade-off theory. A series of definitions and properties are presented and proved. Several classical learning algorithms are also explained and compared, and improvements of existing methods are obtained from the comparison. A unified method that combines two classical learning strategies is proposed.
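The contrast between hard-first and easy-first strategies can be made concrete with two hypothetical weighting functions of the per-sample loss, one focal-style and one self-paced-style. The specific functional forms and parameters are illustrative, not the paper's formulation:

```python
import numpy as np

def hard_first_weights(losses, gamma=2.0):
    """Focal-style weighting: the weight grows with the sample's loss,
    so training emphasizes hard (high-loss) samples."""
    p = np.exp(-losses)          # proxy for the model's confidence on the sample
    return (1.0 - p) ** gamma

def easy_first_weights(losses, threshold=1.0):
    """Self-paced-style weighting: keep only samples whose loss is below a
    threshold, so training starts from easy samples and ignores hard/noisy ones."""
    return (losses < threshold).astype(float)
```

Applied to the same batch of losses, the two schemes rank samples in opposite orders, which is exactly the contradiction the abstract highlights.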
In this paper, we present a neat yet effective transformer-based framework for visual grounding, namely TransVG, to address the task of grounding a language query to the corresponding region of an image. The state-of-the-art methods, whether two-stage or one-stage, rely on a complex module with manually designed mechanisms to perform query reasoning and multi-modal fusion. However, the involvement of certain mechanisms in fusion module design, such as query decomposition and image scene graphs, makes the models easily overfit to datasets with specific scenarios and limits full interaction between the visual and linguistic contexts. To avoid this caveat, we propose to establish the multi-modal correspondence by leveraging transformers, and empirically show that complex fusion modules (e.g., modular attention networks, dynamic graphs, and multi-modal trees) can be replaced by a simple stack of transformer encoder layers with higher performance. Moreover, we re-formulate visual grounding as a direct coordinate regression problem and avoid making predictions out of a set of candidates (i.e., region proposals or anchor boxes). Extensive experiments are conducted on five widely used datasets, and a series of state-of-the-art records are set by our TransVG. We build a benchmark for transformer-based visual grounding frameworks and make the code available at https://github.com/djiajunustc/TransVG.
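The core architectural idea, concatenating visual and linguistic tokens with a regression token and letting self-attention fuse them before regressing box coordinates, can be sketched as follows. This is a single-head, single-layer NumPy toy with random weights, meant only to show the data flow; it is not TransVG's implementation (which uses learned embeddings and multiple encoder layers):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # shared embedding dimension assumed for both modalities

def attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a token sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv
    a = q @ k.T / np.sqrt(d)
    a = np.exp(a - a.max(axis=-1, keepdims=True))   # numerically stable softmax
    a = a / a.sum(axis=-1, keepdims=True)
    return a @ v

# Hypothetical inputs: visual tokens from a vision backbone, linguistic tokens
# from a language model, plus one learnable [REG] token prepended to the sequence.
visual = rng.normal(size=(49, d))   # e.g. a 7x7 feature map, flattened
text = rng.normal(size=(8, d))      # e.g. 8 word embeddings
reg = rng.normal(size=(1, d))
tokens = np.concatenate([reg, visual, text], axis=0)

wq, wk, wv = (rng.normal(size=(d, d)) * d ** -0.5 for _ in range(3))
fused = attention(tokens, wq, wk, wv)

# Regress normalized (cx, cy, w, h) directly from the fused [REG] token,
# instead of scoring a set of candidate proposals or anchors.
w_box = rng.normal(size=(d, 4)) * d ** -0.5
box = 1.0 / (1.0 + np.exp(-fused[0] @ w_box))       # sigmoid keeps coords in (0, 1)
```

The key point the sketch captures is that fusion needs no hand-designed module: attention lets every visual token and every word attend to each other, and the [REG] token aggregates what it needs for direct coordinate regression.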
Transmission eigenfunctions are certain interior resonant modes that are of central importance to wave scattering theory. In this paper, we present the discovery of novel global rigidity properties of the transmission eigenfunctions associated with the Maxwell system. It is shown that the transmission eigenfunctions carry the geometrical and topological information of the underlying domain. We present both analytical and numerical results on these intriguing rigidity properties. As an interesting application, we propose an illusion scheme for artificially generating a mirage image of any given optical object.
Consider the transmission eigenvalue problem
\[ (\Delta+k^2\mathbf{n}^2)w=0,\quad (\Delta+k^2)v=0 \ \mbox{ in }\ \Omega; \qquad w=v,\ \ \partial_\nu w=\partial_\nu v \ \mbox{ on }\ \partial\Omega. \]
It is shown in [12] that there exists a sequence of eigenfunctions $(w_m, v_m)_{m\in\mathbb{N}}$ associated with $k_m\rightarrow\infty$ such that either $\{w_m\}_{m\in\mathbb{N}}$ or $\{v_m\}_{m\in\mathbb{N}}$ are surface-localized, depending on whether $\mathbf{n}>1$ or $0<\mathbf{n}<1$. In this paper, we discover a new type of surface-localized transmission eigenmodes by constructing a sequence of transmission eigenfunctions $(w_m, v_m)_{m\in\mathbb{N}}$ associated with $k_m\rightarrow\infty$ such that both $\{w_m\}_{m\in\mathbb{N}}$ and $\{v_m\}_{m\in\mathbb{N}}$ are surface-localized, regardless of whether $\mathbf{n}>1$ or $0<\mathbf{n}<1$. Though our study is confined to the radial geometry, the construction is subtle and technical.
Using generalized extreme value theory to characterize tail distributions, we address liquidation, leverage, and optimal margins for bitcoin long and short futures positions. The empirical analysis of perpetual bitcoin futures on BitMEX shows that (1) daily forced liquidations relative to outstanding futures are substantial, at 3.51% and 1.89% for long and short positions, respectively; (2) investors facing forced liquidation trade aggressively, with an average leverage of 60X; and (3) exchanges should elevate the current 1% margin requirement to 33% (3X leverage) for long positions and 20% (5X leverage) for short positions to reduce the daily margin-call probability to 1%. Our results further suggest that the normality assumption on returns significantly underestimates optimal margins. Policy implications are also discussed.
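The margin calculation implied by the abstract, inverting a fitted GEV distribution at a 1% exceedance probability, can be sketched as follows. The parameter values are invented for illustration and are not the paper's estimates:

```python
import numpy as np

def gev_quantile(mu, sigma, xi, q):
    """Quantile function of the generalized extreme value distribution.

    CDF: F(x) = exp(-(1 + xi*(x - mu)/sigma)^(-1/xi)) for xi != 0.
    Inverting gives x_q = mu + (sigma/xi) * ((-ln q)^(-xi) - 1).
    """
    if abs(xi) < 1e-12:                      # Gumbel limit as xi -> 0
        return mu - sigma * np.log(-np.log(q))
    return mu + sigma / xi * ((-np.log(q)) ** (-xi) - 1.0)

# Hypothetical GEV parameters fitted to daily maximum adverse returns of a
# long position (location, scale, shape) -- illustrative numbers only.
mu, sigma, xi = 0.05, 0.03, 0.25
# Margin level whose daily exceedance (margin-call) probability is 1%.
margin = gev_quantile(mu, sigma, xi, q=0.99)
```

Because the GEV shape parameter captures the heaviness of the tail, a fat-tailed fit (xi > 0) produces a much larger required margin than a normality assumption would, which is the mechanism behind the abstract's final claim.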