Chandran et al. (SIAM J. Comput. '14) formally introduced the cryptographic task of position verification, where they also showed that it cannot be achieved by classical protocols. In this work, we initiate the study of position verification protocols with classical verifiers. We identify that proofs of quantumness (and thus computational assumptions) are necessary for such position verification protocols. For the other direction, we adapt the proof of quantumness protocol by Brakerski et al. (FOCS '18) to instantiate such a position verification protocol. As a result, we achieve classically verifiable position verification assuming the quantum hardness of Learning with Errors. Along the way, we develop the notion of 1-of-2 non-local soundness for the framework of 1-of-2 puzzles, first introduced by Radian and Sattath (AFT '19), which can be viewed as a computational unclonability property. We show that 1-of-2 non-local soundness follows from the standard 2-of-2 soundness, which could be of independent interest.
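For orientation, the 1-of-2 puzzle framework can be pictured through the following minimal interface sketch (the names are hypothetical, not Radian and Sattath's API): the verifier samples a puzzle carrying two obligations, a prover answers either one, and 2-of-2 soundness demands that no efficient prover answers both.

    # Hypothetical interface sketch of a 1-of-2 puzzle (illustrative names only).
    from dataclasses import dataclass
    from typing import Any, Protocol


    @dataclass
    class Puzzle:
        public_part: Any       # sent to the prover
        verification_key: Any  # kept by the verifier


    class OneOfTwoPuzzle(Protocol):
        def generate(self) -> Puzzle:
            """Sample a fresh puzzle instance."""
            ...

        def solve(self, public_part: Any, which: int) -> Any:
            """Prover answers obligation `which` in {0, 1}."""
            ...

        def verify(self, verification_key: Any, which: int, answer: Any) -> bool:
            """Verifier checks the answer to obligation `which`."""
            ...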
312 - Haoran Sun, Jie Zou, Xiaopeng Li, 2021
Fermion sampling is the task of generating samples from the probability distribution defined by a many-body Slater-determinant wavefunction, known in statistical analysis as a determinantal point process. Because the Pauli exclusion principle is inherently embedded, its applications reach beyond simulating fermionic quantum many-body physics to constructing machine learning models for diversified datasets. Here we propose a fermion sampling algorithm with polynomial time complexity: quadratic in the fermion number and linear in the system size. This algorithm is about 100% more efficient in computation time than the best known algorithms. In sampling the corresponding marginal distribution, our algorithm achieves a more drastic improvement: a scaling advantage. We demonstrate its power on several test applications, including sampling fermions in a many-body system and a machine learning task of text summarization, and confirm its improved computational efficiency over other methods by counting floating-point operations.
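To make the determinantal structure concrete, here is a small numpy sketch (ours, not the proposed algorithm) of the probability a Slater determinant assigns to a fermion configuration, which is exactly the determinantal point process being sampled; note how a repeated site gets probability zero, reflecting the embedded Pauli exclusion principle.

    import numpy as np

    def slater_probability(U, sites):
        """Probability of finding the N fermions at `sites`, for a Slater
        determinant built from orthonormal orbitals U (shape: num_sites x N)."""
        sub = U[np.array(sites), :]            # N x N sub-matrix of occupied orbitals
        return np.abs(np.linalg.det(sub)) ** 2

    # Toy example: 2 fermions in 4 sites, orbitals from a random orthogonal matrix.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
    U = Q[:, :2]                               # two orthonormal occupied orbitals
    print(slater_probability(U, (0, 1)))       # some configuration probability
    print(slater_probability(U, (1, 1)))       # repeated site -> 0 (Pauli exclusion)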
Decentralized control, low complexity, and flexible, efficient communication are the requirements of an architecture that aims to scale blockchains beyond their current state. Such properties are attainable by reducing ledger size and providing parallel operations in the blockchain. Sharding is one of the approaches that lower the burden on nodes and enhance performance. However, current solutions lack the features for resolving concurrency during cross-shard communication. With multiple participants belonging to different shards, handling concurrent operations is essential for optimal sharding. This issue becomes prominent due to the lack of architectural support and requires additional consensus for cross-shard communication. Inspired by hybrid Proof-of-Work/Proof-of-Stake (PoW/PoS) designs such as Ethereum's, hybrid consensus, and the 2-hop blockchain, we propose Reinshard, a new blockchain that inherits the properties of hybrid consensus for optimal sharding. Reinshard uses PoW and PoS chain-pairs, with PoS sub-chains for all valid chain-pairs, where hybrid consensus is attained through a Verifiable Delay Function (VDF). Our architecture provides a secure method of arranging nodes into shards and resolves concurrency conflicts using the delay factor of the VDF. The applicability of Reinshard is demonstrated through security and experimental evaluations, and a practical concurrency problem is considered to show its efficacy in providing optimal sharding.
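As a rough illustration of the delay primitive involved (a toy sketch, not Reinshard's actual VDF, which would also need a succinct proof of correct evaluation), a VDF evaluation can be modeled as T inherently sequential squarings; such a tunable, unavoidable delay is roughly what allows concurrent cross-shard operations to be ordered and their conflicts resolved.

    # Toy VDF evaluation step: repeated squaring modulo an RSA-type modulus.
    def vdf_eval(x: int, delay_T: int, N: int) -> int:
        """Compute x^(2^T) mod N with T inherently sequential squarings."""
        y = x % N
        for _ in range(delay_T):
            y = (y * y) % N
        return y

    # The delay parameter T fixes a minimum wall-clock time before the output
    # (and hence any decision depending on it) can be produced.
    N = 3233                      # toy modulus (61 * 53); a real VDF uses ~2048-bit N
    print(vdf_eval(5, 1_000, N))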
112 - Yipeng Liu, Qi Yang, Yiling Xu, 2021
Point cloud compression (PCC) has made remarkable progress in recent years. Meanwhile, point cloud quality assessment (PCQA) has also developed rapidly, and some recently proposed metrics show robust performance on public point cloud assessment databases. However, these metrics have not been evaluated specifically for PCC, to verify whether they exhibit performance consistent with subjective perception. In this paper, we first establish a new dataset for compression evaluation, containing 175 compressed point clouds in total, derived from 7 compression algorithms at 5 compression levels. Then, leveraging the proposed dataset, we evaluate the performance of existing PCQA metrics across different compression types. The results reveal some deficiencies of existing metrics in compression evaluation.
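Evaluating a PCQA metric on such a dataset typically means correlating its predictions with subjective scores; the sketch below computes the two standard correlation figures (PLCC and SROCC) on placeholder numbers, not data from this paper.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    mos = np.array([4.2, 3.1, 2.5, 1.8, 1.1])           # subjective quality scores (placeholder)
    metric = np.array([0.91, 0.72, 0.55, 0.41, 0.20])   # one metric's predictions (placeholder)

    plcc, _ = pearsonr(metric, mos)    # prediction accuracy (linearity)
    srocc, _ = spearmanr(metric, mos)  # prediction monotonicity (rank order)
    print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")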
Ethics in AI has become a global topic of interest for both policymakers and academic researchers. In the last few years, various research organizations, lawyers, think tanks, and regulatory bodies have become involved in developing AI ethics guidelines and principles. However, there is still debate about the implications of these principles. We conducted a systematic literature review (SLR) study to investigate the agreement on the significance of AI principles and to identify the challenging factors that could negatively impact the adoption of AI ethics principles. The results reveal that the global convergence set consists of 22 ethical principles and 15 challenges. Transparency, privacy, accountability, and fairness are identified as the most common AI ethics principles. Similarly, lack of ethical knowledge and vague principles are reported as the most significant challenges to considering ethics in AI. The findings of this study are preliminary inputs for proposing a maturity model that assesses the ethical capabilities of AI systems and provides best practices for further improvement.
Unlike well-structured text, such as news reports and encyclopedia articles, dialogue content often comes from two or more interlocutors exchanging information with each other. In such a scenario, the topic of a conversation can vary as it progresses, and the key information for a given topic is often scattered across multiple utterances of different speakers, which poses challenges for abstractive dialogue summarization. To capture the various topic information of a conversation and outline salient facts for the captured topics, this work proposes two topic-aware contrastive learning objectives, namely coherence detection and sub-summary generation, which are expected to implicitly model topic change and handle the information-scattering challenge for the dialogue summarization task. The proposed contrastive objectives are framed as auxiliary tasks for the primary dialogue summarization task, united via an alternating parameter-updating strategy. Extensive experiments on benchmark datasets demonstrate that the proposed simple method significantly outperforms strong baselines and achieves new state-of-the-art performance. The code and trained models are publicly available at https://github.com/Junpliu/ConDigSum.
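The coherence-detection objective can be pictured as a ranking problem: score an utterance window in its original order above a shuffled copy. Below is a schematic hinge-loss sketch of that idea (the scorer and names are stand-ins, not the released ConDigSum code).

    # Schematic coherence-detection contrastive loss: the model should score an
    # in-order utterance window higher than a shuffled (incoherent) one.
    # `score_fn` stands in for the summarizer's encoder plus a scoring head.
    import random

    def coherence_margin_loss(score_fn, utterances, margin=1.0):
        positive = list(utterances)                # original speaking order
        negative = list(utterances)
        random.shuffle(negative)                   # perturbed order = negative sample
        s_pos, s_neg = score_fn(positive), score_fn(negative)
        return max(0.0, margin - (s_pos - s_neg))  # hinge: prefer the coherent order

    # Toy usage: a stand-in scorer that counts adjacent speaker changes, since
    # coherent two-party dialogue tends to alternate speakers.
    random.seed(0)
    toy_dialogue = ["A: hi", "B: hello", "A: agenda?", "B: budget review"]
    toy_score = lambda u: float(sum(u[i][0] != u[i + 1][0] for i in range(len(u) - 1)))
    print(coherence_margin_loss(toy_score, toy_dialogue))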
A verifiable random function (VRF for short) is a powerful pseudorandom function that provides a non-interactive, publicly verifiable proof of the correctness of its output. Recently, VRFs have found essential applications in blockchain design, such as random beacons and proof-of-stake consensus protocols. To our knowledge, the first generation of blockchain systems used inherently inefficient proof-of-work consensus, and the research community has tried to achieve the same properties by proposing proof-of-stake schemes in which resource-intensive proof-of-work is emulated by cryptographic constructions. Unfortunately, the most discussed proof-of-stake consensus protocols (e.g., Algorand and the Ouroboros family) are not future-proof because their building blocks are secure only under classical hardness assumptions; in particular, their designs ignore the advent of quantum computing and its implications. In this paper, we propose a generic compiler to obtain a post-quantum VRF from a simple VRF solution built on symmetric-key primitives (e.g., a non-interactive zero-knowledge system) with intrinsic quantum security. Our solution is realized via two efficient zero-knowledge systems, ZKBoo and ZKB++, to validate the correctness of the compiler. Our proof-of-concept implementation indicates that, even today, the overheads introduced by our solution are acceptable in real-world deployments. We also demonstrate potential applications of a quantum-secure VRF, such as a quantum-secure decentralized random beacon and a lottery-based proof-of-stake consensus blockchain protocol.
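For orientation, the standard VRF syntax the compiler starts from looks roughly as follows (an interface sketch only, not the ZKBoo/ZKB++-based construction itself): key generation, evaluation that outputs a pseudorandom value together with a proof, and public verification of that value.

    # Schematic VRF interface (illustrative names, not a secure implementation).
    from typing import Protocol, Tuple


    class VRF(Protocol):
        def keygen(self) -> Tuple[bytes, bytes]:
            """Return (secret_key, public_key)."""
            ...

        def evaluate(self, secret_key: bytes, x: bytes) -> Tuple[bytes, bytes]:
            """Return (y, proof): y is pseudorandom, proof attests y is correct for x."""
            ...

        def verify(self, public_key: bytes, x: bytes, y: bytes, proof: bytes) -> bool:
            """Anyone can check, non-interactively, that y is the unique correct output."""
            ...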
94 - Yichao Yan, Jinpeng Li, Jie Qin, 2021
Person search aims to simultaneously localize and identify a query person in realistic, uncropped images. To achieve this goal, state-of-the-art models typically add a re-id branch on top of two-stage detectors like Faster R-CNN. Owing to the ROI-Align operation, this pipeline yields promising accuracy, as re-id features are explicitly aligned with the corresponding object regions, but it also introduces high computational overhead due to dense object anchors. In this work, we present an anchor-free approach to efficiently tackle this challenging task, by introducing the following dedicated designs. First, we select an anchor-free detector (i.e., FCOS) as the prototype of our framework. Due to the lack of dense object anchors, it exhibits significantly higher efficiency compared with existing person search models. Second, when directly adapting this anchor-free detector to person search, several major challenges arise in learning robust re-id features, which we summarize as misalignment issues at different levels (i.e., scale, region, and task). To address these issues, we propose an aligned feature aggregation module to generate more discriminative and robust feature embeddings. Accordingly, we name our model the Feature-Aligned Person Search Network (AlignPS). Third, by investigating the advantages of both anchor-based and anchor-free models, we further augment AlignPS with an ROI-Align head, which significantly improves the robustness of re-id features while keeping our model highly efficient. Extensive experiments conducted on two challenging benchmarks (i.e., CUHK-SYSU and PRW) demonstrate that our framework achieves state-of-the-art or competitive performance while displaying higher efficiency. All source code, data, and trained models are available at: https://github.com/daodaofr/alignps.
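To see why an ROI-Align head aligns re-id features with detections, here is a tiny torchvision sketch (illustrative only; the shapes and sizes are made up and this is not AlignPS's actual head): features for each detected box are bilinearly resampled to a fixed grid in the box's own coordinates.

    # Minimal illustration of ROI-Align: per-box features are resampled from a
    # feature map, so the re-id embedding is explicitly tied to the detected region.
    import torch
    from torchvision.ops import roi_align

    feat = torch.randn(1, 256, 64, 32)                  # one stride-8 feature map (demo shape)
    boxes = torch.tensor([[0., 80., 40., 160., 200.]])  # (batch_idx, x1, y1, x2, y2) in image coords
    reid_feat = roi_align(feat, boxes, output_size=(14, 6), spatial_scale=1.0 / 8)
    print(reid_feat.shape)                              # torch.Size([1, 256, 14, 6])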
We introduce a new semantic communication mechanism, whose key idea is to preserve semantic information instead of strictly securing bit-level precision. Starting by analyzing the defects of existing joint source-channel coding (JSCC) methods, we show that the commonly used bit-level metrics struggle to capture important semantic meaning and structure. To address this problem, we take advantage of learning from semantic similarity, instead of relying on conventional paired bit-level supervision such as cross entropy and bit error rate. However, developing such a semantic communication system is a nontrivial task, considering the non-differentiability of most semantic metrics as well as the instability introduced by noisy channels. To resolve these issues, we put forward a reinforcement learning (RL)-based solution, which allows us to simultaneously optimize any user-defined semantic measurement via the policy gradient technique and to interact with the surrounding noisy environment in a natural way. We have tested the proposed method on the challenging European Parliament dataset. Experiments on both AWGN and phase-invariant fading channels confirm the superiority of our method in preserving semantic meaning and in handling channel noise, especially in low-SNR situations. Beyond the experimental results, we provide an in-depth look at how the semantics model behaves, along with its strong generalization ability in real-life examples. As a brand-new method for learning-based JSCC tasks, we also exemplify an RL-based image transmission paradigm, both to demonstrate the generalization ability and to leave this new topic for future discussion.
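The policy-gradient idea can be shown in miniature: any scalar semantic reward, even a non-differentiable one, can drive parameter updates through the log-probability of the chosen action. The snippet below is a bare-bones REINFORCE sketch on a three-armed bandit stand-in, not the paper's transmitter or reward model.

    # REINFORCE on a toy bandit: the "reward" plays the role of a user-defined,
    # possibly non-differentiable semantic-similarity score measured after the channel.
    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.zeros(3)                       # policy over 3 candidate encodings

    def semantic_reward(action: int) -> float:
        true_rewards = [0.2, 0.9, 0.4]         # stand-in semantic scores per action
        return true_rewards[action] + 0.05 * rng.normal()

    for step in range(2000):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        a = rng.choice(3, p=probs)
        r = semantic_reward(a)
        grad = -probs                          # d log pi(a) / d logits ...
        grad[a] += 1.0                         # ... equals one_hot(a) - probs
        logits += 0.1 * r * grad               # REINFORCE update (no baseline)

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    print("learned policy:", np.round(probs, 3))   # mass concentrates on the best action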
We show polynomial-time quantum algorithms for the following problems: (*) the short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime, and the infinity-norm bound is set to half of the modulus minus a constant; (*) the extrapolated dihedral coset problem (EDCP) with certain parameters; (*) the learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded uniform distributions and Laplace distributions. The SIS, EDCP, and LWE problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve are not in the parameter regimes known to be as hard as solving worst-case lattice problems; still, no classical or quantum polynomial-time algorithms were previously known for those variants. Our algorithms for the SIS and EDCP variants use existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving LWE given LWE-like quantum states. Our main contributions are introducing a filtering technique and solving LWE given LWE-like quantum states with interesting parameters.
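For reference, the infinity-norm SIS problem mentioned above can be stated as follows (standard definition; the regime considered here takes m very wide, q a polynomially large prime, and the bound roughly half of q).

    % Short integer solution under the infinity norm.
    \[
      \mathrm{SIS}^{\infty}_{n,m,q,\beta}:\quad
      \text{given a public matrix } A \in \mathbb{Z}_q^{n \times m},\
      \text{find } z \in \mathbb{Z}^{m}\setminus\{0\}\
      \text{with } A z \equiv 0 \pmod{q}\ \text{and}\ \|z\|_{\infty} \le \beta .
    \]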