
511 - Feng Wang, Hang Zhou, Han Fang (2021)
Robust 3D mesh watermarking is a traditional research topic in computer graphics, which provides an efficient solution to copyright protection for 3D meshes. Traditionally, researchers need to manually design watermarking algorithms to achieve sufficient robustness for actual application scenarios. In this paper, we propose the first deep learning-based 3D mesh watermarking framework, which can solve this problem once and for all. In detail, we propose an end-to-end network consisting of a watermark embedding sub-network, a watermark extracting sub-network, and attack layers. We adopt a topology-agnostic graph convolutional network (GCN) as the basic convolution operation for 3D meshes, so our network is not limited to registered meshes (which share a fixed topology). For a specific application scenario, we can integrate the corresponding attack layers to guarantee adaptive robustness against possible attacks. To ensure the visual quality of watermarked 3D meshes, we design a curvature-based loss function to constrain the local geometric smoothness of watermarked meshes. Experimental results show that the proposed method achieves more universal robustness and faster watermark embedding than baseline methods while guaranteeing comparable visual quality.
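As a rough illustration of the embed → attack → extract pipeline described above, here is a minimal PyTorch-style sketch; the layer sizes, the mean-aggregation graph convolution, the additive-noise attack, and the Laplacian-based smoothness proxy standing in for the curvature loss are all assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of an embed -> attack -> extract watermarking pipeline.
# Layer sizes, the mean-aggregation graph convolution, the additive-noise attack,
# and the Laplacian smoothness proxy are illustrative assumptions only.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """Topology-agnostic graph convolution: aggregate neighbors via a row-normalized
    adjacency matrix, then apply a shared linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: (V, V) row-normalized adjacency with self-loops; x: (V, in_dim)
        return torch.relu(self.lin(adj @ x))

class Embedder(nn.Module):
    """Predicts a small per-vertex offset conditioned on the watermark bits."""
    def __init__(self, wm_bits, hidden=64):
        super().__init__()
        self.gcn = SimpleGCNLayer(3 + wm_bits, hidden)
        self.offset = nn.Linear(hidden, 3)

    def forward(self, verts, adj, wm):
        wm_feat = wm.unsqueeze(0).expand(verts.size(0), -1)  # broadcast bits to all vertices
        h = self.gcn(torch.cat([verts, wm_feat], dim=-1), adj)
        return verts + self.offset(h)                         # watermarked vertices

class Extractor(nn.Module):
    """Pools vertex features and predicts the watermark bits."""
    def __init__(self, wm_bits, hidden=64):
        super().__init__()
        self.gcn = SimpleGCNLayer(3, hidden)
        self.head = nn.Linear(hidden, wm_bits)

    def forward(self, verts, adj):
        return torch.sigmoid(self.head(self.gcn(verts, adj).mean(dim=0)))

def laplacian_smoothness(orig_verts, wm_verts, adj):
    # crude geometry-preservation proxy: keep the mesh Laplacian close to the original
    return ((wm_verts - adj @ wm_verts) - (orig_verts - adj @ orig_verts)).pow(2).mean()

def noise_attack(verts, sigma=1e-3):
    return verts + sigma * torch.randn_like(verts)            # one possible attack layer
```

Training would then combine a bit-recovery loss on the extractor output (after passing the watermarked mesh through the attack layer) with the smoothness penalty above.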
Observational signatures of the circumstellar material (CSM) around Type Ia supernovae (SNe Ia) provide a unique perspective on the progenitor systems. The pre-supernova evolution of the SN progenitors may naturally eject CSM in most of the popular scenarios of SN Ia explosions. In this study, we investigate the influence of dust scattering on the light curves and polarization of SNe Ia. A Monte Carlo method is constructed to numerically solve the radiative transfer process through the CSM. Three types of geometric distributions of the CSM are considered: a spherical shell, an axisymmetric disk, and an axisymmetric shell. We show that both the distance of the dust from the SNe and the geometric distribution of the dust affect the light-curve and color evolution of SNe. Contrary to previous studies, we find that the geometric location of the hypothetical CS dust cannot be reliably constrained from photometric data alone, even for the best-observed cases such as SN 2006X and SN 2014J, and that time-dependent polarimetry is uniquely capable of establishing the geometric location of any dusty CSM. Our model results show that a time sequence of broad-band polarimetry with appropriate time coverage, from about a month to about one year after explosion, can provide unambiguous limits on the presence of CS dust around SNe Ia.
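To make the light-echo effect concrete, the toy single-scattering sketch below samples scattering delays off a thin spherical dust shell for an instantaneous flash; the shell radius, optical depth, and isotropic phase function are illustrative assumptions rather than the paper's radiative-transfer setup, which also treats disk and axisymmetric-shell geometries and tracks polarization.

```python
# Toy single-scattering light echo from a thin spherical dust shell.
# Assumptions for illustration: isotropic phase function, single scattering,
# instantaneous flash at t = 0; the real model also tracks polarization.
import numpy as np

def flash_echo_curve(t_bins_days, r_shell_lightdays, tau, n_photons=1_000_000, seed=0):
    """Return flux per time bin for a unit flash: a direct spike plus a scattered
    tail whose duration (0 to 2R/c) encodes the shell radius."""
    rng = np.random.default_rng(seed)
    n_scat = rng.binomial(n_photons, 1.0 - np.exp(-tau))   # photons scattered once
    mu = rng.uniform(-1.0, 1.0, n_scat)                    # cos(angle to the line of sight)
    delays = r_shell_lightdays * (1.0 - mu)                # light-echo delay in days
    tail, _ = np.histogram(delays, bins=t_bins_days)
    curve = tail.astype(float) / n_photons
    curve[0] += (n_photons - n_scat) / n_photons           # unscattered (direct) light
    return curve

# Example: a shell 30 light-days from the SN with tau = 0.1 produces a flat echo
# tail lasting ~60 days; convolving this kernel with an intrinsic SN Ia light curve
# gives the modified photometric evolution.
echo = flash_echo_curve(np.arange(0.0, 101.0, 1.0), r_shell_lightdays=30.0, tau=0.1)
```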
Knowledge-based visual question answering (VQA) involves answering questions that require external knowledge not present in the image. Existing methods first retrieve knowledge from external resources, then reason over the selected knowledge, the input image, and the question for answer prediction. However, this two-step approach can lead to mismatches that potentially limit VQA performance. For example, the retrieved knowledge might be noisy and irrelevant to the question, and the re-embedded knowledge features used during reasoning might deviate from their original meanings in the knowledge base (KB). To address this challenge, we propose PICa, a simple yet effective method that Prompts GPT-3 via the use of Image Captions for knowledge-based VQA. Inspired by GPT-3's power in knowledge retrieval and question answering, instead of using structured KBs as in previous work, we treat GPT-3 as an implicit and unstructured KB that can jointly acquire and process relevant knowledge. Specifically, we first convert the image into captions (or tags) that GPT-3 can understand, then adapt GPT-3 to solve the VQA task in a few-shot manner by providing only a few in-context VQA examples. We further boost performance by carefully investigating: (i) what text formats best describe the image content, and (ii) how in-context examples can be better selected and used. PICa unlocks the first use of GPT-3 for multimodal tasks. Using only 16 examples, PICa surpasses the supervised state of the art by an absolute +8.6 points on the OK-VQA dataset. We also benchmark PICa on VQAv2, where it shows decent few-shot performance.
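A rough sketch of the caption-based few-shot prompting idea is shown below; the prompt template, the example captions and questions, and the commented-out `query_gpt3` placeholder are assumptions for illustration, not the paper's exact prompt format or code.

```python
# Hedged sketch of few-shot prompt construction in the spirit of PICa:
# the template and the example data are illustrative assumptions.
def build_prompt(shots, test_caption, test_question):
    """shots: list of (caption, question, answer) in-context examples."""
    header = "Please answer the question according to the context.\n\n"
    blocks = [
        f"Context: {c}\nQuestion: {q}\nAnswer: {a}\n"
        for c, q, a in shots
    ]
    blocks.append(f"Context: {test_caption}\nQuestion: {test_question}\nAnswer:")
    return header + "\n".join(blocks)

shots = [
    ("A man riding a wave on a surfboard.", "What sport is this?", "surfing"),
]
prompt = build_prompt(shots, "A plate of pasta with tomato sauce.", "What cuisine is this?")
# answer = query_gpt3(prompt)   # placeholder for a GPT-3 completion call
```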
As an emerging technique for confidential computing, the trusted execution environment (TEE) has received a lot of attention. To better develop, deploy, and run secure applications on a TEE platform such as Intel's SGX, both academic and industrial teams have devoted much effort to developing reliable and convenient TEE containers. In this paper, we study the isolation strategies of 15 existing TEE containers designed to protect secure applications from potentially malicious operating systems (OS) or untrusted applications, using a semi-automatic approach that combines a feedback-guided analyzer with manual code review. Our analysis reveals the isolation protection each of these TEE containers enforces and their security weaknesses. We observe that none of the existing TEE containers fulfills the goals they set, due to various pitfalls in their design and implementation. We report the lessons learned from our study to guide the development of more secure containers, and further discuss the trend of TEE container designs. We also release our analyzer, which helps evaluate container middleware both from the enclave and from the kernel.
Mobile edge computing (MEC) integrated with multiple radio access technologies (RATs) is a promising technique for satisfying the growing low-latency computation demand of emerging intelligent Internet of Things (IoT) applications. Under the distributed MapReduce framework, this paper investigates joint RAT selection and transceiver design for over-the-air (OTA) aggregation of intermediate values (IVAs) in wireless multiuser MEC systems, while taking into account the energy budget constraint for local computing and IVA transmission at each wireless device (WD). We aim to minimize the weighted sum of the computation mean squared error (MSE) of the aggregated IVA at the RAT receivers, the WDs' IVA transmission cost, and the associated transmission time delay, which is a mixed-integer and non-convex problem. Based on the Lagrange duality method and primal decomposition, we develop a low-complexity algorithm by solving the WDs' RAT selection problem, the WDs' transmit coefficient optimization problem, and the aggregation beamforming problem. Extensive numerical results demonstrate the effectiveness and merit of the proposed algorithm compared with existing schemes.
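For reference, a generic over-the-air aggregation model and its computation MSE take the following form; the notation here is illustrative and not necessarily the paper's, which additionally involves multiple RATs, receive beamforming, and selection variables.

```latex
% Generic OTA aggregation of K unit-variance, zero-mean intermediate values s_k:
% WD k applies transmit coefficient b_k over channel h_k; the receiver scales by a.
\begin{align*}
  y &= \sum_{k=1}^{K} h_k b_k s_k + n, \qquad n \sim \mathcal{CN}(0,\sigma^2),\\
  \hat{s} &= a\, y, \qquad
  \mathrm{MSE} = \mathbb{E}\Big[\big|\hat{s} - \textstyle\sum_{k=1}^{K} s_k\big|^2\Big]
  = \sum_{k=1}^{K} \big| a\, h_k b_k - 1 \big|^2 + |a|^2 \sigma^2 .
\end{align*}
```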
This paper is concerned with the Tucker decomposition based low-rank tensor completion problem, which is about reconstructing a tensor $\mathcal{T}\in\mathbb{R}^{n\times n\times n}$ of small multilinear rank from partially observed entries. We study the convergence of the Riemannian gradient method for this problem. Guaranteed linear convergence in terms of the infinity norm has been established for this algorithm, provided the number of observed entries is essentially of the order $O(n^{3/2})$. The convergence analysis relies on the leave-one-out technique and the subspace projection structure within the algorithm. To the best of our knowledge, this is the first work to establish entrywise convergence of a non-convex algorithm for low-rank tensor completion via the Tucker decomposition.
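As a concrete but simplified illustration of this class of algorithms, the sketch below runs a projected-gradient ("impute and truncate") iteration with truncated HOSVD as the multilinear-rank projection; it is an assumption-laden stand-in, not the paper's exact Riemannian gradient method with tangent-space projection and retraction.

```python
# Simplified low-Tucker-rank completion: gradient step on the data-fit term
# followed by truncated-HOSVD rank projection. Illustrative only; not the
# paper's Riemannian algorithm.
import numpy as np

def unfold(x, mode):
    return np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)

def hosvd_truncate(x, ranks):
    """Approximate projection onto tensors of multilinear rank <= ranks."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(x, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = x
    for mode, u in enumerate(factors):   # compress: multiply by U^T along each mode
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    for mode, u in enumerate(factors):   # expand back: multiply by U along each mode
        core = np.moveaxis(np.tensordot(u, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core

def complete(obs, mask, ranks, step=1.0, iters=200):
    """obs: observed tensor (zeros off the sampling set); mask: boolean sampling set."""
    x = np.zeros_like(obs)
    for _ in range(iters):
        grad = mask * (x - obs)          # gradient of 0.5 * ||P_Omega(x - T)||_F^2
        x = hosvd_truncate(x - step * grad, ranks)
    return x
```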
Iterative self-consistent parallel imaging reconstruction (SPIRiT) is an effective self-calibrated reconstruction model for parallel magnetic resonance imaging (PMRI). The joint L1 norm of wavelet coefficients and joint total variation (TV) regularization terms have been incorporated into the SPIRiT model to improve reconstruction performance, and the simultaneous two-directional low-rankness (STDLR) in k-space data has been incorporated into SPIRiT to realize improved reconstruction. Recent methods have exploited the nonlocal self-similarity (NSS) of images by imposing nonlocal low-rankness of similar patches to achieve superior performance. To fully utilize both the NSS in magnetic resonance (MR) images and calibration consistency in the k-space domain, we propose a nonlocal low-rank (NLR)-SPIRiT model by incorporating NLR regularization into the SPIRiT model. We apply the weighted nuclear norm (WNN) as a surrogate of the rank, and employ the Nash equilibrium (NE) formulation and the alternating direction method of multipliers (ADMM) to efficiently solve the NLR-SPIRiT model. The experimental results demonstrate the superior performance of NLR-SPIRiT over state-of-the-art methods in terms of three objective metrics and visual comparison.
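The core NLR ingredient is a weighted singular value thresholding step applied to a matrix built by stacking grouped similar patches; the sketch below uses the common reweighting heuristic w_i = C/(sigma_i + eps), which may differ from the paper's exact weighting and solver details.

```python
# Hedged sketch of the weighted nuclear norm (WNN) proximal step on a matrix of
# grouped similar patches; the weighting rule is a common heuristic, not
# necessarily the paper's exact choice.
import numpy as np

def weighted_svt(patch_matrix, c=2.0, eps=1e-6):
    """Weighted singular value thresholding: larger singular values get smaller
    weights and are therefore shrunk less, preserving dominant structure."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    w = c / (s + eps)                   # smaller weight for larger (more informative) sigma
    s_shrunk = np.maximum(s - w, 0.0)   # soft-threshold each singular value by its weight
    return (u * s_shrunk) @ vt          # low-rank estimate of the patch group
```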
Recently, about five hundred fast radio bursts (FRBs) detected by the CHIME/FRB project have been reported. The vast amounts of data will make FRBs a promising low-redshift cosmological probe in the forthcoming years, and thus the issue of how many FRBs are needed for precise cosmological parameter estimation in different dark energy models should be investigated in detail. Different from the usually considered $w(z)$-parameterized models in the literature, in this work we investigate the holographic dark energy (HDE) model and the Ricci dark energy (RDE) model, which originate from the holographic principle of quantum gravity, using simulated localized FRB data as a cosmological probe for the first time. We show that the Hubble constant $H_0$ can be constrained to about 2% precision in the HDE model with the Macquart relation of FRBs by using 10000 accurately localized FRBs combined with the current CMB data, which is similar to the precision of the SH0ES value. Using 10000 localized FRBs combined with the CMB data can achieve about a 6% constraint on the dark-energy parameter $c$ in the HDE model, which is tighter than that from the current BAO data combined with CMB. We also study the combination of the FRB data with another low-redshift cosmological probe, i.e., gravitational wave (GW) standard siren data, with the purpose of measuring cosmological parameters independently of CMB. Although the parameter degeneracies inherent in FRB and in GW data are rather different, we find that more than 10000 FRBs are needed to effectively improve the constraints in the holographic dark energy models.
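For context, the Macquart relation referred to above links an FRB's extragalactic dispersion measure to its redshift through the background cosmology; its commonly quoted mean-IGM form is given below (notation may differ from the paper's).

```latex
% Mean IGM dispersion measure as a function of redshift (commonly quoted form):
\begin{equation*}
  \langle \mathrm{DM_{IGM}}(z) \rangle
  = \frac{3 c\, \Omega_b H_0 f_{\mathrm{IGM}}}{8 \pi G m_p}
    \int_0^z \frac{\chi(z')\,(1+z')}{E(z')}\, dz' ,
  \qquad E(z) = \frac{H(z)}{H_0},
\end{equation*}
% with f_IGM the baryon fraction residing in the IGM and chi(z) ~ 7/8 the ionized
% electron fraction; the dependence on H_0 and E(z) is what makes localized FRBs
% a cosmological probe.
```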
This paper presents and analyzes an immersed finite element (IFE) method for solving Stokes interface problems with a piecewise constant viscosity coefficient that has a jump across the interface. In the method, the triangulation does not need to fit the interface and the IFE spaces are constructed from the traditional $CR$-$P_0$ element with modifications near the interface according to the interface jump conditions. We prove that the IFE basis functions are unisolvent on arbitrary interface elements and the IFE spaces have the optimal approximation capabilities, although the proof is challenging due to the coupling of the velocity and the pressure. The stability and the optimal error estimates of the proposed IFE method are also derived rigorously. The constants in the error estimates are shown to be independent of the interface location relative to the triangulation. Numerical examples are provided to verify the theoretical results.
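For reference, the model Stokes interface problem with piecewise-constant viscosity typically takes the following form; the notation is illustrative, and some formulations state the stress jump with the symmetric strain $2\mu\varepsilon(\mathbf{u})$ instead of the full gradient used here.

```latex
% Model Stokes interface problem (standard statement, notation illustrative):
\begin{align*}
  -\nabla\cdot\big(\mu(\mathbf{x})\,\nabla\mathbf{u}\big) + \nabla p &= \mathbf{f}
      && \text{in } \Omega^- \cup \Omega^+, \\
  \nabla\cdot\mathbf{u} &= 0 && \text{in } \Omega, \\
  [\mathbf{u}]_\Gamma &= \mathbf{0}, \quad
  \big[(\mu\nabla\mathbf{u} - p\,\mathbf{I})\,\mathbf{n}\big]_\Gamma = \mathbf{0}
      && \text{across } \Gamma,
\end{align*}
% where mu = mu^{\pm} is constant on each subdomain Omega^{\pm} and [.]_Gamma
% denotes the jump across the interface Gamma.
```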
In this paper, we report an important finding for nonconforming immersed finite element (IFE) methods that use integral-value degrees of freedom for solving elliptic interface problems. We show that those IFE methods can only achieve suboptimal convergence rates (i.e., $O(h^{1/2})$ in the $H^1$ norm and $O(h)$ in the $L^2$ norm) if the tangential derivative of the exact solution and the jump of the coefficient are not zero on the interface. A nontrivial counterexample is also provided to support our theoretical analysis. To recover the optimal convergence rates, we develop a new nonconforming IFE method with additional terms locally on interface edges. The unisolvence of the IFE basis functions is proved on arbitrary triangles. Furthermore, we derive the optimal approximation capabilities of both the Crouzeix-Raviart and the rotated-$Q_1$ IFE spaces for interface problems with variable coefficients via a unified approach different from multipoint Taylor expansions. Finally, optimal error estimates in both the $H^1$ and $L^2$ norms are proved and confirmed by numerical experiments.
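The underlying elliptic interface model problem, written in its standard form (notation illustrative), is:

```latex
\begin{align*}
  -\nabla\cdot\big(\beta(\mathbf{x})\,\nabla u\big) &= f && \text{in } \Omega^- \cup \Omega^+,\\
  [u]_\Gamma &= 0, \qquad [\beta\,\nabla u\cdot\mathbf{n}]_\Gamma = 0 && \text{across } \Gamma,\\
  u &= 0 && \text{on } \partial\Omega,
\end{align*}
% with beta = beta^{\pm} on each subdomain; the suboptimal rates discussed above
% arise when the coefficient jump is nonzero and the tangential derivative of u
% does not vanish on Gamma.
```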