We show that the spectral norm of a random $n_1\times n_2\times \cdots \times n_K$ tensor (or higher-order array) scales as $O\left(\sqrt{(\sum_{k=1}^{K}n_k)\log(K)}\right)$ under some sub-Gaussian assumption on the entries. The proof is based on a covering number argument. Since the spectral norm is dual to the tensor nuclear norm (the tightest convex relaxation of the set of rank-one tensors), the bound implies that the convex relaxation yields sample complexity that is linear in (the sum of) the number of dimensions, which is much smaller than other recently proposed convex relaxations of tensor rank that use unfolding.
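To make the bound concrete, here is a minimal numpy sketch (not from the paper) that estimates the spectral norm of a Gaussian tensor by alternating power iteration with random restarts, a standard heuristic that only certifies a lower bound since the maximization is non-convex, and compares it to $\sqrt{(\sum_k n_k)\log(K)}$ with the absolute constant dropped; the dimensions, restart count, and iteration count are illustrative choices.

```python
import numpy as np

def spectral_norm_estimate(T, n_restarts=10, n_iters=50, seed=0):
    """Estimate max <T, u_1 (x) ... (x) u_K> over unit vectors u_k by
    alternating power iteration with random restarts (a heuristic
    lower bound: the maximization is non-convex)."""
    rng = np.random.default_rng(seed)
    K, dims = T.ndim, T.shape
    best = 0.0
    for _ in range(n_restarts):
        us = [rng.standard_normal(n) for n in dims]
        us = [u / np.linalg.norm(u) for u in us]
        for _ in range(n_iters):
            for k in range(K):
                v = T
                # contract away every mode except k, highest axis first
                # so the remaining axis indices stay valid
                for j in range(K - 1, -1, -1):
                    if j != k:
                        v = np.tensordot(v, us[j], axes=(j, 0))
                us[k] = v / np.linalg.norm(v)
        best = max(best, np.linalg.norm(v))  # value of the form at the current u's
    return best

dims = (20, 30, 40)
T = np.random.default_rng(1).standard_normal(dims)  # i.i.d. N(0,1): sub-Gaussian
est = spectral_norm_estimate(T)
bound = np.sqrt(sum(dims) * np.log(len(dims)))      # sqrt((sum_k n_k) log K), constant dropped
print(f"power-iteration estimate: {est:.2f}   bound shape: {bound:.2f}")
```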
Consider the classical supervised learning problem: we are given data $(y_i,{\boldsymbol x}_i)$, $i\le n$, with $y_i$ a response and ${\boldsymbol x}_i\in {\mathcal X}$ a covariate vector, and try to learn a model $f:{\mathcal X}\to{\mathbb R}$ to predict future responses.
We investigate the construction of early stopping rules in the nonparametric regression problem where iterative learning algorithms are used and the optimal iteration number is unknown. More precisely, we study the discrepancy principle, as well as …
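As a hedged illustration of the general idea (not this paper's algorithm or setting), the sketch below applies the plain discrepancy principle to gradient descent on a least-squares problem: iterate until the residual norm first drops to the noise level $\sigma\sqrt{n}$, assuming the noise level $\sigma$ is known; the learning rate and problem sizes are arbitrary choices.

```python
import numpy as np

def discrepancy_stop(X, y, sigma, lr=0.2, max_iters=10_000):
    """Gradient descent on 0.5/n * ||y - Xw||^2, halted by the
    discrepancy principle: stop the first time the residual norm
    ||y - X w_t|| falls to the noise level sigma * sqrt(n)."""
    n, d = X.shape
    w = np.zeros(d)
    threshold = sigma * np.sqrt(n)
    for t in range(max_iters):
        r = y - X @ w
        if np.linalg.norm(r) <= threshold:   # residual has reached the noise level
            return w, t
        w += lr * (X.T @ r) / n              # gradient step
    return w, max_iters

rng = np.random.default_rng(0)
n, d, sigma = 200, 50, 0.5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_true + sigma * rng.standard_normal(n)

w_hat, t_stop = discrepancy_stop(X, y, sigma)
print(f"stopped at t = {t_stop}, error ||w_hat - w_true|| = {np.linalg.norm(w_hat - w_true):.3f}")
```

Stopping when the data are fit down to, but not below, the noise level is what prevents the iterates from overfitting the noise, which is the role the optimal (unknown) iteration number plays in the abstract's setting.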
In data science, it is often required to estimate dependencies between different data sources. These dependencies are typically calculated using Pearson's correlation, distance correlation, and/or mutual information. However, none of these measures …
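For reference, here is a short numpy implementation of one of the named baselines, the sample distance correlation of Székely et al.; it is a standard construction, not whatever measure this abstract goes on to propose, and the example shows a dependence ($y = x^2$) that Pearson's correlation misses.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (Székely et al.).
    Unlike Pearson's r, its population version is zero only under
    independence."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)                      # pairwise distance matrices
    b = np.abs(y - y.T)
    # double-center each distance matrix
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                   # squared sample distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = x**2                                     # dependent, but Pearson r is near 0
print(f"Pearson r: {np.corrcoef(x, y)[0, 1]:+.3f}")
print(f"dCor:      {distance_correlation(x, y):.3f}")
```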
The classical setting of community detection consists of networks exhibiting a clustered structure. To model real systems more accurately, we consider a class of networks (i) whose edges may carry labels and (ii) which may lack a clustered structure.
Consider a random vector $\mathbf{y}=\mathbf{\Sigma}^{1/2}\mathbf{x}$, where the $p$ elements of the vector $\mathbf{x}$ are i.i.d. real-valued random variables with zero mean and finite fourth moment, and $\mathbf{\Sigma}^{1/2}$ is a deterministic $p\times p$ matrix…
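A small numpy sketch of this construction, with an illustrative AR(1)-type $\mathbf{\Sigma}$ and standardized exponential entries (choices of mine, not the paper's) that satisfy the zero-mean, finite-fourth-moment assumption; it contrasts the eigenvalue range of the sample covariance $\frac{1}{n}\mathbf{Y}\mathbf{Y}^T$ with that of $\mathbf{\Sigma}$.

```python
import numpy as np

# Model sketch: y = Sigma^{1/2} x, with x having i.i.d. zero-mean,
# unit-variance entries. Standardized exponential entries have a finite
# fourth moment but are not Gaussian; dimensions are illustrative.
rng = np.random.default_rng(0)
p, n = 100, 400

# an illustrative deterministic covariance: Sigma_ij = 0.5^{|i-j|}
idx = np.arange(p)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])
evals, evecs = np.linalg.eigh(Sigma)
Sigma_half = evecs @ np.diag(np.sqrt(evals)) @ evecs.T  # symmetric square root

X = rng.exponential(1.0, size=(p, n)) - 1.0   # i.i.d., mean 0, variance 1
Y = Sigma_half @ X                            # n samples of y, stored as columns
S = Y @ Y.T / n                               # sample covariance matrix

print("extreme eigenvalues of Sigma:", np.linalg.eigvalsh(Sigma)[[0, -1]].round(2))
print("extreme eigenvalues of S:    ", np.linalg.eigvalsh(S)[[0, -1]].round(2))
```

The spread of the sample eigenvalues beyond those of $\mathbf{\Sigma}$ when $p/n$ is not small is the high-dimensional effect that motivates studying such models.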