
Perron-based algorithms for the multilinear PageRank

Added by Federico G. Poloni
Publication date: 2017
Language: English





We consider the multilinear PageRank problem studied in [Gleich, Lim and Yu, Multilinear PageRank, 2015], which is a system of quadratic equations with stochasticity and nonnegativity constraints. We use the theory of quadratic vector equations to prove several properties of its solutions and suggest new numerical algorithms. In particular, we prove the existence of a certain minimal solution, which does not always coincide with the stochastic one that is required by the problem. We use an interpretation of the solution as a Perron eigenvector to devise new fixed-point algorithms for its computation, and pair them with a homotopy continuation strategy. The resulting numerical method is more reliable than the existing alternatives, being able to solve a larger number of problems.
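For concreteness, the basic fixed-point iteration for this problem, $x = \alpha R(x \otimes x) + (1-\alpha)v$ with $R$ the $n \times n^2$ flattening of the transition tensor, can be sketched in Python as below. This is the generic baseline iteration from the literature, not the Perron-based scheme proposed in the paper; all names are illustrative.

```python
import numpy as np

def multilinear_pagerank(R, v, alpha, tol=1e-12, max_iter=100_000):
    """Basic fixed-point iteration for x = alpha*R(x kron x) + (1-alpha)*v.

    R     : (n, n*n) column-stochastic flattening of the transition tensor.
    v     : (n,) stochastic teleportation vector.
    alpha : damping parameter in [0, 1).
    A baseline sketch only; the paper replaces this with Perron-eigenvector
    iterations combined with a homotopy continuation strategy.
    """
    x = v.copy()
    for _ in range(max_iter):
        x_new = alpha * (R @ np.kron(x, x)) + (1 - alpha) * v
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```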



Related research

The multilinear PageRank model [Gleich, Lim and Yu, 2015] is a tensor-based generalization of the PageRank model. Its computation requires solving a system of polynomial equations that contains a parameter $\alpha \in [0,1)$. For $\alpha \approx 1$, this computation remains a challenging problem, especially since the solution may be non-unique. Extrapolation strategies that start from smaller values of $\alpha$ and follow the solution by slowly increasing this parameter have been suggested; however, there are known cases where these strategies fail, because a globally continuous solution curve cannot be defined as a function of $\alpha$. In this paper, we improve on this idea by employing a predictor-corrector continuation algorithm based on a more general representation of the solutions as a curve in $\mathbb{R}^{n+1}$. We prove several global properties of this curve that ensure the good behavior of the algorithm, and we show in our numerical experiments that this method is significantly more reliable than the existing alternatives.
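As a sketch of the general predictor-corrector idea (assuming a residual $F(x,\alpha)=0$ whose zero set is the solution curve), the Python fragment below follows the curve by parametrizing with $\alpha$ directly. That parametrization is exactly what can fail at turning points; the paper's method instead tracks the curve in $\mathbb{R}^{n+1}$. Function names are placeholders.

```python
import numpy as np

def continuation_path(F, J, x0, alphas, tol=1e-12, newton_iters=50):
    """Naive predictor-corrector continuation in the parameter alpha.

    F(x, a) : residual whose zero set is the solution curve.
    J(x, a) : Jacobian of F with respect to x.
    x0      : solution at alphas[0].
    Parametrizing by alpha is what breaks down near turning points;
    a pseudo-arclength formulation in R^{n+1} avoids this.
    """
    x, path = x0.copy(), [(alphas[0], x0.copy())]
    for a in alphas[1:]:
        # trivial predictor: reuse the previous solution as the initial guess
        for _ in range(newton_iters):          # Newton corrector
            step = np.linalg.solve(J(x, a), -F(x, a))
            x += step
            if np.linalg.norm(step) < tol:
                break
        path.append((a, x.copy()))
    return path
```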
Given a graph $G$, a source node $s$ and a target node $t$, the personalized PageRank (PPR) of $t$ with respect to $s$ is the probability that a random walk starting from $s$ terminates at $t$. An important variant of the PPR query is single-source PPR (SSPPR), which enumerates all nodes in $G$ and returns the top-$k$ nodes with the highest PPR values with respect to a given source $s$. PPR in general, and SSPPR in particular, have important applications in web search and social networks, e.g., in Twitter's Who-To-Follow recommendation service. However, PPR computation is known to be expensive on large graphs and resistant to indexing. Consequently, previous solutions either use heuristics, which do not guarantee result quality, or rely on the strong computing power of modern data centers, which is costly. Motivated by this, we propose effective index-free and index-based algorithms for approximate PPR processing, with rigorous guarantees on result quality. We first present FORA, an approximate SSPPR solution that combines two existing methods, Forward Push (which is fast but does not guarantee quality) and Monte Carlo Random Walk (which is accurate but slow), in a simple yet non-trivial way, leading to both high accuracy and efficiency. Further, FORA includes a simple and effective indexing scheme, as well as a module for top-$k$ selection with high pruning power. Extensive experiments demonstrate that the proposed solutions are orders of magnitude more efficient than their respective competitors. Notably, on a billion-edge Twitter dataset, FORA answers a top-500 approximate SSPPR query within 1 second, using a single commodity server.
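To illustrate the Forward Push ingredient, here is a minimal, schematic Python version of the local-push procedure with its reserve/residue bookkeeping. FORA's key step of finishing the remaining residues with Monte Carlo random walks is omitted, and the threshold rule and names are assumptions rather than the paper's exact pseudocode.

```python
from collections import defaultdict

def forward_push(graph, s, alpha, r_max):
    """Local Forward Push for personalized PageRank from source s.

    graph : dict mapping node -> list of out-neighbors.
    alpha : restart probability; r_max : residue threshold per out-degree.
    Returns (reserve, residue): settled PPR mass and leftover residues.
    In FORA the residues are then refined with Monte Carlo random walks;
    dangling nodes are ignored in this sketch.
    """
    reserve = defaultdict(float)   # pi-hat(u): probability mass settled at u
    residue = defaultdict(float)   # r(u): mass still waiting to be pushed
    residue[s] = 1.0
    frontier = [s]
    while frontier:
        u = frontier.pop()
        out = graph.get(u, [])
        if not out or residue[u] <= r_max * len(out):
            continue                        # below threshold (or dangling)
        r_u, residue[u] = residue[u], 0.0
        reserve[u] += alpha * r_u           # settle a fraction at u
        share = (1.0 - alpha) * r_u / len(out)
        for w in out:                       # spread the rest to neighbors
            residue[w] += share
            if residue[w] > r_max * max(len(graph.get(w, [])), 1):
                frontier.append(w)
    return reserve, residue
```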
Reduced model spaces, such as reduced basis and polynomial chaos, are linear spaces $V_n$ of finite dimension $n$ which are designed for the efficient approximation of families of parametrized PDEs in a Hilbert space $V$. The manifold $\mathcal{M}$ that gathers the solutions of the PDE for all admissible parameter values is globally approximated by the space $V_n$ with some controlled accuracy $\epsilon_n$, which is typically much smaller than when using standard approximation spaces of the same dimension such as finite elements. Reduced model spaces have also been proposed in [13] as a vehicle to design a simple linear recovery algorithm of the state $u \in \mathcal{M}$ corresponding to a particular solution when the values of the parameters are unknown but a set of data is given by $m$ linear measurements of the state. The measurements are of the form $\ell_j(u)$, $j=1,\dots,m$, where the $\ell_j$ are linear functionals on $V$. The analysis of this approach in [2] shows that the recovery error is bounded by $\mu_n \epsilon_n$, where $\mu_n = \mu(V_n, W)$ is the inverse of an inf-sup constant that describes the angle between $V_n$ and the space $W$ spanned by the Riesz representers of $(\ell_1,\dots,\ell_m)$. A reduced model space which is efficient for approximation might thus be ineffective for recovery if $\mu_n$ is large or infinite. In this paper, we discuss the existence and construction of an optimal reduced model space for this recovery method, and we extend our search to affine spaces. Our basic observation is that this problem is equivalent to the search for an optimal affine algorithm for the recovery of $\mathcal{M}$ in the worst-case error sense. This allows us to perform our search by a convex optimization procedure. Numerical tests illustrate that the reduced model spaces constructed from our approach perform better than the classical reduced basis spaces.
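In a discretized setting the recovery step itself reduces to a small least-squares problem: fit the reduced coefficients to the $m$ measurements. A minimal sketch under that assumption follows; the paper's contribution lies in choosing the (affine) reduced space so that the stability constant $\mu_n$ stays small, which this sketch takes as given.

```python
import numpy as np

def recover_state(L, B, y):
    """Least-squares recovery of a state from m linear measurements.

    L : (m, N) measurement matrix (the functionals ell_j on a discrete basis).
    B : (N, n) basis of the reduced space V_n.
    y : (m,) observed data ell_j(u).
    Returns the reduced coefficients c and the recovered state B @ c.
    A minimal sketch; stability hinges on mu_n = mu(V_n, W) being small,
    which is what the construction discussed above optimizes.
    """
    c, *_ = np.linalg.lstsq(L @ B, y, rcond=None)
    return c, B @ c
```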
Following criticisms against the journal Impact Factor, new journal influence scores have been developed, such as the Eigenfactor or the Prestige Scimago Journal Rank. They are based on PageRank-type algorithms on the cross-citation transition matrix of the citing-cited network. The PageRank algorithm performs a smoothing of the transition matrix, combining a random walk on the data network and a teleportation to all possible nodes with fixed probabilities (the damping factor being $\alpha = 0.85$). We reinterpret this smoothing matrix as the mean of a posterior distribution of a Dirichlet-multinomial model in an empirical Bayes perspective. We suggest a simple yet efficient way to make a clear distinction between structural and sampling zeroes. This allows us to contrast cases with self-citations included or excluded, to avoid an overvalued-journal bias. We estimate the model parameters by maximizing the marginal likelihood with a Majorize-Minimize algorithm. The procedure ends up with a score similar to the PageRank ones, but with a damping factor depending on each journal. The procedures are illustrated with an example about cross-citations among 47 statistical journals studied by Varin et al. (2016).
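Schematically, a score with a journal-dependent damping factor is a small modification of the PageRank power iteration: replace the scalar $\alpha$ with a per-journal vector $d$ and teleport the leftover mass uniformly. The sketch below only illustrates the shape of the resulting score, not the empirical Bayes estimation of $d$; all names are illustrative.

```python
import numpy as np

def pagerank_journal_damping(P, d, tol=1e-12, max_iter=10_000):
    """PageRank-type score with a per-journal damping factor.

    P : (n, n) column-stochastic cross-citation transition matrix.
    d : (n,) damping factors in [0, 1); d = 0.85 * np.ones(n) recovers
        the classical PageRank smoothing. Illustrative only: the paper
        estimates the smoothing from a Dirichlet-multinomial model.
    """
    n = P.shape[0]
    v = np.full(n, 1.0 / n)           # uniform teleportation
    x = v.copy()
    for _ in range(max_iter):
        damped = d * x                # each journal keeps d_j of its mass
        x_new = P @ damped + (1.0 - damped.sum()) * v
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x
```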
Wenzhong Zhang, Wei Cai (2020)
In this paper, we propose forward and backward stochastic differential equation (FBSDE) based deep neural network (DNN) learning algorithms for the solution of high-dimensional quasilinear parabolic partial differential equations (PDEs), which are related to the FBSDEs by the Pardoux-Peng theory. The algorithms rely on a learning process that minimizes the pathwise difference between two discrete stochastic processes, defined by the time discretization of the FBSDEs and the DNN representation of the PDE solutions, respectively. The proposed algorithms are shown to generate DNN solutions for a 100-dimensional Black-Scholes-Barenblatt equation that are accurate in a finite region of the solution space, and to have a convergence rate similar to that of the Euler-Maruyama discretization used for the FBSDEs. As a result, a Richardson extrapolation technique over time discretizations can be used to enhance the accuracy of the DNN solutions. For time-oscillatory solutions, a multiscale DNN is shown to improve the performance of the FBSDE DNN for high frequencies.
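The core loop of such an FBSDE-based learner can be sketched as follows: simulate the forward process with Euler-Maruyama, propagate $Y$ using the driver and a network for $Z$, and penalize the mismatch with the terminal condition. This is a deep-BSDE-style sketch in PyTorch with illustrative names and a diagonal diffusion, not the authors' exact scheme, which minimizes a pathwise difference between two discretized processes.

```python
import torch

def fbsde_loss(net_y0, net_z, b, sigma, f, g, x0, n_steps, dt, n_paths):
    """One pathwise loss evaluation for an FBSDE-based PDE solver.

    net_y0 : network mapping x -> Y_0 = u(0, x), output shape (n_paths, 1).
    net_z  : network mapping (t, x) -> Z_t, output shape (n_paths, d).
    b, sigma : drift and (diagonal) diffusion of the forward SDE.
    f      : driver of the backward equation; g : terminal condition.
    x0     : (1, d) initial point. All names are illustrative.
    """
    x = x0.repeat(n_paths, 1)                      # forward state X_t
    y = net_y0(x)                                  # backward state Y_t
    for k in range(n_steps):
        t = torch.full((n_paths, 1), k * dt)
        dw = torch.randn(n_paths, x.shape[1]) * dt ** 0.5
        z = net_z(torch.cat([t, x], dim=1))
        # Euler-Maruyama step for both processes, driven by the same dw
        y = y - f(t, x, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
        x = x + b(t, x) * dt + sigma(t, x) * dw
    return ((y - g(x)) ** 2).mean()                # terminal mismatch
```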
