
Non-trivial solutions to a linear equation in integers

Added by Boris Bukh
Publication date: 2007
Authors: Boris Bukh
Language: English





For $k \ge 3$ let $A \subseteq [1,N]$ be a set not containing a solution to $a_1 x_1 + \cdots + a_k x_k = a_1 x_{k+1} + \cdots + a_k x_{2k}$ in distinct integers. We prove that there is an $\epsilon > 0$ depending on the coefficients of the equation such that every such $A$ has $O(N^{1/2-\epsilon})$ elements. This answers a question of I. Ruzsa.
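To make the forbidden pattern concrete, here is a minimal brute-force sketch (not from the paper; the function name, interface, and example set are hypothetical) that checks whether a given set $A$ contains a solution of the equation in $2k$ distinct integers.

# Minimal illustrative sketch: given coefficients a_1, ..., a_k and a set A,
# search for 2k distinct elements x_1, ..., x_{2k} of A satisfying
# a_1 x_1 + ... + a_k x_k = a_1 x_{k+1} + ... + a_k x_{2k}.
from itertools import permutations

def has_nontrivial_solution(coeffs, A):
    k = len(coeffs)
    # Try every assignment of 2k distinct elements of A to x_1, ..., x_{2k}.
    for xs in permutations(sorted(A), 2 * k):
        lhs = sum(a * x for a, x in zip(coeffs, xs[:k]))
        rhs = sum(a * x for a, x in zip(coeffs, xs[k:]))
        if lhs == rhs:
            return True
    return False

# Example: for k = 3 and coefficients (1, 1, 1) this detects a set whose
# elements split into two triples with equal sums, e.g. 1+2+13 = 3+5+8.
print(has_nontrivial_solution((1, 1, 1), {1, 2, 3, 5, 8, 13}))  # True

The exhaustive search is exponential in $k$ and only meant to illustrate the definition; the paper's result bounds the size of sets avoiding such solutions, not an algorithm for finding them.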




Related research

We give a classification of the cuspidal automorphic representations attached to rational elliptic curves with a non-trivial torsion point of odd order. Such elliptic curves are parameterizable, and in this paper, we find the necessary and sufficient conditions on the parameters to determine when split or non-split multiplicative reduction occurs. Using this and the known results on when additive reduction occurs for these parametrized curves, we classify the automorphic representations in terms of the parameters.
Daniele Garrisi, 2010
We consider a system of two coupled non-linear Klein-Gordon equations. We show the existence of standing waves solutions and the existence of a Lyapunov function for the ground state.
We study the time analyticity of ancient solutions to heat equations on graphs. Analogous to Dong and Zhang [DZ19], we prove the time analyticity of ancient solutions on graphs under some sharp growth condition.
Feature selection and feature transformation, the two main ways to reduce dimensionality, are often treated separately. In this paper, a feature selection method is proposed by combining the popular transformation-based dimensionality reduction method Linear Discriminant Analysis (LDA) with sparsity regularization. We impose row sparsity on the transformation matrix of LDA through $\ell_{2,1}$-norm regularization to achieve feature selection, and the resulting formulation selects the most discriminative features while removing redundant ones simultaneously. The formulation is extended to the $\ell_{2,p}$-norm regularized case, which is more likely to offer better sparsity when $0 < p < 1$ and is thus a better approximation to the feature selection problem. An efficient algorithm is developed to solve the $\ell_{2,p}$-norm based optimization problem, and it is proved that the algorithm converges when $0 < p \le 2$. Systematic experiments are conducted to evaluate the proposed method. Promising experimental results on various types of real-world data sets demonstrate the effectiveness of our algorithm.
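As a rough illustration of how row sparsity yields feature selection, the following minimal Python sketch (assuming a NumPy environment; it is not the authors' optimization algorithm, and the random matrix W is only a stand-in for a learned LDA transformation) computes the $\ell_{2,p}$ norm of a transformation matrix and ranks features by the $\ell_2$ norms of its rows.

# Illustrative sketch of the row-sparsity idea behind l_{2,p}-regularized LDA:
# rows of the transformation matrix W that the regularizer drives toward zero
# correspond to features that are discarded.
import numpy as np

def l2p_norm(W, p=1.0):
    row_norms = np.linalg.norm(W, axis=1)        # l_2 norm of each row
    return (row_norms ** p).sum() ** (1.0 / p)   # l_{2,p} norm of the matrix

def select_features(W, num_features):
    # Rank features by the l_2 norm of their row in W and keep the largest.
    row_norms = np.linalg.norm(W, axis=1)
    return np.argsort(row_norms)[::-1][:num_features]

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))    # 10 original features, 3 projected dimensions
print(l2p_norm(W, p=1.0))
print(select_features(W, num_features=4))

In the paper's setting W is obtained by jointly optimizing the LDA objective with the $\ell_{2,p}$ penalty; the sketch only shows how a row-sparse W translates into a concrete feature subset.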