
On generalized Fermat Diophantine functional and partial differential equations in $\mathbf{C}^2$

Published by: Qi Han
Publication date: 2021
Language: English





In this paper, we characterize meromorphic solutions $f(z_1,z_2),g(z_1,z_2)$ to the generalized Fermat Diophantine functional equations $h(z_1,z_2)f^m+k(z_1,z_2)g^n=1$ in $\mathbf{C}^2$ for integers $m,n\geq 2$ and nonzero meromorphic functions $h(z_1,z_2),k(z_1,z_2)$ in $\mathbf{C}^2$. Meromorphic solutions to associated partial differential equations are also studied.
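As a point of reference (a classical special case, not taken from the paper): when $h\equiv k\equiv 1$ and $m=n=2$, the entire solutions of $f^2+g^2=1$ in $\mathbf{C}^2$ are precisely $f=\cos\varphi$, $g=\sin\varphi$ for an entire function $\varphi(z_1,z_2)$, since $f+ig$ and $f-ig$ are zero-free entire functions with product $1$ on the simply connected domain $\mathbf{C}^2$. For instance, $f(z_1,z_2)=\cos(z_1+z_2)$ and $g(z_1,z_2)=\sin(z_1+z_2)$ solve the equation.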


Read also

The concept of moment differentiation is extended to the class of moment summable functions, giving rise to moment differential properties. The main result leans on accurate upper estimates for the integral representation of the moment derivatives of functions under exponential-like growth at infinity, and on an appropriate deformation of the integration paths. The theory is applied to obtain summability results for a certain family of generalized linear moment partial differential equations with variable coefficients.
We investigate the existence of non-trivial holomorphic and meromorphic solutions of Fermat functional equations over an open Riemann surface $S$. When $S$ is hyperbolic, we prove that every $k$-term Fermat functional equation admits non-trivial holomorphic and meromorphic solutions. When $S$ is a general open Riemann surface, we prove that every non-trivial holomorphic or meromorphic solution satisfies a growth condition, provided that the power exponents of the equations are larger than certain positive integers.
Angelos Koutsianas (2018)
In this paper, we prove that the only primitive solutions of the equation $a^2+3b^6=c^n$ for $n\geq 3$ are $(a,b,c,n)=(\pm 47,\pm 2,\pm 7,4)$. Our proof is based on the modularity of Galois representations of $\mathbb{Q}$-curves and the work of Ellenberg for big values of $n$, and a variety of techniques for small $n$.
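As a quick sanity check of the stated solution: $47^2+3\cdot 2^6=2209+192=2401=7^4$.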
D.A. Bignamini, S. Ferrari (2020)
Let $\mathcal{X}$ be a real separable Hilbert space. Let $Q$ be a linear, self-adjoint, positive, trace class operator on $\mathcal{X}$, let $F:\mathcal{X}\rightarrow\mathcal{X}$ be a (smooth enough) function and let $\{W(t)\}_{t\geq 0}$ be an $\mathcal{X}$-valued cylindrical Wiener process. For $\alpha\in [0,1/2]$ we consider the operator $A:=-(1/2)Q^{2\alpha-1}:Q^{1-2\alpha}(\mathcal{X})\subseteq\mathcal{X}\rightarrow\mathcal{X}$. We are interested in the mild solution $X(t,x)$ of the semilinear stochastic partial differential equation \begin{gather}\label{Tropical} \left\{\begin{array}{ll} dX(t,x)=\big(AX(t,x)+F(X(t,x))\big)dt+ Q^{\alpha}dW(t), & t>0;\\ X(0,x)=x\in \mathcal{X}, \end{array}\right. \end{gather} and in its associated transition semigroup \begin{align} P(t)\varphi(x):=E[\varphi(X(t,x))], \qquad \varphi\in B_b(\mathcal{X}),\ t\geq 0,\ x\in \mathcal{X}; \end{align} where $B_b(\mathcal{X})$ is the space of the real-valued, bounded and Borel measurable functions on $\mathcal{X}$. In this paper we study the behavior of the semigroup $P(t)$ in the space $L^2(\mathcal{X},\nu)$, where $\nu$ is the unique invariant probability measure of \eqref{Tropical}, when $F$ is dissipative and has polynomial growth. Then we prove the logarithmic Sobolev and the Poincaré inequalities and we study the maximal Sobolev regularity for the stationary equation \[\lambda u-N_2 u=f,\qquad \lambda>0,\ f\in L^2(\mathcal{X},\nu);\] where $N_2$ is the infinitesimal generator of $P(t)$ in $L^2(\mathcal{X},\nu)$.
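For orientation, the semigroup formula $P(t)\varphi(x)=E[\varphi(X(t,x))]$ can be approximated numerically. The following is a minimal finite-dimensional sketch, not taken from the paper: the dimension, the eigenvalues of $Q$, the drift $F$, and the test function $\varphi$ are all illustrative assumptions, and the SPDE is replaced by a diagonal toy system integrated by Euler-Maruyama with Monte Carlo averaging.

# Toy finite-dimensional analogue of dX = (A X + F(X)) dt + Q^alpha dW,
# with a Monte Carlo estimate of P(t)phi(x) = E[phi(X(t,x))].
import numpy as np

d = 4                                    # truncation dimension (assumption)
alpha = 0.25                             # alpha in [0, 1/2]
q = np.array([1.0, 0.5, 0.25, 0.125])    # eigenvalues of a trace-class Q (assumption)
A = -0.5 * q ** (2 * alpha - 1)          # A = -(1/2) Q^(2*alpha - 1), diagonal here

def F(x):                                # dissipative drift with polynomial growth (toy)
    return -x ** 3

def phi(x):                              # bounded Borel test function (toy)
    return np.exp(-np.sum(x ** 2, axis=-1))

def estimate_Pt_phi(x0, t=1.0, n_steps=200, n_paths=5000, seed=0):
    """Monte Carlo estimate of P(t)phi(x0) = E[phi(X(t, x0))] via Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    X = np.tile(x0, (n_paths, 1))
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, d))
        X = X + (A * X + F(X)) * dt + (q ** alpha) * dW
    return phi(X).mean()

print(estimate_Pt_phi(np.ones(d)))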
In this paper we establish a connection between non-convex optimization methods for training deep neural networks and nonlinear partial differential equations (PDEs). Relaxation techniques arising in statistical physics, which have already been used successfully in this context, are reinterpreted as solutions of a viscous Hamilton-Jacobi PDE. Using a stochastic control interpretation, we prove that the modified algorithm performs better in expectation than stochastic gradient descent. Well-known PDE regularity results allow us to analyze the geometry of the relaxed energy landscape, confirming empirical evidence. The PDE is derived from a stochastic homogenization problem, which arises in the implementation of the algorithm. The algorithms scale well in practice and can effectively tackle the high dimensionality of modern neural networks.
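For intuition only, here is a toy sketch, not the algorithm analyzed in the paper: the relaxation idea can be mimicked in finite dimensions by descending a Gaussian-smoothed version $u_\gamma(x)=E[f(x+\gamma Z)]$ of a non-convex loss $f$, with the smoothed gradient estimated by Monte Carlo. The loss $f$, the smoothing width $\gamma$, and the step size below are illustrative assumptions.

# Gradient descent on a Gaussian-smoothed ("relaxed") toy loss.
import numpy as np

def f(x):                                  # toy non-convex loss (assumption)
    return np.sum(np.sin(3.0 * x) + 0.5 * x ** 2, axis=-1)

def grad_f(x):                             # its exact gradient
    return 3.0 * np.cos(3.0 * x) + x

def smoothed_grad(x, gamma=0.3, n_samples=256, rng=None):
    # Monte Carlo estimate of grad E[f(x + gamma Z)] = E[grad f(x + gamma Z)], Z ~ N(0, I)
    rng = rng or np.random.default_rng(0)
    Z = rng.normal(size=(n_samples, x.size))
    return grad_f(x + gamma * Z).mean(axis=0)

x = np.full(2, 2.0)
for _ in range(300):                       # plain gradient descent on the smoothed loss
    x = x - 0.05 * smoothed_grad(x)
print(x, f(x))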