
Computing a Solution of Feigenbaum's Functional Equation in Polynomial Time

Published by: Peter Hertling
Publication date: 2014
Research field: Informatics Engineering
Paper language: English
Author: Peter Hertling





Lanford has shown that Feigenbaum's functional equation has an analytic solution. We show that this solution is a polynomial time computable function. This implies in particular that the so-called first Feigenbaum constant is a polynomial time computable real number.
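For context, the equation is usually written in the Feigenbaum–Cvitanović form; the normalization below is the standard one and is assumed here rather than quoted from the paper. One seeks an even, analytic function $g$ on $[-1,1]$ satisfying

    $$g(x) = -\alpha\, g\bigl(g(x/\alpha)\bigr), \qquad g(0) = 1, \qquad \alpha = -1/g(1).$$

Lanford's theorem provides such a solution $g$, and the Feigenbaum constants are derived from it; the result above says that $g$ is polynomial time computable in the usual sense of computable analysis, i.e. an approximation with error $2^{-n}$ can be produced in time polynomial in $n$.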




Read also

Bernd R. Schuh (2010)
Whether the satisfiability of any formula F of propositional calculus can be determined in polynomial time is an open question. I propose a simple procedure based on some real-world mechanisms to tackle this problem. The main result is a blueprint for a machine that can test any formula in conjunctive normal form (CNF) for satisfiability in linear time. The device uses light and some electrochemical properties to function. It adapts itself to the scope of the problem without growing exponentially in mass with the size of the formula. It requires infinite precision in its components instead.
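For reference (and not as a model of the optical device described above), a conventional brute-force CNF satisfiability check, exponential-time unlike the claimed machine, might look like the following sketch; the clause and variable encodings are illustrative choices.

    from itertools import product

    def is_satisfiable(cnf, n_vars):
        """Naive reference check. cnf is a list of clauses; each clause is a list
        of nonzero ints, where i means variable i and -i means its negation."""
        for bits in product([False, True], repeat=n_vars):
            # every clause must contain at least one literal that evaluates to True
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in cnf):
                return True
        return False

    # (x1 OR NOT x2) AND (x2 OR x3) is satisfiable, e.g. by x1 = x2 = x3 = True
    print(is_satisfiable([[1, -2], [2, 3]], 3))  # prints True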
We give the first dimension-efficient algorithms for learning Rectified Linear Units (ReLUs), which are functions of the form $\mathbf{x} \mapsto \max(0, \mathbf{w} \cdot \mathbf{x})$ with $\mathbf{w} \in \mathbb{S}^{n-1}$. Our algorithm works in the challenging Reliable Agnostic learning model of Kalai, Kanade, and Mansour (2009) where the learner is given access to a distribution $\mathcal{D}$ on labeled examples but the labeling may be arbitrary. We construct a hypothesis that simultaneously minimizes the false-positive rate and the loss on inputs given positive labels by $\mathcal{D}$, for any convex, bounded, and Lipschitz loss function. The algorithm runs in polynomial time (in $n$) with respect to any distribution on $\mathbb{S}^{n-1}$ (the unit sphere in $n$ dimensions) and for any error parameter $\epsilon = \Omega(1/\log n)$ (this yields a PTAS for a question raised by F. Bach on the complexity of maximizing ReLUs). These results are in contrast to known efficient algorithms for reliably learning linear threshold functions, where $\epsilon$ must be $\Omega(1)$ and strong assumptions are required on the marginal distribution. We can compose our results to obtain the first set of efficient algorithms for learning constant-depth networks of ReLUs. Our techniques combine kernel methods and polynomial approximations with a dual-loss approach to convex programming. As a byproduct we obtain a number of applications including the first set of efficient algorithms for convex piecewise-linear fitting and the first efficient algorithms for noisy polynomial reconstruction of low-weight polynomials on the unit sphere.
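As a rough sketch of the objects involved (the hypothesis class and the two criteria of the reliable agnostic model, not the authors' algorithm), with absolute error chosen arbitrarily as an example of a Lipschitz loss:

    import numpy as np

    def relu_unit(w, X):
        """A ReLU hypothesis x -> max(0, w . x) with w normalized onto the unit sphere."""
        w = w / np.linalg.norm(w)
        return np.maximum(0.0, X @ w)

    def reliable_criteria(w, X, y):
        """The two quantities traded off in the abstract: the false-positive rate on
        negatively labeled points, and a loss on positively labeled points."""
        pred = relu_unit(w, X)
        neg, pos = (y <= 0), (y > 0)
        fp_rate = float(np.mean(pred[neg] > 0)) if neg.any() else 0.0
        pos_loss = float(np.mean(np.abs(pred[pos] - y[pos]))) if pos.any() else 0.0
        return fp_rate, pos_loss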
The Lorenz attractor was introduced in 1963 by E. N. Lorenz as one of the first examples of \emph{strange attractors}. However, Lorenz's research was mainly based on (non-rigorous) numerical simulations and, until recently, the proof of the existence of the Lorenz attractor remained elusive. To address that problem some authors introduced geometric Lorenz models and proved that geometric Lorenz models have a strange attractor. In 2002 it was shown that the original Lorenz model behaves like a geometric Lorenz model and thus has a strange attractor. In this paper we show that geometric Lorenz attractors are computable, as well as their physical measures.
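For reference, the system in question is the classical Lorenz system with the standard parameters $\sigma=10$, $\rho=28$, $\beta=8/3$; the plain numerical integration sketched below is exactly the kind of non-rigorous simulation mentioned above, not the computability argument of the paper.

    def lorenz_step(state, dt=1e-3, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One explicit Euler step of dx/dt = sigma(y-x), dy/dt = x(rho-z)-y, dz/dt = xy - beta z."""
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    state = (1.0, 1.0, 1.0)
    for _ in range(100_000):      # the trajectory settles onto the attractor
        state = lorenz_step(state)
    print(state)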
The $2$-closure $\overline{G}$ of a permutation group $G$ on $\Omega$ is defined to be the largest permutation group on $\Omega$ having the same orbits on $\Omega\times\Omega$ as $G$. It is proved that if $G$ is supersolvable, then $\overline{G}$ can be found in polynomial time in $|\Omega|$. As a byproduct of our technique, it is shown that the composition factors of $\overline{G}$ are cyclic or alternating of prime degree.
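To illustrate the definition only (a brute-force search that is exponential in $|\Omega|$ and unrelated to the polynomial-time method announced above): compute the orbits of $G$ on ordered pairs, then keep every permutation preserving them.

    from itertools import permutations

    def two_closure(gens, n):
        """2-closure of the group generated by gens (each a tuple of images of 0..n-1):
        the largest group of permutations of 0..n-1 with the same orbits on ordered pairs."""
        # close the generators under composition to obtain the whole group G
        group, frontier = {tuple(range(n))}, set(gens)
        while frontier:
            group |= frontier
            frontier = {tuple(g[h[i]] for i in range(n))
                        for g in frontier for h in gens} - group
        # label every ordered pair by its orbit under G
        orbit = {(a, b): frozenset((g[a], g[b]) for g in group)
                 for a in range(n) for b in range(n)}
        # keep every permutation that preserves each pair-orbit
        return [p for p in permutations(range(n))
                if all(orbit[p[a], p[b]] == orbit[a, b]
                       for a in range(n) for b in range(n))]

    # example: G generated by the 4-cycle (0 1 2 3); here the 2-closure is G itself
    print(len(two_closure([(1, 2, 3, 0)], 4)))  # prints 4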
We give an $n^{O(\log\log n)}$-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over $\{\pm 1\}^n$. Even in the realizable setting, the previous fastest runtime was $n^{O(\log n)}$, a consequence of a classic algorithm of Ehrenfeucht and Haussler. Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent known lower bounds against these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be pruned so that every variable in the resulting tree is influential.
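To make "influential variable" concrete (this is the standard definition of influence under the uniform distribution, not the authors' pruning procedure):

    from itertools import product

    def influences(f, n):
        """Inf_i(f) = Pr_x[ f(x) != f(x with coordinate i flipped) ] for f on {-1,+1}^n,
        with x uniform; computed here by exhaustive enumeration."""
        points = list(product([-1, 1], repeat=n))
        inf = [0.0] * n
        for x in points:
            for i in range(n):
                flipped = x[:i] + (-x[i],) + x[i + 1:]
                if f(x) != f(flipped):
                    inf[i] += 1.0 / len(points)
        return inf

    # a depth-2 decision tree: query x0, then output x1 on one branch and x2 on the other
    tree = lambda x: x[1] if x[0] == 1 else x[2]
    print(influences(tree, 3))  # each variable has influence 0.5 here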