
Computing a Solution of Feigenbaum's Functional Equation in Polynomial Time

 Added by Peter Hertling
Publication date: 2014
Research language: English





Lanford has shown that Feigenbaum's functional equation has an analytic solution. We show that this solution is a polynomial time computable function. This implies in particular that the so-called first Feigenbaum constant is a polynomial time computable real number.
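For orientation, a commonly used normalization of Feigenbaum's functional equation (the Feigenbaum-Cvitanović equation; the exact scaling used in the paper may differ) asks for an even analytic function $g$ on a neighborhood of $[-1,1]$ satisfying $g(x) = -\alpha\, g(g(x/\alpha))$ with $g(0) = 1$ and $\alpha = -1/g(1)$, where $\alpha \approx 2.5029$ is determined together with $g$. The first Feigenbaum constant $\delta \approx 4.6692$ then arises as the leading eigenvalue of the period-doubling renormalization operator linearized at this fixed point.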


Related Research

Bernd R. Schuh (2010)
Whether the satisfiability of an arbitrary formula $F$ of propositional calculus can be determined in polynomial time is an open question. I propose a simple procedure based on some real-world mechanisms to tackle this problem. The main result is the blueprint for a machine which is able to test any formula in conjunctive normal form (CNF) for satisfiability in linear time. The device uses light and some electrochemical properties to function. It adapts itself to the scope of the problem without growing exponentially in mass with the size of the formula. Instead, it requires infinite precision in its components.
We give the first dimension-efficient algorithms for learning Rectified Linear Units (ReLUs), which are functions of the form $\mathbf{x} \mapsto \max(0, \mathbf{w} \cdot \mathbf{x})$ with $\mathbf{w} \in \mathbb{S}^{n-1}$. Our algorithm works in the challenging Reliable Agnostic learning model of Kalai, Kanade, and Mansour (2009), where the learner is given access to a distribution $\mathcal{D}$ on labeled examples but the labeling may be arbitrary. We construct a hypothesis that simultaneously minimizes the false-positive rate and the loss on inputs given positive labels by $\mathcal{D}$, for any convex, bounded, and Lipschitz loss function. The algorithm runs in polynomial time (in $n$) with respect to any distribution on $\mathbb{S}^{n-1}$ (the unit sphere in $n$ dimensions) and for any error parameter $\epsilon = \Omega(1/\log n)$ (this yields a PTAS for a question raised by F. Bach on the complexity of maximizing ReLUs). These results are in contrast to known efficient algorithms for reliably learning linear threshold functions, where $\epsilon$ must be $\Omega(1)$ and strong assumptions are required on the marginal distribution. We can compose our results to obtain the first set of efficient algorithms for learning constant-depth networks of ReLUs. Our techniques combine kernel methods and polynomial approximations with a dual-loss approach to convex programming. As a byproduct we obtain a number of applications, including the first set of efficient algorithms for convex piecewise-linear fitting and the first efficient algorithms for noisy polynomial reconstruction of low-weight polynomials on the unit sphere.
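As a quick illustration of the function class (with made-up numbers, not taken from the paper): for $n = 2$ the weight vector $\mathbf{w} = (3/5, 4/5)$ lies on $\mathbb{S}^{1}$, and the corresponding ReLU maps $\mathbf{x} = (1, -2)$ to $\max(0, 3/5 - 8/5) = 0$ and $\mathbf{x} = (2, 1)$ to $\max(0, 6/5 + 4/5) = 2$.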
The Lorenz attractor was introduced in 1963 by E. N. Lorenz as one of the first examples of strange attractors. However, Lorenz's research was mainly based on (non-rigorous) numerical simulations and, until recently, the proof of the existence of the Lorenz attractor remained elusive. To address that problem some authors introduced geometric Lorenz models and proved that geometric Lorenz models have a strange attractor. In 2002 it was shown that the original Lorenz model behaves like a geometric Lorenz model and thus has a strange attractor. In this paper we show that geometric Lorenz attractors are computable, as well as their physical measures.
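For reference, the original Lorenz model mentioned above is the system of ordinary differential equations $\dot{x} = \sigma(y - x)$, $\dot{y} = x(\rho - z) - y$, $\dot{z} = xy - \beta z$, which Lorenz studied at the now-classical parameter values $\sigma = 10$, $\rho = 28$, $\beta = 8/3$.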
The $2$-closure $\overline{G}$ of a permutation group $G$ on $\Omega$ is defined to be the largest permutation group on $\Omega$ having the same orbits on $\Omega \times \Omega$ as $G$. It is proved that if $G$ is supersolvable, then $\overline{G}$ can be found in polynomial time in $|\Omega|$. As a byproduct of our technique, it is shown that the composition factors of $\overline{G}$ are cyclic or alternating of prime degree.
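As a small illustrative example (not taken from the paper): if $G$ is a cyclic group of order $n \ge 3$ acting regularly on $\Omega = \{0, 1, \dots, n-1\}$ by translations, its orbits on $\Omega \times \Omega$ are the relations $\{(x, x+k \bmod n) : x \in \Omega\}$ for $k = 0, \dots, n-1$, and the only permutations preserving each of these relations are again the translations, so $\overline{G} = G$; such a supersolvable group is already $2$-closed.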
We give an $n^{O(\log\log n)}$-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over $\{\pm 1\}^n$. Even in the realizable setting, the previous fastest runtime was $n^{O(\log n)}$, a consequence of a classic algorithm of Ehrenfeucht and Haussler. Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent known lower bounds against these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be pruned so that every variable in the resulting tree is influential.