
Toward a Dynamic Programming Solution for the 4-peg Tower of Hanoi Problem with Configurations

Posted by Nicos Angelopoulos
Publication date: 2013
Research field: Informatics Engineering
Paper language: English

The Frame-Stewart algorithm for the 4-peg variant of the Tower of Hanoi, introduced in 1941, partitions disks into intermediate towers before moving the remaining disks to their destination. Algorithms that partition the disks have not been proven to be optimal, although they have been verified for up to 30 disks. This paper presents a dynamic programming approach to this algorithm, using tabling in B-Prolog. This study uses a variation of the problem, involving configurations of disks, in order to contrast the tabling approach with the approaches utilized by other solvers. A comparison of different partitioning locations for the Frame-Stewart algorithm indicates that, although certain partitions are optimal for the classic problem, they need to be modified for certain configurations, and that random configurations might require an entirely new algorithm.
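The recurrence behind the Frame-Stewart algorithm is f(1) = 1 and f(n) = min over 1 <= k < n of 2 f(k) + 2^(n-k) - 1: move the top k disks to a spare peg using all four pegs, move the remaining n - k disks with the classic three-peg procedure (2^(n-k) - 1 moves), then move the k disks back on top. The sketch below shows how tabling turns this recurrence into a dynamic program. It is written in the mode-directed tabling style of B-Prolog as a minimal illustration only; the predicate fs/2 is an assumption of this note and not the paper's code, and between/3 and shift arithmetic are assumed available as in most Prolog systems.

    % Frame-Stewart move count for the 4-peg Tower of Hanoi.
    % The (+, min) mode tells the tabling engine to keep, for each N,
    % only the answer with the smallest Moves, so every subproblem
    % is solved exactly once -- the dynamic programming step.
    :- table fs(+, min).

    fs(1, 1).                          % a single disk needs one move
    fs(N, Moves) :-
        N > 1,
        N1 is N - 1,
        between(1, N1, K),             % nondeterministically pick the partition size
        fs(K, Sub),                    % move K disks to a spare peg using 4 pegs
        Rest is N - K,
        Moves is 2 * Sub + (1 << Rest) - 1.   % 3-peg cost for the rest, then restore the K disks

A query such as fs(30, M) should reproduce the conjectured-optimal move counts that, as the abstract notes, have been verified for up to 30 disks; without the table directive the same clauses would recompute subproblems exponentially often.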


Read also

We consider the relationship between the Laplacians on two sequences of planar graphs, one from the theory of self-similar groups and one from analysis on fractals. By establishing a spectral decimation map between these sequences we give an elementary calculation of the spectrum of the former, which was first computed by Grigorchuk and Šunić. Our method also gives a full description of the eigenfunctions.
This paper discusses the odds problem, proposed by Bruss in 2000, and its variants. A recurrence relation called a dynamic programming (DP) equation is used to find an optimal stopping policy for the odds problem and its variants. In 2013, Buchbinder, Jain, and Singh proposed a linear programming (LP) formulation for finding an optimal stopping policy of the classical secretary problem, which is a special case of the odds problem. Their LP, which maximizes the probability of a win, differs from the DP equations that have long been known. This paper shows that an ordinary DP equation is a modification of the dual of linear programming formulations, including the one proposed by Buchbinder, Jain, and Singh.
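For orientation, a standard closed-form statement of the optimal rule for the classical odds problem (Bruss, 2000), included here as background and not taken from the abstract above: with success probabilities $p_j$, failure probabilities $q_j = 1 - p_j$, and odds $r_j = p_j/q_j$, the optimal policy is to stop at the first success with index $k \ge s$, where

$$ s = \max\Big\{1,\ \sup\big\{\, 1 \le k \le n : \textstyle\sum_{j=k}^{n} r_j \ge 1 \,\big\}\Big\}, \qquad \Pr(\text{win}) = \Big(\prod_{j=s}^{n} q_j\Big)\Big(\sum_{j=s}^{n} r_j\Big). $$

A backward-induction DP of the kind discussed above recovers the same threshold index $s$; the theorem simply states it in closed form.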
We study the number of dimer-monomers $M_d(n)$ on the Tower of Hanoi graphs $TH_d(n)$ at stage $n$ with dimension $d$ equal to 3 and 4. The entropy per site is defined as $z_{TH_d} = \lim_{v \to \infty} \ln M_d(n)/v$, where $v$ is the number of vertices of $TH_d(n)$. We obtain lower and upper bounds on the entropy per site, and the difference between these bounds approaches zero rapidly as the calculated stage increases. The numerical value of $z_{TH_d}$ is evaluated correctly to more than a hundred digits. Using the results for $d \le 4$, we predict the general form of the lower and upper bounds on $z_{TH_d}$ for arbitrary $d$.
Kim Griest (2002)
It is a mystery why the density of matter and the density of vacuum energy are nearly equal today when they scale so differently during the expansion of the Universe. We suggest a paradigm that might allow for a non-anthropic solution to this cosmic coincidence problem. The fact that the half-life of uranium-238 is very near to the age of the solar system is not considered a coincidence, since there are many nuclides with half-lives ranging over a huge range of time scales, implying that there is likely to be some nuclide with a half-life near to any given time scale. Likewise, it may be that the vacuum field energy causing the universal acceleration today is just one of a large ensemble of scalar field energies, which have dominated the Universe in the past and then faded away. Given that in standard cosmology and particle physics there are already several scalar fields that probably contribute to universal vacuum energy (the Higgs field, the inflaton, and whatever quintessence/dark energy field causes the current acceleration), the idea of a large ensemble of fields does not seem far-fetched. Predictions of the idea include: 1) The current vacuum energy driving the acceleration is not constant and will eventually fade away; 2) The ratio w of scalar field pressure to density is currently changing and is not precisely negative one; 3) There were likely periods of vacuum dominance and acceleration in the past; 4) The current phase of acceleration will end, but there may be additional periods of acceleration in the future; 5) The ultimate fate of the Universe cannot be decided until the nature of these fields is known, but the eventual sum of densities from all scalar fields could be zero, as was usually assumed before the discovery of the current universal acceleration.
Programming-by-Example (PBE) systems synthesize an intended program in some (relatively constrained) domain-specific language from a small number of input-output examples provided by the user. In this paper, we motivate and define the problem of quantitative PBE (qPBE), which relates to synthesizing an intended program over an underlying (real-world) programming language that also minimizes a given quantitative cost function. We present a modular approach for solving qPBE that consists of three phases: intent disambiguation, global search, and local search. On two concrete objectives, namely program performance and size, our qPBE procedure achieves $1.53\times$ and $1.26\times$ improvement respectively over the baseline FlashFill PBE system, averaged over $701$ benchmarks. Our detailed experiments validate the design of our procedure and show the value of combining global and local search for qPBE.
