
Instruction sequences expressing multiplication algorithms

Posted by Kees Middelburg
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





For each function on bit strings, its restriction to bit strings of any given length can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. We describe instruction sequences of this kind that compute the function on bit strings that models multiplication on natural numbers less than $2^N$ with respect to their binary representation by bit strings of length $N$, for a fixed but arbitrary $N > 0$, according to the long multiplication algorithm and the Karatsuba multiplication algorithm. We find among other things that the instruction sequence expressing the former algorithm is longer than the one expressing the latter algorithm only if the length of the bit strings involved is greater than $2^8$. We also go into the use of an instruction sequence with backward jump instructions for expressing the long multiplication algorithm. This leads to an instruction sequence that is shorter than the other two if the length of the bit strings involved is greater than $2$.
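
To make the two algorithms being compared concrete, here is a minimal Python sketch at the level of natural numbers. It only illustrates long multiplication and Karatsuba multiplication; it is not the paper's instruction-sequence formalism over Boolean registers, and the function names and the base-case cutoff are choices made for this illustration.

# Shift-and-add long multiplication: scan the bits of y and add shifted copies of x.
def long_multiply(x: int, y: int) -> int:
    result = 0
    shift = 0
    while y:
        if y & 1:                  # bit `shift` of y is set
            result += x << shift   # add x * 2**shift
        y >>= 1
        shift += 1
    return result

# Karatsuba multiplication: split both operands in half and use three
# half-size multiplications instead of four.
def karatsuba(x: int, y: int) -> int:
    if x < 256 or y < 256:         # small operands: plain multiplication (illustrative cutoff)
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x_lo, x_hi = x & mask, x >> half
    y_lo, y_hi = y & mask, y >> half
    lo = karatsuba(x_lo, y_lo)
    hi = karatsuba(x_hi, y_hi)
    mid = karatsuba(x_lo + x_hi, y_lo + y_hi) - lo - hi   # cross terms from one multiplication
    return (hi << (2 * half)) + (mid << half) + lo

# Both agree with built-in multiplication:
assert long_multiply(12345, 6789) == karatsuba(12345, 6789) == 12345 * 6789

Karatsuba trades one of the four half-size multiplications for extra additions, shifts, and bookkeeping, so it only pays off once the operands are long enough; the abstract's finding that the corresponding instruction sequences only cross over in length at $2^8$ reflects a trade-off of the same kind.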




Read also

For each function on bit strings, its restriction to bit strings of any given length can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. Backward jump instructions are not necessary for this, but instruction sequences can be significantly shorter with them. We take the function on bit strings that models the multiplication of natural numbers on their representation in the binary number system to demonstrate this by means of a concrete example. The example is reason to discuss points concerning the halting problem and the concept of an algorithm.
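
A rough way to see why backward jumps can shorten instruction sequences is sketched below in Python; it merely counts pseudo-instructions and is not the paper's notation or its actual length analysis.

# Without backward jumps, the N iterations of shift-and-add multiplication have
# to be written out one after another, so the instruction sequence grows with N.
def unrolled_program(n: int) -> list[str]:
    program = []
    for i in range(n):
        program.append(f"test bit {i} of y")
        program.append(f"if set, add x shifted by {i} to the result")
    program.append("terminate")
    return program

# With a backward jump, a single copy of the loop body is reused for every bit
# position; only the loop bookkeeping is added.
def looped_program() -> list[str]:
    return [
        "set counter to 0",
        "test the bit of y selected by the counter",            # start of the loop body
        "if set, add x shifted by the counter to the result",
        "increment the counter",
        "if the counter is below N, jump backward to the loop body",
        "terminate",
    ]

print(len(unrolled_program(32)))   # 65 pseudo-instructions, linear in N
print(len(looped_program()))       # 6 pseudo-instructions for the loop itself

In the setting of these papers the counter and the shifting themselves have to be realised with Boolean registers, so the saving is less dramatic than this toy count suggests, but the underlying effect is the same: with backward jumps the loop body does not have to be repeated for every bit position.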
A program is a finite piece of data that produces a (possibly infinite) sequence of primitive instructions. From scratch we develop a linear notation for sequential, imperative programs, using a familiar class of primitive instructions and so-called repeat instructions, a particular type of control instruction. The resulting mathematical structure is a semigroup. We relate this set of programs to program algebra (PGA) and show that a particular subsemigroup is a carrier for PGA by providing axioms for single-pass congruence, structural congruence, and thread extraction. This subsemigroup characterizes periodic single-pass instruction sequences and provides a direct basis for PGA's toolset.
In this paper, we study the phenomenon that instruction sequences are split into fragments which somehow produce a joint behaviour. In order to bring this phenomenon better into the picture, we formalize a simple mechanism by which several instruction sequence fragments can produce a joint behaviour. We also show that, even in the case of this simple mechanism, it is a non-trivial matter to explain by means of a translation into a single instruction sequence what takes place on execution of a collection of instruction sequence fragments.
We study sequential programs that are instruction sequences with dynamically instantiated instructions. We define the meaning of such programs in two different ways. In either case, we give a translation by which each program with dynamically instantiated instructions is turned into a program without them that exhibits on execution the same behaviour by interaction with some service. The complexities of the two translations differ considerably, whereas the services concerned are equally simple. However, the service concerned in the case of the simpler translation is far more powerful than the service concerned in the other case.
Earlier work on program and thread algebra detailed the functional, observable behavior of programs under execution. In this article we add the modeling of unobservable, mechanistic processing, in particular processing due to jump instructions. We model mechanistic processing preceding some further behavior as a delay of that behavior; we borrow a unary delay operator from discrete time process algebra. We define a mechanistic improvement ordering on threads and observe that some threads do not have an optimal implementation.