
Instruction sequence expressions for the secure hash algorithm SHA-256

Published by Kees Middelburg
Publication date: 2013
Research field: Informatics engineering
Paper language: English





The secure hash function SHA-256 is a function on bit strings. This means that its restriction to the bit strings of any given length can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. We describe such instruction sequences for the restrictions to bit strings of the different possible lengths by means of uniform terms from an algebraic theory.
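To make the computational model concrete, here is a minimal Python sketch of such an instruction sequence. The tuple encoding, register names, and jump semantics below are illustrative assumptions, not the paper's notation; they only mirror the instruction repertoire described above (set/get on Boolean registers, forward jumps, termination).

# A toy interpreter for finite instruction sequences over Boolean registers.
# Hypothetical encoding, for illustration only:
#   ("set", r, b)  -- set register r to bit b, then go to the next instruction
#   ("+get", r)    -- positive test: if r holds True go to the next
#                     instruction, otherwise skip one instruction
#   ("jmp", l)     -- forward jump to the l-th next instruction (l >= 1)
#   ("halt",)      -- termination
def run(program, regs):
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "set":
            regs[op[1]] = op[2]
            pc += 1
        elif op[0] == "+get":
            pc += 1 if regs[op[1]] else 2
        elif op[0] == "jmp":
            pc += op[1]
        else:  # ("halt",)
            break
    return regs

# out := in0 XOR in1 -- the kind of bit-level step SHA-256 reduces to.
xor_on_booleans = [
    ("+get", "in0"),       # 0: branch on in0
    ("jmp", 4),            # 1: in0 = True  -> instruction 5
    ("+get", "in1"),       # 2: in0 = False -> branch on in1
    ("set", "out", True),  # 3: (False, True)  -> out = 1
    ("jmp", 4),            # 4: -> instruction 8
    ("+get", "in1"),       # 5: in0 = True -> branch on in1
    ("jmp", 2),            # 6: (True, True)   -> out stays 0, -> 8
    ("set", "out", True),  # 7: (True, False)  -> out = 1
    ("halt",),             # 8: terminate
]
print(run(xor_on_booleans, {"in0": True, "in1": False, "out": False})["out"])

A full restriction of SHA-256 to inputs of one fixed length is, in this model, just a much longer sequence of exactly these instruction forms; the paper's contribution is describing those sequences uniformly by terms of an algebraic theory rather than enumerating them.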


Read also

79 - Robert Sison 2020
Proving only over source code that programs do not leak sensitive data leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. Furthermore, software does not always have the luxury of limiting itself to single-threaded computation with resources statically dedicated to each user to ensure the confidentiality of their data. This results in mixed-sensitivity concurrent programs, which might reuse memory shared between their threads to hold data of different sensitivity levels at different times; for such programs, a compiler must preserve the value-dependent coordination of such mixed-sensitivity reuse despite the impact of concurrency. Here we demonstrate, using Isabelle/HOL, that it is feasible to verify that a compiler preserves noninterference, the strictest kind of confidentiality property, for mixed-sensitivity concurrent programs. First, we present notions of refinement that preserve a concurrent value-dependent notion of noninterference that we have designed to support such programs. As proving noninterference-preserving refinement can be considerably more complex than the standard refinements typically used to verify semantics-preserving compilation, our notions include a decomposition principle that separates the semantics- from the security-preservation concerns. Second, we demonstrate that these refinement notions are applicable to verified secure compilation, by exercising them on a single-pass compiler for mixed-sensitivity concurrent programs that synchronise using mutex locks, from a generic imperative language to a generic RISC-style assembly language. Finally, we execute our compiler on a nontrivial mixed-sensitivity concurrent program modelling a real-world use case, thus preserving its source-level noninterference properties down to an assembly-level model automatically. (See paper for complete abstract.)
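As an aside on the property being preserved, here is a minimal Python sketch of noninterference in its simplest form: sequential and deterministic, whereas the paper's concurrent, value-dependent version is considerably richer. All names below are illustrative.

# Noninterference, two-run formulation: runs that agree on public inputs
# must produce identical public outputs, whatever the secret inputs are.
def noninterfering(program, public_inputs, secret_inputs):
    for pub in public_inputs:
        observed = {program(pub, sec) for sec in secret_inputs}
        if len(observed) > 1:  # a secret influenced a public output: a leak
            return False
    return True

leaky = lambda pub, sec: pub + (sec % 2)  # public output depends on the secret
safe = lambda pub, sec: pub * 2           # the secret is never observable

print(noninterfering(leaky, range(4), range(8)))  # False
print(noninterfering(safe, range(4), range(8)))   # True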
85 - Jan A. Bergstra 2019
The number of instructions of an instruction sequence is taken for its logical SLOC, and is abbreviated with LLOC. A notion of quantitative expressiveness is based on LLOC, and in the special case of operation over a family of single-bit registers a collection of elementary properties is established. A dedicated notion of interface is developed and is used for stating relevant properties of classes of instruction sequences.
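Under this definition LLOC is simply instruction count. A minimal sketch, reusing the hypothetical tuple encoding from the interpreter example above, with one possible reading of the interface notion (the paper's notion is more refined):

# LLOC of an instruction sequence = its number of instructions,
# independent of how the sequence is laid out in source text.
def lloc(seq):
    return len(seq)

# One possible reading of an "interface": the Boolean registers a
# sequence reads or writes.
def interface(seq):
    return {op[1] for op in seq if op[0] in ("set", "+get")}

seq = [("set", "r0", True), ("+get", "r0"), ("jmp", 2),
       ("set", "r1", True), ("halt",)]
print(lloc(seq))       # 5
print(interface(seq))  # {'r0', 'r1'}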
Discussions about the choice of a tree hash mode of operation for standardization have recently been undertaken. It appears that a single tree mode cannot adequately address all possible uses and specifications of a system. In this paper, we review the tree modes which have been proposed, discuss their problems, and propose remedies. We make the reasonable assumption that communicating systems have different specifications and that software applications are of different types (securing stored content or live-streamed content). Finally, we propose new modes of operation that address the resource-usage problem for the three most representative categories of devices, and we analyse their asymptotic behavior.
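For concreteness, here is a minimal binary tree hash over SHA-256: a plain Merkle construction with leaf/node domain separation. This is a generic illustration only; the modes the paper reviews and proposes differ in topology, parameterisation, and how they handle stored versus live-streamed content.

import hashlib

LEAF, NODE = b"\x00", b"\x01"  # domain separation between leaves and inner nodes

def tree_hash(chunks):
    """Binary Merkle tree over a nonempty list of message chunks."""
    level = [hashlib.sha256(LEAF + c).digest() for c in chunks]
    while len(level) > 1:
        nxt = [hashlib.sha256(NODE + level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:         # odd node: promote to the next level unchanged
            nxt.append(level[-1])  # (one convention among several used in practice)
        level = nxt
    return level[0]

print(tree_hash([b"chunk-%d" % i for i in range(5)]).hex())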
Compiler correctness is, in its simplest form, defined as the inclusion of the set of traces of the compiled program into the set of traces of the original program, which is equivalent to the preservation of all trace properties. Here traces collect, for instance, the externally observable events of each execution. This definition requires, however, the set of traces of the source and target languages to be exactly the same, which is not the case when the languages are far apart or when observations are fine-grained. To overcome this issue, we study a generalized compiler correctness definition, which uses source and target traces drawn from potentially different sets and connected by an arbitrary relation. We set out to understand what guarantees this generalized compiler correctness definition gives us when instantiated with a non-trivial relation on traces. When this trace relation is not equality, it is no longer possible to preserve the trace properties of the source program unchanged. Instead, we provide a generic characterization of the target trace property ensured by correctly compiling a program that satisfies a given source property, and dually, of the source trace property one is required to show in order to obtain a certain target property for the compiled code. We show that this view on compiler correctness can naturally account for undefined behavior, resource exhaustion, different source and target values, side-channels, and various abstraction mismatches. Finally, we show that the same generalization also applies to many secure compilation definitions, which characterize the protection of a compiled program against linked adversarial code.
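The two characterisations described above can be phrased as images along the trace relation. Below is a finite-set Python sketch; the function names are ours, and the paper states these results abstractly rather than computationally.

# Given a trace relation R (a set of (source trace, target trace) pairs):
# the target property guaranteed by correctly compiling programs that
# satisfy a source property is the existential image of that property
# along R ...
def induced_target_property(R, source_prop):
    return {t for (s, t) in R if s in source_prop}

# ... and, dually, the source property one must establish to obtain a
# desired target property holds of a source trace iff every R-related
# target trace satisfies the desired property.
def required_source_property(R, target_prop):
    sources = {s for (s, _) in R}
    return {s for s in sources
            if all(t in target_prop for (s2, t) in R if s2 == s)}

R = {("s1", "t1a"), ("s1", "t1b"), ("s2", "t2")}    # a toy trace relation
print(induced_target_property(R, {"s1"}))           # {'t1a', 't1b'}
print(required_source_property(R, {"t1a", "t1b"}))  # {'s1'}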
Privacy amplification (PA) is an essential part of a quantum key distribution (QKD) system, distilling a highly secure key from a partially secure string by public negotiation between the two parties. The optimization objectives of privacy amplification for QKD are large block size, high throughput, and low cost. For the global optimization of these objectives, a novel privacy amplification algorithm is proposed in this paper by combining multilinear modular hashing and modular arithmetic hashing. This paper proves the security of this hybrid hashing PA algorithm within the framework of both information theory and composition security theory. A scheme based on this algorithm is implemented and evaluated on a CPU platform. The results on a typical CV-QKD system indicate that the throughput of this scheme ([email protected]*10^8 input block size) is twice that of the best existing scheme (140Mbps@1*10^8 input block size). Moreover, this scheme is implemented on a mobile CPU platform instead of a desktop or server CPU, which means that this algorithm achieves better performance at much lower cost and power consumption.
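To illustrate the basic mechanism (not the paper's hybrid construction, whose point is scaling this step to very large blocks at high throughput), privacy amplification applies a publicly chosen universal hash to the partially secret string. The sketch below uses a textbook modular family h(x) = ((a*x + b) mod p) mod 2^m; all sizes and names are illustrative assumptions.

import secrets

P = (1 << 521) - 1  # a Mersenne prime exceeding the input domain

def amplify(raw_bits, out_bits, a, b):
    """One PA step with the universal hash ((a*x + b) mod P) mod 2^out_bits."""
    x = int(raw_bits, 2)
    return ((a * x + b) % P) % (1 << out_bits)

# Stand-in for the reconciled but only partially secret QKD string:
raw = format(secrets.randbits(512), "0512b")
# The hash choice (a, b) is public randomness agreed by both parties:
a = secrets.randbelow(P - 1) + 1
b = secrets.randbelow(P)
final_key = amplify(raw, 256, a, b)
print(format(final_key, "064x"))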