The Operational Transformation (OT) approach, used in many collaborative editors, allows a group of users to concurrently update replicas of a shared object and exchange their updates in any order. The basic idea of this approach is to transform any received update operation before its execution on a replica of the object. This transformation aims to ensure the convergence of the different replicas of the object, even though the operations are executed in different orders. However, designing transformation functions that achieve convergence is a critical and challenging issue; indeed, the transformation functions proposed in the literature have all turned out to be incorrect. In this paper, we investigate the existence of transformation functions for a shared string altered by insert and delete operations. From the theoretical point of view, two properties - named TP1 and TP2 - are necessary and sufficient to ensure convergence. Using a controller synthesis technique, we show that, for the basic signatures of insert and delete operations, there exist transformation functions that satisfy TP1 only. As a matter of fact, it is impossible to meet both properties TP1 and TP2 with these simple signatures.
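To make the transformation idea concrete, the following minimal Python sketch shows an inclusion transformation for concurrent character insertions and checks the TP1 property on one example. The names (Ins, transform_ins_ins) and the site-based tie-breaking rule are illustrative assumptions, not the transformation functions analyzed in the paper.

from dataclasses import dataclass

@dataclass
class Ins:
    pos: int   # insertion position in the string
    ch: str    # inserted character
    site: int  # site identifier, used only to break ties deterministically

def transform_ins_ins(o1: Ins, o2: Ins) -> Ins:
    """Transform o1 against a concurrent insertion o2 that has already been applied."""
    if o1.pos < o2.pos or (o1.pos == o2.pos and o1.site < o2.site):
        return Ins(o1.pos, o1.ch, o1.site)
    return Ins(o1.pos + 1, o1.ch, o1.site)

def apply(s: str, op: Ins) -> str:
    return s[:op.pos] + op.ch + s[op.pos:]

# TP1 on one state: both execution orders produce the same string.
s = "abc"
o1, o2 = Ins(1, "x", site=1), Ins(1, "y", site=2)
assert apply(apply(s, o1), transform_ins_ins(o2, o1)) == \
       apply(apply(s, o2), transform_ins_ins(o1, o2))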
OT (Operational Transformation) was invented for supporting real-time co-editors in the late 1980s and has evolved to become a core technique used in today's working co-editors and adopted in major industrial products. CRDT (Commutative Replicated Data Type) for co-editors was first proposed around 2006, under the name of WOOT (WithOut Operational Transformation). Follow-up CRDT variations are commonly labeled as post-OT techniques capable of making concurrent operations natively commutative in co-editors. On top of that, CRDT solutions have made broad claims of superiority over OT solutions, and have routinely portrayed OT as an incorrect, complex and inefficient technique. Over one decade later, however, OT remains the choice for building the vast majority of co-editors, whereas CRDT is rarely found in working co-editors. Contradictions between this reality and CRDT's purported advantages have been the source of much confusion and debate in co-editing communities. Have the vast majority of co-editors been unfortunate in choosing the faulty and inferior OT, or are those CRDT claims false? What are the real differences between OT and CRDT for co-editors? What are the key factors and underlying reasons behind the choices between OT and CRDT in the real world? To seek truth from facts, we set out to conduct a comprehensive and critical review of representative OT and CRDT solutions and of working co-editors based on them. From this work, we have made important discoveries about OT and CRDT, and revealed facts and evidence that refute CRDT claims over OT on all accounts. We report our discoveries in a series of three articles, of which the current article is the first. We hope the discoveries from this work help clear up common misconceptions and confusion surrounding OT and CRDT, and accelerate progress in co-editing technology for real-world applications.
The identifiability of a system is concerned with whether the unknown parameters in the system can be uniquely determined with all the possible data generated by a certain experimental setting. A test of quantum Hamiltonian identifiability is an important tool to save time and cost when exploring the identification capability of quantum probes and experimentally implementing quantum identification schemes. In this paper, we generalize the identifiability test based on the Similarity Transformation Approach (STA) in classical control theory and extend it to the domain of quantum Hamiltonian identification. We employ STA to prove the identifiability of spin-1/2 chain systems with arbitrary dimension assisted by single-qubit probes. We further extend the traditional STA method by proposing a Structure Preserving Transformation (SPT) method for non-minimal systems. We use the SPT method to introduce an indicator for the existence of economic quantum Hamiltonian identification algorithms, whose computational complexity directly depends on the number of unknown parameters (which could be much smaller than the system dimension). Finally, we give an example of such an economic Hamiltonian identification algorithm and perform simulations to demonstrate its effectiveness.
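As a reminder of the classical STA criterion the abstract builds on (stated here in its textbook control-theoretic form, which may differ in detail from the paper's quantum formulation): for a minimal linear system $(A(\theta), B, C)$ with known $B$ and $C$, two parameter values $\theta$ and $\theta'$ are indistinguishable from input-output data if and only if an invertible similarity transformation $T$ relates them,
$$ T A(\theta) T^{-1} = A(\theta'), \qquad T B = B, \qquad C T^{-1} = C, $$
so $\theta$ is identifiable when the only admissible $T$ forces $\theta' = \theta$.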
With increasing linkage within value chains, the IT systems of different companies are also being connected with each other. Within the Industry 4.0 movement, this enables the integration of services in order to improve the quality and performance of processes. Enterprise architecture models form the basis for this by providing better business-IT alignment. However, the heterogeneity of the modeling frameworks and description languages makes combining models considerably difficult, especially because of differences in syntax, semantics and relations. Therefore, this paper presents a transformation engine to convert enterprise architecture models between several languages. We developed the first generic translation approach that is free of specific meta-modeling and is flexibly adaptable to arbitrary modeling languages. The transformation process is defined by various pattern matching techniques using a rule-based description language, which builds on set theory and first-order logic for an intuitive description. The concept is practically evaluated using an example from a large German IT service provider; moreover, the approach is applicable to a wide range of enterprise architecture frameworks.
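As an illustration only (the engine, the rule notation and the element names below are hypothetical, not the ones developed in the paper), a rule-based model-to-model transformation can be sketched in Python as matching element kinds in a source model, viewed as a set, and emitting target-language elements:

from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    kind: str            # e.g. "ApplicationComponent"; kinds here are placeholders
    name: str

@dataclass
class Rule:
    source_kind: str     # pattern: which source element kind the rule matches
    target_kind: str     # effect: which target element kind to create

def transform(model: set[Element], rules: list[Rule]) -> set[Element]:
    """Apply every rule to every matching source element (existential matching)."""
    return {Element(r.target_kind, e.name)
            for e in model for r in rules if e.kind == r.source_kind}

# Usage with hypothetical source elements and rules:
src = {Element("ApplicationComponent", "CRM"), Element("BusinessProcess", "Order-to-Cash")}
rules = [Rule("ApplicationComponent", "SystemSW"), Rule("BusinessProcess", "Activity")]
print(transform(src, rules))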
We tackle the problem of modeling sequential visual phenomena. Given examples of a phenomenon that can be divided into discrete time steps, we aim to take an input from any such time step and realize this input at all other time steps in the sequence. Furthermore, we aim to do this without ground-truth aligned sequences -- avoiding the difficulties of gathering aligned data. This generalizes the unpaired image-to-image problem from generating pairs to generating sequences. We extend cycle consistency to loop consistency and alleviate difficulties associated with learning in the resulting long chains of computation. We show competitive results compared to existing image-to-image techniques when modeling several different data sets, including the Earth's seasons and the aging of human faces.
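The loop-consistency idea can be sketched as follows (a minimal illustration of the objective, not the paper's architecture; the generator shapes and loss choice are placeholders): generators map each time step to the next, and composing them around the full loop should reproduce the input.

import torch
import torch.nn as nn

T = 4  # e.g. four seasons; a tiny convolutional generator stands in for the real model
generators = nn.ModuleList([nn.Conv2d(3, 3, kernel_size=3, padding=1) for _ in range(T)])

def loop_consistency_loss(x: torch.Tensor, start: int) -> torch.Tensor:
    """Map x from time step `start` around the whole loop and compare to the input."""
    y = x
    for k in range(T):
        y = generators[(start + k) % T](y)
    return nn.functional.l1_loss(y, x)

x = torch.randn(1, 3, 64, 64)      # a dummy image from time step 0
loss = loop_consistency_loss(x, start=0)
loss.backward()                    # gradients flow through the long chain of generators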
Given two random variables $X$ and $Y$, an operational approach is undertaken to quantify the ``leakage of information'' from $X$ to $Y$. The resulting measure $\mathcal{L}(X \!\!\to\!\! Y)$ is called \emph{maximal leakage}, and is defined as the multiplicative increase, upon observing $Y$, of the probability of correctly guessing a randomized function of $X$, maximized over all such randomized functions. A closed-form expression for $\mathcal{L}(X \!\!\to\!\! Y)$ is given for discrete $X$ and $Y$, and it is subsequently generalized to handle a large class of random variables. The resulting properties are shown to be consistent with an axiomatic view of a leakage measure, and the definition is shown to be robust to variations in the setup. Moreover, a variant of the Shannon cipher system is studied, in which performance of an encryption scheme is measured using maximal leakage. A single-letter characterization of the optimal limit of (normalized) maximal leakage is derived and asymptotically-optimal encryption schemes are demonstrated. Furthermore, the sample complexity of estimating maximal leakage from data is characterized up to subpolynomial factors. Finally, the \emph{guessing} framework used to define maximal leakage is used to give operational interpretations of commonly used leakage measures, such as Shannon capacity, maximal correlation, and local differential privacy.
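For reference, the guessing definition and the discrete closed form referred to in the abstract can be written as follows (following the standard maximal-leakage formulation; the supremum ranges over all $U$ forming a Markov chain $U - X - Y$):
$$ \mathcal{L}(X \!\to\! Y) \;=\; \sup_{U:\, U - X - Y} \log \frac{\max_{\hat{u}(\cdot)} \Pr[\hat{u}(Y) = U]}{\max_{u} P_U(u)} \;=\; \log \sum_{y \in \mathcal{Y}} \max_{x:\, P_X(x) > 0} P_{Y|X}(y \mid x). $$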