
Improving Reliability for Bit-Flipping Algorithm for Decoding Low Density Parity Check Convolutional Codes


Publication date: 2016
Language of the research: Arabic





Low-Density Parity-Check Convolutional Codes (LDPC-CC) are a class of forward error correction codes that combine the strengths of LDPC block codes and convolutional codes. LDPC-CC offer several attractive features: they can be encoded at arbitrary lengths using simple shift registers, and they can be decoded with a single decoder.
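The bit-flipping decoder named in the title can be illustrated with a minimal hard-decision loop. The sketch below is a generic Gallager-style variant that flips one bit per iteration, given as an assumption for illustration only, not the improved algorithm proposed in the paper:

```python
def bit_flip_decode(checks, received, max_iters=50):
    """Hard-decision bit-flipping decoding (illustrative sketch).

    checks:   list of parity checks, each a list of the bit indices
              it covers (i.e. the rows of a sparse parity-check matrix)
    received: list of hard-decision bits (0/1) from the channel
    """
    c = list(received)
    for _ in range(max_iters):
        # parity checks whose covered bits do not sum to 0 (mod 2)
        unsat = [chk for chk in checks if sum(c[i] for i in chk) % 2]
        if not unsat:
            break  # all checks satisfied: c is a valid codeword
        # count, per bit, how many unsatisfied checks it appears in
        counts = [sum(i in chk for chk in unsat) for i in range(len(c))]
        c[counts.index(max(counts))] ^= 1  # flip the most suspect bit
    return c

# Example with the (7,4) Hamming code's three parity checks:
H = [[0, 2, 4, 6], [1, 2, 5, 6], [3, 4, 5, 6]]
noisy = [0, 0, 1, 0, 0, 0, 0]     # all-zero codeword with bit 2 flipped
print(bit_flip_decode(H, noisy))  # → [0, 0, 0, 0, 0, 0, 0]
```

For LDPC-CC the same flipping rule runs over the code's sliding diagonal-band parity-check structure, which is what allows a single pipelined decoder to handle codewords of arbitrary length.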

