Verification of Security Protocols Using Formal Methods

Publication date: 2017
Language: Arabic

There are many formal methods for testing whether security protocols are safe, including AVISPA, Casper, ProVerif and Scyther. A previous comparison covered only two of these methods (ProVerif and Scyther). In this research, all four methods are compared in terms of the same parameters used in the previous comparison: working style, modeling language, user interface, input, and output. As a result, users are given the information they need to choose the appropriate method according to the parameter that matters most to them. Six different security protocols were tested and the results compared: the Kao Chow Authentication Protocol, the 3-D Secure protocol, the Needham-Schroeder Public Key Protocol, the Diffie-Hellman key exchange, the Andrew Secure RPC Protocol, and the Challenge Handshake Authentication Protocol.
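
To illustrate the kind of exchange these tools reason about, the following is a minimal Python sketch of the unauthenticated Diffie-Hellman key exchange, one of the six tested protocols. The toy parameters and variable names are illustrative assumptions, not material from the research itself.

    # Toy sketch of the unauthenticated Diffie-Hellman key exchange.
    # The tiny prime keeps the example readable; real deployments use
    # large groups, so these values are assumptions for illustration only.
    import secrets

    p, g = 23, 5                       # public parameters: prime modulus, generator

    a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

    A = pow(g, a, p)                   # message 1: Alice -> Bob
    B = pow(g, b, p)                   # message 2: Bob -> Alice

    k_alice = pow(B, a, p)             # Alice's copy of the shared secret
    k_bob = pow(A, b, p)               # Bob's copy of the shared secret
    assert k_alice == k_bob            # both parties derive the same key

    # Neither message is authenticated, so an active attacker can run the
    # exchange separately with each party (man-in-the-middle); finding such
    # flaws automatically is what tools like ProVerif and Scyther are for.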

Related research

This research suggests a new method that aims to verify a handwritten signature image and determine whether the signature belongs to a given person or is a forgery. This is done by extracting geometric features from the signature image and applying statistical functions to them. The features are extracted in several stages: the signature image is converted from grayscale to binary format, and statistical features are then extracted from the original signature image, namely the maximum of the most repeated values along the coordinate line of ones that outlines the signature shape, together with the number of ones, which also characterizes the shape. Finally, two ranges of accepted values are defined for the original signature image. In the same way, statistical features are extracted from the questioned signature image and tested to see whether they fall within the specified domain of acceptable values. The research also compares the results of the proposed approach with previous methods in this field. The proposed method was tested on a database of 16,200 signatures belonging to 300 persons, and the signature images were verified with a good accuracy rate.
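
A rough Python/NumPy sketch of this verification idea is given below. The binarization threshold, the acceptance-margin construction, and the exact reading of the "most repeated value" feature are assumptions made for illustration rather than the paper's precise procedure.

    import numpy as np

    def binarize(gray, threshold=128):
        # Convert a grayscale signature image to a 0/1 array (1 = ink pixel).
        return (gray < threshold).astype(np.uint8)

    def features(binary):
        # Two simple shape features: the total number of ink pixels, and the
        # most repeated value of the per-row ink count (one reading of the
        # "most repeated value along the ones' coordinate line").
        ones_count = int(binary.sum())
        row_profile = binary.sum(axis=1)
        most_repeated = int(np.bincount(row_profile).argmax())
        return np.array([ones_count, most_repeated], dtype=float)

    def acceptance_ranges(genuine_images, margin=0.15):
        # Build a [low, high] range per feature from a set of genuine signatures.
        feats = np.array([features(binarize(img)) for img in genuine_images])
        return feats.min(axis=0) * (1 - margin), feats.max(axis=0) * (1 + margin)

    def is_genuine(image, low, high):
        # Accept a questioned signature only if every feature falls in its range.
        f = features(binarize(image))
        return bool(np.all((f >= low) & (f <= high)))
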
We present a generic method to compute the factual accuracy of a generated data summary with minimal user effort. We look at the problem as a fact-checking task to verify the numerical claims in the text. The verification algorithm assumes that the data used to generate the text is available. In this paper, we describe how the proposed solution has been used to identify incorrect claims about basketball textual summaries in the context of the Accuracy Shared Task at INLG 2021.
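
As a loose illustration of this kind of check (not the authors' implementation), the sketch below verifies claims of the form "PLAYER scored N points" against a box-score dictionary; the claim pattern and data layout are assumed for the example.

    import re

    # Hypothetical pattern for one family of numerical claims; a real system
    # would cover many more claim types found in generated summaries.
    CLAIM = re.compile(r"([A-Z][\w.'-]+(?: [A-Z][\w.'-]+)*) scored (\d+) points")

    def check_points_claims(summary, box_score):
        # Return (player, claimed, actual) triples for claims that disagree
        # with the data the summary was generated from.
        errors = []
        for name, claimed in CLAIM.findall(summary):
            actual = box_score.get(name)          # None if the player is unknown
            if actual != int(claimed):
                errors.append((name, int(claimed), actual))
        return errors

    # Example with made-up data: only the second claim is flagged.
    box = {"LeBron James": 27, "Anthony Davis": 18}
    text = "LeBron James scored 27 points and Anthony Davis scored 21 points."
    print(check_points_claims(text, box))         # [('Anthony Davis', 21, 18)]
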
In this paper, we propose a novel fact checking and verification system to check claims against Wikipedia content. Our system retrieves relevant Wikipedia pages using Anserini, uses a BERT-large-cased question answering model to select correct evidence, and verifies claims with an XLNet natural language inference model by comparing them against the evidence. Table cell evidence is obtained by looking for entity-matching cell values and by using the TAPAS table question answering model. The pipeline relies on the zero-shot capabilities of existing models, and none of the models used in the pipeline requires additional training. Our system obtained a FEVEROUS score of 0.06 and a label accuracy of 0.39 in the FEVEROUS challenge.
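
The overall control flow of such a zero-shot pipeline can be sketched as follows; the function bodies are placeholders standing in for the components named above (Anserini retrieval, BERT question answering, entity matching and TAPAS for tables, XLNet inference), and the verdicts use the three FEVER-style labels.

    from dataclasses import dataclass, field

    LABELS = ("SUPPORTS", "REFUTES", "NOT ENOUGH INFO")

    @dataclass
    class Verdict:
        label: str
        evidence: list = field(default_factory=list)

    def retrieve_pages(claim):
        # Placeholder for document retrieval (Anserini over Wikipedia).
        return []

    def select_evidence(claim, pages):
        # Placeholder for evidence selection (question answering for sentences,
        # entity matching and table question answering for cells).
        return []

    def infer_label(claim, evidence):
        # Placeholder for natural language inference comparing the claim
        # against the selected evidence.
        return "NOT ENOUGH INFO"

    def verify(claim):
        # Retrieve pages, pick evidence, then let the inference step decide.
        pages = retrieve_pages(claim)
        evidence = select_evidence(claim, pages)
        label = infer_label(claim, evidence) if evidence else "NOT ENOUGH INFO"
        assert label in LABELS
        return Verdict(label, evidence)
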
This work describes the adaptation of a pretrained sequence-to-sequence model to the task of scientific claim verification in the biomedical domain. We propose a system called VerT5erini that exploits T5 for abstract retrieval, sentence selection, and label prediction, which are three critical sub-tasks of claim verification. We evaluate our pipeline on SciFACT, a newly curated dataset that requires models to not just predict the veracity of claims but also provide relevant sentences from a corpus of scientific literature that support the prediction. Empirically, our system outperforms a strong baseline in each of the three sub-tasks. We further show VerT5erini's ability to generalize to two new datasets of COVID-19 claims using evidence from the CORD-19 corpus.
We present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.