In this article, we propose a new construction of probabilistic collusion-secure fingerprint codes secure against up to three pirates and give a theoretical security evaluation. Our pirate tracing algorithm combines a scoring method analogous to that of Tardos codes (J. ACM, 2008) with an extension of the parent search techniques used in some preceding 2-secure codes. Numerical examples show that our code lengths are significantly shorter than (about 30% to 40% of) those of the shortest known c-secure codes by Nuida et al. (Des. Codes Cryptogr., 2009) with c = 3. A preliminary proposal for improving the efficiency of our tracing algorithm is also given.
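As background for the scoring method the abstract alludes to, the classical Tardos accusation rule can be sketched as follows. This is an illustration of the original scoring idea only, not the authors' 3-secure variant; the per-position biases `biases` are the probabilities p_i drawn at code-generation time, and all names are chosen for this sketch.

```python
import math

def tardos_scores(codewords, pirate_word, biases):
    """Classical Tardos-style accusation scores.

    codewords: dict mapping each user to their embedded bit string (list of 0/1)
    pirate_word: the bit string recovered from the pirated copy
    biases: per-position probabilities p_i used when the code was generated
    A user's score rises when their codeword agrees with the pirate copy
    on positions where agreement is statistically surprising.
    """
    scores = {}
    for user, word in codewords.items():
        s = 0.0
        for x, y, p in zip(word, pirate_word, biases):
            if y == 1:  # score only positions where the pirate copy shows a 1
                s += math.sqrt((1 - p) / p) if x == 1 else -math.sqrt(p / (1 - p))
        scores[user] = s
    return scores
```

Users whose scores exceed a threshold (chosen from the desired error probability) are accused; the paper's contribution combines such scoring with parent search rather than using thresholding alone.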
We present three private fingerprint alignment and matching protocols, based on what are considered to be the most precise and efficient fingerprint recognition algorithms, which use minutiae points. Our protocols allow two or more honest-but-curious parties to compare their respective privately held fingerprints in a secure way such that they each learn nothing more than an accurate score of how well the fingerprints match. To the best of our knowledge, this is the first time fingerprint alignment based on minutiae has been considered in a secure computation framework. We build secure fingerprint alignment and matching protocols in both the two-party setting, using garbled circuit evaluation, and the multi-party setting, using secret sharing techniques. In addition to providing precise and efficient secure fingerprint alignment and matching, our contributions include the design of a number of secure sub-protocols for complex operations such as sine, cosine, arctangent, square root, and selection, which are likely to be of independent interest.
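To illustrate why sub-protocols for operations like sine are needed, note that secure computation frameworks typically reduce such functions to fixed-point additions and multiplications, e.g. via a truncated polynomial. The sketch below is a generic illustration of that reduction, not the paper's actual sub-protocol; the function name, the scale factor, and the degree-7 Taylor truncation are all choices made for this example.

```python
def fixed_sin(x_fp, scale=1 << 16):
    """Approximate sin(x) for fixed-point input x_fp = round(x * scale),
    using a degree-7 Taylor polynomial. Every step is an addition,
    multiplication, or public-constant division, i.e. exactly the
    operations that map onto secure arithmetic gates."""
    def fp_mul(a, b):
        # fixed-point multiply: rescale the double-precision product
        return (a * b) // scale

    x2 = fp_mul(x_fp, x_fp)
    x3 = fp_mul(x2, x_fp)
    x5 = fp_mul(x3, x2)
    x7 = fp_mul(x5, x2)
    # sin(x) ~ x - x^3/3! + x^5/5! - x^7/7!, accurate near [-pi/2, pi/2]
    return x_fp - x3 // 6 + x5 // 120 - x7 // 5040
```

A real secure sine sub-protocol additionally needs range reduction of the argument and an oblivious truncation protocol for the rescaling step, which is where much of the design effort lies.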
On today's Internet, combining the end-to-end security of TLS with Content Delivery Networks (CDNs) while ensuring the authenticity of connections results in a challenging delegation problem. When CDN servers provide content, they have to authenticate themselves as the origin server to establish a valid end-to-end TLS connection with the client. In standard TLS, such authentication requires access to the server's secret key. To curb this problem, multiple workarounds exist to realize a delegation of the authentication. In this paper, we present a solution that renders key sharing unnecessary and reduces the need for workarounds. By adapting identity-based signatures to this setting, our solution offers short-lived delegations. Additionally, by enabling forward security, existing delegations remain valid even if the server's secret key leaks. We provide an implementation of the scheme and discuss its integration into a TLS stack. In our evaluation, we show that an efficient implementation incurs less overhead than a typical network round trip. Thereby, we propose an alternative approach to current delegation practices on the web.
Federated learning has become prevalent in medical diagnosis due to its effectiveness in training a federated model among multiple health institutions (i.e., Data Islands (DIs)). However, increasingly massive DI-level poisoning attacks, which inject poisoned data into certain DIs to corrupt the availability of the federated model, have exposed a vulnerability in federated learning. Previous works on federated learning have been inadequate in ensuring both the privacy of DIs and the availability of the final federated model. In this paper, we design a secure federated learning mechanism with multiple keys to prevent DI-level poisoning attacks for medical diagnosis, called SFPA. Concretely, SFPA provides privacy-preserving random forest-based federated learning by using multi-key secure computation, which guarantees the confidentiality of DI-related information. Meanwhile, a secure defense strategy over encrypted locally-submitted models is proposed to defend against DI-level poisoning attacks. Finally, our formal security analysis and empirical tests on a public cloud platform demonstrate the security and efficiency of SFPA as well as its capability of resisting DI-level poisoning attacks.
Elaborate protocols in Secure Multi-party Computation enable several participants to compute a public function of their own private inputs while ensuring that no undesired information leaks about the private inputs, and without resorting to any trusted third party. However, the public output of the computation inevitably leaks some information about the private inputs. Recent works have introduced a framework and proposed some techniques for quantifying such information flow. Yet, owing to their complexity, those methods do not scale to practical situations that may involve large input spaces. The main contribution of the work reported here is to formally investigate the information flow captured by the min-entropy in the particular case of secure three-party computations of affine functions, in order to make its quantification scalable to realistic scenarios. To this end, we mathematically derive an explicit formula for this entropy under uniform prior beliefs about the inputs. We show that this closed-form expression can be computed in time constant in the input sizes and logarithmic in the coefficients of the affine function. Finally, we formulate some theoretical bounds for this privacy leak in the presence of non-uniform prior beliefs.
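To see what the closed-form expression replaces, the min-entropy leakage of a deterministic output under a uniform prior can be computed by brute force over the secret space, which is exactly what does not scale to large inputs. The sketch below is a generic brute-force baseline under those assumptions (uniform prior, deterministic function), not the paper's formula; the example affine function and modulus are chosen for illustration.

```python
from collections import Counter
from fractions import Fraction
from math import log2

def min_entropy_leakage(f, secret_space):
    """Brute-force min-entropy leakage of observing f(s), for s uniform
    over secret_space. Leakage = log2(posterior / prior vulnerability);
    for a deterministic f this equals log2(#distinct outputs)."""
    n = len(secret_space)
    prior_vuln = Fraction(1, n)  # best blind guess at the full secret tuple
    # Posterior: for each output the adversary guesses one maximum-probability
    # preimage, each contributing 1/n under the uniform prior.
    num_outputs = len(Counter(f(s) for s in secret_space))
    post_vuln = Fraction(num_outputs, n)
    return log2(post_vuln / prior_vuln)

# Example: party 1 knows x1 and the output of an affine function mod 8;
# the secrets are the other two parties' inputs (x2, x3).
pairs = [(x2, x3) for x2 in range(8) for x3 in range(8)]
leak = min_entropy_leakage(lambda s: (3 * s[0] + s[1]) % 8, pairs)
```

The brute force enumerates the whole secret space, so its cost is exponential in the input bit-lengths, whereas the paper's closed form is constant in the input sizes and logarithmic in the affine coefficients.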
In order to receive personalized services, individuals share their personal data with a wide range of service providers, hoping that their data will remain confidential. Thus, in case of an unauthorized distribution of their personal data by these service providers (or in case of a data breach), data owners want to identify the source of such data leakage. Digital fingerprinting schemes have been developed to embed a hidden and unique fingerprint into shared digital content, especially multimedia, to provide such liability guarantees. However, existing techniques exploit the high redundancy in the content, which is typically not present in personal data. In this work, we propose a probabilistic fingerprinting scheme that efficiently generates the fingerprint by considering a fingerprinting probability (to keep the data utility high) and publicly known inherent correlations between data points. To improve the robustness of the proposed scheme against colluding malicious service providers, we also utilize the Boneh-Shaw fingerprinting codes as a part of the proposed scheme. Furthermore, observing similarities between privacy-preserving data sharing techniques (which add controlled noise to the shared data) and the proposed fingerprinting scheme, we make a first attempt to develop a data sharing scheme that provides both privacy and fingerprint robustness at the same time. We experimentally show that fingerprint robustness and privacy have conflicting objectives, and we propose a hybrid approach to control this trade-off with a design parameter. Using the proposed hybrid approach, we show that individuals can improve their level of privacy by slightly compromising fingerprint robustness. We implement and evaluate the performance of the proposed scheme on real genomic data. Our experimental results show the efficiency and robustness of the proposed scheme.
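For readers unfamiliar with the Boneh-Shaw codes mentioned above, their basic Gamma_0(n, d) construction can be sketched as below. This is one common presentation of the bare code matrix only; the full scheme additionally requires a secret random permutation of the columns (and, for larger coalitions, code concatenation), and the parameter names here are chosen for this sketch.

```python
def boneh_shaw_code(n, d):
    """Boneh-Shaw Gamma_0(n, d) code: n codewords of length (n - 1) * d.
    User i (1-indexed) receives (i - 1) * d ones followed by (n - i) * d
    zeros. A coalition cannot alter a block in which all its members hold
    the same bit (the marking assumption), so the boundary pattern of a
    forged word implicates at least one colluder with high probability
    once the columns are secretly permuted."""
    return [[1] * ((i - 1) * d) + [0] * ((n - i) * d) for i in range(1, n + 1)]
```

The abstract's scheme uses such codes as a collusion-resistance layer on top of its correlation-aware probabilistic fingerprint generation.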