
Protecting others vs. protecting yourself against ballistic droplets: Quantification by stain patterns

Added by: Ernesto Altshuler
Publication date: 2021
Field: Physics
Language: English





It is often accepted a priori that a face mask worn by an infected subject is effective at preventing the spread of a respiratory disease, while a healthy person wearing a mask is not necessarily well protected. Using a frugal stain technique, we quantify the ballistic droplets reaching a receptor from a jet-emitting source that mimics a coughing, sneezing, or talking human: in real life, such droplets may host active SARS-CoV-2 virus able to replicate in the nasopharynx. We demonstrate that materials often used in home-made face masks block most of the droplets. We also show quantitatively that less liquid carried by ballistic droplets reaches the receptor when a blocking material is deployed near the source than when it is located near the receptor, which supports the paradigm that your face mask does protect you, but protects others even better than it protects you.
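The quantification step can be made concrete with a short calculation: if each stain on a water-sensitive receptor is assumed to come from a roughly spherical droplet whose diameter on the paper exceeds its airborne diameter by a fixed spread factor, the total liquid reaching the receptor, and hence the blocking efficiency of an interposed material, follows from the measured stain areas. The sketch below is a minimal illustration under that assumption; the spread factor, stain areas, and function names are hypothetical and not taken from the paper.

```python
import numpy as np

def stain_areas_to_volume(areas_mm2, spread_factor=2.0):
    """Estimate deposited liquid volume from stain areas.

    Hypothetical calibration: each circular stain of area A was left by a
    droplet whose diameter on the paper is `spread_factor` times its
    airborne diameter. Real experiments calibrate with droplets of known
    volume instead.
    """
    areas = np.asarray(areas_mm2, dtype=float)
    stain_diam_mm = 2.0 * np.sqrt(areas / np.pi)     # stain diameter from its area
    drop_diam_mm = stain_diam_mm / spread_factor     # inferred airborne droplet diameter
    volumes_uL = (np.pi / 6.0) * drop_diam_mm**3     # sphere volume; 1 mm^3 == 1 microliter
    return volumes_uL.sum()

def blocking_efficiency(areas_no_mask, areas_with_mask):
    """Fraction of liquid stopped by the interposed material."""
    v0 = stain_areas_to_volume(areas_no_mask)
    v1 = stain_areas_to_volume(areas_with_mask)
    return 1.0 - v1 / v0

# Hypothetical stain areas (mm^2) measured on the receptor, for illustration only.
print(blocking_efficiency(areas_no_mask=[4.0, 2.5, 1.2, 0.8],
                          areas_with_mask=[0.3, 0.1]))
```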



Related research

Quantum error correcting codes (QECCs) are the means of choice whenever quantum systems suffer errors, e.g., due to imperfect devices, environments, or faulty channels. By now, a plethora of families of codes is known, but there is no universal approach to finding new or optimal codes for a certain task and subject to specific experimental constraints. In particular, once found, a QECC is typically used in very diverse contexts, while its resilience against errors is captured in a single figure of merit, the distance of the code. This does not necessarily give rise to the most efficient protection possible given a certain known error or a particular application for which the code is employed. In this paper, we investigate the loss channel, which plays a key role in quantum communication, and in particular in quantum key distribution over long distances. We develop a numerical set of tools that allows us to optimize an encoding specifically for recovering lost particles without the need for backward communication, in settings where some knowledge about what was lost is available, and we demonstrate its capabilities. This allows us to arrive at new codes ideal for the distribution of entangled states in this particular setting, and also to investigate whether encoding in qudits or allowing for non-deterministic correction proves advantageous compared to known QECCs. While we here focus on the case of losses, our methodology is applicable whenever the errors in a system can be characterized by a known linear map.
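The closing point, that the approach works whenever the errors are described by a known linear map, can be illustrated with the loss channel itself: on a Fock space truncated at n_max photons, its Kraus operators are A_k = sum over n >= k of sqrt(C(n,k) gamma^k (1-gamma)^(n-k)) |n-k><n|, where gamma is the per-photon loss probability. The snippet below is only a sketch of that linear map (not the paper's optimization tooling) and checks that the Kraus operators are complete on the truncated space.

```python
import numpy as np
from math import comb

def loss_kraus(gamma, n_max):
    """Kraus operators of the single-mode pure-loss channel (loss probability
    gamma) on a Fock space truncated at n_max photons: A_k removes k photons."""
    dim = n_max + 1
    kraus = []
    for k in range(dim):
        A = np.zeros((dim, dim))
        for n in range(k, dim):
            A[n - k, n] = np.sqrt(comb(n, k) * gamma**k * (1 - gamma)**(n - k))
        kraus.append(A)
    return kraus

gamma, n_max = 0.1, 5
ops = loss_kraus(gamma, n_max)
completeness = sum(A.T @ A for A in ops)             # should equal the identity
print(np.allclose(completeness, np.eye(n_max + 1)))  # True: the map is trace preserving
```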
We consider quantum error-correction codes for multimode bosonic systems, such as optical fields, that are affected by amplitude damping. Such a process is a generalization of an erasure channel. We demonstrate that the most accessible method of transforming optical systems with the help of passive linear networks has limited usefulness in preparing and manipulating such codes. These limitations stem directly from the recoverability condition for one-photon loss. We introduce a three-photon code protecting against the first order of amplitude damping, i.e. a single photon loss, and discuss its preparation using linear optics with single-photon sources and conditional detection. Quantum state and process tomography in the code subspace can be implemented using passive linear optics and photon counting. An experimental proof-of-principle demonstration of elements of the proposed quantum error correction scheme for a one-photon erasure lies well within present technological capabilities.
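A well-known three-mode, three-photon code with exactly these properties (plausibly the code meant here, though the abstract does not spell it out) encodes |0_L> = (|300> + |030> + |003>)/sqrt(3) and |1_L> = |111>; losing a single photon from either code word produces orthogonal, distinguishable error states. The sketch below numerically verifies the corresponding Knill-Laflamme conditions on a truncated Fock space; it is an independent illustration, not the paper's own construction.

```python
import numpy as np

N_MODES, CUTOFF = 3, 4                    # Fock occupation 0..3 per mode
DIM = CUTOFF ** N_MODES

def fock(occ):
    """Basis vector |n1 n2 n3> in the truncated three-mode Fock space."""
    idx = occ[0] * CUTOFF**2 + occ[1] * CUTOFF + occ[2]
    v = np.zeros(DIM)
    v[idx] = 1.0
    return v

def annihilation(mode):
    """Annihilation operator for one mode, embedded in the three-mode space."""
    a1 = np.diag(np.sqrt(np.arange(1, CUTOFF)), k=1)   # single-mode a: a|n> = sqrt(n)|n-1>
    ops = [np.eye(CUTOFF)] * N_MODES
    ops[mode] = a1
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# Code words of the three-photon code.
zero_L = (fock((3, 0, 0)) + fock((0, 3, 0)) + fock((0, 0, 3))) / np.sqrt(3)
one_L  = fock((1, 1, 1))
codewords = [zero_L, one_L]

# Knill-Laflamme conditions for the error set {a_1, a_2, a_3} (single photon loss):
# <i_L| a_m^dag a_n |j_L> must equal delta_ij * c_mn for some codeword-independent c.
ok = True
for m in range(N_MODES):
    for n in range(N_MODES):
        E = annihilation(m).T @ annihilation(n)
        block = np.array([[ci @ E @ cj for cj in codewords] for ci in codewords])
        ok &= np.isclose(block[0, 1], 0) and np.isclose(block[1, 0], 0)
        ok &= np.isclose(block[0, 0], block[1, 1])
print(ok)   # True: a single photon loss maps code words to correctable error spaces
```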
Routing attacks remain practically effective in the Internet today as existing countermeasures either fail to provide protection guarantees or are not easily deployable. Blockchain systems are particularly vulnerable to such attacks as they rely on Internet-wide communication to reach consensus. In particular, Bitcoin, the most widely used cryptocurrency, can be split in half by any AS-level adversary using BGP hijacking. In this paper, we present SABRE, a secure and scalable Bitcoin relay network which relays blocks worldwide through a set of connections that are resilient to routing attacks. SABRE runs alongside the existing peer-to-peer network and is easily deployable. As a critical system, SABRE's design is highly resilient and can efficiently handle high bandwidth loads, including Denial of Service attacks. We built SABRE around two key technical insights. First, we leverage fundamental properties of inter-domain routing (BGP) policies to host relay nodes: (i) in locations that are inherently protected against routing attacks; and (ii) on paths that are economically preferred by the majority of Bitcoin clients. These properties are generic and can be used to protect other Blockchain-based systems. Second, we leverage the fact that relaying blocks is communication-heavy, not computation-heavy. This enables us to offload most of the relay operations to programmable network hardware (using the P4 programming language). Thanks to this hardware/software co-design, SABRE nodes operate seamlessly under high load while mitigating the effects of malicious clients. We present a complete implementation of SABRE together with an extensive evaluation. Our results demonstrate that SABRE is effective at securing Bitcoin against routing attacks, even with deployments as small as 6 nodes.
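The second insight, that relaying blocks is communication-heavy rather than computation-heavy, amounts to a node that mostly deduplicates and forwards data. The sketch below is a generic, hypothetical relay loop in that spirit; it is not SABRE's implementation, which offloads this logic to P4-programmable hardware, and the peer addresses are placeholders.

```python
import hashlib
import socket

PEERS = [("203.0.113.10", 8333), ("198.51.100.7", 8333)]   # hypothetical peer addresses
seen = set()                                                # hashes of blocks already relayed

def relay_loop(listen_port=8444, max_block_size=4_000_000):
    """Receive serialized blocks and forward unseen ones to all peers.

    Deduplicating by hash keeps the per-block work tiny: such a node is
    bandwidth-bound, not CPU-bound. (A real relay would read framed
    messages rather than a single recv.)"""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn:
            block = conn.recv(max_block_size)
            digest = hashlib.sha256(block).digest()
            if not block or digest in seen:
                continue
            seen.add(digest)
            for host, port in PEERS:
                with socket.create_connection((host, port), timeout=2) as peer:
                    peer.sendall(block)
```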
There is growing concern about how personal data are used when users grant applications direct access to the sensors of their mobile devices. In fact, high-resolution temporal data generated by motion sensors directly reflect a user's activities and indirectly reveal physical and demographic attributes. In this paper, we propose a feature learning architecture for mobile devices that provides flexible and negotiable privacy-preserving sensor data transmission by appropriately transforming raw sensor data. The objective is to move from the current binary setting of granting or denying permission to an application, toward a model that allows users to grant each application permission over a limited range of inferences according to the provided services. The internal structure of each component of the proposed architecture can be flexibly changed, and the trade-off between privacy and utility can be negotiated between the constraints of the user and the underlying application. We validated the proposed architecture in an activity recognition application using two real-world datasets, with the objective of recognizing an activity without disclosing gender as an example of private information. Results show that the proposed framework maintains the usefulness of the transformed data for activity recognition, with an average loss of only around three percentage points, while reducing the possibility of gender classification to around 50%, the target random guess, from more than 90% when using raw sensor data. We also present and distribute MotionSense, a new dataset for activity and attribute recognition collected from motion sensors.
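One common way to realize such a negotiable privacy/utility trade-off is to train a feature encoder jointly with a task classifier and against an adversarial classifier for the private attribute, with a weight that sets the operating point. The sketch below shows that pattern in PyTorch; the layer sizes, the weight `lam`, and the uniform-output penalty are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Transforms a window of raw accelerometer/gyroscope samples into features."""
    def __init__(self, in_dim=6 * 128, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, feat_dim), nn.ReLU())
    def forward(self, x):
        return self.net(x)

encoder = Encoder()
task_head = nn.Linear(64, 6)       # e.g. 6 activity classes
priv_head = nn.Linear(64, 2)       # private attribute, e.g. gender

ce = nn.CrossEntropyLoss()
lam = 1.0                          # privacy weight: larger -> stronger suppression

def encoder_loss(x, y_task):
    """Keep features useful for the task while making the private attribute
    hard to predict (push the adversary's output toward a uniform guess)."""
    z = encoder(x)
    task_loss = ce(task_head(z), y_task)
    priv_logits = priv_head(z)
    uniform = torch.full_like(priv_logits.softmax(dim=1), 1.0 / priv_logits.size(1))
    priv_penalty = nn.functional.kl_div(priv_logits.log_softmax(dim=1), uniform,
                                        reduction="batchmean")
    return task_loss + lam * priv_penalty

# The adversarial head itself is trained in alternation to predict the private
# attribute from detached features, so the encoder always faces a strong adversary.
```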
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide takeaways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
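The detection idea can be illustrated compactly: track the distance of each incoming query to its closest previously seen query and test whether those minimum distances still follow the near-normal distribution produced by benign clients, which crafted synthetic queries tend to distort. The sketch below follows that spirit with a Shapiro-Wilk normality test; the threshold and the choice of test are illustrative and not necessarily those used by PRADA.

```python
import numpy as np
from scipy import stats

class QueryMonitor:
    """Flag clients whose query stream no longer looks benign.

    Keeps the minimum L2 distance of every new query to the queries seen
    before it, and raises an alarm when these distances stop looking
    normally distributed (Shapiro-Wilk statistic below `threshold`)."""

    def __init__(self, threshold=0.90, min_queries=30):
        self.threshold = threshold
        self.min_queries = min_queries
        self.queries = []
        self.min_dists = []

    def observe(self, query):
        q = np.asarray(query, dtype=float).ravel()
        if self.queries:
            self.min_dists.append(min(np.linalg.norm(q - p) for p in self.queries))
        self.queries.append(q)
        if len(self.min_dists) < self.min_queries:
            return False                      # not enough evidence yet
        w_stat, _ = stats.shapiro(self.min_dists)
        return w_stat < self.threshold        # True -> suspected extraction attack

# Toy run with random queries, purely to exercise the interface.
rng = np.random.default_rng(0)
monitor = QueryMonitor()
alarms = [monitor.observe(rng.normal(size=16)) for _ in range(100)]
print(any(alarms))
```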