
Phosphorus donor spins in silicon offer a number of promising characteristics for the implementation of robust qubits. Amongst various concepts for scale-up, the shared-control concept takes advantage of 3D scanning tunnelling microscope (STM) fabrication techniques to minimise the number of control lines, allowing the donors to be placed at the pitch limit of $\geq$30 nm, enabling dipole interactions. A fundamental challenge is to exploit the faster exchange interaction; however, the donor spacings required are typically 15 nm or less, and the exchange interaction is notoriously sensitive to lattice-site variations in donor placement. This work presents a proposal for a fast exchange-based surface-code quantum computer architecture which explicitly addresses both donor placement imprecision commensurate with atomic-precision fabrication techniques and the stringent qubit pitch requirements. The effective pitch is extended by the incorporation of an intermediate donor acting as an exchange-interaction switch. We consider both global control schemes and a scheduled series of operations, designing GRAPE pulses for individual CNOTs based on coupling scenarios predicted by atomistic tight-binding simulations. The architecture is compatible with existing fabrication capabilities and may serve as a blueprint for the experimental implementation of a full-scale fault-tolerant quantum computer based on donor impurities in silicon.
Theoretical understanding of scanning tunnelling microscope (STM) measurements involves electronic structure details of both the STM tip and the sample being measured. Conventionally, the focus has been on the accuracy of the electronic state simulations of the sample, whereas the STM tip electronic state is typically approximated as a simple spherically symmetric $s$ orbital. This widely used $s$ orbital approximation has failed in recent STM studies in which the measured STM images of subsurface impurity wave functions in silicon required a detailed description of the STM tip electronic state. In this work, we show that the failure of the $s$ orbital approximation is due to the indirect band gap of the sample material, silicon (Si), which gives rise to complex valley interferences in the momentum space of impurity wave functions. Based on a direct comparison of STM images computed from multi-million-atom electronic structure calculations of impurity wave functions in direct (GaAs) and indirect (Si) band-gap materials, our results establish that whilst the selection of the STM tip orbital plays only a minor qualitative role for the direct band-gap GaAs material, the STM measurements are dramatically modified by the momentum-space features of the indirect band-gap Si material, thereby requiring a quantitative representation of the STM tip orbital configuration. Our work provides new insights for understanding future STM studies of semiconductor materials based on their momentum-space features, which will be important for the design and implementation of emerging technologies in the areas of quantum computing, photonics, spintronics and valleytronics.
Object recognition from live video streams comes with numerous challenges, such as variation in illumination conditions and poses. Convolutional neural networks (CNNs) have been widely used to perform intelligent visual object recognition. Yet, CNNs still suffer from severe accuracy degradation, particularly on illumination-variant datasets. To address this problem, we propose a new CNN method based on orientation fusion for visual object recognition. The proposed cloud-based video analytics system pioneers the use of bi-dimensional empirical mode decomposition to split a video frame into intrinsic mode functions (IMFs). These IMFs then undergo a Riesz transform to produce monogenic object components, which are in turn used for the training of CNNs. Past works have demonstrated how the object orientation component may be used to achieve accuracy levels as high as 93%. Herein we demonstrate how a feature-fusion strategy over the orientation components further improves visual recognition accuracy to 97%. We also assess the scalability of our method, looking at both the number and the size of the video streams under scrutiny. We carry out extensive experimentation on the publicly available Yale dataset as well as a self-generated video dataset, finding significant improvements in both accuracy and scale in comparison to AlexNet, LeNet and SE-ResNeXt, three of the most commonly used deep learning models for visual object recognition and classification.
In this research, a novel stochastic gradient descent based learning approach for radial basis function neural networks (RBFNN) is proposed. The proposed method is based on the q-gradient, which is also known as the Jackson derivative. In contrast to the conventional gradient, which finds the tangent, the q-gradient finds the secant of the function and takes larger steps towards the optimal solution. The proposed $q$-RBFNN is analyzed for its convergence performance in the context of the least squares algorithm. In particular, a closed-form expression of the Wiener solution is obtained, and stability bounds on the learning rate (step size) are derived. The analytical results are validated through computer simulation. Additionally, we propose an adaptive technique for the time-varying $q$-parameter to improve convergence speed with no trade-off in steady-state performance.
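The q-gradient this abstract refers to is the Jackson derivative, $D_q f(x) = \frac{f(qx) - f(x)}{(q-1)x}$ for $x \neq 0$, i.e. the secant slope of $f$ between $x$ and $qx$, which recovers the ordinary derivative as $q \to 1$. The toy optimiser below is only an illustrative sketch of q-gradient descent on a scalar function, not the paper's $q$-RBFNN; all names and parameter values are assumptions.

```python
def q_gradient(f, x, q=1.5, eps=1e-12):
    """Jackson derivative: secant slope of f between x and q*x."""
    if abs(x) < eps:
        # D_q reduces to the ordinary derivative at x = 0; use a central difference
        h = 1e-8
        return (f(x + h) - f(x - h)) / (2 * h)
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_gradient_descent(f, x0, lr=0.1, q=1.5, steps=50):
    """Minimise f by stepping against the q-gradient (secant) direction."""
    x = x0
    for _ in range(steps):
        x -= lr * q_gradient(f, x, q)
    return x

# For f(x) = x^2, D_q f(x) = (q + 1) * x, which exceeds the tangent slope
# 2x whenever q > 1 -- hence the "larger steps" noted in the abstract.
x_min = q_gradient_descent(lambda x: x * x, x0=5.0)
```

For q = 1.5 and lr = 0.1, each step multiplies x by 1 - 0.1(q + 1) = 0.75, so the iterate contracts geometrically towards the minimiser at 0.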
We present NNrepair, a constraint-based technique for repairing neural network classifiers. The technique aims to fix the logic of the network at an intermediate layer or at the last layer. NNrepair first uses fault localization to find potentially faulty network parameters (such as weights) and then performs repair using constraint solving to apply small modifications to the parameters to remedy the defects. We present novel strategies to enable precise yet efficient repair, such as inferring correctness specifications to act as oracles for intermediate-layer repair, and generation of experts for each class. We demonstrate the technique in the context of three different scenarios: (1) improving the overall accuracy of a model, (2) fixing security vulnerabilities caused by poisoning of training data and (3) improving the robustness of the network against adversarial attacks. Our evaluation on MNIST and CIFAR-10 models shows that NNrepair can improve the accuracy by 45.56 percentage points on poisoned data and 10.40 percentage points on adversarial data. NNrepair also provides a small improvement in the overall accuracy of models, without requiring new data or re-training.
Large-area crop classification using multi-spectral imagery has been a widely studied problem for several decades and is generally addressed using the classical Random Forest classifier. Recently, deep convolutional neural networks (DCNN) have been proposed; however, these methods have only achieved results comparable with Random Forest. In this work, we present a novel CNN-based architecture for large-area crop classification. Our methodology combines spatio-temporal analysis via 3D CNN with temporal analysis via 1D CNN. We evaluated the efficacy of our approach on the Yolo and Imperial county benchmark datasets. Our combined strategy outperforms both classical and recent DCNN-based methods in classification accuracy by 2%, while maintaining a minimum number of parameters and the lowest inference time.
This paper presents NEUROSPF, a tool for the symbolic analysis of neural networks. Given a trained neural network model, the tool extracts the architecture and model parameters and translates them into a Java representation that is amenable to analysis using the Symbolic PathFinder symbolic execution tool. Notably, NEUROSPF encodes specialized peer classes for parsing the model's parameters, thereby enabling efficient analysis. With NEUROSPF, the user has the flexibility to specify either the inputs or the network's internal parameters as symbolic, promoting the application of program analysis and testing approaches from software engineering to the field of machine learning. For instance, NEUROSPF can be used for coverage-based testing and test generation, finding adversarial examples, and constraint-based repair of neural networks, thus improving the reliability of neural networks and of the applications that use them. Video URL: https://youtu.be/seal8fG78LI
Obesity and being overweight add to the risk of several major life-threatening diseases. According to the W.H.O., a considerable population suffers from these diseases, and poor nutrition plays an important role in this context. Traditional food activity monitoring systems such as food diaries allow manual record keeping of eating activities over time and support nutrition analysis. However, these systems are prone to the problems of manual record keeping and biased reporting. Therefore, over the last decade, the research community has focused on designing automatic food monitoring systems consisting of one or multiple wearable sensors. These systems aim at detecting various macro- and micro-activities such as chewing, swallowing, eating episodes, and food types, as well as estimating quantities such as food mass and eating duration. Researchers have emphasized high detection accuracy, low estimation errors, unintrusive operation, low cost and real-life implementation while designing these systems; however, a comprehensive automatic food monitoring system has not yet been developed. Moreover, to the best of our knowledge, there is no comprehensive survey in this field that delineates the automatic food monitoring paradigm, covers a sizeable number of research studies, analyses these studies against food intake monitoring tasks using various parameters, lists their limitations and sets out future directions. In this research work, we delineate the automatic food intake monitoring paradigm and present a survey of research studies. With special focus on studies using wearable sensors, we analyze these studies against food activity monitoring tasks. We provide a brief comparison of these studies, along with their shortcomings, based upon the experimental results reported in them. Finally, we set out future directions to facilitate researchers working in this domain.
Floorplans are commonly used to represent the layout of buildings. In computer-aided design (CAD), floorplans are usually represented in the form of hierarchical graph structures. Research towards computational techniques that facilitate the design process, such as automated analysis and optimization, often uses simple floorplan representations that ignore the semantics of the space and do not take into account usage-related analytics. We present a floorplan embedding technique that uses an attributed graph to represent the geometric information as well as design semantics and behavioral features of the inhabitants as node and edge attributes. A Long Short-Term Memory (LSTM) Variational Autoencoder (VAE) architecture is proposed and trained to embed attributed graphs as vectors in a continuous space. A user study is conducted to evaluate the coupling of similar floorplans retrieved from the embedding space with respect to a given input (e.g., a design layout). The qualitative, quantitative and user-study evaluations show that our embedding framework produces meaningful and accurate vector representations for floorplans. In addition, our proposed model is generative; we studied and showcased its effectiveness for generating new floorplans. We also release the dataset that we have constructed, which includes, for each floorplan, the design semantics attributes as well as simulation-generated human behavioral features, for further study in the community.
Muhammad Usman (2020)
The Internet of Things (IoT) refers to the network of devices that are connected to the internet and can communicate with each other. The term 'things' refers to non-conventional devices that are usually not connected to the internet. The network of such devices, or things, is growing at an enormous rate, and the security and privacy of the data flowing through them is a major concern. These devices are low-powered, and conventional encryption algorithms are not suitable to be employed on them. In this correspondence, a survey of contemporary lightweight encryption algorithms suitable for use in the IoT environment is presented.
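To illustrate the class of ciphers such surveys typically cover, the sketch below implements XTEA, a classic lightweight block cipher designed for constrained devices: it encrypts a 64-bit block with a 128-bit key using only 32-bit additions, shifts and XORs. Whether this particular survey includes XTEA is an assumption; the code is an illustration of the category, not of the paper's contents.

```python
DELTA = 0x9E3779B9     # key schedule constant, floor(2^32 / golden ratio)
MASK = 0xFFFFFFFF      # keep all arithmetic modulo 2^32

def xtea_encrypt(block, key, rounds=32):
    """Encrypt a 64-bit block (two 32-bit words) with a 128-bit key (four 32-bit words)."""
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, rounds=32):
    """Invert xtea_encrypt by running the round operations in reverse order."""
    v0, v1 = block
    s = (rounds * DELTA) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1

key = (0x00112233, 0x44556677, 0x8899AABB, 0xCCDDEEFF)
plaintext = (0x01234567, 0x89ABCDEF)
ciphertext = xtea_encrypt(plaintext, key)
```

The appeal for IoT is the tiny footprint: no S-boxes or lookup tables, just a few registers' worth of state per block.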
