A hot-filament process was recently employed to convert, totally or partially, few-layer graphene (FLG) with Bernal stacking into crystalline sp$^3$-C sheets at low pressure. These materials constitute new synthetic carbon nanoforms. The results reported earlier relied on Raman spectroscopy and Fourier-transform infrared microscopy. As soon as the number of graphene layers in the starting FLG exceeds 2-3, the sp$^2$-C to sp$^3$-C conversion tends to be only partial. We hereby report new evidence confirming the sp$^2$-C to sp$^3$-C conversion, obtained from low-energy electron diffraction, Raman spectroscopy, and Density Functional Theory (DFT) calculations. Partial sp$^2$-C to sp$^3$-C conversion generates pairs of twisted, superimposed coherent domains (TCDs), presumably because of stress relaxation, which are evidenced by electron diffraction and Raman spectroscopy. TCDs are accompanied by a twisted-bilayer-graphene feature located at the interface between the upper diamanoid domain and the non-converted graphenic domain underneath, as evidenced by a specific Raman signature consistent with the literature. DFT calculations show that the hitherto poorly understood Raman T peak originates from a mixed sp$^2$-C/sp$^3$-C layer located between a highly hydrogenated sp$^3$-C surface layer and the graphene layer underneath.
This manuscript presents a general approach to understanding the connection between the bonding mechanism and the electronic structure of graphene on metals. To demonstrate its validity, two limiting cases of weakly and strongly bonded graphene, on Al(111) and Ni(111), are considered, where the Dirac cone is preserved or fully destroyed, respectively. Furthermore, the electronic structure of the intermediate system graphene/Cu(111), i.e., its doping level, hybridization effects, and gap formation at the Dirac point, is fully understood within the framework of the proposed approach. This work summarises the long-standing debate regarding the connection between bonding strength and valence-band modification in graphene/metal systems and paves the way for effective control of the electronic states of graphene in the vicinity of the Fermi level.
Plasmonic excitations such as surface plasmon polaritons (SPPs) and graphene plasmons (GPs) carry large momenta and are thus able to confine electromagnetic fields to small dimensions. This property makes them ideal platforms for subwavelength optical control and manipulation at the nanoscale. The momenta of these plasmons are increased even further if metal-insulator-metal and graphene-insulator-metal schemes are used for SPPs and GPs, respectively. However, with such large momenta, their far-field excitation becomes challenging. In this work, we consider hybrids of graphene and metallic nanostructures and study the physical mechanisms behind the interaction of far-field light with the supported high-momentum plasmon modes. While there are some similarities in the properties of GPs and SPPs, since both are of the plasmon-polariton type, their physical properties are also distinctly different. For GPs we find two different physical mechanisms, related either to GPs confined to isolated cavities or to large-area collective grating couplers. Strikingly, we find that although the two systems are conceptually different, under specific conditions they can behave similarly. By applying the same study to SPPs, we find a different physical behavior, which fundamentally stems from the different dispersion relations of SPPs as compared to GPs. Furthermore, these hybrids produce large field enhancements that can also be electrically tuned and modulated, making them ideal candidates for a variety of plasmonic devices.
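For orientation, the contrast invoked here can be traced to the textbook dispersion relations (standard forms added for context, not results of this work). At a single metal-dielectric interface the SPP wavevector is
\[
k_{\mathrm{SPP}} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}},
\]
whereas in the non-retarded Drude limit the GP wavevector grows quadratically with frequency,
\[
q_{\mathrm{GP}} \approx \frac{2\pi \varepsilon_0 \bar{\varepsilon} \hbar^2 \omega^2}{e^2 E_F},
\]
with $\bar{\varepsilon}$ the average permittivity of the surrounding media and $E_F$ the graphene Fermi level. The $\omega^2$ scaling and the explicit $E_F$ dependence underlie both the stronger confinement and the electrical tunability of GPs mentioned above.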
How should social scientists understand and communicate the uncertainty of statistically estimated causal effects? It is well known that the conventional significance-vs.-insignificance approach is associated with misunderstandings and misuses. Behavioral research suggests that people understand uncertainty more appropriately on a numerical, continuous scale than on a verbal, discrete scale. Motivated by this background, I propose presenting the probabilities of different effect sizes. Probability is an intuitive, continuous measure of uncertainty that allows researchers to better understand and communicate the uncertainty of statistically estimated effects. In addition, unlike the conventional approaches, my approach requires no decision threshold for an uncertainty measure or an effect size, allowing researchers to remain agnostic about thresholds such as p < 5% and their justification. I apply my approach to a previous social scientific study, showing that it enables richer inference than the significance-vs.-insignificance approach taken by the original study. The accompanying R package makes my approach easy to implement.
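To illustrate the kind of presentation proposed here, the minimal sketch below reports P(effect > c) for several effect sizes c, assuming a normal sampling (or posterior) distribution summarized by a point estimate and standard error; the function name and inputs are hypothetical and do not reflect the accompanying R package's API.

    # Report probabilities of different effect sizes instead of a binary
    # significance verdict. Assumes a normal distribution N(estimate, se^2);
    # names are illustrative placeholders, not the R package's API.
    from scipy.stats import norm

    def effect_size_probabilities(estimate, se, thresholds):
        """Return P(effect > c) for each threshold c under N(estimate, se^2)."""
        return {c: 1.0 - norm.cdf(c, loc=estimate, scale=se) for c in thresholds}

    # Example: an estimated effect of 0.8 with standard error 0.5.
    probs = effect_size_probabilities(0.8, 0.5, thresholds=[0.0, 0.5, 1.0])
    for c, p in probs.items():
        print(f"P(effect > {c}) = {p:.2f}")
    # P(effect > 0) = 0.95: the effect is probably positive, even though the
    # two-sided p-value (~0.11) would be called "insignificant" at p < 5%,
    # illustrating the information lost by thresholding.

Reading the full set of probabilities (here, 0.95, 0.73, and 0.34) conveys graded uncertainty about the effect size without committing to any decision threshold.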
Feature-based local attribution methods are among the most prevalent in the explainable artificial intelligence (XAI) literature. Going beyond standard correlation, methods have recently been proposed that highlight what should be minimally sufficient to justify the classification of an input (viz. pertinent positives). While minimal sufficiency is an attractive property, the resulting explanations are often too sparse for a human to understand and evaluate the local behavior of the model, making it difficult to judge its overall quality. To overcome these limitations, we propose a novel method called the Path-Sufficient Explanations Method (PSEM), which outputs a sequence of sufficient explanations of strictly decreasing size (or value) for a given input -- from the original input down to a minimally sufficient explanation -- which can be thought of as tracing the local boundary of the model in a smooth manner, thus providing better intuition about the local model behavior for the specific input. We validate these claims, both qualitatively and quantitatively, with experiments that show the benefit of PSEM across three modalities (image, tabular, and text). A user study demonstrates the strength of the method in communicating the local behavior, where (many) users are able to correctly determine the prediction made by the model.
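To make the notion of a path of sufficient explanations concrete, the sketch below greedily masks features (replacing them with baseline values) while the model's predicted class is retained with sufficient confidence. This is an illustrative reading of the idea under stated assumptions, not the authors' PSEM algorithm; model_proba, baseline, and min_conf are hypothetical placeholders.

    # Greedy sketch of a sufficiency path: repeatedly drop the feature whose
    # removal least harms the predicted class's score, keeping only masks
    # that still yield a sufficiently confident prediction of that class.
    import numpy as np

    def sufficiency_path(model_proba, x, baseline, min_conf=0.5):
        """Return masks of strictly decreasing size that remain sufficient."""
        n = len(x)
        keep = np.ones(n, dtype=bool)
        target = int(np.argmax(model_proba(x)))  # class to remain sufficient for
        path = [keep.copy()]
        while True:
            best_i, best_p = None, -1.0
            for i in np.flatnonzero(keep):
                trial = keep.copy()
                trial[i] = False
                x_masked = np.where(trial, x, baseline)  # mask with baseline values
                p = model_proba(x_masked)[target]
                if p > best_p:
                    best_i, best_p = i, p
            if best_i is None or best_p < min_conf:
                break  # no further removal keeps the explanation sufficient
            keep[best_i] = False
            path.append(keep.copy())
        return path  # from full input down to a (locally) minimal sufficient mask

The returned sequence of masks plays the role of the path described above: each step is still sufficient for the prediction, and inspecting the steps in order conveys how the model's local behavior degrades toward the minimal explanation.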
Understanding the interpretation of machine learning (ML) models is of paramount importance when making decisions with societal impacts, such as transport control, financial activities, and medical diagnosis. While current model-interpretation methodologies focus on using locally linear functions to approximate the models or on creating self-explanatory models that give explanations for each input instance, they do not focus on model interpretation at the subpopulation level, that is, understanding how model interpretations vary across different subset aggregations of a dataset. To address the challenge of providing explanations of an ML model across a whole dataset, we propose SUBPLEX, a visual analytics system that helps users understand black-box model explanations through subpopulation visual analysis. SUBPLEX was designed through an iterative process with machine learning researchers to address three usage scenarios from real-life machine learning tasks: model debugging, feature selection, and bias detection. The system applies novel subpopulation analysis to ML model explanations, together with interactive visualization, to explore the explanations of a dataset at different levels of granularity. Based on the system, we conduct a user evaluation to assess how understanding interpretations at a subpopulation level influences the sense-making process of interpreting ML models from a user's perspective. Our results suggest that by providing model explanations for different groups of data, SUBPLEX encourages users to generate more ingenious ideas that enrich the interpretations. It also helps users to achieve a tight integration between the programming workflow and the visual analytics workflow. Finally, we summarize the considerations observed in applying visualization to machine learning interpretations.
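As a rough illustration of subpopulation analysis on explanations, one plausible building block is to cluster per-instance attribution vectors (e.g., SHAP values) and compare mean attribution profiles across the resulting groups. This is a sketch of the general idea only, not SUBPLEX's actual implementation; the function and parameter names are hypothetical.

    # Cluster local explanation vectors into subpopulations and summarize
    # each group's mean attribution profile for cross-group comparison.
    # A sketch of the idea; SUBPLEX itself is an interactive visual
    # analytics system, not this function.
    import numpy as np
    from sklearn.cluster import KMeans

    def explanation_subpopulations(attributions, n_groups=4, seed=0):
        """attributions: (n_samples, n_features) matrix of local explanations."""
        km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed)
        labels = km.fit_predict(attributions)
        # Mean attribution profile per subpopulation.
        profiles = np.stack([attributions[labels == g].mean(axis=0)
                             for g in range(n_groups)])
        return labels, profiles

Comparing the per-group profiles (rather than one global average) is what surfaces subpopulation-specific model behavior, e.g., a feature that dominates predictions for one group of data but is irrelevant for another.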