
Open banking enables individual customers to own their banking data, which provides fundamental support for a new ecosystem of data marketplaces and financial services. In the near future, it is foreseeable that the finance sector will adopt decentralized data ownership through federated learning, a timely technology that can train intelligent models in a decentralized manner. The most attractive aspect of federated learning is its ability to decompose model training between a centralized server and distributed nodes without collecting private data. This decomposed learning framework has great potential to protect users' privacy and sensitive data. Federated learning therefore combines naturally with an open banking data marketplace. This chapter discusses the challenges of applying federated learning in the context of open banking and explores the corresponding solutions.
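To make the decomposition concrete, here is a minimal federated-averaging sketch in Python: a central server repeatedly broadcasts a model, each node fits it on its own private data, and only the updated parameters (never the raw records) are sent back and averaged. The linear model, the client setup, and every name below are illustrative assumptions rather than the chapter's own code.

# Minimal federated-averaging sketch: a central server aggregates model
# updates from banks' local nodes without ever collecting their raw data.
# All names and the linear-model setup are illustrative assumptions.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one node's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """One communication round: broadcast, train locally, average."""
    local_ws = [local_update(global_w, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    # Weighted average of local models; only parameters leave the nodes.
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                           # three banks with private datasets
    X = rng.normal(size=(100, 2))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print("recovered weights:", w)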
Encouraging progress has been made towards Visual Question Answering (VQA) in recent years, but it is still challenging to enable VQA models to adaptively generalize to out-of-distribution (OOD) samples. Intuitively, recomposing existing visual concepts (i.e., attributes and objects) can generate compositions that are unseen in the training set, which encourages VQA models to generalize to OOD samples. In this paper, we formulate OOD generalization in VQA as a compositional generalization problem and propose a graph generative modeling-based training scheme (X-GGM) to handle the problem implicitly. X-GGM leverages graph generative modeling to iteratively generate a relation matrix and node representations for a predefined graph that uses attribute-object pairs as nodes. Furthermore, to alleviate the unstable training issue in graph generative modeling, we propose a gradient distribution consistency loss that constrains the adversarially perturbed data distribution and the generated distribution to be consistent. The baseline VQA model (LXMERT) trained with the X-GGM scheme achieves state-of-the-art OOD performance on two standard VQA OOD benchmarks, i.e., VQA-CP v2 and GQA-OOD. Extensive ablation studies demonstrate the effectiveness of X-GGM components.
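The abstract gives no implementation details; as a rough illustration of the iterative step it describes, the Python sketch below generates a dense relation matrix from attribute-object node features and then updates the node representations by message passing. The bilinear edge scorer, the dimensions, and the update rule are all assumptions, and the gradient distribution consistency loss is not reproduced here.

# Very rough sketch of the iterative graph generative step described above:
# from node features for attribute-object pairs, generate a dense relation
# matrix, then update node representations by message passing. Everything
# here is an illustrative assumption, not the X-GGM implementation.
import torch
import torch.nn as nn

class GraphGenStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.rel_scorer = nn.Bilinear(dim, dim, 1)   # edge "generator"
        self.update = nn.Linear(2 * dim, dim)        # node updater

    def forward(self, nodes):                        # nodes: (N, dim)
        n = nodes.size(0)
        left = nodes.unsqueeze(1).expand(n, n, -1)
        right = nodes.unsqueeze(0).expand(n, n, -1)
        rel = torch.sigmoid(self.rel_scorer(left.reshape(n * n, -1),
                                            right.reshape(n * n, -1))).view(n, n)
        agg = rel @ nodes / (rel.sum(-1, keepdim=True) + 1e-6)
        nodes = torch.relu(self.update(torch.cat([nodes, agg], dim=-1)))
        return rel, nodes

step = GraphGenStep(dim=16)
nodes = torch.randn(8, 16)                           # 8 attribute-object nodes
for _ in range(3):                                   # iterative refinement
    rel, nodes = step(nodes)
print(rel.shape, nodes.shape)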
Estimation of a precision matrix (i.e., inverse covariance matrix) is widely used to exploit conditional independence among continuous variables. The influence of abnormal observations is exacerbated in high-dimensional settings as the dimensionality increases. In this work, we propose robust estimation of the inverse covariance matrix based on an $l_1$ regularized objective function with a weighted sample covariance matrix. The robustness of the proposed objective function can be justified by a nonparametric technique based on the integrated squared error criterion. To address the non-convexity of the objective function, we develop an efficient algorithm in the spirit of majorization-minimization. Asymptotic consistency of the proposed estimator is also established. The performance of the proposed method is compared with several existing approaches via numerical simulations. We further demonstrate the merits of the proposed method with an application to genetic network inference.
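The abstract does not state the objective explicitly. As one illustrative reading consistent with the ingredients it lists (an $l_1$ penalty on the precision matrix combined with a weighted sample covariance whose weights $w_i$ depend on the current estimate, which is one way the problem can become non-convex and suited to majorization-minimization), a criterion could take the following form; this display is an assumption, not the paper's own objective:

$$\hat{\Omega} \;=\; \arg\min_{\Omega \succ 0}\; \operatorname{tr}\!\big(\tilde{S}(\Omega)\,\Omega\big) \;-\; \log\det\Omega \;+\; \lambda \lVert \Omega \rVert_1, \qquad \tilde{S}(\Omega) \;=\; \frac{\sum_{i=1}^{n} w_i(\Omega)\,(x_i - \bar{x})(x_i - \bar{x})^{\top}}{\sum_{i=1}^{n} w_i(\Omega)}.$$

Under this reading, one majorization-minimization step would fix the weights $w_i$ at the current iterate, solve the resulting convex graphical-lasso-type subproblem, update the weights, and repeat.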
Directional excitation of guided modes is central to many applications ranging from light harvesting and optical information processing to quantum optical technology. Of paramount interest is the active control of near-field directionality, which provides a new paradigm for the real-time on-chip manipulation of light. Here we find that, for a given dipolar source, its near-field directionality can be toggled efficiently by tailoring the polarization of the excited surface waves, for example by tuning the chemical potential of graphene in a graphene-metasurface waveguide. This finding enables a feasible scheme for actively controlling near-field directionality. Counterintuitively, we reveal that this scheme can transform a circular electric/magnetic dipole into a Huygens dipole in the near-field coupling. Moreover, for Janus dipoles, this scheme enables us to actively flip their near-field coupling and non-coupling faces.
Gaole He, Yunshi Lan, Jing Jiang (2021)
Multi-hop Knowledge Base Question Answering (KBQA) aims to find the answer entities that are multiple hops away in the Knowledge Base (KB) from the entities in the question. A major challenge is the lack of supervision signals at intermediate steps: multi-hop KBQA algorithms can only receive feedback from the final answer, which makes learning unstable or ineffective. To address this challenge, we propose a novel teacher-student approach for the multi-hop KBQA task. In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals that improve the reasoning capacity of the student network. The major novelty lies in the design of the teacher network, where we utilize both forward and backward reasoning to enhance the learning of intermediate entity distributions. By considering bidirectional reasoning, the teacher network can produce more reliable intermediate supervision signals, which alleviates the issue of spurious reasoning. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our approach on the KBQA task. The code to reproduce our analysis is available at https://github.com/RichardHGL/WSDM2021_NSM.
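As a hedged illustration of how bidirectional intermediate supervision could be wired up, the Python sketch below blends the teacher's forward and backward hop-wise entity distributions and uses a KL term to pull the student's intermediate distributions toward that blend. The blending weight, the loss form, and all tensor shapes are assumptions; the released code at the repository linked above is the authoritative implementation.

# Hedged sketch of the teacher-student idea described above: the teacher
# runs forward (question -> answer) and backward (answer -> question)
# reasoning, and the blend of the two hop-wise entity distributions serves
# as a soft supervision signal for the student. Details are assumptions.
import torch
import torch.nn.functional as F

def intermediate_supervision_loss(student_dists, fwd_dists, bwd_dists, alpha=0.5):
    """Each argument is a list of (batch, n_entities) probability
    distributions, one per reasoning hop."""
    loss = 0.0
    for s, f, b in zip(student_dists, fwd_dists, bwd_dists):
        teacher = alpha * f + (1 - alpha) * b        # blend the two directions
        # F.kl_div(log_student, teacher) = KL(teacher || student): penalizes
        # hops where the student's entity distribution strays from the blend.
        loss = loss + F.kl_div(s.clamp_min(1e-9).log(), teacher,
                               reduction="batchmean")
    return loss / len(student_dists)

# toy usage: 2 hops, batch of 4 questions, 10 candidate entities
hops, batch, n_ent = 2, 4, 10
student = [torch.softmax(torch.randn(batch, n_ent), dim=-1) for _ in range(hops)]
fwd = [torch.softmax(torch.randn(batch, n_ent), dim=-1) for _ in range(hops)]
bwd = [torch.softmax(torch.randn(batch, n_ent), dim=-1) for _ in range(hops)]
print(intermediate_supervision_loss(student, fwd, bwd))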
Numerous deep reinforcement learning agents have been proposed, and each of them has its strengths and flaws. In this work, we present a Cooperative Heterogeneous Deep Reinforcement Learning (CHDRL) framework that can learn a policy by integrating the advantages of heterogeneous agents. Specifically, we propose a cooperative learning framework that classifies heterogeneous agents into two classes: global agents and local agents. Global agents are off-policy agents that can utilize experiences from the other agents. Local agents are either on-policy agents or population-based evolutionary algorithm (EA) agents that can explore the local area effectively. We employ global agents, which are sample-efficient, to guide the learning of local agents, so that local agents benefit from the sample-efficient agents while maintaining their own advantages, e.g., stability. Global agents also benefit from effective local searches. Experimental studies on a range of continuous control tasks from the MuJoCo benchmark show that CHDRL achieves better performance than state-of-the-art baselines.
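The cooperative structure described above can be pictured with the following Python skeleton: exploratory local agents fill a shared replay buffer, a sample-efficient off-policy global agent trains on all of that experience, and it periodically guides the locals. Agent internals are stubbed, and every name here is an illustrative assumption rather than the CHDRL code.

# Skeleton of the cooperative loop described above. Off-policy "global"
# agent reuses all experience; "local" agents (on-policy or evolutionary)
# explore and feed a shared buffer; the global agent periodically guides
# them. Agent internals are stubs; all names are illustrative assumptions.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)
    def add(self, transition):
        self.buf.append(transition)
    def sample(self, k=32):
        return random.sample(list(self.buf), min(k, len(self.buf)))

class GlobalAgent:
    """Off-policy learner that trains on everyone's experience."""
    def update(self, batch):            # stub: gradient step on the batch
        pass
    def guide(self, local_agent):       # stub: e.g., sync weights or pick elites
        pass

class LocalAgent:
    """On-policy or evolutionary explorer that generates diverse rollouts."""
    def rollout(self):                  # stub: returns a list of transitions
        return [("s", "a", 0.0, "s'")]
    def local_update(self):             # stub: its own on-policy / EA update
        pass

buffer, global_agent = ReplayBuffer(), GlobalAgent()
local_agents = [LocalAgent() for _ in range(2)]

for step in range(1000):
    for agent in local_agents:
        for tr in agent.rollout():      # local exploration feeds the buffer
            buffer.add(tr)
        agent.local_update()
    global_agent.update(buffer.sample())      # off-policy reuse of all data
    if step % 50 == 0:
        for agent in local_agents:
            global_agent.guide(agent)         # periodic guidance of locals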
There is strong interest among healthcare payers to identify emerging healthcare cost drivers to support early intervention. However, many challenges arise in analyzing large, high-dimensional, and noisy healthcare data. In this paper, we propose a systematic approach that utilizes hierarchical search strategies and enhanced statistical process control (SPC) algorithms to surface high-impact cost drivers. Our approach aims to provide interpretable, detailed, and actionable insights into detected change patterns attributable to multiple clinical factors. We also propose an algorithm to identify comparable treatment offsets at the population level and quantify the cost impact of their utilization changes. To illustrate our approach, we apply it to the IBM Watson Health MarketScan Commercial Database and organize the detected emerging drivers into five categories for reporting. We also discuss some findings from this analysis and potential actions for mitigating the impact of the drivers.
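The abstract names enhanced SPC algorithms without specifying the chart; as one illustrative possibility, the Python sketch below runs a one-sided CUSUM on a monthly cost series to flag where an upward drift emerges. The thresholds and the synthetic series are assumptions, not the paper's method.

# Illustrative change detection in the spirit of the SPC step described
# above: a one-sided CUSUM on a monthly per-member cost series flags the
# months where costs drift upward. Parameters are illustrative assumptions.
import numpy as np

def cusum_upward(series, k=0.5, h=5.0):
    """Return indices where the standardized upward CUSUM exceeds h."""
    z = (series - series.mean()) / series.std()
    s, alarms = 0.0, []
    for t, zt in enumerate(z):
        s = max(0.0, s + zt - k)        # accumulate only upward drift
        if s > h:
            alarms.append(t)
            s = 0.0                     # restart after an alarm
    return alarms

rng = np.random.default_rng(1)
baseline = rng.normal(100, 5, size=24)          # 24 stable months
shifted = rng.normal(115, 5, size=12)           # a cost driver emerges
costs = np.concatenate([baseline, shifted])
print("alarm months:", cusum_upward(costs))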
Imbalanced learning (IL), i.e., learning unbiased models from class-imbalanced data, is a challenging problem. Typical IL methods, including resampling and reweighting, were designed based on heuristic assumptions. They often suffer from unstable performance, poor applicability, and high computational cost in complex tasks where their assumptions do not hold. In this paper, we introduce a novel ensemble IL framework named MESA. It adaptively resamples the training set over iterations to obtain multiple classifiers and forms a cascade ensemble model. MESA directly learns the sampling strategy from data to optimize the final metric, rather than following random heuristics. Moreover, unlike prevailing meta-learning-based IL solutions, we decouple model-training and meta-training in MESA by independently training the meta-sampler on task-agnostic meta-data. This makes MESA generally applicable to most existing learning models, and the meta-sampler can be efficiently applied to new tasks. Extensive experiments on both synthetic and real-world tasks demonstrate the effectiveness, robustness, and transferability of MESA. Our code is available at https://github.com/ZhiningLiu1998/mesa.
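To give a feel for the resample-then-ensemble loop, the Python sketch below repeatedly under-samples the majority class, concentrating on points the current ensemble gets wrong, and adds a new base learner each round. The error-driven heuristic stands in for MESA's learned meta-sampler, which is the actual contribution; see the linked repository for the real implementation.

# Hedged sketch of an iterative resample-and-ensemble loop. The simple
# error-driven sampling heuristic below is a stand-in for the learned
# meta-sampler that MESA trains on task-agnostic meta-data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ensemble_proba(models, X):
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)

def iterative_ensemble(X, y, n_rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    maj, mino = np.where(y == 0)[0], np.where(y == 1)[0]
    for _ in range(n_rounds):
        if models:
            # Focus the next under-sample on majority points the current
            # ensemble misclassifies (stand-in for the learned sampler).
            p = np.abs(y[maj] - ensemble_proba(models, X[maj])) + 1e-3
        else:
            p = np.ones(len(maj))
        pick = rng.choice(maj, size=len(mino), replace=False, p=p / p.sum())
        idx = np.concatenate([pick, mino])          # balanced subset
        models.append(DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx]))
    return models

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.8).astype(int)   # ~5% minority
models = iterative_ensemble(X, y)
print("ensemble prob. range:",
      ensemble_proba(models, X).min(), ensemble_proba(models, X).max())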
The crystal phase is well studied and presents a periodic atomic arrangement in a three-dimensional lattice, whereas the amorphous phase is poorly understood. Here, starting from a cage-like bicyclocalix[2]arene[2]triazine building block, a new 2D MOF is constructed with extremely weak interlaminar interactions between adjacent 2D crystal layers. Interlayer slip occurs under external disturbance and leads to the loss of periodicity along one dimension of the crystal lattice, resulting in an intermediate phase between the crystalline and amorphous phases: the chaos phase, which is non-periodic on the microscopic scale but ordered on the mesoscopic scale. This chaos-phase 2D MOF is a disordered self-assembly of black-phosphorus-like 3D layers, which have excellent mechanical strength and a thickness of 1.15 nm. The bulk 2D MOF material can readily be exfoliated into monolayer nanosheets at the gram scale with unprecedented evenness and homogeneity, as well as previously unattained lateral size (>10 um), presenting the first mass-producible monolayer 2D material, which can form wafer-scale films on a substrate.
