Motivated by scenarios where data is used for diverse prediction tasks, we study whether fair representation can be used to guarantee fairness for unknown tasks and for multiple fairness notions simultaneously. We consider seven group fairness notions that cover the concepts of independence, separation, and calibration. Against the backdrop of the fairness impossibility results, we explore approximate fairness. We prove that, although fair representation might not guarantee fairness for all prediction tasks, it does guarantee fairness for an important subset of tasks -- the tasks for which the representation is discriminative. Specifically, all seven group fairness notions are linearly controlled by the fairness and discriminativeness of the representation. When an incompatibility exists between different fairness notions, fair and discriminative representation hits the sweet spot that approximately satisfies all notions. Motivated by our theoretical findings, we propose to learn both fair and discriminative representations using a pretext loss, which self-supervises learning, and Maximum Mean Discrepancy as a fair regularizer. Experiments on tabular, image, and face datasets show that, using the learned representation, downstream predictions that were unknown when the representation was learned indeed become fairer under all seven group fairness notions, and the fairness guarantees computed from our theoretical results are all valid.
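The abstract above names Maximum Mean Discrepancy (MMD) as the fair regularizer. As a hedged illustration only (not the paper's implementation; the RBF kernel, the bandwidth `gamma`, and the biased estimator are assumptions), a minimal NumPy sketch of the squared-MMD statistic between the representations of two demographic groups:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    # between rows of X (n x d) and rows of Y (m x d).
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.1):
    # Biased estimator of squared MMD: distance between the empirical
    # kernel mean embeddings of the two samples. Zero iff the embeddings match.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Toy check: representations drawn from the same distribution should give a
# small MMD, while a shifted group gives a large one.
rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 8)), rng.normal(0, 1, (200, 8)))
diff = mmd2(rng.normal(0, 1, (200, 8)), rng.normal(2, 1, (200, 8)))
```

In a fair-representation objective, a term like `mmd2(Z_group_a, Z_group_b)` would be added to the task loss so that minimizing it pushes the two groups' representation distributions together.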
Recently, the notions of subjective constraint monotonicity, epistemic splitting, and foundedness have been introduced for epistemic logic programs, with the aim of using them as main criteria, or intuitions, for comparing how well different answer set semantics proposed in the literature comply with these intuitions. In this note, we consider these three notions and demonstrate on some examples that they may be too strong in general and may exclude some desired answer sets or world views, respectively. In conclusion, these properties should not be regarded as mandatory properties that every answer set semantics must satisfy in general.
Physically vitrifying single-element metallic glass requires ultrahigh cooling rates, which are still unachievable for most of the closest-packed metals. Here, we report a facile synthetic strategy for creating mono-atomic palladium metallic glass nanoparticles with a purity of 99.35 +/- 0.23 at% from palladium-silicon liquid droplets using a cooling rate below 1000 K/s. In-situ environmental transmission electron microscopy directly detected the leaching of silicon. A further hydrogen absorption experiment showed that this palladium metallic glass expanded little upon hydrogen uptake, exhibiting great potential for hydrogen separation applications. Our results provide insight into the formation of mono-atomic metallic glass at the nanoscale.
Style variation has been a major challenge for person re-identification, which aims to match the same pedestrians across different cameras. Existing works attempted to address this problem with camera-invariant descriptor subspace learning. However, image artifacts increase as the difference between images taken by different cameras grows. To solve this problem, we propose a UnityStyle adaptation method, which can smooth the style disparities within the same camera and across different cameras. Specifically, we first create UnityGAN to learn the style changes between cameras, producing shape-stable, style-unified images for each camera, which we call UnityStyle images. Meanwhile, we use UnityStyle images to eliminate style differences between images, which enables a better match between query and gallery. We then apply the proposed method to Re-ID models, expecting to obtain more style-robust deep features for querying. We conduct extensive experiments on widely used benchmark datasets to evaluate the performance of the proposed framework; the results confirm the superiority of the proposed model.
We report high-resolution neutron scattering measurements of the low-energy spin fluctuations of KFe$_{2}$As$_{2}$, the end member of the hole-doped Ba$_{1-x}$K$_x$Fe$_2$As$_2$ family with only hole pockets, above and below its superconducting transition temperature $T_c$ ($\sim$3.5 K). Our data reveal clear spin fluctuations at the incommensurate wave vector ($0.5\pm\delta$, 0, $L$), with $\delta = 0.2$ (1-Fe unit cell), which exhibit $L$-modulation peaking at $L=0.5$. Upon cooling to the superconducting state, the incommensurate spin fluctuations gradually open a spin gap and form a sharp spin resonance mode. The incommensurability ($2\delta = 0.4$) of the resonance mode ($\sim 1.2$ meV) is considerably larger than the previously reported value ($2\delta \approx 0.32$) at higher energies ($\gtrsim 6$ meV). The determination of the momentum structure of the spin fluctuations in the low-energy limit allows a direct comparison with the realistic Fermi surface and superconducting gap structure. Our results point to an $s$-wave pairing with a reversed sign between the hole pockets near the zone center in KFe$_{2}$As$_{2}$.
In end-to-end dialogue modeling and agent learning, it is important to (1) effectively learn knowledge from data, and (2) fully utilize heterogeneous information, e.g., dialogue act flow and utterances. However, the majority of existing methods cannot simultaneously satisfy these two conditions. For example, rule definition and data labeling during system design require substantial manual work, and sequence-to-sequence methods model only one side of the utterance information. In this paper, we propose a novel joint end-to-end model based on multi-task representation learning, named DialogAct2Vec, which captures knowledge from heterogeneous information by automatically learning knowledgeable low-dimensional embeddings from data. The model requires little manual intervention in system design, and we find that multi-task learning can greatly improve the effectiveness of representation learning. Extensive experiments on a public dataset for restaurant reservation show that the proposed method leads to significant improvements over state-of-the-art baselines on both the act prediction task and the utterance prediction task.
Precision medicine has recently become a focus of medical research, as its implementation brings value to all stakeholders in the healthcare system. Various statistical methodologies have been developed to tackle problems in different aspects of this field, e.g., assessing treatment heterogeneity, identifying patient subgroups, or building treatment decision models. However, there is a lack of new tools devoted to selecting and prioritizing predictive biomarkers. We propose a novel tree-based ensemble method, the random interaction forest (RIF), to generate predictive importance scores and prioritize candidate biomarkers for constructing refined treatment decision models. RIF was evaluated by comparison with conventional random forest and univariable regression methods and showed favorable properties under various simulation scenarios. We applied the proposed RIF method to a biomarker dataset from two phase III clinical trials of bezlotoxumab on $\textit{Clostridium difficile}$ infection recurrence and obtained biologically meaningful results.
Model fine-tuning is a widely used transfer learning approach in person re-identification (ReID) applications, which fine-tunes a pre-trained feature extraction model to the target scenario instead of training a model from scratch. It is challenging due to the significant variations within the target scenario, e.g., different camera viewpoints, illumination changes, and occlusion. These variations result in a gap between the distribution of each mini-batch and the distribution of the whole dataset when using mini-batch training. In this paper, we study model fine-tuning from the perspective of the aggregation and utilization of the global information of the dataset when using mini-batch training. Specifically, we introduce a novel network structure called the Batch-related Convolutional Cell (BConv-Cell), which progressively collects the global information of the dataset into a latent state and uses it to rectify the extracted features. Based on BConv-Cells, we further propose the Progressive Transfer Learning (PTL) method to facilitate the model fine-tuning process by jointly optimizing the BConv-Cells and the pre-trained ReID model. Empirical experiments show that our proposal can greatly improve the performance of the ReID model on the MSMT17, Market-1501, CUHK03, and DukeMTMC-reID datasets. Moreover, we extend our proposal to the general image classification task. Experiments on several image classification benchmark datasets demonstrate that our proposal can significantly improve the performance of baseline models. The code has been released at https://github.com/ZJULearning/PTL