Transfer learning has become a common practice for training deep learning models with limited labeled data in a target domain. On the other hand, deep models are vulnerable to adversarial attacks. Though transfer learning has been widely applied, its effect on model robustness is unclear. To address this question, we conduct extensive empirical evaluations, which show that fine-tuning effectively enhances model robustness under white-box FGSM attacks. We also propose a black-box attack method for transfer learning models that attacks the target model with the adversarial examples produced by its source model. To systematically measure the effect of both white-box and black-box attacks, we propose a new metric that evaluates how transferable the adversarial examples produced by a source model are to a target model. Empirical results show that the adversarial examples are more transferable when fine-tuning is used than when the two networks are trained independently.
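A minimal sketch, in PyTorch, of the FGSM attack and the transfer-based black-box evaluation described above; the function names, epsilon value, and [0, 1] clipping range are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=0.03):
    """Craft FGSM adversarial examples: a single signed-gradient step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def transfer_attack_accuracy(source_model, target_model, x, y, eps=0.03):
    """Black-box transfer attack: craft examples on the source model,
    then measure the target model's accuracy on them."""
    x_adv = fgsm_examples(source_model, x, y, eps)
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    return (preds == y).float().mean().item()
```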
Deep domain adaptation models learn a neural network in an unlabeled target domain by leveraging the knowledge from a labeled source domain. This can be achieved by learning a domain-invariant feature space. Though the learned representations are separable in the source domain, they usually have a large variance, and samples with different class labels tend to overlap in the target domain, which yields suboptimal adaptation performance. To fill this gap, a Fisher loss is proposed to learn discriminative representations that are within-class compact and between-class separable. Experimental results on two benchmark datasets show that the Fisher loss is a general and effective loss for deep domain adaptation. Noticeable improvements are obtained when it is used together with widely adopted transfer criteria, including MMD, CORAL and the domain adversarial loss. For example, an absolute improvement of 6.67% in mean accuracy is attained when the Fisher loss is used together with the domain adversarial loss on the Office-Home dataset.
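A minimal sketch of a discriminative loss in the spirit of the Fisher loss described above: it penalizes within-class scatter and pushes class means apart. The paper's exact formulation may differ; the function name, margin-based between-class term, and all hyperparameters here are illustrative assumptions.

```python
import torch

def fisher_style_loss(features, labels, margin=1.0):
    """Illustrative discriminative loss: pull samples toward their class mean
    (within-class compactness) and push class means apart (between-class
    separability). A generic sketch, not the paper's exact Fisher loss."""
    classes = labels.unique()
    means = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    # Within-class term: mean squared distance of each sample to its class mean.
    within = torch.stack([((features[labels == c] - means[i]) ** 2).sum(dim=1).mean()
                          for i, c in enumerate(classes)]).mean()
    # Between-class term: hinge on pairwise distances between class means.
    k = len(classes)
    if k > 1:
        dists = torch.cdist(means, means)
        mask = ~torch.eye(k, dtype=torch.bool, device=features.device)
        between = torch.relu(margin - dists[mask]).mean()
    else:
        between = features.new_zeros(())
    return within + between
```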
Parameters in deep neural networks that are trained on large-scale databases can generalize across multiple domains, a property referred to as transferability. Unfortunately, transferability is usually defined in terms of discrete states, and it differs across domains and network architectures. Existing works usually apply parameter sharing or fine-tuning heuristically, and there is no principled approach for learning a parameter transfer strategy. To address this gap, a parameter transfer unit (PTU) is proposed in this paper. The PTU learns a fine-grained nonlinear combination of activations from both the source and the target domain networks, and subsumes hand-crafted discrete transfer states. In the PTU, transferability is controlled by two gates, which are artificial neurons that can be learned from data. The PTU is a general and flexible module that can be used in both CNNs and RNNs. Experiments are conducted with various network architectures and multiple transfer domain pairs. Results demonstrate the effectiveness of the PTU, as it outperforms heuristic parameter sharing and fine-tuning in most settings.
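A minimal sketch of a gated unit in the spirit of the PTU: two learnable gates produce a data-dependent, nonlinear combination of source- and target-network activations. The class name and the concrete gating form (sigmoid gates conditioned on the target activation) are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ParameterTransferUnit(nn.Module):
    """Illustrative PTU-style module: combine source- and target-network
    activations through two learnable gates."""
    def __init__(self, dim):
        super().__init__()
        self.gate_s = nn.Linear(dim, dim)  # gate applied to the source activation
        self.gate_t = nn.Linear(dim, dim)  # gate applied to the target activation

    def forward(self, h_source, h_target):
        g_s = torch.sigmoid(self.gate_s(h_target))  # data-dependent transferability
        g_t = torch.sigmoid(self.gate_t(h_target))
        return g_t * h_target + g_s * h_source
```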
302 - Ruipu Bai, Yinghua Zhang, 2016
In this paper we study $k$-order homogeneous Rota-Baxter operators with weight $1$ on the simple $3$-Lie algebra $A_{\omega}$ (over a field of characteristic zero), which is realized by an associative commutative algebra $A$, a derivation $\Delta$ and an involution $\omega$ (Lemma \mref{lem:rbd3}). A $k$-order homogeneous Rota-Baxter operator on $A_{\omega}$ is a linear map $R$ satisfying $R(L_m)=f(m+k)L_{m+k}$ for all generators $\{ L_m \mid m\in \mathbb{Z} \}$ of $A_{\omega}$ and a map $f : \mathbb{Z} \rightarrow \mathbb{F}$, where $k\in \mathbb{Z}$. We prove that $R$ is a $k$-order homogeneous Rota-Baxter operator on $A_{\omega}$ of weight $1$ with $k \neq 0$ if and only if $R=0$ (see Theorem 3.2), and $R$ is a $0$-order homogeneous Rota-Baxter operator on $A_{\omega}$ of weight $1$ if and only if $R$ is one of the forty possibilities described in Theorems 3.5, 3.7, 3.9, 3.10, 3.18, 3.21 and 3.22.
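For context, the Rota-Baxter identity of weight $\lambda$ commonly used for $3$-Lie algebras (the case $\lambda = 1$ is the one studied here) reads $[R(x),R(y),R(z)] = R([R(x),R(y),z]+[R(x),y,R(z)]+[x,R(y),R(z)]) + \lambda R([R(x),y,z]+[x,R(y),z]+[x,y,R(z)]) + \lambda^{2} R([x,y,z])$. This is the standard $n$-Lie generalization of the binary Lie-algebra identity and is stated here for orientation rather than quoted from the paper.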
105 - Ruipu Bai, Yinghua Zhang, 2015
In this paper we study homogeneous Rota-Baxter operators with weight zero on the infinite-dimensional simple $3$-Lie algebra $A_{\omega}$ over a field $F$ ($\mathrm{ch}\, F=0$), which is realized by an associative commutative algebra $A$, a derivation $\Delta$ and an involution $\omega$ (Lemma \mref{lem:rbd3}). A homogeneous Rota-Baxter operator on $A_{\omega}$ is a linear map $R$ of $A_{\omega}$ satisfying $R(L_m)=f(m)L_m$ for all generators of $A_{\omega}$, where $f : \mathbb{Z} \rightarrow F$. We prove that $R$ is a homogeneous Rota-Baxter operator on $A_{\omega}$ if and only if $R$ is one of the five possibilities $R_{0_1}$, $R_{0_2}$, $R_{0_3}$, $R_{0_4}$ and $R_{0_5}$, which are described in Theorems \mref{thm:thm1}, \mref{thm:thm4}, \mref{thm:thm01}, \mref{thm:thm03} and \mref{thm:thm04}. From the five homogeneous Rota-Baxter operators $R_{0_i}$, we construct new $3$-Lie algebras $(A, [ , , ]_i)$ for $1\leq i\leq 5$, such that $R_{0_i}$ is a homogeneous Rota-Baxter operator on the $3$-Lie algebra $(A, [ , , ]_i)$, respectively.
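For context, a Rota-Baxter operator of weight zero on a $3$-Lie algebra is usually required to satisfy $[R(x),R(y),R(z)] = R([R(x),R(y),z]+[R(x),y,R(z)]+[x,R(y),R(z)])$ for all $x, y, z$; this is the standard weight-zero identity (the $\lambda = 0$ case of the weight-$\lambda$ identity above), stated here for orientation and not quoted from the paper.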