We study the problem of generating counterfactual text for a classifier as a means for understanding and debugging classification. Given a textual input and a classification model, we aim to minimally alter the text to change the model's prediction.
White-box approaches have been applied successfully to similar problems in vision, where one can directly optimize the continuous input. Optimization-based approaches become difficult in the language domain due to the discrete nature of text. We bypass this issue by optimizing directly in the latent space, leveraging a language model to generate candidate modifications from the optimized latent representations. We additionally use Shapley values to estimate the combinatorial effect of multiple changes, and use these estimates to guide a beam search for the final counterfactual text. We achieve favorable performance compared to recent white-box and black-box baselines under both human and automatic evaluations. Ablation studies show that both latent optimization and the use of Shapley values improve the success rate and the quality of the generated counterfactuals.
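The interaction between the Shapley estimates and the beam search can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score`, the edit names, and the flip threshold are hypothetical stand-ins for the classifier's behavior on a set of applied edits.

```python
import random

def shapley_values(edits, score, n_samples=200, seed=0):
    """Monte Carlo estimate of each candidate edit's Shapley value.
    `score` maps a set of applied edits to the classifier's flip score."""
    rng = random.Random(seed)
    phi = {e: 0.0 for e in edits}
    for _ in range(n_samples):
        perm = list(edits)
        rng.shuffle(perm)
        applied, prev = set(), score(set())
        for e in perm:
            applied.add(e)
            cur = score(applied)
            phi[e] += cur - prev      # marginal contribution of e in this ordering
            prev = cur
    return {e: v / n_samples for e, v in phi.items()}

def beam_search(edits, score, phi, beam_width=2, max_edits=3, flip_at=1.0):
    """Beam search over edit sets, expanding high-Shapley-value edits first
    and stopping once some beam crosses the (hypothetical) flip threshold."""
    order = sorted(edits, key=lambda e: -phi[e])
    beams = [frozenset()]
    for _ in range(max_edits):
        cand = {b | {e} for b in beams for e in order if e not in b}
        beams = sorted(cand, key=score, reverse=True)[:beam_width]
        if any(score(b) >= flip_at for b in beams):  # prediction flipped
            break
    return max(beams, key=score)
```

For an additive toy `score`, the Shapley values recover each edit's weight exactly, and the beam search returns the cheapest flipping set.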
Adversarial regularization has been shown to improve the generalization performance of deep learning models in various natural language processing tasks. Existing works usually formulate the method as a zero-sum game, which is solved by alternating gradient descent/ascent algorithms. Such a formulation treats the adversarial and the defending players equally, which is undesirable because only the defending player contributes to the generalization performance. To address this issue, we propose Stackelberg Adversarial Regularization (SALT), which formulates adversarial regularization as a Stackelberg game. This formulation induces a competition between a leader and a follower, where the follower generates perturbations and the leader trains the model subject to those perturbations. Unlike conventional approaches, in SALT the leader is in an advantageous position: when the leader moves, it recognizes the follower's strategy and takes the anticipated outcome into consideration. This advantage enables us to improve the model's fit to the unperturbed data. The leader's strategic information is captured by the Stackelberg gradient, which is obtained using an unrolling algorithm. Our experimental results on a set of machine translation and natural language understanding tasks show that SALT outperforms existing adversarial regularization baselines across all tasks. Our code is publicly available.
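The Stackelberg gradient with a one-step unrolled follower can be illustrated on a toy one-dimensional game. All functions and constants here are hypothetical, chosen only so the total derivative can be checked against finite differences; they are not SALT's actual losses.

```python
# Toy Stackelberg game: leader parameter `theta` fits a target, follower
# perturbation `delta` attacks the fit through the second loss term.
lam, eta = 0.5, 0.1  # perturbation weight, follower step size (assumed values)

def loss(theta, delta):
    return (theta - 1.0) ** 2 + lam * (theta + delta) ** 2

def follower_step(theta, delta0):
    # one unrolled gradient-ascent step of the follower on `loss`
    return delta0 + eta * 2.0 * lam * (theta + delta0)

def stackelberg_grad(theta, delta0):
    """Leader's gradient of loss(theta, follower_step(theta, delta0)),
    differentiating THROUGH the follower's unrolled step."""
    delta1 = follower_step(theta, delta0)
    d_delta1_d_theta = eta * 2.0 * lam              # how the follower reacts to theta
    partial_theta = 2.0 * (theta - 1.0) + 2.0 * lam * (theta + delta1)
    partial_delta = 2.0 * lam * (theta + delta1)
    return partial_theta + partial_delta * d_delta1_d_theta
```

The extra `partial_delta * d_delta1_d_theta` term is exactly what distinguishes the Stackelberg gradient from the zero-sum alternating update, which treats the follower's move as a constant.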
Neural language models are known to have a high capacity for memorizing training samples. This can have serious privacy implications when models are trained on user content such as email correspondence. Differential privacy (DP), a popular choice for training models with privacy guarantees, comes with significant costs in terms of utility degradation and disparate impact on subgroups of users. In this work, we introduce two privacy-preserving regularization methods for training language models that enable joint optimization of utility and privacy through (1) the use of a discriminator and (2) the inclusion of a novel triplet-loss term. We compare our methods with DP through extensive evaluation and show the advantages of our regularizers: a favorable utility-privacy trade-off, faster training with the ability to tap into existing optimization approaches, and uniform treatment of under-represented subgroups.
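The triplet-loss term is not specified in detail above; as a sketch, a standard triplet margin loss on embedding vectors looks like the following. How anchors, positives, and negatives are chosen for the privacy objective is specific to the paper's setup and is not shown here.

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors: pushes the
    anchor-negative distance to exceed the anchor-positive distance
    by at least `margin`; zero once that gap is achieved."""
    dist = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```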
Corner cubes are among the most important optical tools used in new optical devices and optical LIDAR. This paper compares two different designs, hollow and solid tetrahedral corner cubes, and determines the relation between the retroreflection index, the surface quality N, and the surface flatness N, together with their effect on the focal length of the hollow and solid corner cube.
Nonlinear conjugate gradient (CG) methods play an important role in solving large-scale unconstrained optimization problems. In this paper, we suggest a new modification of the CG coefficient β_k that satisfies the sufficient descent condition and possesses the global convergence property under the strong Wolfe line search. Numerical results show that our new method is more efficient than other CG formulas tested.
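The paper's modified coefficient is not given above, but the overall scheme, a CG iteration whose direction is safeguarded by the sufficient descent condition, can be sketched with the classic Fletcher-Reeves coefficient standing in for the proposed β_k. An Armijo backtracking line search is used here instead of a strong Wolfe search for brevity.

```python
import numpy as np

def cg_minimize(f, grad, x0, max_iter=500, tol=1e-6):
    """Nonlinear CG with the Fletcher-Reeves coefficient, an Armijo
    backtracking line search, and a restart whenever the direction
    fails the sufficient descent condition g^T d <= -c * ||g||^2."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    c = 1e-4
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d > -c * (g @ g):           # sufficient descent check
            d = -g                         # restart with steepest descent
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5                   # Armijo backtracking
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```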
In the problem we address, a telecommunications company needs to build a set of cellular towers to provide cellular service to the population of a geographic region. A number of candidate sites for building the towers have been identified. The choice of these sites depends on several factors, including how well a tower fits with the surrounding environment and the elevation of the terrain. The towers have a fixed coverage range, and because of budget constraints only a limited number of them can be built. Given these constraints, the company wants to provide coverage to as much of the population as possible; the goal is to choose at which of the candidate sites the company should build towers.
The problem described above can be modeled as an instance of the well-known 0/1 Knapsack problem, so in this episode we explain the 0/1 Knapsack problem and the methods used to solve it, and we elaborate on the Branch and Bound algorithm, as it is considered the best of them.
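The branch-and-bound approach for the 0/1 Knapsack model can be sketched as follows. This is a minimal illustration: the site values, costs, and budget in the test are hypothetical inputs, with values playing the role of population covered, weights the build costs, and capacity the budget.

```python
def knapsack_bb(values, weights, capacity):
    """0/1 knapsack by branch and bound. The bound at each node is the
    greedy fractional (LP) relaxation over the remaining items, which
    is a valid upper bound because the suffix is sorted by value/weight."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(k, value, room):
        for i in order[k:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:                          # take a fraction of the next item
                return value + values[i] * room / weights[i]
        return value

    best = 0
    stack = [(0, 0, capacity)]  # (depth in `order`, value so far, remaining capacity)
    while stack:
        k, value, room = stack.pop()
        if k == n:
            best = max(best, value)
            continue
        if bound(k, value, room) <= best:
            continue                       # prune: subtree cannot beat the incumbent
        i = order[k]
        if weights[i] <= room:
            stack.append((k + 1, value + values[i], room - weights[i]))  # take site i
        stack.append((k + 1, value, room))                               # skip site i
    return best
```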
In this paper, two artificial intelligence techniques, the ant colony optimization algorithm and the genetic algorithm, are merged to optimize a recurrent reinforcement learning trading system. The proposed trading system uses the ant colony optimization algorithm and the genetic algorithm to select an optimal group of technical and fundamental indicators.
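The genetic-algorithm half of the indicator-selection step can be sketched as a search over binary masks (1 = indicator selected). The fitness function in the test is a hypothetical stand-in; the paper's system would score a mask by the trading performance it yields.

```python
import random

def ga_select(fitness, n_bits, pop_size=30, generations=60, p_mut=0.05, seed=1):
    """Minimal genetic algorithm over binary masks: tournament selection
    (size 2), one-point crossover, and per-bit flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)              # tournament of size 2
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```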
Conjugate gradient algorithms are important for solving unconstrained optimization problems. In this paper we present a conjugate gradient algorithm based on an improved conjugate coefficient that achieves the sufficient descent condition and global convergence by hybridizing the two conjugate coefficients of [1] and [2]. Numerical results show the efficiency of the suggested algorithm when applied to several standard problems and compared with other conjugate gradient algorithms in terms of the number of iterations, function value, and the norm of the gradient vector.
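The specific coefficients of [1] and [2] are not reproduced here; as an illustration of how two coefficients can be hybridized while preserving descent properties, a classic example clamps the Polak-Ribière-Polyak value between 0 and the Fletcher-Reeves value.

```python
import numpy as np

def beta_hybrid(g_new, g_old):
    """One well-known hybrid CG coefficient: clamp the Polak-Ribiere-Polyak
    value between 0 and the Fletcher-Reeves value. This is an illustrative
    hybrid, not the coefficient proposed in the paper."""
    denom = g_old @ g_old
    fr = (g_new @ g_new) / denom
    prp = (g_new @ (g_new - g_old)) / denom
    return max(0.0, min(prp, fr))
```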
This research investigates the behavior of RC frames strengthened using the steel jacketing technique and examines the impact of this technique on the frame's properties in terms of rigidity, ductility, and resistance.
Multi-objective evolutionary algorithms are used in a wide range of fields to solve optimization problems in which several conflicting objectives must be considered together. Basic evolutionary algorithms have several drawbacks, such as the lack of a good termination criterion and the lack of evidence of good convergence. Multi-objective hybrid evolutionary algorithms are often used to overcome these defects.
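The core selection step shared by most multi-objective evolutionary algorithms, keeping the non-dominated (Pareto) set of candidate solutions, can be sketched as follows (minimization assumed in both objectives):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```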
Optimization
Multi-Objective Optimization (MO)
Evolutionary Algorithms
Multi-Objective Evolutionary Algorithms (MOEAs)
Many-Objective Evolutionary Algorithms (MaOEAs)