In language learning in the limit, we investigate computable devices (learners) that learn formal languages. Over the years, many natural restrictions have been imposed on the studied learners. Monotonic restrictions in particular have always enjoyed special attention: although they are natural requirements, monotonic learners show significantly diverse behaviour when studied in different settings. A recent study thoroughly analysed the learning capabilities of strongly monotone learners subject to memory restrictions and various additional requirements. The differences it unveiled between explanatory and behaviourally correct such learners motivate our study of monotone learners under the same restrictions. We reveal differences and similarities between monotone learners and their strongly monotone counterparts when studied with various additional restrictions. In particular, we show that explanatory monotone learners, although known to be strictly more powerful, (almost) preserve the pairwise relations seen in strongly monotone learning. Contrasting this similarity, we find substantial differences when studying behaviourally correct monotone learners. Most notably, we show that monotone learners, unlike their strongly monotone counterparts, rely heavily on the order in which the information is presented, an unusual result for behaviourally correct learners.
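For orientation, the two restrictions contrasted in this abstract can be stated briefly; the notation is the standard one from inductive inference and is our addition, as the abstract fixes none. For all finite data sequences \sigma \sqsubseteq \tau for a target language L, where W_e denotes the language generated by hypothesis e, a learner h is required to satisfy

    W_{h(\sigma)} \subseteq W_{h(\tau)}                  (strongly monotone)
    W_{h(\sigma)} \cap L \subseteq W_{h(\tau)} \cap L    (monotone)

so a monotone learner may conjecture elements outside L and drop them later, while a strongly monotone learner may never shrink its conjectures.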
We study learners (computable devices) inferring formal languages, a setting referred to as language learning in the limit or inductive inference. In particular, we require the learners we investigate to be witness-based, that is, to justify each of
We propose orthogonal inductive matrix completion (OMIC), an interpretable approach to matrix completion based on a sum of multiple orthonormal side information terms, together with nuclear-norm regularization. The approach allows us to inject prio
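Since this abstract is cut short, the following is only a minimal sketch of the ingredients it names, inductive matrix completion with orthonormal side information and nuclear-norm regularization, restricted to a single side-information pair; it is not the paper's OMIC algorithm, and all names (X, Y, core, lam, step) are illustrative assumptions.

    # Minimal sketch (not the paper's algorithm): inductive matrix completion
    # with one pair of orthonormal side-information matrices and nuclear-norm
    # regularization on the core, solved by proximal gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, dx, dy = 50, 40, 10, 8

    # Orthonormal side information: the columns of X and Y are orthonormal.
    X, _ = np.linalg.qr(rng.standard_normal((n, dx)))
    Y, _ = np.linalg.qr(rng.standard_normal((m, dy)))

    # Ground truth M = X @ core_true @ Y.T, observed only on a random mask.
    core_true = rng.standard_normal((dx, dy))
    M = X @ core_true @ Y.T
    mask = rng.random((n, m)) < 0.3

    def svt(A, tau):
        # Singular value thresholding: proximal operator of tau * nuclear norm.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    core = np.zeros((dx, dy))
    lam, step = 0.1, 1.0  # step size 1 is safe because X and Y are orthonormal
    for _ in range(200):
        resid = mask * (X @ core @ Y.T - M)      # gradient of the squared loss
        core = svt(core - step * (X.T @ resid @ Y), step * lam)

    print("relative error:",
          np.linalg.norm(core - core_true) / np.linalg.norm(core_true))

The orthonormality is what keeps the proximal step well conditioned here; per the abstract, OMIC itself sums several such side-information terms rather than using a single one.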
Many deep reinforcement learning algorithms contain inductive biases that sculpt the agent's objective and its interface to the environment. These inductive biases can take many forms, including domain knowledge and pretuned hyper-parameters. In gener
Autonomous systems embedded with machine learning modules often rely on deep neural networks to classify the different objects of interest in the environment, or the different actions or strategies for the system to take. Due to the non-linearity and high
Linear interpolation between initial neural network parameters and converged parameters after training with stochastic gradient descent (SGD) typically leads to a monotonic decrease in the training objective. This Monotonic Linear Interpolation (MLI)
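To make the interpolation property concrete, here is a toy check of it on a logistic-regression model trained with SGD; the model, data, and hyper-parameters are illustrative assumptions, not the paper's setup.

    # Toy check of Monotonic Linear Interpolation (MLI): evaluate the training
    # objective along the line from the initial to the trained parameters.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    y = (X @ rng.standard_normal(5) > 0).astype(float)

    def loss(w):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # logistic regression
        return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

    w0 = 0.1 * rng.standard_normal(5)            # initial parameters
    w = w0.copy()
    for _ in range(500):                         # plain minibatch SGD
        idx = rng.integers(0, 200, size=32)
        p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))
        w -= 0.5 * X[idx].T @ (p - y[idx]) / len(idx)

    # Sample the objective along w(alpha) = (1 - alpha) * w0 + alpha * w.
    alphas = np.linspace(0.0, 1.0, 21)
    losses = [loss((1 - a) * w0 + a * w) for a in alphas]
    print("monotone decrease:",
          all(b <= a + 1e-9 for a, b in zip(losses, losses[1:])))

On this convex toy problem a monotone decrease along the path is unsurprising; the interest of MLI is that the same behaviour is typically observed for non-convex deep networks.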