
Thinking, Fast and Slow

Arabic title: التفكير السريع والبطيء

Publication date: 2011
Research language: Arabic
Created by: batman bat








References used
Daniel Kahneman, Thinking, Fast and Slow (2011)


We often use perturbations to regularize neural models. For neural encoder-decoders, previous studies applied scheduled sampling (Bengio et al., 2015) and adversarial perturbations (Sato et al., 2019) as perturbations, but these methods require considerable computational time. Thus, this study addresses the question of whether these approaches are efficient enough in terms of training time. We compare several perturbations in sequence-to-sequence problems with respect to computational time. Experimental results show that simple techniques such as word dropout (Gal and Ghahramani, 2016) and random replacement of input tokens achieve scores comparable to (or better than) the recently proposed perturbations, even though these simple methods are faster.
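A minimal sketch of the two simple perturbations mentioned above, word dropout and random replacement of input tokens, is given below. This is illustrative code rather than the paper's implementation; the toy batch of token ids, the UNK id, and the vocabulary size are assumed placeholders.

    import torch

    def word_dropout(token_ids, drop_prob, unk_id):
        # With probability drop_prob, replace each input token id with the UNK id.
        mask = torch.rand(token_ids.shape) < drop_prob
        return torch.where(mask, torch.full_like(token_ids, unk_id), token_ids)

    def random_replacement(token_ids, replace_prob, vocab_size):
        # With probability replace_prob, replace each input token id with a
        # uniformly sampled vocabulary id.
        mask = torch.rand(token_ids.shape) < replace_prob
        random_ids = torch.randint_like(token_ids, vocab_size)
        return torch.where(mask, random_ids, token_ids)

    if __name__ == "__main__":
        x = torch.tensor([[5, 17, 42, 8, 3]])  # a toy batch of source token ids
        print(word_dropout(x, drop_prob=0.1, unk_id=1))
        print(random_replacement(x, replace_prob=0.1, vocab_size=1000))

Both perturbations touch only the input ids and add a negligible amount of computation per batch, which is consistent with the abstract's point that they are faster than scheduled sampling or adversarial perturbations.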
Recent work has adopted models of pragmatic reasoning for the generation of informative language in, e.g., image captioning. We propose a simple but highly effective relaxation of fully rational decoding, based on an existing incremental and character-level approach to pragmatically informative neural image captioning. We implement a mixed, 'fast' and 'slow', speaker that applies pragmatic reasoning occasionally (only word-initially) while unrolling the language model. In our evaluation, we find that increased informativeness through pragmatic decoding generally lowers quality and, somewhat counter-intuitively, increases repetitiveness in captions. Our mixed speaker, however, achieves a good balance between quality and informativeness.
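The control flow of such a mixed speaker can be sketched as follows. This is an assumed, simplified illustration rather than the authors' implementation: literal_next_chars and pragmatic_rescore stand in for the character-level language model and the pragmatic (RSA-style) rescoring step, and the EOS symbol is a placeholder.

    from typing import Callable, List, Tuple

    # prefix -> scored continuation characters, e.g. [("a", -0.3), (" ", -1.2)]
    NextChars = Callable[[str], List[Tuple[str, float]]]
    # (prefix, candidates) -> pragmatically rescored candidates
    Rescore = Callable[[str, List[Tuple[str, float]]], List[Tuple[str, float]]]

    EOS = "<eos>"  # placeholder end-of-sequence symbol

    def mixed_speaker_decode(literal_next_chars: NextChars,
                             pragmatic_rescore: Rescore,
                             max_len: int = 200) -> str:
        caption = ""
        at_word_start = True
        for _ in range(max_len):
            candidates = literal_next_chars(caption)
            if at_word_start:
                # "Slow" path: pragmatic reasoning is applied only word-initially.
                candidates = pragmatic_rescore(caption, candidates)
            # "Fast" path otherwise: greedily follow the character-level language model.
            next_char, _ = max(candidates, key=lambda c: c[1])
            if next_char == EOS:
                break
            caption += next_char
            at_word_start = (next_char == " ")
        return caption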
With the widespread deployment of new high-speed networks and the need to support critical applications, survivability, reliability, and quality of service have become pressing issues. The recovery mechanisms used by IP networks take a long time, from several seconds to minutes, which causes a large number of dropped data packets. MPLS is a next-generation backbone architecture that can speed up packet forwarding to the destination by label switching, especially with its traffic engineering capability. MPLS recovery mechanisms are increasing in popularity because they can guarantee fast restoration and high QoS assurance. In our research we simulated several link-failure scenarios using fast reroute technology in MPLS networks with OPNET. The results lead us to consider this technique successful in limiting delay and packet loss during the recovery cycle.
Human dialogue contains evolving concepts, and speakers naturally associate multiple concepts to compose a response. However, current dialogue models with the seq2seq framework lack the ability to effectively manage concept transitions and can hardly introduce multiple concepts to responses in a sequential decoding manner. To facilitate a controllable and coherent dialogue, in this work, we devise a concept-guided non-autoregressive model (CG-nAR) for open-domain dialogue generation. The proposed model comprises a multi-concept planning module that learns to identify multiple associated concepts from a concept graph and a customized Insertion Transformer that performs concept-guided non-autoregressive generation to complete a response. The experimental results on two public datasets show that CG-nAR can produce diverse and coherent responses, outperforming state-of-the-art baselines in both automatic and human evaluations with substantially faster inference speed.
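The two stages described above can be illustrated with the following toy sketch. It is not the CG-nAR code: plan_concepts and predict_insertion are hypothetical stand-ins for the multi-concept planning module and the Insertion Transformer.

    from typing import Callable, List, Optional, Tuple

    def concept_guided_generate(
        context: str,
        plan_concepts: Callable[[str], List[str]],
        predict_insertion: Callable[[List[str]], Optional[Tuple[int, str]]],
        max_steps: int = 30,
    ) -> str:
        # Stage 1: identify an ordered set of associated concepts for the response.
        tokens = plan_concepts(context)
        # Stage 2: complete the response by repeatedly inserting tokens between
        # the planned concepts instead of decoding strictly left to right.
        for _ in range(max_steps):
            step = predict_insertion(tokens)
            if step is None:  # the model signals that the response is complete
                break
            position, token = step
            tokens.insert(position, token)
        return " ".join(tokens)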
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic corpus of deductively valid arguments and generate artificial argumentative texts to train CRiPT: a critical thinking intermediarily pre-trained transformer based on GPT-2. Significant transfer learning effects can be observed: trained on three simple core schemes, CRiPT accurately completes conclusions of different, and more complex, types of arguments, too. CRiPT generalizes the core argument schemes in a correct way. Moreover, we obtain consistent and promising results on NLU benchmarks. In particular, CRiPT's zero-shot accuracy on the GLUE diagnostics exceeds GPT-2's performance by 15 percentage points. The findings suggest that intermediary pre-training on texts that exemplify basic reasoning abilities (such as those typically covered in critical thinking textbooks) might help language models to acquire a broad range of reasoning skills. The synthetic argumentative texts presented in this paper are a promising starting point for building such a critical thinking curriculum for language models.
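For a concrete sense of what such a synthetic corpus might contain, the sketch below templates one deductively valid core scheme (modus ponens) into short argumentative texts. The conditional pairs and the phrasing are invented placeholders, not the paper's actual data-generation code.

    import random

    # Hypothetical (a, b) pairs for conditionals of the form "if a, then b".
    FACTS = [
        ("it rains", "the street gets wet"),
        ("the alarm rings", "everyone leaves the building"),
        ("the premise holds", "the conclusion follows"),
    ]

    def modus_ponens_example() -> str:
        # Scheme: "If a, then b. a. Therefore, b."
        a, b = random.choice(FACTS)
        return f"If {a}, then {b}. {a[0].upper()}{a[1:]}. Therefore, {b}."

    if __name__ == "__main__":
        for _ in range(3):
            print(modus_ponens_example())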
