In recent years, the use of face biometric security systems has grown rapidly; consequently, presentation attack detection (PAD) has received significant attention from the research community and has become a major field of research. Researchers have tackled the problem with various methods, from conventional texture feature extraction such as LBP, BSIF, and LPQ to deep neural networks with different architectures. Despite the results each of these techniques has achieved for a particular attack scenario or dataset, most of them still fail to generalize to unseen conditions, as the effectiveness of each is limited to certain types of presentation attacks and presentation attack instruments (PAI). In this paper, instead of relying solely on hand-crafted texture features or only on deep neural networks, we address the problem by fusing both wide and deep features in a unified neural architecture. The main idea is to take advantage of the strengths of both approaches to derive a well-generalized solution to the problem. We also evaluate the effectiveness of our method by comparing its results with each of the mentioned techniques separately. The evaluation is carried out on several spoofing datasets, namely ROSE-Youtu, SiW, and NUAA Imposter. In particular, we simultaneously learn a low-dimensional latent space empowered with data-driven features learned via a Convolutional Neural Network designed for the spoofing detection task (i.e., the deep channel), and leverage spoofing detection features already popular in the frequency and temporal domains (i.e., via the wide channel).
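To make the wide-and-deep fusion idea concrete, the following is a minimal sketch, not the paper's exact architecture: it assumes the wide channel receives a precomputed hand-crafted texture descriptor (e.g., a 256-bin LBP histogram) while the deep channel is a small CNN over face crops, with both projected into a shared low-dimensional latent space before classification. The layer sizes, descriptor choice, and input resolution are illustrative assumptions.

```python
# Minimal sketch of a wide-and-deep fusion network for PAD.
# Assumptions (not from the paper): the wide channel is a precomputed
# 256-bin LBP histogram; the deep channel is a small CNN over 3x64x64
# face crops; all layer sizes are illustrative.
import torch
import torch.nn as nn

class WideDeepPAD(nn.Module):
    def __init__(self, wide_dim=256, latent_dim=64):
        super().__init__()
        # Deep channel: data-driven features learned by a CNN.
        self.deep = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim), nn.ReLU(),
        )
        # Wide channel: hand-crafted texture/frequency features
        # projected into the same low-dimensional latent space.
        self.wide = nn.Sequential(
            nn.Linear(wide_dim, latent_dim), nn.ReLU(),
        )
        # Fusion head: classify live vs. spoof from the fused latent vector.
        self.head = nn.Linear(2 * latent_dim, 2)

    def forward(self, image, wide_features):
        # Concatenate deep and wide embeddings, then classify.
        z = torch.cat([self.deep(image), self.wide(wide_features)], dim=1)
        return self.head(z)

# Example usage with random tensors standing in for face crops and LBP histograms:
# logits = WideDeepPAD()(torch.randn(8, 3, 64, 64), torch.randn(8, 256))
```

The key design point illustrated here is that both channels are trained jointly in one network, so the learned (deep) and hand-crafted (wide) representations are fused in a single latent space rather than combined by late score-level fusion.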