Many sequential decision-making tasks require choosing, at each decision step, the right action out of a vast set of possibilities by extracting actionable intelligence from high-dimensional data streams. Most of the time, the high dimensionality of actions and data makes learning the optimal actions with traditional learning methods impractical. In this work, we investigate how to discover and leverage sparsity in actions and data to enable fast learning. As our learning model, we consider a structured contextual multi-armed bandit (CMAB) with high-dimensional arm (action) and context (data) sets, where the rewards depend, possibly non-linearly, on only a few relevant dimensions of the joint context-arm set. We depart from prior work by assuming a high-dimensional continuum set of arms and by allowing the relevant context dimensions to vary across arms. We propose a new online learning algorithm called {\em CMAB with Relevance Learning} (CMAB-RL) and prove that its time-averaged regret asymptotically goes to zero when the expected reward varies smoothly in contexts and arms. CMAB-RL enjoys a substantially improved regret bound compared to classical CMAB algorithms, whose regrets depend on the dimensions $d_x$ and $d_a$ of the context and arm sets. Importantly, we show that when the learner has prior knowledge of sparsity, given in terms of upper bounds $\overline{d}_x$ and $\overline{d}_a$ on the number of relevant context and arm dimensions, CMAB-RL achieves $\tilde{O}(T^{1-1/(2+2\overline{d}_x+\overline{d}_a)})$ regret. Finally, we illustrate how CMAB algorithms can be used for optimal personalized blood glucose control in type 1 diabetes mellitus patients, and show that CMAB-RL outperforms other contextual MAB algorithms on this task.
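To make the stated bound concrete, the following minimal sketch (illustrative only, not from the paper) evaluates the regret exponent $1-1/(2+2\overline{d}_x+\overline{d}_a)$, which depends only on the sparsity upper bounds rather than on the ambient dimensions $d_x$ and $d_a$; the function name is a hypothetical label introduced here for illustration.

```python
# Illustrative sketch: regret exponent of the CMAB-RL bound
# O~(T**(1 - 1/(2 + 2*dbar_x + dbar_a))). Note it involves only the
# sparsity upper bounds dbar_x, dbar_a, not the ambient d_x, d_a.

def cmab_rl_regret_exponent(dbar_x: int, dbar_a: int) -> float:
    """Exponent g such that the regret bound scales as T**g."""
    return 1.0 - 1.0 / (2.0 + 2.0 * dbar_x + dbar_a)

# With dbar_x = dbar_a = 1, regret grows like T**0.8, so the
# time-averaged regret decays like T**(-0.2) -> 0 as T grows.
print(cmab_rl_regret_exponent(1, 1))  # approximately 0.8
```

Because the exponent is strictly below 1 for any finite sparsity bounds, the time-averaged regret vanishes as $T \to \infty$, matching the claim in the abstract.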