Blending representation learning approaches with simultaneous localization and mapping (SLAM) systems is an open problem, owing to the highly modular and complex nature of typical SLAM systems. Functionally, SLAM is an operation that transforms raw sensor inputs into a distribution over the state(s) of the robot and the environment. If this transformation (SLAM) were expressible as a differentiable function, we could leverage task-based error signals to learn representations that optimize task performance. However, several components of a typical dense SLAM system are non-differentiable. In this work, we propose gradSLAM, a methodology for posing SLAM systems as differentiable computational graphs, which unifies gradient-based learning and SLAM. We propose differentiable counterparts of trust-region optimizers, surface measurement and fusion schemes, and raycasting, without sacrificing accuracy. This amalgamation of dense SLAM with computational graphs enables us to backprop all the way from 3D maps to 2D pixels, opening up new possibilities in gradient-based learning for SLAM. TL;DR: We leverage the power of automatic differentiation frameworks to make dense SLAM differentiable.
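
To illustrate the core idea behind a differentiable trust-region optimizer, the sketch below shows one Levenberg-Marquardt-style step in PyTorch in which the usual hard accept/reject of a damped update is replaced by a smooth sigmoid gate, so gradients flow through the optimizer itself. This is a minimal toy on a 1D exponential curve fit, not the gradSLAM implementation; the residual model, the `sharpness` constant, the damping schedule, and all names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a "soft" Levenberg-Marquardt step.
# The hard if/else that accepts a step (shrinking the damping lam) or rejects
# it (growing lam) is replaced by a sigmoid blend, making the whole update
# differentiable end to end.
import torch

def residuals(params, x, y):
    # Toy residual model: fit y = a * exp(-b * x).
    a, b = params
    return y - a * torch.exp(-b * x)

def lm_step(params, x, y, lam, sharpness=10.0):
    """One soft Levenberg-Marquardt step; differentiable end to end."""
    r = residuals(params, x, y)
    # create_graph=True keeps the Jacobian in the autograd graph, so we can
    # later backprop through the optimizer iterations themselves.
    J = torch.autograd.functional.jacobian(
        lambda p: residuals(p, x, y), params, create_graph=True
    )
    JtJ, Jtr = J.T @ J, J.T @ r
    # Damped normal equations: (J^T J + lam * I) dx = J^T r
    dx = torch.linalg.solve(JtJ + lam * torch.eye(2), Jtr)
    candidate = params + dx
    # Soft gating: a sigmoid of the error decrease replaces the usual hard
    # accept/reject decision of a trust-region method.
    err_old = (r ** 2).sum()
    err_new = (residuals(candidate, x, y) ** 2).sum()
    gate = torch.sigmoid(sharpness * (err_old - err_new))
    params = gate * candidate + (1 - gate) * params
    lam = gate * (lam * 0.5) + (1 - gate) * (lam * 2.0)
    return params, lam

# Usage: the optimization is one computational graph, so we can backprop
# from the fitted parameters all the way to the observations y.
x = torch.linspace(0.0, 1.0, 50)
y = (2.0 * torch.exp(-3.0 * x)).requires_grad_(True)
params, lam = torch.tensor([1.0, 1.0]), torch.tensor(1e-2)
for _ in range(20):
    params, lam = lm_step(params, x, y, lam)
params.sum().backward()      # gradients reach y through every LM iteration
print(params.detach(), y.grad.shape)
```

gradSLAM applies this same recipe at system scale: each non-differentiable module (optimization, surface measurement, fusion, raycasting) is relaxed into a smooth approximation, so the map-to-pixel pipeline becomes a single computational graph.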