The use of blackbox solvers inside neural networks is a relatively new area that aims to improve neural network performance by incorporating proven, efficient solvers for complex problems. Existing work has developed methods for learning networks with these solvers as components while treating them as blackboxes. This work improves upon existing techniques by optimizing not only the primary loss function but also the performance of the solver itself, using Time-cost Regularization. Additionally, we propose a method to learn blackbox parameters, such as which blackbox solver to use or the heuristic function for a particular solver. We do this by introducing the idea of a hyper-blackbox: a blackbox wrapped around one or more internal blackboxes.
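As a rough illustration, a time-cost regularized objective might take the form of a weighted sum of the task loss and a measure of solver runtime (the notation below is ours, not drawn from the paper; $\lambda$ is an assumed trade-off coefficient and $\mathcal{T}_{\mathrm{solver}}$ an assumed estimate of the solver's time cost):

\[
\mathcal{L}_{\mathrm{total}}(\theta) \;=\; \mathcal{L}_{\mathrm{task}}(\theta) \;+\; \lambda \, \mathcal{T}_{\mathrm{solver}}(\theta)
\]

Here $\theta$ denotes the network parameters that determine the instances passed to the solver, so minimizing $\mathcal{L}_{\mathrm{total}}$ trades task accuracy against the time the solver spends on those instances.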