We investigate constrained optimal control problems for linear stochastic dynamical systems evolving in discrete time. We consider minimization of an expected-value cost over a finite horizon. Hard constraints are introduced first and then reformulated as probabilistic constraints. It is shown that, for a suitable parametrization of the control policy, a wide class of the resulting optimization problems is convex or admits reasonable convex approximations.
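To fix ideas, a minimal sketch of one problem in this class is given below; the symbols ($A$, $B$, $x_k$, $u_k$, $w_k$, the sets $\mathcal{X}$, $\mathcal{U}$, and the violation level $\varepsilon$) are illustrative assumptions rather than notation taken from the paper.
% Hedged sketch: a finite-horizon chance-constrained problem of the kind
% described above; all symbols are assumed for illustration.
\begin{align*}
  \min_{\pi_0,\dots,\pi_{N-1}} \quad
    & \mathbb{E}\!\left[\, \sum_{k=0}^{N-1} \ell_k(x_k, u_k) + \ell_N(x_N) \right] \\
  \text{s.t.} \quad
    & x_{k+1} = A x_k + B u_k + w_k, \qquad k = 0,\dots,N-1, \\
    & u_k = \pi_k(x_0,\dots,x_k), \\
    & \Pr\{\, x_k \in \mathcal{X} \text{ and } u_k \in \mathcal{U} \,\} \ge 1 - \varepsilon,
\end{align*}
where the hard constraints $x_k \in \mathcal{X}$, $u_k \in \mathcal{U}$ have been relaxed to the probabilistic requirement that they hold with probability at least $1 - \varepsilon$. Under a suitable (for instance, affine) parametrization of the policies $\pi_k$, such problems can become convex in the decision variables, or admit convex approximations.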