Bayesian approaches are appealing for constrained inference problems because they allow a probabilistic characterization of uncertainty while providing computational machinery for incorporating complex constraints in hierarchical models. However, the usual Bayesian strategy of placing a prior on the constrained space and conducting posterior computation with Markov chain Monte Carlo algorithms is often intractable. An alternative is to conduct inference for a less constrained posterior and project samples onto the constrained space through a minimal distance mapping. We formalize and provide a unifying framework for such posterior projections. For theoretical tractability, we initially focus on constrained parameter spaces corresponding to closed and convex subsets of the original space; we then consider non-convex Stiefel manifolds. We provide a general formulation of the projected posterior and show that, for particular classes of priors and likelihood functions, it can be viewed as an update of a data-dependent prior with the likelihood. We also show that asymptotic properties of the unconstrained posterior carry over to the projected posterior. Posterior projections are illustrated through multiple examples, in both simulation studies and real data applications.
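Concretely, the construction above can be sketched as follows; the notation here ($C$ for the constrained set, $\Pi_C$ for the minimal distance mapping, $\pi_{\Pi}$ for the projected posterior) is illustrative and not fixed by the text above. Given an unconstrained posterior $\pi(\cdot \mid y)$ on a space containing the closed convex set $C$, the minimal distance mapping is
\[
\Pi_C(\theta) \;=\; \operatorname*{arg\,min}_{\vartheta \in C}\, \lVert \theta - \vartheta \rVert,
\]
which is single-valued when $C$ is closed and convex (and may be set-valued in general, e.g., on a Stiefel manifold), and the projected posterior is the pushforward of the unconstrained posterior under $\Pi_C$:
\[
\pi_{\Pi}(A \mid y) \;=\; \pi\bigl(\{\theta : \Pi_C(\theta) \in A\} \mid y\bigr), \qquad A \subseteq C \text{ measurable}.
\]
In practice, one applies $\Pi_C$ to each posterior sample produced by a standard unconstrained sampler.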