We propose a sigmoidal approximation for the value-at-risk, which we call SigVaR, and use this approximation to tackle nonlinear programs (NLPs) with chance constraints. We prove that the approximation is conservative and that its level of conservatism can be made arbitrarily small for limiting parameter values. The SigVaR approximation brings scalability benefits over exact mixed-integer reformulations because its sample average approximation can be cast as a standard NLP. We also establish explicit connections between SigVaR and other smooth sigmoidal approximations recently reported in the literature. A key benefit of SigVaR over such approximations is that one can establish an explicit connection with the conditional value-at-risk (CVaR) approximation and exploit this connection to obtain initial guesses for the approximation parameters. We present small- and large-scale numerical studies to illustrate the developments.
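For context, the setting can be sketched as follows. This is a generic illustration, not the paper's exact SigVaR parametrization: the chance constraint, its sample average approximation (SAA), and a sigmoidal smoothing of the indicator function that turns the SAA into a standard NLP.

```latex
% Chance-constrained NLP: the random constraint must hold with probability >= 1 - alpha
\min_{x} \; f(x)
\quad \text{s.t.} \quad
\mathbb{P}\left[\, g(x,\xi) \le 0 \,\right] \ge 1 - \alpha .

% Sample average approximation (SAA) over scenarios \xi_1, \dots, \xi_N replaces
% the probability with an empirical average of indicator functions (a mixed-integer
% reformulation handles the indicators exactly but scales poorly):
\frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\left[\, g(x,\xi_i) \le 0 \,\right] \ge 1 - \alpha .

% Smoothing the discontinuous indicator with a sigmoid yields a standard NLP,
% e.g. with the logistic function (illustrative only; the plain logistic sigmoid
% is not conservative by itself -- the SigVaR construction in the paper
% parametrizes the sigmoid so that the approximation is provably conservative):
\mathbf{1}\left[\, z \le 0 \,\right] \approx 1 - \sigma(\mu z),
\qquad \sigma(t) = \frac{1}{1 + e^{-t}}, \quad \mu > 0 .
```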