Consider a multi-agent network composed of risk-averse social sensors and a controller that jointly seek to estimate an unknown state of nature, given noisy measurements. The network of social sensors performs Bayesian social learning: each sensor fuses the information revealed by previous social sensors with its private valuation using Bayes' rule to optimize a local cost function. The controller sequentially modifies the cost functions of the sensors through discriminatory pricing (control inputs) to realize long-term global objectives. We formulate the stochastic control problem faced by the controller as a Partially Observed Markov Decision Process (POMDP) and derive structural results for the optimal control policy as a function of the risk-aversion factor in the Conditional Value-at-Risk (CVaR) cost function of the sensors. We show that when the sensors are risk-averse, the optimal price sequence is a supermartingale; i.e., it decreases on average over time.
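For reference (the notation below is illustrative and not necessarily that of the paper), the CVaR of a random loss $X$ at risk level $\alpha \in (0,1]$ admits the standard Rockafellar--Uryasev representation
\[
\mathrm{CVaR}_{\alpha}(X) \;=\; \min_{z \in \mathbb{R}} \left\{ z + \frac{1}{\alpha}\, \mathbb{E}\!\left[ (X - z)^{+} \right] \right\},
\]
and the supermartingale property of the optimal prices $\{p_k\}$ states that, with respect to the controller's information filtration $\{\mathcal{F}_k\}$,
\[
\mathbb{E}\!\left[ p_{k+1} \,\middle|\, \mathcal{F}_k \right] \;\le\; p_k \quad \text{a.s.}
\]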