We study a problem of privacy-preserving mechanism design. A data collector wants to obtain data from individuals in order to perform computations on it. To mitigate the privacy threat to contributors, the data collector adopts a privacy-preserving mechanism that adds random noise to the computation result, at the cost of reduced accuracy. Each individual decides whether to contribute data in the face of this privacy risk. Because privacy protection is intrinsically uncertain, we model individuals' privacy-related decisions using Prospect Theory, which captures human behavior under uncertainty more accurately than the traditional expected utility theory, whose predictions often deviate from observed human behavior. We show that the data collector's utility maximization problem involves a polynomial equation of high and fractional degree, whose root is difficult to compute analytically. We circumvent this difficulty with a large-population approximation and obtain a closed-form solution that closely approximates the exact one. We find that a data collector who adopts the more realistic Prospect Theory model of individual decision making chooses a more conservative privacy-preserving mechanism than one based on expected utility theory. We also study the impact of the Prospect Theory parameters and conclude that more loss-averse or more risk-seeking individuals induce a more conservative mechanism. When individuals have heterogeneous Prospect Theory parameters, simulations show that privacy protection first strengthens and then weakens as the heterogeneity grows from low to high.
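For concreteness, the loss-aversion and risk-attitude parameters referred to above can be illustrated with the canonical value function of Tversky and Kahneman (1992); this standard form is shown only as an example, and it is an assumption that the analysis uses this exact parameterization:
\[
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda (-x)^{\beta}, & x < 0,
\end{cases}
\]
where $\lambda > 1$ captures loss aversion and $\alpha, \beta \in (0,1)$ capture diminishing sensitivity, which makes individuals risk-seeking over losses.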