This paper develops new insights into quantitative methods for the validation of computational model predictions. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic or stochastic output for given inputs. Two types of validation experiments are considered: fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I and Type II errors. It is shown that Bayesian interval hypothesis testing, the reliability-based method, and the area metric-based method can account for directional bias, where the mean predictions of a numerical model are consistently below or above the corresponding experimental observations. It is also found that, under specific conditions, the Bayes factor metric in Bayesian equality hypothesis testing and the reliability-based metric can both be mathematically related to the p-value metric in classical hypothesis testing. Numerical studies are conducted to apply these validation methods to gas damping prediction for radio frequency (RF) microelectromechanical system (MEMS) switches. The model of interest is a generalized polynomial chaos (gPC) surrogate model constructed from expensive runs of a physics-based simulation model, and the validation data are collected from fully characterized experiments.
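For concreteness, the three non-classical validation metrics discussed above can be summarized in their standard textbook forms; the notation here ($B$, $r$, $\epsilon$, $F$, $S_n$) is illustrative, and the paper's exact formulations, priors, and acceptance thresholds may differ:

$$
B = \frac{p(D \mid H_0)}{p(D \mid H_1)}, \qquad
r = \Pr\!\left(\,|y_{\mathrm{pred}} - y_{\mathrm{obs}}| < \epsilon\,\right), \qquad
d(F, S_n) = \int_{-\infty}^{\infty} \left| F(y) - S_n(y) \right| \, dy,
$$

where $B$ is the Bayes factor weighing the model-validity hypothesis $H_0$ against the alternative $H_1$ given data $D$ (with $B$ above a chosen threshold indicating acceptance), $r$ is the reliability-based metric (the probability that the prediction error lies within a tolerance $\epsilon$), and $d(F, S_n)$ is the area metric between the model prediction CDF $F$ and the empirical CDF $S_n$ of $n$ observations.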
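The following is a minimal sketch of Bayesian interval hypothesis testing, assuming Gaussian data with known standard deviation and a Gaussian prior on the mean; the function name, the interval half-width `delta`, the prior, and the observations are hypothetical, and the paper's formulation for stochastic outputs and partially characterized experiments is more general.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def interval_bayes_factor(data, mu0, delta, sigma, prior):
    """Bayes factor for the interval hypothesis H0: |mu - mu0| <= delta
    versus H1: |mu - mu0| > delta, assuming Gaussian data with known
    sigma. `prior` is a frozen scipy.stats distribution over mu."""
    def likelihood(mu):
        # Joint likelihood of all observations given the mean mu
        return np.prod(stats.norm.pdf(data, loc=mu, scale=sigma))

    def integrand(mu):
        return likelihood(mu) * prior.pdf(mu)

    a, b = mu0 - delta, mu0 + delta
    lo, hi = prior.ppf(1e-9), prior.ppf(1 - 1e-9)  # effective prior support
    inside = quad(integrand, a, b)[0]
    outside = quad(integrand, lo, a)[0] + quad(integrand, b, hi)[0]
    p0 = prior.cdf(b) - prior.cdf(a)  # prior probability mass of H0
    # Ratio of marginal likelihoods under H0 and H1
    return (inside / p0) / (outside / (1.0 - p0))

# Hypothetical usage: five observations, nominal prediction mu0 = 1.0
obs = np.array([1.05, 0.93, 1.21, 0.88, 1.10])
B = interval_bayes_factor(obs, mu0=1.0, delta=0.1, sigma=0.2,
                          prior=stats.norm(1.0, 0.5))
print(f"Bayes factor B = {B:.3f}")  # B > 1 favors accepting the model
```

The acceptance threshold on $B$ is a modeling choice; as noted above, choosing it properly is what allows Bayesian hypothesis testing to control the risk in model selection.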
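As an illustration of the area metric, the Python sketch below computes the area between a model's predictive CDF and the empirical CDF of a small sample; the Gaussian prediction and the observation values are hypothetical, not from the paper's MEMS case study.

```python
import numpy as np
from scipy import stats

def area_metric(model_cdf, data, grid=None):
    """Area between the model prediction CDF and the empirical CDF of
    the data. `model_cdf` is a callable returning F(y); `data` is a
    1-D array of experimental observations."""
    data = np.sort(np.asarray(data))
    n = data.size
    if grid is None:
        pad = 3.0 * data.std()
        grid = np.linspace(data[0] - pad, data[-1] + pad, 2000)
    # Empirical CDF S_n(y): fraction of observations <= y
    s_n = np.searchsorted(data, grid, side="right") / n
    return np.trapz(np.abs(model_cdf(grid) - s_n), grid)

# Hypothetical usage: the model predicts y ~ N(1.0, 0.2)
obs = np.array([1.05, 0.93, 1.21, 0.88, 1.10])
d = area_metric(stats.norm(loc=1.0, scale=0.2).cdf, obs)
print(f"area metric d = {d:.4f}")  # smaller d indicates better agreement
```

Because the integrand is signed before taking the absolute value, a model whose CDF sits consistently to one side of the empirical CDF accumulates area rather than cancelling, which is why this metric (like the reliability-based and interval-testing metrics) can detect directional bias.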