We investigate the performance of different methodologies that measure the time lag between broad-line and continuum variations in reverberation mapping data, using simulated light curves that probe a range of cadences, time baselines, and signal-to-noise ratios (S/N) in the flux measurements. We compare three widely adopted lag-measurement methods: the Interpolated Cross-Correlation Function (ICCF), the z-transformed Discrete Correlation Function (ZDCF), and the MCMC code JAVELIN, applied to mock data with quality typical of multi-object spectroscopic reverberation mapping (MOS-RM) surveys that simultaneously monitor hundreds of quasars. Using these mock quasar light curves, we quantify the overall lag detection efficiency, the rate of false detections, and the quality of the lag measurements for each method under different survey designs (e.g., observing cadence and depth). Overall, JAVELIN and ICCF outperform ZDCF in essentially all tests performed. Compared with ICCF, JAVELIN produces higher-quality lag measurements, recovers more lags on timescales shorter than the observing cadence, is less susceptible to seasonal gaps and S/N degradation in the light curves, and produces more accurate lag uncertainties. We measure the Hβ broad-line region size-luminosity (R-L) relation with each method from the simulated light curves to assess the impact of selection effects introduced by the design of MOS-RM surveys. The slope of the R-L relation measured by JAVELIN is the least biased of the three methods and is consistent across different survey designs. These results demonstrate a clear preference for JAVELIN over the two non-parametric methods for MOS-RM programs, particularly in the regime of limited light-curve quality expected for most such programs.
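
To make the method comparison concrete, the ICCF is the simplest of the three techniques to sketch: for each trial lag, one light curve is linearly interpolated onto the shifted epochs of the other, the Pearson correlation coefficient is computed in both directions and averaged, and the lag is read off from the peak or centroid of the resulting correlation function. Below is a minimal Python sketch of that procedure on toy data; the function and variable names are illustrative, and the sketch omits the overlap-window edge handling and Monte Carlo error estimation used in production ICCF codes. It is not the pipeline used in this work.

    import numpy as np

    def iccf(t_cont, f_cont, t_line, f_line, lags):
        """Symmetric ICCF: average of the two one-way interpolated CCFs."""
        r = np.empty(len(lags))
        for i, tau in enumerate(lags):
            # Continuum interpolated at the line epochs shifted back by tau.
            f1 = np.interp(t_line - tau, t_cont, f_cont)
            r1 = np.corrcoef(f1, f_line)[0, 1]
            # Line interpolated at the continuum epochs shifted forward by tau.
            f2 = np.interp(t_cont + tau, t_line, f_line)
            r2 = np.corrcoef(f_cont, f2)[0, 1]
            r[i] = 0.5 * (r1 + r2)
        return r

    # Toy light curves: a smooth continuum and a line curve echoing it 20 days later.
    rng = np.random.default_rng(0)
    true_lag = 20.0                                   # days
    t_cont = np.sort(rng.uniform(0.0, 500.0, 120))    # irregular continuum epochs
    f_cont = np.sin(t_cont / 50.0) + 0.05 * rng.normal(size=t_cont.size)
    t_line = np.sort(rng.uniform(0.0, 500.0, 60))     # sparser line epochs
    f_line = np.sin((t_line - true_lag) / 50.0) + 0.05 * rng.normal(size=t_line.size)

    lags = np.arange(-100.0, 100.0, 1.0)              # trial lags in days
    r = iccf(t_cont, f_cont, t_line, f_line, lags)
    peak_lag = lags[np.argmax(r)]
    sel = r >= 0.8 * r.max()                          # centroid over points above 80% of the peak
    centroid_lag = np.sum(lags[sel] * r[sel]) / np.sum(r[sel])
    print(peak_lag, centroid_lag)                     # both should land near 20 days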
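
For reference, the R-L relation whose slope bias is assessed above is conventionally fit as a power law in log space. In one common parameterization (normalization conventions vary between studies):

    \log\!\left(\frac{R_{\mathrm{H}\beta}}{1~\mathrm{lt\mbox{-}day}}\right)
      = K + \alpha \,
        \log\!\left(\frac{\lambda L_{\lambda}(5100~\mathrm{\AA})}{10^{44}~\mathrm{erg~s^{-1}}}\right),

where K is the intercept and alpha is the slope whose recovered value is compared across the three methods and survey designs.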