Platforms critically rely on rating systems to learn the quality of market participants. In practice, however, these ratings are often highly inflated, and therefore not very informative. In this paper, we investigate two questions: first, whether the platform can obtain less inflated, more informative ratings by altering the meaning and relative importance of the levels in the rating system; and second, whether there is a principled approach for the platform to make these design choices. To address the first question, we analyze the results of a randomized controlled trial on an online labor market in which an additional question was added to the feedback form. Across treatment conditions, we vary the question phrasing and the answer choices; in particular, several treatment conditions use positive-skewed verbal rating scales, in which descriptive phrases or adjectives give each rating level a specific interpretation. The experiment reveals that prevailing inflationary norms can in fact be countered by re-anchoring the meaning of the levels of the rating system: the positive-skewed verbal rating scales yield rating distributions that are significantly less inflated and much more informative about seller quality. To address the second question, we develop a model-based framework to compare and select among rating system designs, and we apply this framework to the data from the experiment. Our simulations demonstrate that the framework can identify the most informative rating system among candidate designs and substantially improve the information obtained relative to baseline designs. Overall, our study shows that informative rating systems can be designed in practice, and demonstrates how to do so in a principled manner.
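
To make the scale-design idea concrete, a positive-skewed verbal scale attaches a descriptive anchor to each level and devotes most levels to positive outcomes, so a satisfied rater need not choose the top level. The sketch below (in Python, used for all illustrations here) shows a hypothetical set of anchors; the phrasings are illustrative only and are not the experiment's actual wording.

```python
# Hypothetical positive-skewed verbal rating scale. Most levels denote
# satisfactory outcomes, and the top level is reserved for exceptional
# work, which gives raters room to praise a seller without inflating.
POSITIVE_SKEWED_ANCHORS = {
    1: "Below average",
    2: "Average",
    3: "Good",
    4: "Excellent",
    5: "Best freelancer I have worked with",
}
```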
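The abstract does not specify the model underlying the design framework. As a rough illustration of the kind of comparison such a framework might perform, the sketch below simulates ratings under two hypothetical cutoff designs in an ordered-probit-style model and scores each design by the mutual information between latent seller quality and observed ratings; all parameter values, cutoffs, and the choice of informativeness measure are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ratings(quality, cutoffs, noise_sd=1.0):
    """Ordered-probit-style rating model (an assumed form): a rater
    observes quality plus noise and reports the level whose cutoff
    interval contains the noisy signal."""
    signal = quality + rng.normal(0.0, noise_sd, size=quality.shape)
    return np.searchsorted(cutoffs, signal)  # levels 0..len(cutoffs)

def mutual_information(a, b):
    """Plug-in mutual-information estimate (in nats) between two
    discrete arrays; higher means ratings reveal more about quality."""
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    joint = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(joint, (ia, ib), 1.0)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Latent seller quality, discretized into quartiles for the score.
quality = rng.normal(size=100_000)
quality_bins = np.digitize(quality, np.quantile(quality, [0.25, 0.5, 0.75]))

# Two hypothetical 5-level designs: an "inflated" scale whose cutoffs sit
# far below typical quality (nearly everyone receives the top level), and
# a "re-anchored" scale whose cutoffs spread raters across all levels.
designs = {
    "inflated":    np.array([-2.5, -2.0, -1.5, -1.0]),
    "re-anchored": np.array([-1.0, -0.3, 0.3, 1.0]),
}
for name, cutoffs in designs.items():
    ratings = simulate_ratings(quality, cutoffs)
    print(f"{name}: MI = {mutual_information(quality_bins, ratings):.3f} nats")
```

In this toy comparison the re-anchored design scores higher because its ratings separate sellers across quality levels; in the actual framework, the model parameters would instead be fit to the experimental data before candidate designs are compared.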