Teaching assistants (TAs) are often responsible for grading in introductory physics courses at large research universities. Their grading practices can shape students' approaches to problem solving and learning. Physics education research recommends grading practices that encourage students to provide evidence of understanding by explicating their problem-solving process. However, TAs may not necessarily grade in a manner that encourages students to provide such evidence in their solutions. Within the context of a semester-long TA professional development course, we investigated whether encouraging TAs to use a grading rubric that appropriately weights the problem-solving process, and having them reflect upon the benefits of using such a rubric, prompts TAs to require evidence of understanding in student solutions. We examined how the TAs graded realistic student solutions to introductory physics problems before they were provided a rubric, whether TAs used the rubric as intended, whether they were consistent in grading similar solutions, and how TAs' grading criteria changed after discussing the benefits of a well-designed rubric. We find that many TAs applied the rubric consistently when grading similar student solutions but did not require students to provide evidence of understanding. TAs' written responses, class discussions, and individual interviews suggest that the instructional activities involving the grading rubrics in this study were not sufficient to change their grading practices. Interviews and class discussions further suggest that helping TAs value a rubric that appropriately weights the problem-solving process may be challenging, partly due to the TAs' past educational experiences and the departmental context.