In this paper we present a study of the time cost added to grid computing by the use of a coordinated checkpoint/recovery fault tolerance protocol. We aim to find a mathematical model that determines the suitable times at which to save an application's checkpoints, so as to minimize the finish time of a parallel application running in a grid subject to faults and fault tolerance protocols. We derive this model by serially modeling the failures, the execution environment, and the chosen fault tolerance protocol, using the Kolmogorov differential equations.
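To illustrate the kind of modeling the abstract refers to (a minimal sketch only; the two-state space and the rates $\lambda$ and $\mu$ are assumptions for illustration, not the paper's actual model), consider a continuous-time Markov chain in which a grid node alternates between a working state 0 and a failed state 1, with failure rate $\lambda$ and recovery rate $\mu$. The Kolmogorov forward equations for the state probabilities $P_0(t)$ and $P_1(t)$ are then

% Hypothetical two-state availability model (assumed for illustration):
% state 0 = working, state 1 = failed; failure rate \lambda, recovery rate \mu.
\begin{align}
  \frac{dP_0(t)}{dt} &= -\lambda\, P_0(t) + \mu\, P_1(t), \\
  \frac{dP_1(t)}{dt} &= \lambda\, P_0(t) - \mu\, P_1(t),
  \qquad P_0(t) + P_1(t) = 1.
\end{align}
% Setting the derivatives to zero gives the steady-state availability
% P_0 = \mu / (\lambda + \mu).

Solving such a system yields time-dependent state probabilities, which, in a model of this kind, would feed into the expected completion time of the application as a function of the checkpoint interval; the suitable checkpoint times would then be those that minimize that expected completion time.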