Steal time is a key performance metric for applications executed in a virtualized environment. It measures the amount of time the processor is preempted by code outside the virtualized environment, which in turn makes it possible to accurately compute the execution time of an application inside a virtual machine (i.e., it excludes the time during which the virtual machine is suspended). Unfortunately, this metric is only available in particular scenarios in which the host and the guest OS are tightly coupled; typical examples are the Xen hypervisor and Linux-based guest OSes. When steal time is not available inside the virtualized environment, performance measurements are most often incorrect. In this paper, we introduce a novel, platform-agnostic approach to computing steal time from within the virtualized environment, without the cooperation of the host OS. The theoretical execution time of a deterministic microbenchmark is compared to its measured execution time in the virtualized environment. Once the virtual machine's own load is factored in, this solution, simple as it is, yields the steal time. Preliminary results show that we can compute the load of the physical processor from within the virtual machine with high accuracy.
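
To illustrate the idea, the following is a minimal sketch of a benchmark-based steal-time estimate, assuming an otherwise idle guest (the approach described above additionally factors in the virtual machine's own load, which is omitted here). All names (ITERATIONS, busy_work, CALIBRATION_RUNS) are illustrative, not taken from the paper.

    /*
     * Sketch: estimate steal time by comparing the measured wall-clock
     * duration of a fixed, deterministic workload against a calibrated
     * baseline. Assumes no other load inside the guest.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define ITERATIONS 50000000ULL   /* fixed, deterministic amount of work */
    #define CALIBRATION_RUNS 10

    /* Deterministic CPU-bound microbenchmark: same work on every call. */
    static void busy_work(void)
    {
        volatile uint64_t sink = 0;
        for (uint64_t i = 0; i < ITERATIONS; i++)
            sink += i;
    }

    /* Wall-clock duration of one benchmark run, in seconds. */
    static double timed_run(void)
    {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        busy_work();
        clock_gettime(CLOCK_MONOTONIC, &end);
        return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
    }

    int main(void)
    {
        /* Calibration: the minimum over several runs approximates the
         * "theoretical" execution time, i.e. the run least preempted
         * by the hypervisor. */
        double baseline = timed_run();
        for (int i = 1; i < CALIBRATION_RUNS; i++) {
            double t = timed_run();
            if (t < baseline)
                baseline = t;
        }

        /* Later measurement: any excess over the baseline is attributed
         * to time stolen by code outside the virtual machine. */
        double measured = timed_run();
        double steal = measured - baseline;

        printf("baseline  : %.4f s\n", baseline);
        printf("measured  : %.4f s\n", measured);
        printf("est. steal: %.4f s (%.1f%% of the interval)\n",
               steal, 100.0 * steal / measured);
        return 0;
    }

In this sketch the baseline stands in for the theoretical execution time; accounting for concurrent load inside the guest, as the paper proposes, is needed before the excess can be attributed entirely to the hypervisor.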