We describe a simple randomized benchmarking protocol for quantum information processors and obtain a sequence of models for the observable fidelity decay based on a perturbative expansion of the errors. We prove that the protocol provides an efficient and reliable estimate of an average error rate for a set of operations (gates) under a general noise model that allows for both time- and gate-dependent errors. We determine the conditions under which this estimate remains valid and illustrate the protocol through numerical examples.
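To make the protocol concrete, the following is a minimal Python sketch of randomized benchmarking for a single qubit: random Clifford sequences of increasing length m are applied to |0>, followed by the gate that inverts the whole sequence, and the averaged survival probability is fit to the zeroth-order decay model F(m) = A + B p^m to extract an average error rate r = (1 - p)(d - 1)/d. The gate-independent depolarizing noise, the sequence lengths, and all identifiers (p_dep, mean_survival, ...) are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Minimal single-qubit randomized benchmarking simulation (illustrative sketch).
import numpy as np
from scipy.optimize import curve_fit

# --- Enumerate the 24-element single-qubit Clifford group (up to phase) ---
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def canon(U):
    """Strip the global phase so unitaries equal up to phase hash identically."""
    z = U.flat[np.flatnonzero(np.abs(U) > 1e-9)[0]]
    return np.round(U * (abs(z) / z), 9).tobytes()

cliffords = [np.eye(2, dtype=complex)]
seen = {canon(cliffords[0])}
frontier = list(cliffords)
while frontier:                        # BFS closure under left-multiplication
    nxt = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            if (k := canon(V)) not in seen:
                seen.add(k)
                cliffords.append(V)
                nxt.append(V)
    frontier = nxt
assert len(cliffords) == 24

# --- Assumed noise model: gate-independent depolarizing channel per gate ---
p_dep = 0.01                                        # depolarizing probability
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)    # initial state |0><0|

def apply_noisy(rho, U):
    rho = U @ rho @ U.conj().T
    return (1 - p_dep) * rho + p_dep * np.eye(2) / 2

def mean_survival(m, rng, trials=300):
    """Survival probability averaged over random length-m Clifford sequences."""
    vals = []
    for _ in range(trials):
        rho, total = rho0, np.eye(2, dtype=complex)
        for i in rng.integers(24, size=m):
            rho = apply_noisy(rho, cliffords[i])
            total = cliffords[i] @ total
        rho = apply_noisy(rho, total.conj().T)      # noisy inversion gate
        vals.append(np.real(np.trace(rho0 @ rho)))
    return float(np.mean(vals))

# --- Fit the zeroth-order decay model F(m) = A + B p^m and extract r ---
rng = np.random.default_rng(0)
lengths = np.array([1, 5, 10, 20, 40, 80])
F = np.array([mean_survival(m, rng) for m in lengths])
(A, B, p), _ = curve_fit(lambda m, A, B, p: A + B * p**m, lengths, F,
                         p0=[0.5, 0.5, 0.99])
r = (1 - p) / 2        # average error rate, r = (1 - p)(d - 1)/d with d = 2
print(f"p = {p:.4f}, estimated average error rate r = {r:.4f} "
      f"(exact for this noise model: {p_dep / 2:.4f})")
```

For this gate-independent noise the decay is exactly exponential and the fit recovers r = p_dep/2; with time- or gate-dependent errors the decay acquires the higher-order corrections captured by the sequence of perturbative models described above.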