Randomized benchmarking (RB) is a widely used method for estimating the average fidelity of gates implemented on a quantum computing device. The stochastic error of the average gate fidelity estimated by RB depends on the sampling strategy, i.e., how the sequences to be run in the protocol are sampled. The sampling strategy is determined by a set of configurable parameters (an RB configuration) that includes the Clifford lengths (a list of the numbers of independent Clifford gates in a sequence) and the number of sequences for each Clifford length. The RB configuration is usually chosen heuristically, and little research has addressed how to choose it optimally. We therefore propose a method for fully optimizing an RB configuration so that the confidence interval of the estimated fidelity is minimized without increasing the total execution time of the sequences. Through experiments on real devices, we demonstrate the efficacy of the optimized configuration over heuristic selection in reducing the variance of the estimated fidelity.
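To make the objects in this abstract concrete, the following is a minimal sketch, not the paper's actual algorithm: it represents an RB configuration (Clifford lengths and per-length sequence counts), computes its total execution time, and brute-force searches for a configuration that minimizes a simple proxy for the confidence-interval width under the time budget of a heuristic baseline. All names (RBConfig, total_time, ci_width_proxy), the per-Clifford gate time, and the linearized variance model are illustrative assumptions.

```python
# A minimal sketch of the idea, not the paper's actual algorithm.
# All names, constants, and the variance model below are assumptions.
import math
from dataclasses import dataclass
from itertools import product

GATE_TIME = 1.0e-6  # assumed execution time per Clifford (seconds)
SHOTS = 1024        # assumed shots per sequence

@dataclass
class RBConfig:
    lengths: list[int]      # Clifford lengths m_1, ..., m_L
    n_sequences: list[int]  # number of sampled sequences K_i per length

def total_time(cfg: RBConfig) -> float:
    """Total sequence execution time, proportional to sum_i K_i * m_i."""
    return sum(k * m * SHOTS * GATE_TIME
               for m, k in zip(cfg.lengths, cfg.n_sequences))

def ci_width_proxy(cfg: RBConfig, p: float = 0.99) -> float:
    """Crude proxy for the confidence-interval width of the RB decay
    parameter p (survival model A*p^m + B): via linearized least squares,
    Var(p_hat) scales as 1 / sum_i K_i * (d p^m / dp)^2."""
    info = sum(k * (m * p ** (m - 1)) ** 2
               for m, k in zip(cfg.lengths, cfg.n_sequences))
    return 1.0 / math.sqrt(info)

# Brute-force search: among candidate Clifford-length sets and per-length
# sequence counts, keep the configuration with the smallest proxy width
# that fits within the time budget of a heuristic baseline configuration.
baseline = RBConfig(lengths=[1, 10, 50, 100], n_sequences=[30, 30, 30, 30])
budget = total_time(baseline)

best = baseline
for lengths in ([1, 10, 50, 100], [1, 25, 75, 150], [5, 20, 80, 120]):
    for counts in product((10, 30, 60, 100), repeat=len(lengths)):
        cand = RBConfig(lengths, list(counts))
        if total_time(cand) <= budget and ci_width_proxy(cand) < ci_width_proxy(best):
            best = cand

print("baseline CI proxy: ", ci_width_proxy(baseline))
print("optimized CI proxy:", ci_width_proxy(best))
print("optimized config:  ", best)
```

The brute-force grid stands in for whatever optimizer the paper uses; the point is only that the search space is the configuration (lengths and counts) and the constraint is the baseline's total execution time.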