Theoretical stellar spectra rely on model stellar atmospheres computed on the basis of our understanding of the physical laws at play in the stellar interiors. These models, coupled with atomic and molecular line databases, are used to generate theoretical stellar spectral libraries (SSLs) comprising stellar spectra over a regular grid of atmospheric parameters (temperature, surface gravity, abundances) at any desired resolution. The other class of SSLs, the empirical spectral libraries, contains observed spectra at limited spectral resolution. SSLs play an essential role in deriving the properties of stars and stellar populations. Both theoretical and empirical libraries suffer from limited coverage of the parameter space. This limitation is overcome to some extent by generating spectra for specific sets of atmospheric parameters through interpolation within the available parameter grid. In this work, we present a method for spectral interpolation in the optical region using machine learning algorithms that are generic, easily adaptable to any SSL without much change in the model parameters, and computationally inexpensive. We use two machine learning techniques, Random Forest (RF) and Artificial Neural Networks (ANN), and train the models on the MILES library. We apply the trained models to spectra from the CFLIB for testing and show that the performance of the two models is comparable. We show that both models achieve better accuracy than the existing methods of polynomial-based interpolation and Gaussian radial basis function (RBF) interpolation.
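The sketch below is not the authors' code; it is a minimal illustration, under assumed choices, of the interpolation setup described above: regressors that map atmospheric parameters (Teff, log g, [Fe/H]) to a flux vector, trained on a library grid and queried at unseen parameters. scikit-learn's RandomForestRegressor and MLPRegressor stand in for the RF and ANN models, SciPy's RBFInterpolator for the Gaussian RBF baseline, and the synthetic fluxes are placeholders for MILES spectra; all hyperparameters are illustrative.

```python
# Minimal sketch (assumed libraries and hyperparameters, synthetic data in place
# of MILES fluxes); not the authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Toy training set: atmospheric parameters (Teff, log g, [Fe/H]) -> flux vector.
n_stars, n_pix = 500, 200
params = np.column_stack([
    rng.uniform(3500, 9000, n_stars),   # effective temperature [K]
    rng.uniform(0.0, 5.0, n_stars),     # surface gravity log g [dex]
    rng.uniform(-2.5, 0.5, n_stars),    # metallicity [Fe/H] [dex]
])
wave = np.linspace(3500, 7500, n_pix)   # wavelength grid [Angstrom]
# Smooth synthetic "spectra" that depend on the parameters (placeholder only).
flux = (np.exp(-((wave[None, :] - params[:, :1]) / 2000.0) ** 2)
        + 0.05 * params[:, 1:2] + 0.1 * params[:, 2:3])

# Scale the inputs so each parameter contributes comparably to the fit.
scale = params.std(axis=0)
X = params / scale

# Random Forest: a single multi-output regressor predicting all pixels at once.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, flux)

# ANN: a small fully connected network for the same parameter-to-flux mapping.
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, flux)

# Gaussian RBF interpolation over the same grid, as a comparison baseline.
rbf = RBFInterpolator(X, flux, kernel='gaussian', epsilon=1.0)

# Predict a spectrum for an unseen parameter set (e.g. a CFLIB-like star).
target = np.array([[5800.0, 4.4, 0.0]]) / scale
for name, model in [('RF', rf), ('ANN', ann), ('RBF', rbf)]:
    spec = model.predict(target) if hasattr(model, 'predict') else model(target)
    print(name, spec.shape)
```

In this formulation the interpolated spectrum for any target star is obtained by a single forward evaluation of the trained model, which is what keeps the approach computationally inexpensive and easy to retrain on a different SSL.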