With the advent of interferometric instruments combining 4 telescopes at the VLTI and 6 telescopes at CHARA, it has become possible to routinely obtain milli-arcsecond-scale images of observed targets. Such image reconstruction is typically performed in a Bayesian framework, where the function to minimize comprises two terms: the data likelihood and the Bayesian prior. This prior should encode our prior knowledge of the observed source. Until now, the prior was chosen from a set of generic and arbitrary functions, such as total variation. Here, we present an image reconstruction framework using generative adversarial networks, in which the Bayesian prior is defined using state-of-the-art radiative transfer models of the targeted objects. We validate this new image reconstruction algorithm on synthetic data with added noise. The reconstructed images display a drastic reduction of artefacts and allow a more straightforward astrophysical interpretation. These results are a first illustration of how neural networks can significantly improve the image reconstruction of a variety of astrophysical sources.
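The two-term objective mentioned above is commonly written in the following schematic form (a sketch for context; the regularization weight \(\mu\) and the term names are illustrative conventions, not notation taken from this abstract):

```latex
% Schematic Bayesian image-reconstruction objective:
%   x       -- image to reconstruct
%   f_data  -- data likelihood term (fit to the interferometric observables)
%   f_rgl   -- regularization (Bayesian prior) term, e.g. total variation
%   mu      -- hyperparameter balancing data fit against the prior (illustrative)
f(x) = f_{\mathrm{data}}(x) + \mu \, f_{\mathrm{rgl}}(x)
```

Minimizing \(f\) over the image \(x\) trades off fidelity to the measured data against conformity to the prior; the work summarized here replaces the generic \(f_{\mathrm{rgl}}\) with a prior learned from radiative transfer models.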