The convex analytic method (generalized by Borkar) has proved to be a versatile approach to infinite horizon average cost optimal stochastic control problems. In this paper, we revisit the convex analytic method and make three primary contributions: (i) We present an existence result, under a near-monotone cost hypothesis, for controlled Markov models whose transition kernels lack weak continuity but are strongly continuous in the action variable for every fixed state variable. (ii) For average cost stochastic control problems in standard Borel spaces, existing results establish the optimality of stationary (possibly randomized) policies, but few results address the optimality of stationary deterministic policies, and those available hold only under rather restrictive hypotheses. We provide mild conditions under which an average cost optimal stochastic control problem admits optimal solutions that are deterministic and stationary, building on a study of strategic measures by Feinberg. (iii) We establish conditions under which the set of performance values attained by stationary deterministic policies is dense in the set of performance values attained by randomized stationary policies.
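For context, the average cost criterion referred to above typically takes the following standard form; the notation (state process $x_t$, control $u_t$, cost $c$, policy $\gamma$) is assumed here for illustration and is not fixed by the abstract itself:

```latex
% Expected long-run average cost of a policy \gamma from initial state x
% (limsup convention; notation is a standard choice, not taken from the abstract)
J(x,\gamma) \;=\; \limsup_{T \to \infty} \, \frac{1}{T} \,
  E_x^{\gamma}\!\left[ \sum_{t=0}^{T-1} c(x_t, u_t) \right],
\qquad
J^*(x) \;=\; \inf_{\gamma \in \Gamma} J(x,\gamma),
```

where $\Gamma$ denotes the set of admissible policies; the paper's questions concern when the infimum is attained by stationary, and in particular stationary deterministic, policies.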