This paper proposes a general framework for constructing feedback controllers that drive complex dynamical systems to efficient steady-state (or slowly varying) operating points. Efficiency is encoded using generalized equations, which can model a broad spectrum of useful objectives, such as optimality or equilibria (e.g., Nash or Wardrop) in noncooperative games. The core idea of the proposed approach is to directly implement iterative solution (or equilibrium-seeking) algorithms in closed loop with physical systems. Sufficient conditions for closed-loop stability and robustness are derived; these also serve as the first closed-loop stability results for sampled-data feedback-based optimization. Numerical simulations of smart building automation and game-theoretic robotic swarm coordination support the theoretical results.
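To illustrate the core idea of implementing an iterative algorithm in closed loop with a physical system, the following is a minimal sketch, not the paper's method: it assumes a stable discrete-time LTI plant, a quadratic objective, and a projected-gradient controller that samples the measured output instead of predicting it from a full model. All matrices, parameters, and names (`A`, `B`, `C`, `alpha`, `rho`, etc.) are hypothetical.

```python
import numpy as np

# Hypothetical sampled-data feedback-optimization loop (illustrative sketch).
# Plant: stable discrete-time LTI system x+ = A x + B u, y = C x.
# Objective: minimize f(u) = 0.5*||y - y_ref||^2 + 0.5*rho*||u||^2.
# Controller: projected-gradient step driven by the *measured* output y,
# so only the steady-state sensitivity G = C (I - A)^{-1} B is needed.

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])

y_ref = np.array([2.0])
rho, alpha = 0.1, 0.005        # regularization weight and gradient step size
u_min, u_max = -5.0, 5.0       # admissible input interval (projection set)

G = C @ np.linalg.inv(np.eye(2) - A) @ B   # steady-state input-output sensitivity

x = np.zeros(2)
u = np.zeros(1)
for k in range(400):
    # Between controller samples, the plant evolves under the held input
    # (sampled-data interconnection).
    for _ in range(5):
        x = A @ x + B @ u
    y = C @ x                                   # measured output

    # Gradient of f evaluated at the measurement: feedback replaces the
    # model-based prediction of y.
    grad = G.T @ (y - y_ref) + rho * u

    # Projected-gradient update, kept inside the admissible input set.
    u = np.clip(u - alpha * grad, u_min, u_max)

print("steady-state input:", u, "output:", C @ x)
```

Under these assumptions, a sufficiently small step size `alpha` relative to the plant's settling time yields the timescale separation that keeps the interconnection stable; the paper's contribution is precisely to give verifiable conditions of this kind for much more general systems and generalized-equation targets.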