An autonomous adaptive MPC architecture is presented for control of heating, ventilation and air conditioning (HVAC) systems to maintain indoor temperature while reducing energy use. Although equipment use and occupancy change over time, existing MPC methods are not capable of automatically relearning models and computing control decisions reliably for extended periods without intervention from a human expert. We seek to address this weakness. Two major features are embedded in the proposed architecture to enable autonomy: (i) a system identification algorithm from our prior work that periodically re-learns building dynamics and unmeasured internal heat loads from data without requiring re-tuning by experts; the estimated model is guaranteed to be stable and to have desirable physical properties irrespective of the data; (ii) an MPC planner based on a convex approximation of the original nonconvex problem. The planner uses a method that is both descent and convergent, and the underlying optimization problem is guaranteed to be feasible and convex. A year-long simulation with a realistic plant shows that both features of the proposed architecture (periodic model and disturbance updates, and convexification of the planning problem) are essential to obtaining a performance improvement over a commonly used baseline controller. Without these features, although MPC can outperform the baseline controller in certain situations, the benefits may not be substantial enough to warrant the investment in MPC.
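As a concrete illustration of the planning step described above, the following sketch poses a single convexified MPC iteration as a small convex program in cvxpy. The one-zone model matrices, disturbance estimate, and comfort band are hypothetical placeholders rather than the paper's identified model, which would instead be re-learned periodically from operating data.

# Illustrative sketch only: one convexified MPC planning step with a periodically
# re-identified linear model (A, B) and disturbance estimate d_hat. All numbers
# and names here are hypothetical placeholders, not the paper's model.
import numpy as np
import cvxpy as cp

def plan_step(A, B, d_hat, x0, T_low, T_high, horizon=24):
    n, m = B.shape
    x = cp.Variable((n, horizon + 1))   # indoor temperature states
    u = cp.Variable((m, horizon))       # HVAC inputs (e.g., cooling power)
    cost = cp.sum(u)                    # crude proxy for energy use
    constraints = [x[:, 0] == x0]
    for k in range(horizon):
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] + d_hat[:, k],
                        x[:, k + 1] >= T_low, x[:, k + 1] <= T_high,
                        u[:, k] >= 0]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]  # apply only the first input, then re-plan

# Example usage with a toy 1-zone model; in the architecture above the model
# and d_hat would be refreshed periodically rather than fixed as below.
A = np.array([[0.95]]); B = np.array([[-0.05]])
d_hat = 0.3 * np.ones((1, 24))          # estimated internal heat load
u0 = plan_step(A, B, d_hat, x0=np.array([26.0]), T_low=21.0, T_high=24.0)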
The design of building heating, ventilation, and air conditioning (HVAC) control is critically important, as HVAC accounts for around half of building energy consumption and directly affects occupant comfort, productivity, and health. Traditional HVAC control methods are typically based on creating explicit physical models of building thermal dynamics, which often require significant development effort and struggle to achieve the accuracy and efficiency needed for runtime building control and the scalability needed for field implementation. Recently, deep reinforcement learning (DRL) has emerged as a promising data-driven method that provides good control performance without analyzing physical models at runtime. However, a major challenge for DRL (and many other data-driven learning methods) is the long training time required to reach the desired performance. In this work, we present a novel transfer-learning-based approach to overcome this challenge. Our approach can effectively transfer a DRL-based HVAC controller trained for a source building to a controller for a target building with minimal effort and improved performance, by decomposing the neural network controller into a transferable front-end network that captures building-agnostic behavior and a back-end network that can be efficiently trained for each specific building. We conducted experiments on a variety of transfer scenarios between buildings with different sizes, numbers of thermal zones, materials and layouts, air conditioner types, and ambient weather conditions. The experimental results demonstrate the effectiveness of our approach in significantly reducing training time, energy cost, and temperature violations.
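To make the front-end/back-end decomposition concrete, the following PyTorch sketch copies a building-agnostic front-end from a source controller, freezes it, and trains only a small building-specific back-end. The layer sizes, names, and freezing strategy are illustrative assumptions, not the paper's architecture or training procedure.

# Illustrative sketch of a transferable front-end / trainable back-end split.
# Layer sizes and names are assumptions for clarity, not the paper's design.
import torch
import torch.nn as nn

class HVACController(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        # Front-end: intended to capture building-agnostic behavior.
        self.front_end = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden), nn.ReLU())
        # Back-end: small head retrained for each specific building.
        self.back_end = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        return self.back_end(self.front_end(obs))

source = HVACController(obs_dim=8, act_dim=4)      # trained on the source building
target = HVACController(obs_dim=8, act_dim=4)
target.front_end.load_state_dict(source.front_end.state_dict())  # transfer
for p in target.front_end.parameters():
    p.requires_grad_(False)                         # keep the front-end fixed

# Only the back-end parameters are optimized on target-building data.
optimizer = torch.optim.Adam(target.back_end.parameters(), lr=1e-3)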
This paper studies a scalable control method for multi-zone heating, ventilation and air-conditioning (HVAC) systems that optimizes energy cost while simultaneously maintaining thermal comfort and indoor air quality (IAQ), represented by CO2 concentration. This problem is computationally challenging due to the complex system dynamics, various spatial and temporal couplings, and the multiple control variables to be coordinated. To address these challenges, we propose a two-level distributed method (TLDM) that integrates upper-level and lower-level control. The upper level computes zone mass flow rates for maintaining zone thermal comfort with minimal energy cost, and the lower level then strategically regulates the zone mass flow rates and the ventilation rate to achieve the IAQ target while nearly preserving the energy-saving performance of the upper level. As both the upper- and lower-level computations are deployed in a distributed manner, the proposed method is scalable and computationally efficient. The near-optimal energy cost saving performance of the method is demonstrated through comparison with a centralized method. In addition, comparisons with an existing distributed method show that our method maintains IAQ with only a small increase in energy cost, whereas the latter fails to do so. Moreover, we demonstrate that our method outperforms demand-controlled ventilation (DCV) strategies for IAQ management, with an energy cost reduction of about 8-10%.
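The two-level structure can be pictured with the simplified sketch below, in which each zone's computation is local to that zone: an upper-level step sets zone mass flow rates from thermal loads, and a lower-level step nudges the flows of zones whose CO2 exceeds a limit. The formulas, bounds, and lack of coordination are placeholder simplifications, not the paper's models or distributed algorithm.

# Conceptual sketch of the two-level structure only: each zone is handled by a
# local rule, so both levels decompose across zones. Numbers are placeholders.
import numpy as np

def upper_level(zone_loads, dT, c_p=1.006, m_min=0.05, m_max=1.5):
    # Per-zone mass flow (kg/s) that offsets the thermal load at supply-return
    # temperature difference dT; computed independently for each zone.
    m = zone_loads / (c_p * dT)
    return np.clip(m, m_min, m_max)

def lower_level(m, co2, co2_limit=800.0, step=0.05, m_max=1.5):
    # Raise the flow of zones whose CO2 exceeds the limit, leaving the others
    # at their energy-oriented upper-level values.
    adjusted = m.copy()
    adjusted[co2 > co2_limit] = np.minimum(adjusted[co2 > co2_limit] + step, m_max)
    return adjusted

m_upper = upper_level(zone_loads=np.array([1.2, 0.8, 1.5]), dT=8.0)
m_final = lower_level(m_upper, co2=np.array([750.0, 900.0, 820.0]))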
In the era of digitalization, using data-driven control approaches to minimize the energy consumption of residential and commercial buildings is of far-reaching significance. A number of recent approaches that apply Willems' fundamental lemma to design controllers directly from input/output measurements are very promising for deterministic LTI systems. This paper addresses the key noise-free assumption and extends these data-driven control schemes to adaptive building control with measured process noise and unknown measurement noise via a robust bilevel formulation, whose upper level ensures robustness and whose lower level guarantees prediction quality. Corresponding numerical improvements and an active excitation mechanism are proposed to enable computationally efficient and reliable operation. The efficacy of the proposed scheme is validated by a numerical example and a real-world experiment on a lecture hall on the EPFL campus.
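For readers unfamiliar with the fundamental-lemma machinery, the sketch below builds the Hankel matrices used by such data-driven predictors from a single recorded input/output trajectory and forms a simple least-squares multi-step predictor. The robust bilevel formulation and the treatment of process and measurement noise described above are not reproduced here; this is only the noise-free ingredient.

# Minimal sketch of the data behind fundamental-lemma controllers: Hankel
# matrices from one persistently exciting trajectory and a least-squares
# predictor. The paper's robust bilevel formulation is not reproduced.
import numpy as np

def hankel(signal, depth):
    # Stack length-`depth` windows of a 1-D signal as columns.
    T = len(signal)
    return np.column_stack([signal[i:i + depth] for i in range(T - depth + 1)])

def predict(u_data, y_data, u_init, y_init, u_future):
    T_ini, N = len(u_init), len(u_future)
    Hu = hankel(u_data, T_ini + N)
    Hy = hankel(y_data, T_ini + N)
    # Find trajectory weights g consistent with the initial window and the
    # planned future inputs, then read off the predicted outputs.
    lhs = np.vstack([Hu, Hy[:T_ini]])
    rhs = np.concatenate([u_init, u_future, y_init])
    g, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return Hy[T_ini:] @ g

# Toy first-order system data, used only to exercise the functions above.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 60)
y = np.zeros(60)
for k in range(59):
    y[k + 1] = 0.9 * y[k] + 0.1 * u[k]
y_pred = predict(u, y, u[:4], y[:4], np.ones(6))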
Adaptive model predictive control (MPC) robustly ensures safety while reducing uncertainty during operation. In this paper, a distributed version is proposed to deal with networked systems featuring multiple agents and limited communication. To solve the problem in a distributed manner, structure is imposed on the control design ingredients without sacrificing performance. Decentralized and distributed adaptation schemes that reduce uncertainty online in a manner compatible with the network topology are also proposed. The algorithm ensures robust constraint satisfaction, recursive feasibility and finite-gain $\ell_2$ stability, and yields lower closed-loop cost than robust distributed MPC in simulations.
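The kind of online uncertainty reduction performed by the adaptation schemes can be illustrated with the generic set-membership update sketched below, which shrinks an interval containing an unknown scalar gain from closed-loop data. This is a stand-in illustration of set-membership adaptation in general; the distributed structure, robust constraint handling, and $\ell_2$ analysis of the paper are well beyond this toy.

# Toy set-membership adaptation: shrink an interval for an unknown gain `a`
# in x+ = a*x + u + w, |w| <= w_bar, from measured transitions.
import numpy as np

def update_interval(a_low, a_high, x, u, x_next, w_bar):
    # x_next = a*x + u + w with |w| <= w_bar confines a to an interval
    # (when x != 0); intersect it with the current uncertainty set.
    if abs(x) < 1e-9:
        return a_low, a_high
    lo = (x_next - u - w_bar) / x
    hi = (x_next - u + w_bar) / x
    if x < 0:
        lo, hi = hi, lo
    return max(a_low, lo), min(a_high, hi)

rng = np.random.default_rng(1)
a_true, w_bar = 0.7, 0.05
a_low, a_high = 0.0, 1.0
x = 1.0
for _ in range(30):
    u = rng.uniform(-1, 1)
    x_next = a_true * x + u + rng.uniform(-w_bar, w_bar)
    a_low, a_high = update_interval(a_low, a_high, x, u, x_next, w_bar)
    x = x_next
print(f"a in [{a_low:.3f}, {a_high:.3f}]")  # the interval shrinks around a_true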
Neural networks have been increasingly applied for control in learning-enabled cyber-physical systems (LE-CPSs) and have demonstrated great promise in improving system performance and efficiency, as well as reducing the need for complex physical models. However, the lack of safety guarantees for such neural-network-based controllers has significantly impeded their adoption in safety-critical CPSs. In this work, we propose a controller adaptation approach that automatically switches among multiple controllers, including neural network controllers, to guarantee system safety and improve energy efficiency. Our approach includes two key components based on formal methods and machine learning. First, we approximate each controller with a Bernstein-polynomial-based hybrid system model under bounded disturbance, and compute a safe invariant set for each controller based on its corresponding hybrid system. Intuitively, the invariant set of a controller defines the state space where the system can always remain safe under its control. The union of the controllers' invariant sets then defines a safe adaptation space that is larger than (or equal to) that of each individual controller. Second, we develop a deep reinforcement learning method to learn a controller switching strategy for reducing the control/actuation energy cost, while ensuring, with the help of a safety guard rule, that the system stays within the safe space. Experiments on a linear adaptive cruise control system and a nonlinear Van der Pol oscillator demonstrate the effectiveness of our approach in energy saving and safety enhancement.
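The safety-guard component can be pictured with the following sketch, in which a learned switching policy (here just a stub) proposes a controller and the guard overrides the proposal whenever the current state lies outside that controller's invariant set. The box-shaped invariant sets and the controller names are hypothetical stand-ins for the Bernstein-polynomial-based sets computed in the paper.

# Sketch of safety-guarded switching: a learned policy proposes a controller,
# and the guard keeps it only if the state is inside that controller's safe
# invariant set. Sets and the "learned" preference below are placeholders.
import numpy as np

invariant_sets = {
    "nn_controller":   {"lo": np.array([-1.0, -1.0]), "hi": np.array([1.0, 1.0])},
    "safe_controller": {"lo": np.array([-5.0, -5.0]), "hi": np.array([5.0, 5.0])},
}

def in_set(x, box):
    return bool(np.all(x >= box["lo"]) and np.all(x <= box["hi"]))

def learned_preference(x):
    # Stand-in for the DRL switching policy: prefer the lower-energy controller.
    return "nn_controller"

def safety_guard(x, proposed):
    # Accept the proposal only if the state is inside its invariant set;
    # otherwise fall back to any controller whose set contains the state.
    if in_set(x, invariant_sets[proposed]):
        return proposed
    for name, box in invariant_sets.items():
        if in_set(x, box):
            return name
    return "safe_controller"  # last resort

x = np.array([2.3, -0.4])
choice = safety_guard(x, learned_preference(x))  # -> "safe_controller"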