Conference Editor

Jianshun Zhang; Edward Bogucz; Cliff Davidson; Elizabeth Krietmeyer

Keywords

Deep Reinforcement Learning, HVAC Optimal Control, Energy Efficiency

Location

Syracuse, NY

Event Website

http://ibpc2018.org/

Start Date

September 24, 2018, 1:30 PM

End Date

September 24, 2018, 3:00 PM

Description

Model-based optimal control (MOC) methods have strong potential to improve the energy efficiency of heating, ventilation and air conditioning (HVAC) systems. However, most existing MOC methods require a low-order building model, which significantly limits their practicality. This study develops a novel model-based optimal control method for HVAC supervisory-level control based on the recently proposed deep reinforcement learning (DRL) framework. The method can directly use a whole-building energy model, a widely used and flexible building modelling approach, as the model and train an optimal control policy with DRL. By integrating deep learning models, the proposed controller takes easily measurable parameters, such as weather conditions and indoor environmental conditions, as inputs and acts on the easily controllable supervisory-level control points of HVAC systems. The method is tested on the radiant heating system of an office building. A dynamic optimal control policy is successfully developed, achieving better heating energy efficiency while maintaining acceptable indoor thermal comfort. However, a “delayed reward” problem is observed, indicating that future work should first focus on more effective optimization of the deep reinforcement learning training.
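
To make the described setup concrete, the sketch below shows one way a DRL supervisory controller of this kind could be trained against a whole-building energy model. It is a minimal, illustrative example only, not the authors' implementation: the `BuildingEnv` wrapper, the discrete setpoint actions, the reward weights, and the DQN-style update are all assumptions made for illustration.

```python
# Minimal, hypothetical sketch of a DRL supervisory controller trained against
# a whole-building energy model. BuildingEnv, the action set, and the reward
# weights are illustrative assumptions, not the authors' implementation.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps easily measurable observations (weather, indoor conditions)
    to Q-values over a discrete set of supervisory setpoints."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def reward(energy_kwh, ppd, w_energy=1.0, w_comfort=10.0):
    """Illustrative reward: penalize heating energy use and thermal
    discomfort (PPD as a fraction, penalized above 10%)."""
    return -(w_energy * energy_kwh + w_comfort * max(0.0, ppd - 0.1))


def train(env, episodes=50, gamma=0.99, eps=0.1, batch=64, lr=1e-3):
    """DQN-style training loop. `env` is assumed to wrap the building
    energy simulation: reset() returns an observation vector, and
    step(action) returns (next_obs, energy_kwh, ppd, done) per control step."""
    q = QNetwork(env.obs_dim, env.n_actions)
    opt = optim.Adam(q.parameters(), lr=lr)
    buffer = deque(maxlen=10_000)

    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            # Epsilon-greedy exploration over the discrete setpoint actions.
            if random.random() < eps:
                action = random.randrange(env.n_actions)
            else:
                with torch.no_grad():
                    action = int(q(torch.tensor(obs, dtype=torch.float32)).argmax())

            next_obs, energy, ppd, done = env.step(action)
            buffer.append((obs, action, reward(energy, ppd), next_obs, done))
            obs = next_obs

            if len(buffer) >= batch:
                # One gradient step on a random minibatch of past transitions.
                o, a, r, o2, d = map(np.array, zip(*random.sample(buffer, batch)))
                o = torch.tensor(o, dtype=torch.float32)
                o2 = torch.tensor(o2, dtype=torch.float32)
                a = torch.tensor(a, dtype=torch.int64)
                r = torch.tensor(r, dtype=torch.float32)
                d = torch.tensor(d, dtype=torch.float32)
                with torch.no_grad():
                    target = r + gamma * (1 - d) * q(o2).max(dim=1).values
                pred = q(o).gather(1, a.unsqueeze(1)).squeeze(1)
                loss = nn.functional.mse_loss(pred, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return q
```

In the paper's setting, the actions would correspond to supervisory setpoints of the radiant heating system (for example, discrete supply water temperature levels), and each `env.step` would advance the whole-building energy model by one control interval. The delayed-reward issue the abstract mentions arises because the thermal and energy consequences of a setpoint change only appear several such intervals later.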

Comments

If you are experiencing accessibility issues with this item, please contact the Accessibility and Inclusion Librarian through lib-accessibility@syr.edu with your name, SU NetID, the SURFACE link, title of record, author, and reason for request.

DOI

https://doi.org/10.14305/ibpc.2018.ec-1.01

Creative Commons License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Title

A Deep Reinforcement Learning Method for Model-based Optimal Control of HVAC Systems

https://surface.syr.edu/ibpc/2018/EC1/1

 
