Design and Testing of a Demand Response Q-Learning Algorithm for a Smart Home Energy Management System

Angano, Walter; Musau, Peter; Wekesa, Cyrus Wabuge (2021)
Type:
Article

Growth in energy demand creates a need to meet that demand, which is achieved either through wired solutions, such as investment in new or expansion of existing generation, transmission and distribution systems, or through non-wired solutions such as Demand Response (DR). This paper proposes a Q-learning algorithm, an off-policy Reinforcement Learning technique, to implement DR in a residential energy system adopting a static Time of Use (ToU) tariff structure, to improve its learning speed by introducing a knowledge base that updates fuzzy logic rules based on consumer satisfaction feedback, and to minimize dissatisfaction error. Testing was done on a physical system by deploying the algorithm in Matlab and interfacing the physical environment with an Arduino Uno over serial communication. A load curve generated from appliance and ToU data was used to test the algorithm. The designed algorithm reduced electricity cost by 11% and improved the learning speed of its agent within 500 episodes.
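The abstract describes an off-policy Q-learning agent that schedules residential load under a static ToU tariff. The paper's implementation is in Matlab; purely as an illustration of the core idea, the sketch below shows a tabular Q-learning update for one deferrable appliance in Python. The tariff bands, the dissatisfaction penalty, and all parameter values are assumptions for this sketch, not values from the paper, and the fuzzy-logic knowledge base is omitted.

```python
import random

# Illustrative static ToU tariff: price per kWh for each hour of the day.
# The bands and prices are assumptions, not values from the paper.
TOU = [0.05] * 6 + [0.12] * 12 + [0.20] * 4 + [0.12] * 2  # hours 0..23

DEFER, RUN = 0, 1
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration
WAIT_PENALTY = 0.01                  # stand-in for consumer dissatisfaction

def train(episodes=500, seed=0):
    """Learn when to run a 1 kWh deferrable appliance under the ToU tariff."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(24)]      # Q[hour][action]
    for _ in range(episodes):
        hour = rng.randrange(24)             # appliance requested at a random hour
        for _ in range(48):                  # cap the episode length
            greedy = RUN if Q[hour][RUN] > Q[hour][DEFER] else DEFER
            a = rng.choice((DEFER, RUN)) if rng.random() < EPS else greedy
            if a == RUN:
                # Terminal transition: pay this hour's price and stop.
                Q[hour][RUN] += ALPHA * (-TOU[hour] - Q[hour][RUN])
                break
            # Defer to the next hour at a small dissatisfaction cost; the
            # off-policy target bootstraps from the best next-state value.
            nxt = (hour + 1) % 24
            target = -WAIT_PENALTY + GAMMA * max(Q[nxt])
            Q[hour][DEFER] += ALPHA * (target - Q[hour][DEFER])
            hour = nxt
    return Q
```

After training, the greedy policy is read off as the argmax over each `Q[hour]`; with these assumed prices the agent learns to run the appliance during off-peak hours and defer it during the peak, which is the cost-minimizing behaviour the abstract reports.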

Publisher
IEEE PES/IAS PowerAfrica
File(s) in this item:

Name: Angano walter.pdf
The following license files are associated with this item:

Attribution-NonCommercial-NoDerivs 3.0 United States
Except where otherwise noted, the license of this item is described as Attribution-NonCommercial-NoDerivs 3.0 United States.