Learning and tuning fuzzy logic controllers through reinforcements
H. Berenji and P. Khedkar. IEEE Transactions on Neural Networks, 3(5):
724-740 (1992)
Abstract
This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (a) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (b) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (c) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (d) learns to produce real-valued control actions.
%0 Journal Article
%1 Learning92Hamid
%A Berenji, Hamid R.
%A Khedkar, Pratap
%D 1992
%J IEEE Transactions on Neural Networks
%K fuzzy learning logic reinforcements
%N 5
%P 724-740
%T Learning and tuning fuzzy logic controllers through reinforcements
%V 3
%X This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (a) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (b) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (c) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (d) learns to produce real-valued control actions.
@article{Learning92Hamid,
abstract = {This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (a) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (b) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (c) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (d) learns to produce real-valued control actions.},
added-at = {2009-11-26T10:57:16.000+0100},
author = {Berenji, Hamid R. and Khedkar, Pratap},
biburl = {https://www.bibsonomy.org/bibtex/2080344b23404c90f8595a028b6af2d9d/mediadigits},
file = {:IEEEXplore.pdf:PDF},
groups = {public},
interhash = {25b1e9705fc55b18dbacb60487cec255},
intrahash = {080344b23404c90f8595a028b6af2d9d},
journal = {IEEE Transactions on Neural Networks},
keywords = {fuzzy learning logic reinforcements},
number = 5,
pages = {724--740},
timestamp = {2011-03-13T18:57:11.000+0100},
title = {Learning and tuning fuzzy logic controllers through reinforcements},
username = {mediadigits},
volume = 3,
year = 1992
}