Abstract
We examine the effectiveness of gradient search
optimization of numeric leaf values for Genetic
Programming. Genetic search for tree-like programs at
the population level is complemented by the
optimization of terminal values at the individual
level. Local adaptation of individuals is made easier
by algorithmic differentiation. We show how
conventional random constants are tuned by gradient
descent with minimal overhead. Several experiments with
symbolic regression problems are performed to
demonstrate the approach's effectiveness. The effects of
local learning are evident both in improved
approximation accuracy and in changes to selection when
periods of local and global search are interleaved.
Special attention is paid to the low overhead of the
local gradient descent. Finally, the inductive bias of
local learning is quantified.
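As a minimal sketch of the idea described above (not the paper's implementation), the snippet below tunes the numeric leaf constants of a fixed, hypothetically evolved expression tree by gradient descent, with the gradients obtained through forward-mode algorithmic differentiation using dual numbers. The tree structure, the target function, and all parameter values are illustrative assumptions.

```python
class Dual:
    """Dual number val + dot*eps for forward-mode differentiation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def tree(consts, x):
    # Hypothetical tree found by genetic search: c0*x*x + c1*x.
    # Global GP evolves this structure; local search only tunes c0, c1.
    c0, c1 = consts
    return c0 * x * x + c1 * x

def tune(consts, data, lr=0.5, steps=200):
    """Gradient descent on the leaf constants against mean squared error."""
    consts = list(consts)
    for _ in range(steps):
        grad = [0.0] * len(consts)
        for i in range(len(consts)):
            # Seed the i-th constant's derivative with 1 (forward mode).
            duals = [Dual(c, 1.0 if j == i else 0.0)
                     for j, c in enumerate(consts)]
            for x, y in data:
                err = tree(duals, Dual(x)) - y
                grad[i] += 2.0 * err.val * err.dot  # d/dc_i of err^2
        consts = [c - lr * g / len(data) for c, g in zip(consts, grad)]
    return consts

# Symbolic regression target y = 3x^2 - 2x, sampled on [-1, 1];
# the random initial constants converge to roughly (3, -2).
data = [(k * 0.1, 3 * (k * 0.1) ** 2 - 2 * (k * 0.1)) for k in range(-10, 11)]
tuned = tune([0.5, 0.5], data)
```

One forward-mode pass per constant keeps the overhead proportional to the number of numeric leaves, which is what makes interleaving this local step with the global genetic search cheap.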