Switching between Representations in Reinforcement Learning
Harm van Seijen, Shimon Whiteson and Leon Kester
Abstract
This chapter presents and evaluates an on-line representation-selection method for factored MDPs. The method addresses a special case of the feature selection problem in which only certain subsets of features, which we call candidate representations, are considered. A motivation for the method is that it can potentially handle problems where other structure-learning algorithms are infeasible because the associated dynamic Bayesian network (DBN) has a large degree. Our method uses switch actions to select a representation and uses off-policy updating to improve the policies of the representations that were not selected. We demonstrate the validity of the method by showing, for a contextual bandit task and a regular MDP, that when the feature set contains only a single relevant feature, the switch method finds this feature very efficiently. We also show, for a contextual bandit task, that switching between a set of relevant features and a subset of those features can outperform both individual representations, since the switch method combines the fast initial performance increase of the small representation with the high asymptotic performance of the large representation.
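To make the switching mechanism concrete, the following is a minimal Python sketch of the idea described above: two candidate representations of a contextual bandit, each with its own value table, a switch policy that chooses which representation acts, and off-policy updates that let every representation learn from each observed reward. This is our illustrative reconstruction under assumed details (the task, feature sizes, and epsilon-greedy update rules are hypothetical), not the chapter's exact algorithm.

```python
import numpy as np

# Sketch (not the authors' exact algorithm): two candidate representations
# of a contextual bandit context, each with its own Q-table. A "switch"
# action picks which representation acts; the observed reward is then used
# off-policy to update *all* representations, since each can evaluate the
# taken action in its own feature space.

rng = np.random.default_rng(0)
n_actions = 2
n_f1, n_f2 = 4, 4            # sizes of the two (hypothetical) context features

# Candidate representations: the small one uses feature 1 only,
# the large one uses the joint space of features 1 and 2.
Q_small = np.zeros((n_f1, n_actions))
Q_large = np.zeros((n_f1, n_f2, n_actions))
V_switch = np.zeros(2)       # value estimate per representation
alpha, eps = 0.1, 0.1

def reward(f1, f2, a):
    # Hypothetical task: the optimal action depends mostly on feature 1,
    # with a small contribution from feature 2.
    return float(a == (f1 + (f2 > 1)) % n_actions) + rng.normal(0.0, 0.1)

for t in range(5000):
    f1, f2 = rng.integers(n_f1), rng.integers(n_f2)

    # Switch action: epsilon-greedy choice between the two representations.
    rep = rng.integers(2) if rng.random() < eps else int(np.argmax(V_switch))

    # The selected representation's epsilon-greedy policy picks the action.
    q = Q_small[f1] if rep == 0 else Q_large[f1, f2]
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))

    r = reward(f1, f2, a)

    # Off-policy updates: both representations learn from (context, a, r),
    # regardless of which one was selected to act.
    Q_small[f1, a] += alpha * (r - Q_small[f1, a])
    Q_large[f1, f2, a] += alpha * (r - Q_large[f1, f2, a])
    V_switch[rep] += alpha * (r - V_switch[rep])

print("switch values per representation:", V_switch)
```

In this sketch the small representation improves quickly because its table is small, while the large representation eventually learns the feature-2 contribution; the switch values then favor whichever representation currently yields higher reward, mirroring the combination of fast initial learning and high asymptotic performance described in the abstract.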