Critical Factors in the Performance of HyperNEAT
Thomas van den Berg and Shimon Whiteson
Abstract
HyperNEAT is a popular indirect encoding method for evolutionary computation that has performed well on a number of benchmark tasks. This paper presents a series of experiments designed to examine the critical factors for its success. First, we determine the minimum number of hidden nodes a genotypic network needs to solve several of these tasks. Our results show that all of these tasks are easy: they can be solved with at most one hidden node and require generating only trivial regular patterns. Then, we examine how HyperNEAT performs when the tasks are made harder. Our results show that HyperNEAT's performance decays quickly: it fails on all variants of these tasks that require more complex solutions. Next, we examine the hypothesis that fracture in the problem space, known to be challenging for regular NEAT, is even more so for HyperNEAT. Our results suggest that quite complex genotypic networks are needed to cope with even modest fracture and that HyperNEAT has difficulty discovering them. Finally, we connect these results to previous experiments showing that HyperNEAT's performance decreases on irregular tasks. Our results suggest that irregularity is an extreme form of fracture and that HyperNEAT's limitations are thus more severe than those experiments suggested.
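For readers unfamiliar with the method: the "genotypic network" above is a CPPN that maps pairs of substrate node coordinates to connection weights, so the phenotype network is generated rather than evolved directly. The following is a minimal sketch of that querying mechanism, assuming a simple 2D grid substrate; the `cppn` function and the threshold value are illustrative placeholders, not the setup used in this paper's experiments.

```python
# Minimal sketch of HyperNEAT's indirect encoding (illustrative only; the
# `cppn` function stands in for an evolved genotypic network and is an
# assumption, not this paper's implementation).
import math
import itertools

def cppn(x1, y1, x2, y2):
    # Stand-in genotypic network: any function of the two node coordinates.
    # Real HyperNEAT evolves this network's topology and weights with NEAT.
    return math.sin(x1 - x2) * math.exp(-(y1 - y2) ** 2)

def substrate_weights(nodes, threshold=0.2):
    """Query the CPPN for every node pair to produce phenotype weights.

    Connections whose CPPN output magnitude falls below `threshold`
    are pruned, as in standard HyperNEAT.
    """
    weights = {}
    for (x1, y1), (x2, y2) in itertools.product(nodes, repeat=2):
        w = cppn(x1, y1, x2, y2)
        if abs(w) >= threshold:
            weights[((x1, y1), (x2, y2))] = w
    return weights

# A small 3x3 grid substrate; its geometry is what lets the CPPN
# express the regular weight patterns discussed in the abstract.
grid = [(x / 2.0, y / 2.0) for x in range(-1, 2) for y in range(-1, 2)]
print(len(substrate_weights(grid)), "connections above threshold")
```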