Approximate Solutions for Factored Dec-POMDPs with Many Agents
Frans Oliehoek, Shimon Whiteson, and Matthijs Spaan
Abstract
Dec-POMDPs constitute a powerful framework for planning in multiagent systems but are provably intractable to solve. Despite recent work on scaling to more agents by exploiting weak couplings in factored models, scalability for unrestricted problem classes remains limited. This paper proposes a factored forward-sweep policy computation method that tackles the stages of the problem one by one, exploiting weakly coupled structure at each stage. To enable the method to scale to many agents, we propose a set of approximations: approximating the stages using a sparse interaction structure, bootstrapping off smaller tasks to compute heuristic payoff functions, and employing approximate inference both to estimate required probabilities at each stage and to compute the best decision rules. An empirical evaluation shows that the loss in solution quality due to these approximations is small and that the proposed method achieves unprecedented scalability, solving Dec-POMDPs with hundreds of agents. Furthermore, we show that our method outperforms a number of baselines and, in cases where the optimal policy is known, produces near-optimal solutions.