Benchmarking Predictive Coding Networks – Made Simple

Luca Pinchetti, Chang Qi, Oleh Lokshyn, Cornelius Emde, Amine M'Charrak, Mufeng Tang, Simon Frieder, Bayar Menzat, Gaspard Oliviers, Rafal Bogacz, Thomas Lukasiewicz and Tommaso Salvatori

Abstract

In this work, we tackle the problems of efficiency and scalability for predictive coding networks (PCNs) in machine learning. To do so, we propose a library that focuses on performance and simplicity, and use it to implement a large set of standard benchmarks for the community to use in their experiments. As most works in the field propose their own tasks and architectures, do not compare against one another, and focus on small-scale tasks, a simple and fast open-source library together with a comprehensive set of benchmarks addresses all of these concerns. We then perform extensive tests on these benchmarks using both existing algorithms for PCNs and adaptations of other methods popular in the bio-plausible deep learning community. All of this has allowed us to (i) test architectures much larger than those commonly used in the literature, on more complex datasets; (ii) reach new state-of-the-art results on all of the tasks and datasets provided; and (iii) clearly highlight the current limitations of PCNs, allowing us to identify important future research directions. With the hope of galvanizing community efforts towards one of the main open problems in the field, scalability, we will release the code, tests, and benchmarks.
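For readers unfamiliar with the training procedure that such benchmarks exercise, the following is a minimal sketch of supervised predictive coding on a single example: value nodes are relaxed by gradient descent on the prediction-error energy, and weights are then updated locally from the equilibrium errors. This is an illustrative NumPy sketch, not the paper's library or its API; the layer sizes, activation, step sizes, and number of inference steps are assumptions chosen for clarity.

```python
# Illustrative sketch of a supervised predictive coding network (PCN).
# Assumed sizes and hyperparameters; not the API of the paper's library.
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 10]  # input, hidden, output (assumed)
W = [rng.normal(0, 0.05, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]

f = np.tanh
df = lambda a: 1.0 - np.tanh(a) ** 2

def train_step(x_in, y_target, T=20, lr_x=0.1, lr_w=1e-3):
    """One predictive-coding update on a single example (1-D arrays)."""
    # Initialise value nodes with a feed-forward pass; clamp input and target.
    x = [x_in]
    for l in range(len(W)):
        x.append(W[l] @ f(x[l]))
    x[-1] = y_target

    # Inference: relax hidden value nodes to reduce the prediction-error energy
    # F = 0.5 * sum_l ||x[l+1] - W[l] f(x[l])||^2.
    for _ in range(T):
        e = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
        for l in range(1, len(x) - 1):  # hidden layers only
            dx = -e[l - 1] + df(x[l]) * (W[l].T @ e[l])
            x[l] = x[l] + lr_x * dx

    # Learning: local, Hebbian-style weight update from the equilibrium errors.
    e = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
    for l in range(len(W)):
        W[l] += lr_w * np.outer(e[l], f(x[l]))
    return 0.5 * sum(float(err @ err) for err in e)  # energy after inference
```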

Book Title
Proceedings of the 13th International Conference on Learning Representations, ICLR 2025, Singapore, 24–28 April 2025
Year
2025