Topological limits to the parallel processing capability of network architectures

Petri, Giovanni, Musslick, Sebastian, Dey, Biswadip, Ozcimder, Kayhan, Turner, David, Ahmed, Nesreen K, Willke, Theodore L and Cohen, Jonathan D (2021) Topological limits to the parallel processing capability of network architectures. Nature Physics, 17 (5). p. 659.

Abstract

The ability to learn new tasks and generalize to others is a remarkable characteristic of both human brains and recent artificial intelligence systems. The ability to perform multiple tasks simultaneously is also a key characteristic of parallel processing, evident in the human brain and exploited by traditional parallel computing architectures. Here we show that these two characteristics reflect a fundamental tradeoff between interactive parallelism, which supports learning and generalization, and independent parallelism, which supports processing efficiency through concurrent multitasking. Although the maximum number of possible parallel tasks grows linearly with network size, under realistic scenarios their expected number grows sublinearly. Hence, even modest reliance on shared representations, which support learning and generalization, constrains the number of tasks that can be performed in parallel. This has profound consequences for understanding the human brain’s mix of sequential and parallel capabilities, as well as for the development of artificial intelligence systems that can optimally manage the tradeoff between learning and processing efficiency.
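The scaling claim in the abstract can be illustrated with a minimal numerical sketch, under simplifying assumptions that are not the paper's exact analysis: tasks are modelled as edges of a bipartite graph between N input and N output nodes, and a set of tasks is treated as executable in parallel only if it forms an induced matching, i.e. no two selected tasks share a node and no other network edge links the input of one selected task to the output of another (a proxy for cross-talk through shared representations). The edge probability p and the greedy selection procedure below are assumptions chosen for illustration; the greedy estimate is only a lower bound on the true maximum.

```python
"""Toy sketch: parallel task capacity with dedicated vs. shared representations.

Assumptions (not from the paper): random bipartite task graph G(N, N, p),
parallel sets approximated by a greedily constructed induced matching.
"""
import random


def greedy_parallel_tasks(n, p, rng):
    """Greedy induced matching in a random bipartite graph with edge prob. p."""
    edges = [(i, o) for i in range(n) for o in range(n) if rng.random() < p]
    edge_set = set(edges)
    rng.shuffle(edges)
    chosen = []
    for i, o in edges:
        compatible = all(
            i != ci and o != co                # no shared input or output node
            and (i, co) not in edge_set        # no cross-talk edge to a chosen task
            and (ci, o) not in edge_set
            for ci, co in chosen
        )
        if compatible:
            chosen.append((i, o))
    return len(chosen)


if __name__ == "__main__":
    rng = random.Random(0)
    p = 0.2        # hypothetical degree of representation sharing
    trials = 20
    print("   N   dedicated pathways   shared representations (greedy estimate)")
    for n in (16, 32, 64, 128, 256):
        avg = sum(greedy_parallel_tasks(n, p, rng) for _ in range(trials)) / trials
        print(f"{n:4d}        {n:4d}                        {avg:5.1f}")
```

In this toy model the first column (fully dedicated, non-overlapping input-output pathways) grows linearly with N, whereas the greedy estimate under shared representations grows far more slowly, qualitatively matching the sublinear scaling described in the abstract.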
