SANTA FE, N.M. Engineers seeking to divide and conquer tough networking problems with multiple, cooperating software agents got a theoretical boost recently in a paper released by the Santa Fe Institute.
Researchers Yuzuru Sato and James Crutchfield described reinforcement-learning agents that exhibit both competitive and cooperative behaviors. Although each agent acted individually, without any global programming or direction, the resulting collective behaviors spanned a variety of dynamical regimes, including quasiperiodicity, stable limit cycles, intermittency and deterministic chaos.
Rather than being given global utility functions, each agent was programmed only with a model of its own environment and shared no knowledge with the others. Nevertheless, collective-learning dynamics emerged, which the team described with coupled replicator equations.
Coupled replicator dynamics were originally conceived in evolutionary game theory, but in this study they were shown to emerge naturally as the continuous-time limit of multiagent reinforcement learning.
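As a rough illustration of what coupled replicator equations look like in practice, the sketch below integrates the standard two-agent form dx_i/dt = x_i[(Ay)_i - x.Ay], dy_j/dt = y_j[(Bx)_j - y.Bx] with simple Euler steps. The payoff matrices, step size and starting strategies here are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative payoff matrix for a zero-sum rock-scissors-paper game;
# rows/columns are (rock, scissors, paper). Not taken from the paper.
A = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])
B = -A  # the second agent's payoffs in a zero-sum game

def replicator_step(x, y, dt=0.01):
    """One Euler step of the coupled replicator equations:
       dx_i/dt = x_i[(A y)_i - x.A y],  dy_j/dt = y_j[(B x)_j - y.B x]."""
    fx = A @ y                      # fitness of each of agent 1's strategies
    fy = B @ x                      # fitness of each of agent 2's strategies
    x = x + dt * x * (fx - x @ fx)  # grow strategies beating the average
    y = y + dt * y * (fy - y @ fy)
    # renormalize to guard against numerical drift off the simplex
    return x / x.sum(), y / y.sum()

x = np.array([0.5, 0.3, 0.2])  # agent 1's mixed strategy
y = np.array([0.2, 0.3, 0.5])  # agent 2's mixed strategy
for _ in range(10_000):
    x, y = replicator_step(x, y)
```

After many steps both strategy vectors remain valid probability distributions; in the zero-sum case the trajectories circulate around the mixed equilibrium rather than converging to a pure strategy.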
Using the rock-scissors-paper game as a model interaction, the researchers showed that coupled replicator equations explain the macroscopic behavior of a network of learning agents. Since high-level dynamics such as deterministic chaos appeared even in these simple test cases, the authors predict that more complex real-world multiagent systems will also sustain useful, controllable collective behaviors.
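A minimal sketch of the kind of setup being described: two independent reinforcement learners repeatedly play rock-scissors-paper, each updating its own action values from its own reward only. The softmax action selection, learning rate and exponential forgetting below are generic reinforcement-learning choices made for illustration, not the authors' exact update rule.

```python
import math
import random

ACTIONS = ["rock", "scissors", "paper"]
# rock beats scissors, scissors beats paper, paper beats rock
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def softmax(q, beta=2.0):
    """Turn action values into a mixed strategy (illustrative beta)."""
    weights = [math.exp(beta * v) for v in q]
    total = sum(weights)
    return [w / total for w in weights]

def play(rounds=5000, alpha=0.1, seed=0):
    """Two agents learn purely from their own rewards, with no shared
    knowledge; returns agent 1's empirical action frequencies."""
    rng = random.Random(seed)
    q1, q2 = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    counts = [0, 0, 0]
    for _ in range(rounds):
        a1 = rng.choices(range(3), weights=softmax(q1))[0]
        a2 = rng.choices(range(3), weights=softmax(q2))[0]
        if BEATS[ACTIONS[a1]] == ACTIONS[a2]:
            r1, r2 = 1.0, -1.0
        elif BEATS[ACTIONS[a2]] == ACTIONS[a1]:
            r1, r2 = -1.0, 1.0
        else:
            r1 = r2 = 0.0
        # exponential forgetting keeps values bounded, a stand-in for
        # the memory-loss term in learning models of this kind
        q1[a1] += alpha * (r1 - q1[a1])
        q2[a2] += alpha * (r2 - q2[a2])
        counts[a1] += 1
    return [c / rounds for c in counts]

freqs = play()
```

Even this toy version shows the key feature of the study's setting: the agents never converge to a fixed strategy, since any bias one learner develops is immediately exploited by the other.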
The researchers said they plan to quantify the collective functions of large multiagent systems, developing statistical equations of motion that account for fluctuations in finite models and histories among agents. They also plan to apply structural and information-theoretic analyses to characterize how an individual agent's memories give rise to sustainable collective functions.