What you're seeing
Each tank runs an independent Bayesian agent simulation. (A Bayesian agent updates its beliefs using Bayes' theorem, combining prior beliefs with new evidence to form a posterior probability; the more evidence, the more confident the belief.) Agents (nodes) run experiments to determine which of two treatments is better, share results with their network neighbors, and update beliefs accordingly.
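The belief update behind each agent can be sketched in a few lines. This is a minimal illustration, not the platform's actual code: the treatment success rates (0.6 if B really is better, 0.5 otherwise) and the evidence counts are assumptions made for the example.

```python
from math import comb

def update_belief(prior, successes, trials, p_good=0.6, p_bad=0.5):
    """Bayes update of P(treatment B is better), after observing
    `successes` out of `trials` on treatment B."""
    like_good = comb(trials, successes) * p_good**successes * (1 - p_good)**(trials - successes)
    like_bad = comb(trials, successes) * p_bad**successes * (1 - p_bad)**(trials - successes)
    numer = prior * like_good
    return numer / (numer + (1 - prior) * like_bad)

belief = 0.5                     # agnostic prior
for s, n in [(7, 10), (6, 10)]:  # two rounds of shared honest evidence
    belief = update_belief(belief, s, n)
print(round(belief, 3))          # belief drifts toward the truth
```

False testimony exploits the same machinery in reverse: feed this updater fabricated low success counts and the posterior is dragged back toward, or below, 0.5.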
Green particles carry honest evidence between agents. Red particles are false testimony from the agnotologist (the red node): a "manufacturer of ignorance" who never updates their beliefs and always reports misleading evidence to their neighbors, attempting to prevent the network from discovering the truth. Compare hub-biased tanks against peripheral ones: the hub agent poisons beliefs far more effectively.
Node size reflects degree centrality, a measure of how many connections a node has relative to the maximum possible; high centrality means a hub position with more influence over the network's collective beliefs. Color shifts from blue (misled) through neutral to green (approaching truth).
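Degree centrality itself is simple to compute. A sketch, assuming a hypothetical adjacency-set representation of the network:

```python
def degree_centrality(adj):
    """Fraction of the other n-1 nodes each node is connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

# A 5-node star: the hub touches all four others, each leaf only the hub.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(degree_centrality(star))   # hub scores 1.0, each leaf 0.25
```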
Real-world parallels
Tobacco industry (1950s–90s): Industry-funded scientists at hub positions in citation networks published doubt about the link between smoking and cancer, delaying public consensus by decades.
Pharmaceutical lobbying: Key opinion leaders paid by companies to endorse drugs occupy central positions in medical information networks, shaping prescribing behavior across entire communities.
Network Cellular Automata
Traditional cellular automata run on a rigid 2D grid where every cell has exactly 8 neighbors. Here, we've mapped these simple rules onto complex network topologies. (In Conway's Game of Life, a node is born if exactly 3 neighbors are alive, survives if 2 or 3 are alive, and dies otherwise.)
By using Fractional Rules (e.g., a node comes to life if 25%–40% of its neighbors are alive), we reveal something profound about network structure: on scale-free networks, massive hubs frequently die of "overpopulation" because they have too many alive neighbors, while peripheral chains die of "underpopulation". Life struggles to sustain itself on irregular graphs unless the structure inherently protects cooperative clusters.
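A fractional-rule step can be sketched as follows. This is a simplified reading of the rule above, in which birth and survival share one alive-fraction band [0.25, 0.40]; that is an assumption for the sketch, since the platform may treat the two cases separately, as classic Life does.

```python
def step(adj, alive, low=0.25, high=0.40):
    """One synchronous update: a node is alive next step iff the
    fraction of its neighbors currently alive lies in [low, high]."""
    nxt = set()
    for v, nbrs in adj.items():
        if nbrs and low <= sum(u in alive for u in nbrs) / len(nbrs) <= high:
            nxt.add(v)
    return nxt

# A hub with 8 leaves, 5 of them alive: 5/8 of the hub's neighbors are
# alive, above the 40% ceiling, so the hub dies of "overpopulation";
# each leaf sees its single neighbor (the hub) 100% alive and dies too.
adj = {0: set(range(1, 9)), **{i: {0} for i in range(1, 9)}}
print(step(adj, alive={0, 1, 2, 3, 4, 5}))   # set(): the pattern collapses
```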
Strategic behavior on networks
These agents don't update beliefs; they choose strategies, complete plans of action that compete for survival: Cooperate (always help), Defect (always exploit), Tit-for-Tat (copy your opponent's last move), or Pavlov (win-stay, lose-switch). Each round, every agent plays a game with each neighbor. After all games are played, agents imitate the most successful neighbor's strategy, a process of evolutionary selection: strategies that earn higher payoffs spread through the population, while unsuccessful strategies die out. This mirrors natural selection, but for behavioral rules rather than genes.
In the Prisoner's Dilemma, two players simultaneously choose to Cooperate or Defect: mutual cooperation pays 3 each, mutual defection pays 1 each, but if one defects while the other cooperates, the defector gets 5 and the cooperator gets 0. The temptation to defect is the core tension. Cooperation is fragile, but it can survive in spatial networks where cooperators form clusters. Network topology determines whether cooperation or defection dominates, a finding with implications for institutional design.
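The round structure can be sketched with fixed one-shot strategies. The payoffs match those above, but the tie-breaking rule (keep your own strategy when tied) is an assumption, and memory-based strategies like Tit-for-Tat are omitted for brevity.

```python
# One-shot Prisoner's Dilemma payoffs for the row player: T=5 > R=3 > P=1 > S=0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play_round(adj, strat):
    """Every agent plays each neighbor once; return each agent's total payoff."""
    return {v: sum(PAYOFF[(strat[v], strat[u])] for u in nbrs)
            for v, nbrs in adj.items()}

def imitate_best(adj, strat, score):
    """Each agent copies its highest-scoring neighbor (self wins ties)."""
    return {v: strat[max(nbrs | {v}, key=lambda u: (score[u], u == v))]
            for v, nbrs in adj.items()}

# A lone defector on a 4-cycle of cooperators exploits both neighbors...
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
strat = {0: 'D', 1: 'C', 2: 'C', 3: 'C'}
score = play_round(cycle, strat)          # {0: 10, 1: 3, 2: 6, 3: 3}
print(imitate_best(cycle, strat, score))  # ...and its neighbors imitate it
```

On this sparse cycle the cooperators have no dense cluster to shelter in, so the defector's high payoff spreads defection in a single imitation step.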
Why topology matters
Grid/lattice: Cooperators survive by forming protective clusters. Defectors dominate edges but can't penetrate dense cooperative cores.
Scale-free networks: Hubs amplify whatever strategy they adopt. A cooperative hub can sustain cooperation network-wide; a defecting hub can collapse it.
Well-mixed (complete): No spatial structure means no shelter for cooperators. Defection typically dominates — the classic "tragedy of the commons."
The Evolving Society
This network represents a governed digital society. Nodes are agents colored by their ideological faction (e.g., Progressive, Conservative). Their topology, size, and connections are entirely shaped by the LLM Assembly's passed proposals.
Dynamic Restructuring: Watch as the network physically morphs into new topologies (like Grids or Stars), elevates specific factions into massive Hubs, or violently severs edges to isolate groups based on democratic votes.
How the LLM Assembly works
50 AI agents — representing different political and structural ideologies — deliberate on proposals in a parliamentary setting. Agents form caucuses, exchange arguments, shift stances, and ultimately vote.
This models how collective decision-making emerges from diverse, interacting agents with different priors, values, and reasoning styles — and how the structure of deliberation (who talks to whom, when, for how long) shapes outcomes.
About Aquavect
Aquavect is an interactive, open-access simulation aquarium for computational agents. Users can watch, configure, and experiment with different species of agents as they interact within and across network structures.
The platform houses four species: Bayesian agents that update beliefs via evidence sharing, network automata that demonstrate emergent complexity from simple rules on graphs, strategic agents that model evolutionary game theory (cooperation and defection), and LLM-based agents that deliberate in natural language on civic proposals.
Origins: agnotology research
Aquavect grew out of research into agnotology, the study of manufactured ignorance. The Bayesian agent system is grounded in a 211,000-simulation study showing that a biased agent's network position dramatically amplifies its ability to suppress truth: a single well-placed bad actor causes as much epistemic damage as 3–4 peripheral ones (Cohen's d ≈ 1.42 for degree centrality). The platform expanded from there into a broader toolkit for exploring agent-based dynamics across multiple paradigms.
About the author
Aquavect was created by Rouzbeh Rezaei Sanjabi. To read the full papers, learn more about these simulations, or get in touch, please visit rouzbehrezaei.com.