My dissertation proposal was approved in December 2024. What I found interesting is that it took me over three years to realize that the starting point of my original application to York University was indeed going to be the focal point of my dissertation. Let me explain:
When I first applied to the Computational Art PhD program at York University, my statement of intent described some of the findings and insights from my master’s thesis, completed in 2016. It dealt with weaving a human-relatable narrative around data analysis and its visualization. Data – any data – is, by its nature, quantifiable. We, as humans, need to interpret it in a way that makes sense to us, and present a relatable, structured, and logical narrative. This allows us to find useful patterns, relationships, and connections that help us tell a story with the data within our human realm of experience.
The approach I contemplated was combining aspects of Artificial Life (ALife) with data collection and visualization, integrating them to provide deeper insights into pattern recognition within data sets. Yet, rather than using the accepted methods of ALife, typically variations of cellular automata, flocking systems, and a synthesis of biological behaviors represented in a virtual environment, I began thinking about what would happen if we shifted the perspective and looked at the data from the point of view of the ALife agents.
Our default frame of reference is anthropocentrism. It’s how we relate to the world around us and try to make sense of it in a manner that we, as humans, can understand. It makes sense in a way, since we, as a species, have restructured the world to fit our needs. A key aspect of that understanding involves deciphering the mysteries of Life: how do we define it, and how does it work?
Christopher Langton, in his seminal paper on Artificial Life, defined biology as the scientific study of life – on Earth. This is an important caveat: biology can technically study any kind of life, but the only kind we can realistically examine is the one on this planet (at least until we boldly go where no one has gone before). It’s the only example of life available to us. This creates an obstacle because, logically, it is impossible to derive general principles from a single example. We can extrapolate, theorize, and make intelligent assumptions, but nothing with absolute certainty.
The Digital Age opened up vast computational resources and power, and the modern field of artificial life was born (thanks again to Christopher Langton for coining the term). The theoretical origins of ALife lie in the works of John von Neumann and Stanislaw Ulam in the 1940s and 1950s. Von Neumann developed his theoretical model of self-replicating automata, inspired by Alan Turing’s work on computation and morphogenesis (computational models of biological pattern formation). His work demonstrated that machines could, in principle, reproduce themselves in a computational medium. Ulam introduced cellular automata as a mathematical framework to simulate self-replicating patterns, laying the groundwork for modern ALife research.
Traditional models of artificial life have historically drawn from biological paradigms, particularly the principles of Darwinian evolution: mutation, reproduction, selection, and survival. The groundbreaking work in computational cellular automata, genetic algorithms, and agent-based simulations, pioneered by Langton, Ray, Holland, Conway, and others, has used such metaphors to produce emergent complexity and adaptation in silico. While these approaches have yielded compelling models of life-like behavior, they remain fundamentally constrained by their reliance on biologically derived fitness functions and resource-based competition.
My dissertation takes a different approach from these traditional models: the semblance of life for Artificial Life Entities (ALEs) is created not through competition and survival-based selection, but through the structural persistence of symbols. The GOLEM (Generative Object Landscape for Emergent Models) framework reimagines the foundations of ALife from the perspective of persistence through symbols, rather than biologically influenced, survival-based evolution. This guiding principle lets symbols propagate, resonate, and evolve meaning across a digital ecosystem, while promoting the emergence, transformation, and continuity of meaning-bearing structures.
In practice, this translates into ALEs engaging in the following (a rough sketch follows the list):
- Non-Darwinian evolution: No fitness score or death-based selection
- Symbolic propagation: ALEs absorb, modify, and recontextualize symbols from the terrain and other ALEs
- Structural coherence: Stability emerges through symbol convergence and integration
- Procedural communication: Meaning is not predefined but emerges through dynamic symbol use, sharing, and persistence
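To make these principles a little more concrete, below is a minimal Python sketch of an ALE record that carries no fitness score: symbols simply accumulate, and stability comes from how much incoming material converges with what is already there. The actual framework is being built in Godot, and every name here is hypothetical.

```python
# Hypothetical sketch only; GOLEM's real data structures and names will differ.
from dataclasses import dataclass, field


@dataclass
class ALE:
    symbols: set[str] = field(default_factory=set)  # symbols absorbed so far
    coherence: float = 0.0                          # stability from symbol convergence
    # Note: no fitness score and no death flag; persistence is structural.

    def absorb(self, incoming: set[str]) -> None:
        """Integrate symbols found in the terrain or shared by another ALE."""
        overlap = self.symbols & incoming   # symbols that converge with existing memory
        self.symbols |= incoming            # recontextualize: memory keeps growing
        self.coherence += len(overlap)      # coherence grows through convergence, not survival


a = ALE()
a.absorb({"sun", "stone"})
a.absorb({"stone", "river"})  # "stone" converges, so coherence increases
```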
GOLEM Framework
To explore communication protocols between ALEs at the micro level, a framework is needed to facilitate these interactions. GOLEM aims to provide the foundational framework through which ALEs can express behaviors, exchange information, interact with the environment, and adapt and evolve in a digital, virtual world.
The core of GOLEM is a scalable system that uses a predefined set of established rules as its starting point. These rules act as the initial directives of the ALEs, yet they allow for an exponential increase in complexity as required. The framework is designed to observe and analyze how symbolic communication, even at its most rudimentary levels, can lead to complex, emergent behaviors among the artificial life entities.
SEAL and BOS
By building a structured communication scheme around a generative object language system, the GOLEM framework incorporates systems capable of producing a vast array of meaningful symbols and interactions from a limited set of core elements. These abstract symbolic objects provide the “vocabulary” and syntax for exchanging concepts and procedural instructions, giving ALEs the ability to exhibit emergent behaviors over time.
The core component of this abstract language is SEAL (Symbolic Evolutionary Alife Language). The purpose of SEAL is to establish the symbol design and provide encoding and decoding tools based on the red, green, blue, and alpha (RGBA) color channels used to display pixels. This provides a method to embed and recall data, as well as represent it visually, making it possible to observe and quantitatively track ALEs as they interact within a digital environment.
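As a rough illustration of the RGBA idea, and without assuming anything about SEAL’s actual symbol layout, the sketch below packs a 32-bit symbol code into the four 8-bit pixel channels and reads it back; the function names are mine, not SEAL’s.

```python
# Illustrative only: SEAL's real encoding scheme is more elaborate than a flat 32-bit split.

def encode_rgba(value: int) -> tuple[int, int, int, int]:
    """Split a 32-bit symbol code into (R, G, B, A) channel bytes."""
    value &= 0xFFFFFFFF
    return (value >> 24) & 0xFF, (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF


def decode_rgba(r: int, g: int, b: int, a: int) -> int:
    """Recover the 32-bit symbol code from its channel bytes."""
    return (r << 24) | (g << 16) | (b << 8) | a


# The same value can be written out as pixel data (making it visible on screen)
# and read back later to recover the embedded symbol.
assert decode_rgba(*encode_rgba(0xDEADBEEF)) == 0xDEADBEEF
```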
The Bitset Overlap System (BOS) loop is the computational system that governs the interactions between the ALEs and their environment, and among the ALEs themselves. Every ALE contains a core set of commands that allow it to engage with the surrounding environment. BOS provides the backend calculations that allow the ALEs to survive and evolve by maintaining coherent patterns of symbolic overlap with their surroundings and other ALEs. Each ALE carries a binary memory, or bitset, that represents its accumulated experience. At every step, it compares this internal structure with the encoded patterns of nearby tiles using bitwise operations. The degree of overlap between these bitsets reflects the agent’s familiarity with its environment, driving its behavior.
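A hedged sketch of what a single BOS comparison might look like, assuming familiarity is measured as the fraction of a tile’s set bits already present in the ALE’s memory (that metric, like the names below, is an assumption on my part):

```python
# Sketch of the bitset-overlap idea; requires Python 3.10+ for int.bit_count().

def overlap_score(ale_bits: int, tile_bits: int) -> float:
    """Fraction of the tile's pattern already present in the ALE's memory."""
    shared = (ale_bits & tile_bits).bit_count()  # bitwise AND, then popcount
    total = tile_bits.bit_count()
    return shared / total if total else 0.0


# An ALE could then bias its next move toward (or away from) familiar tiles.
ale_memory = 0b1011_0110
neighbors = {"north": 0b0011_0100, "east": 0b1100_0001}
best_direction = max(neighbors, key=lambda d: overlap_score(ale_memory, neighbors[d]))
```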
An additional layer that connects the SEAL and BOS systems is the application of stigmergic behavior as a feature of the ALEs. Stigmergy, a biological concept introduced by the French biologist Pierre-Paul Grassé in 1959, was observed in the behavior of social insects such as ants and termites. It refers to a process where individual actions leave traces in the environment, which guide the subsequent actions of other individuals. In stigmergic systems, there is no need for direct communication between agents; instead, the environment acts as a medium for information exchange. For example, ants use pheromone trails to signal pathways to food sources, allowing other ants to follow and reinforce these trails, leading to complex, collective behavior without centralized control.
In GOLEM, ALEs leave a stigmergic trail when new symbolic information is found in the terrain or gained by communicating with another ALE. This provides an indirect communication layer that presents additional contact points for symbolic exchange. This method replaces competition with cooperation, and fitness with persistence, allowing complex, life-like patterns to emerge without predefined objectives. It is both an algorithmic engine and an ecological principle, enabling GOLEM’s world to function as a self-organizing landscape of meaning, where structure, not survival, becomes the defining condition of life.
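The stigmergic layer can be pictured as a grid of decaying traces. The sketch below is only a guess at how deposits and evaporation might be handled; the grid size, rates, and function names are all illustrative rather than part of GOLEM.

```python
# Illustrative stigmergy sketch: deposits accumulate where ALEs find new symbols,
# and evaporation ensures that only reinforced paths persist.

GRID_W, GRID_H = 16, 16   # terrain dimensions (assumed)
DEPOSIT = 1.0             # trace strength added on a symbolic discovery (assumed)
DECAY = 0.95              # per-step evaporation factor (assumed)

trace = [[0.0] * GRID_W for _ in range(GRID_H)]


def deposit(x: int, y: int) -> None:
    """Called when an ALE gains new symbolic information at tile (x, y)."""
    trace[y][x] += DEPOSIT


def decay_step() -> None:
    """Evaporate every trace slightly once per simulation step."""
    for row in trace:
        for x in range(GRID_W):
            row[x] *= DECAY


# Other ALEs read trace[y][x] when choosing where to move, so frequently
# reinforced tiles become an indirect channel of communication.
```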
The abstraction mechanisms integrated in GOLEM are instrumental in the exploration of how artificial life systems can use a structured set of rules to support autonomous behavior, collaboration, adaptation, evolution, and ideally, new and emergent actions.
Application – Back to the Rogue Basics
The GOLEM Framework is currently being developed using the Godot game engine. I selected this engine for its small footprint and general ease of use. Other popular game engines were evaluated, but the additional features they offered did not outweigh their heavier demands on computing resources.
From a visual standpoint, the framework is not meant to be representative of any specific element, nor to anthropomorphize the ALEs. This is why simple 2D visuals are currently the most effective vehicle for representing this framework. The inspiration for the design comes from the original Rogue game, released in 1980 by Michael Toy and Glenn Wichman, along with its modern iterations. The reasoning behind this approach is that roguelike games incorporate procedurally generated levels and provide limitless environmental/open-world exploration. These game mechanics offer a unique canvas to explore the themes discussed here. A significant added advantage the roguelike genre has over other video game genres is the ability to scale the game world up or down, depending on the complexity and depth of the procedural parameters set when generating the game world. The core mechanics of roguelike games can be modified and used within a serious gaming framework, and are flexible enough to be adapted to a variety of goals.
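For readers unfamiliar with how roguelikes build their levels, here is a generic “drunkard’s walk” generator, one of the simplest ways to carve a playable level out of a solid grid. It illustrates the technique in Python and is not GOLEM’s actual terrain generator.

```python
# Generic roguelike-style map generation via a random walk ("drunkard's walk").
import random


def generate_map(width: int, height: int, floor_target: int, seed: int | None = None) -> list[str]:
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]  # start with a solid wall grid
    x, y = width // 2, height // 2                 # begin carving from the center
    carved = 0
    while carved < floor_target:
        if grid[y][x] == "#":
            grid[y][x] = "."                       # carve a floor tile
            carved += 1
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)         # keep the walk inside the border
        y = min(max(y + dy, 1), height - 2)
    return ["".join(row) for row in grid]


for line in generate_map(40, 16, 200, seed=42):
    print(line)
```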
Random environment generation relies heavily on procedural content generation (PCG) algorithms. This creates a dynamic environment where players and ALEs continuously adapt to new settings and challenges in the game world, making decisions that can have a range of outcomes based on the game’s algorithms. A promising field that ties into the relationships among the components of a roguelike game is that of Complex Adaptive Systems (CAS).
Complex Adaptive Systems and Emergence
The core principles of CAS focus on the unpredictable dynamics, adaptation, and emergent properties of multiple interconnected components (or agents). These agents tend to be highly dynamic, dispersed, and decentralized, and adapt or learn from their interactions with each other and their environments.
Emergence refers to the process through which larger entities, patterns, or structures arise through interactions among smaller or simpler entities that individually do not exhibit such properties or behaviors. The GOLEM framework is designed as a complex adaptive system, reflecting how such systems adapt and evolve through interactions among their components.
In the next posts, I will dive into the integration of ALEs in the GOLEM framework, and introduce their communication protocols (SEAL).
[NOTE: Directions might pivot as I progress through these posts, depending on the development of the GOLEM/SEAL framework]