Jesse Lawson


CARES Project


The Cellular Agent Research Experiment System (CARES) is a modular, programmable toolset designed to study autonomous agents in a discrete, cellular environment.

Quickstart

  1. Clone the repository: git clone https://github.com/jesselawson/cares.git
  2. Install dependencies: cd cares && pip install -r requirements.txt
  3. Run the example experiment: python experiments/example1.py
  4. Study the output in the experiments/example1 folder. You’ll have a compiled GIF animating the state configuration at each time step, plus *.jpg copies of each snapshot.

Documentation

CARES consists of a System, which is the core logic behind a grid of cells; Agents, which are programmable autonomous entities; and Entities, which are things that Agents can interact with but which are not Agents themselves.

A System is composed of a number of starting Agents, some starting conditions and rules, and customizable Entities.

Experiments are run via the command line (e.g., python my_experiment.py). When your experiment has finished simulating, you’ll get a folder containing a JPEG of the state configuration at every time step and a GIF animation of all the state configuration snapshots.
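To make this concrete, here is a minimal sketch of what an experiment script might look like. Every name in it (the cares module, System, Agent, spawn_agent, run, and their parameters) is an assumption for illustration, not the actual CARES API:

# my_experiment.py -- a hypothetical experiment script (illustrative only)
from cares import System, Agent  # assumed module layout

# A System is composed of starting Agents, starting conditions and rules,
# and customizable Entities.
system = System(width=50, height=50, rules={"max_age": 100})

# Seed the grid with some starting Agents.
for _ in range(10):
    system.spawn_agent(Agent())

# Simulate; when finished, the output folder holds a JPEG of the state
# configuration at every time step plus a GIF animation of all snapshots.
system.run(steps=200, output_dir="my_experiment")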

The System

Coming soon

The Agents

An Agent is an entity composed of three primary elements (sketched in code after this list):

  1. One or more Genetic Characteristics, which are combined with a mate’s and passed on to offspring (and thus certain genetic characteristics are passed on, or not);
  2. A Brain, where all input from sensors is gathered, goals are evaluated, and decisions are made; and
  3. One or more Sensors, which serve as input parameters to the Brain before each update step.
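A minimal sketch of that three-part structure follows; all class and method names here are hypothetical stand-ins, not the CARES codebase’s actual names:

from dataclasses import dataclass, field

@dataclass
class GeneticCharacteristic:
    name: str     # e.g., "speed"
    value: float  # combined with a mate's value and passed on to offspring

class Sensor:
    def read(self, system) -> dict:
        """Gather raw environmental input before each update step."""
        return {}

@dataclass
class Agent:
    genetics: list = field(default_factory=list)  # GeneticCharacteristic items
    sensors: list = field(default_factory=list)   # Sensor items

    def update(self, system):
        # The Brain: gather all sensor input, evaluate goals, decide.
        inputs = [sensor.read(system) for sensor in self.sensors]
        return self.decide(inputs)

    def decide(self, inputs):
        pass  # goal evaluation and decision-making would live here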

Associations

Everything an agent knows is one of three types of Associations (a sketch in code follows the list):

  1. Hypothetical Associations, which are low-priority units of knowledge about the world;
  2. Learned Associations, which are medium-priority units of knowledge about the world;
  3. Experienced Associations, which are high-priority units of knowledge about the world.
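One way to represent these three types is as a priority scale. The numeric priorities and field names below are assumptions for illustration:

from dataclasses import dataclass
from enum import IntEnum

class AssociationKind(IntEnum):
    HYPOTHETICAL = 1  # low-priority knowledge about the world
    LEARNED = 2       # medium-priority knowledge about the world
    EXPERIENCED = 3   # high-priority knowledge about the world

@dataclass
class Association:
    subject: str           # e.g., "smell01"
    belief: str            # e.g., "good food source"
    kind: AssociationKind

def stronger(a: Association, b: Association) -> Association:
    # When two associations conflict, the higher-priority one wins.
    return a if a.kind >= b.kind else b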

An Agent’s behavior is governed by one or more subroutines that comprise that agent’s Model. For example, say we have a collection of subroutines that we call “Apollo.” The agent is then an “Apollo” model.

The reason that behavior is governed by subroutines is explained in more detail in the Subroutines section.

Subroutines

A subroutine is a set of logic computations that can be assigned to either an Agent or the System in which agents are being simulated. Subroutines are implementations of a template subroutine class, and come in three flavors:

  • Agent Sensor Subroutines, which generate sensory data;
  • Agent Subroutines, which generate responses (like where to go and what to do) via sensory transduction; and
  • System Subroutines, which govern non-agent aspects of the environment.

Technically, a Subroutine is a class object that has its own dynamic set of variables, including its own training_data array for whatever this subroutine is designed to do.
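A minimal sketch of that template class, assuming the run method name and the Agent-or-System target parameter are illustrative choices:

class Subroutine:
    """Template class: every subroutine keeps its own dynamic state,
    including a training_data array for whatever it is designed to do."""

    def __init__(self):
        self.training_data = []

    def run(self, target):
        """Perform this subroutine's logic against an Agent or the System."""
        raise NotImplementedError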

What type of training_data a subroutine stores and trains on, including how it trains its predictive models, depends on the sensory inputs that the subroutine is designed to read from.

Each Agent can have one or more subroutines, and new experiments can mix and match different subroutines together.

Subroutines are stored in the System Subroutine Library, a folder containing each subroutine’s implementation.

Groups of subroutines, called a Subroutine Collection, are created upon system instantiation.

New subroutines should be registered with the System object, since it is the System that spawns the Agents.

Q: How do we create agents with different subroutine groups?

The System has to be able to create a collection of subroutines and then apply that collection to a particular agent as it creates it.

When each agent is making a decision, it processes through its subroutines.

The subroutines can work together, or they can just modify values blindly.
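Here is a minimal sketch of that registration-and-spawning flow, reusing the hypothetical Agent and Subroutine sketches above; SubroutineCollection, register, and spawn_agent are all assumed names:

class SubroutineCollection:
    """A named group of subroutines; the name doubles as the agent's Model."""
    def __init__(self, name, subroutine_classes):
        self.name = name  # e.g., "Apollo"
        self.subroutine_classes = subroutine_classes

class System:
    def __init__(self):
        self.subroutine_library = {}  # the System Subroutine Library

    def register(self, subroutine_class):
        # New subroutines are registered with the System, since it is
        # the System that spawns the Agents.
        self.subroutine_library[subroutine_class.__name__] = subroutine_class

    def spawn_agent(self, collection):
        agent = Agent()
        agent.model = collection.name
        # Instantiate each subroutine in the collection for this agent.
        agent.subroutines = [cls() for cls in collection.subroutine_classes]
        return agent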

Agent Sensor Subroutines

To help understand subroutines, let’s use the nose.

Suppose we have agents in an experiment that can share olfactory data about good or bad sources of food, based on a pleasurability index derived from their sensor data.

The first subroutine we would need for this is the agent sensor input subroutine, which we might call AgentSensorSubroutineSmell. In this, I decide to find anything that gives off an odor in the agent’s surrounding cells and record that information. It might look something like this:

[15, 33, smell01],
[15, 34, smell02]

Here I have two records, each following the format [x, y, what]. My thinking is that I only want to register what smell is where, and not any subjective association about that smell (because that’s not the job of the sensor subroutine).

With the agent sensor subroutine created, it’s time to act on this data. I would create a second agent subroutine called AgentSubroutineSmell and program it to go through the sensor data, look for new surrounding smells, and then determine how to turn this sensory data into new Associations.
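A minimal sketch of both smell subroutines, building on the hypothetical Subroutine and Association sketches above; the sensor_data container and the odors_near and pleasurability helpers are assumptions:

class AgentSensorSubroutineSmell(Subroutine):
    """Record what smell is where in the surrounding cells -- no
    subjective associations, since that is not the sensor's job."""
    def run(self, agent):
        agent.sensor_data["smell"] = [
            [x, y, odor]  # the [x, y, what] record format
            for (x, y, odor) in agent.system.odors_near(agent.x, agent.y)
        ]

class AgentSubroutineSmell(Subroutine):
    """Transduce smell sensor data into new Associations."""
    def run(self, agent):
        for x, y, odor in agent.sensor_data.get("smell", []):
            belief = ("good food source" if agent.pleasurability(odor) > 0
                      else "bad food source")
            agent.associations.append(
                Association(subject=odor, belief=belief,
                            kind=AssociationKind.LEARNED))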

A subroutine is essentially a set of functions that takes in an Agent, performs some logical computations, and then modifies some element(s) of that Agent.

For example, we know that Agents record all observations in a dynamically columned database. If we decide to add some environmental feature in the future, we could create a subroutine that checks whether observations of that environmental feature exist in this particular agent’s observations.json file (a TinyDB database file). If records exist, then we know this agent version is designed to interact with those environmental features. If no results are returned, then this agent has not observed those environmental features, or it is not designed to, and this subroutine is simply skipped.
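Such a check could use TinyDB’s real query API, as in this minimal sketch; the field name what and the value "lava" are illustrative assumptions:

from tinydb import TinyDB, Query

def has_observed(observations_path: str, feature: str) -> bool:
    db = TinyDB(observations_path)  # this agent's observations.json
    Observation = Query()
    # Records exist -> this agent version interacts with the feature;
    # no records -> the subroutine is simply skipped.
    return len(db.search(Observation.what == feature)) > 0

# Usage: run the feature-specific subroutine only when relevant.
# if has_observed("observations.json", "lava"):
#     lava_subroutine.run(agent)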

Agent Subroutines

To do

System Subroutines

To do

Theoretical Framework

“We need the tonic of wildness… At the same time that we are earnest to explore and learn all things, we require that all things be mysterious and unexplorable, that land and sea be indefinitely wild, unsurveyed and unfathomed by us because unfathomable. We can never have enough of nature.” ― Henry David Thoreau, Walden: Or, Life in the Woods

Building CARES was heavily inspired by my hypothesis that human evolution has been contingent on our ability to share information faster and more efficiently over time and with greater access to technology.

Essentially:

  • Human evolution is based on the sharing of knowledge.

  • If knowledge were quantized, each “unit” of knowledge would be an association between one or more other “units” of knowledge.

  • The more knowledge you share, the more you evolve.

“Ethnographically, this diversity [‘of social organizations, group sizes, kinship structures, and mating patterns’] is at least partially rooted in culturally-acquired and widely shared social rules” (Henrich, 2011).

Sharing of experiences has led to the development of social groups and culture, and natural selection favored genes that resulted in more pro-social behavior, which in turn resulted in generation after generation of offspring with “sociogenetically superior” traits.

My theory is that the sharing of knowledge combined with random variations in genetic characteristics passed down to offspring make up the formula for evolution. Oriented toward artificial intelligence, I believe that, eventually, sentience can be synthesized from the correct ratio of shared knowledge, genetic characteristics, and environmental factors.

In this way, the chief “goal” of a CARES experiment is to replicate observable characteristics that yield an emergence of sentience.

Over the last million years or so, people evolved the ability to learn from each other, creating the possibility of cumulative, cultural evolution. Rapid cultural adaptation also leads to persistent differences between local social groups, and then competition between groups leads to the spread of behaviours that enhance their competitive ability. Then, in such culturally evolved cooperative social environments, natural selection within groups favoured genes that gave rise to new, more pro-social motives. Moral systems enforced by systems of sanctions and rewards increased the reproductive success of individuals who functioned well in such environments, and this in turn led to the evolution of other regarding motives like empathy and social emotions like shame. (Boyd & Richerson, 2009)

What is Emergent Sentience?

A being is considered sentient if it has the capacity to feel or perceive. These feelings and perceptions generally come from sensory input, but not all beings with sensory inputs are sentient. Or are they? Don’t they have objective experiences too, since we are fairly certain they’re not all connected to a hive mind?

Unless, of course, we consider that sentience is derived from our understanding of reality, and that in trying to describe a being’s sentience we have to admit that what we are ascribing to this being is a set of conditions derived from the experience of our own species. In other words, what if sentience is both 1) the capacity to perceive and experience objectively, and 2) a subjective condition? By admitting that our own concept of what is sentient and what is not is founded on the presupposition that we humans are sentient, we are using the lens of “human” to determine whether other beings are sentient–and this may mean that our whole concept of sentience is wrong.

In Facing Up to the Problem of Consciousness (1996), philosopher David Chalmers1 asks how it is that physical systems come to be subjects of experience:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? … It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

That our physical interactions with the world produce an objective experience within us represents a rich intersection between physicists and philosophers that has existed for some time. In 1896, the mathematician Ernst Zermelo theorized that the Second Law of Thermodynamics (“the total entropy of an isolated system always increases”) was absolute,2 and supported this theory with Poincaré’s recurrence theorem, which states that certain systems will eventually return arbitrarily close to their initial state.

To counter this, the Austrian physicist Ludwig Boltzmann argued that our universe started for some unknown reason in a low-entropy state. Later on, his assistant Ignaz Schuetz theorized that the vast majority of the universe’s existence is in a state of featureless heat death, and that it is only through a very rare thermal fluctuation causing atoms to repel and attract in new configurations that our entire observable universe comes to be. This “Boltzmann Universe” concept gave rise to the philosophical concept of a Boltzmann brain: a self-aware entity that comes into existence as a result of rare, random fluctuations of a state of thermodynamic equilibrium.

In other words, the entire observable universe as we know it can be traced back to a single random fluctuation in an otherwise “stable” system. This means that, theoretically, there could exist a homogeneous Newtonian puddle of goop in which a burst of heat from a nearby volcanic eruption could cause its atoms to attract and repel one another in such a way as to result, eventually, in a functioning human brain. The implication, of course, is that the more complex the resultant structure, the exponentially less likely the exact sequence of random fluctuations needed to produce it.

That likelihood is very small, and yet here we are: sentient, philosophical creatures stumbling through life communicating with each other. It goes without saying then that just because something is ridiculously improbable does not mean that it is impossible. Yet despite this logical truth, we are still left with the largest question of all: “Why?”

In 1983, philosopher Joseph Levine coined the term explanatory gap to describe the difficulty that physicalist theories have in explaining how physical properties give rise to the way things feel when they are experienced. To illustrate, he wrote the following example: “Pain is the firing of C fibers”–meaning, in the most literal sense possible, that the concept of pain is due to the firing of Group C nerve fibers in the Central and Peripheral Nervous Systems. It is in pointing out this physical fact that one arrives at a conundrum: while it might be valid in a physiological sense, it does not help us to understand how pain feels and why we experience it the way we do.

This is known as the hard problem of consciousness; the act of thinking and feeling is the result of an enormous amount of information processing, and yet in all these physical processes (hands touching, eyes seeing, tongues tasting, nerves firing, glands secreting, etc.) there is subjective experience. How is this possible–and just what exactly is the “this” that we are talking about?

We consider ourselves to be sentient not only because we can have these subjective experiences, but also because we can ponder why and how these experiences are possible. In a reality that has produced a universe from the random fluctuations of a thermodynamic equilibrium, one might be keen on feeling that humanity is somehow special or unique,* when in fact even these thoughts, which we have no language to describe, are only further examples of the downstream effects of a series of random initial conditions at the start of our universe.

*: There is an underlying challenge to the Intelligent Design argument here and, ironically, a counterargument is formed right in the mission of this project itself. If it is our goal to create a replication of conditions and associations that, over time, lead to the emergence of some kind of sentience, aren’t we hypothesizing that intelligent design is at least plausible?

This is where the study of emergent sentience breaks off from the study of emergent reality. Whereas astrophysicists and cosmologists will take these questions and start diving deep into the quantum particulars of what has caused and continues to cause the universe to be and expand, the field of emergent sentience is concerned with a very small sample of those fields, which can be summarized like this:

  • How does a puddle of Newtonian goop evolve into sentient life forms?

Put another way:

  • What conditions and events have to occur for a single-celled organism to be the foundation of a sentient life form?

To tackle these questions, experiments in artificial intelligence can be used to mimic a universe like our own, but in a discrete way. The idea is that by mimicking the emergence of sentience in a discrete system, the sum total of all possible things that could happen approaches infinity in the same way it does in our own reality, and we are left with a system that is, for all intents and purposes, a new universe.

The anthropic principle, which states that the only universes that are observed are those with conditions and properties that allow observers to exist and observe them, is a guiding thought here. In fact, the Nobel laureate Steven Weinberg refers to it as a “turning point” in modern cosmology because, when combined with string theory, it “may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator."3

The rest of this article is coming soon!

License

This article references and uses material from the following Wikipedia articles, all of which are released under the Creative Commons Attribution-Share-Alike License 3.0.

References

Boyd, R. & Richerson, P. J. (2009). Culture and the evolution of human cooperation. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781880/

Henrich, J. (2011). A cultural species: How culture drove human evolution. https://www.apa.org/science/about/psa/2011/11/human-evolution


  1. See David Chalmers’ (1996) “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies, 2(3), 200-219. ↩︎

  2. See S. G. Brush’s (1996) “Nebulous Earth: A History of Modern Planetary Physics,” 129. ↩︎

  3. Weinberg, S. (2007). “Living in the multiverse.” In B. Carr (Ed.), Universe or Multiverse? Cambridge University Press. ↩︎