Thoughts about Artificial-Life Simulations
In one of my classes this semester, we programmed little agents in an artificial world to do (hopefully) interesting things. My group focused on exploring disease spreading patterns. We picked a few variables: How fast does the disease spread? Are the agents attracted to one another? Can the agents accurately observe whether other agents are sick? And so on. We built a pretty simple simulation of this artificial world using the excellent MASON simulator. Here's an example of what it looks like:
The green circles are food. The other circles are agents, who are either sick (red) or healthy (blue). An agent is perceived as sick by other agents only if a smaller dot appears next to it. The bar above each agent shows its energy, and the agent dies if its energy drops to zero. Agents can eat food to gain energy, and they burn more energy per step if they are sick.
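To make those mechanics concrete, here is a minimal sketch of the per-step energy bookkeeping described above. The class, constants, and values are hypothetical stand-ins, not the actual code from our MASON model:

```java
// Minimal sketch of the per-step energy bookkeeping described above.
// All names and numbers (Agent, BASE_COST, SICKNESS_MULTIPLIER, ...) are
// hypothetical, not the classes or settings from our actual MASON model.
public class Agent {
    static final double BASE_COST = 1.0;            // energy burned per step when healthy
    static final double SICKNESS_MULTIPLIER = 2.0;  // sick agents burn more per step
    static final double FOOD_ENERGY = 5.0;          // energy gained from one piece of food

    double energy = 20.0;
    boolean sick = false;
    boolean alive = true;

    /** One simulation step: pay the living cost, maybe eat, maybe die. */
    void step(boolean foundFood) {
        if (!alive) return;
        double cost = BASE_COST * (sick ? SICKNESS_MULTIPLIER : 1.0);
        energy -= cost;
        if (foundFood) {
            energy += FOOD_ENERGY;
        }
        if (energy <= 0) {
            alive = false;  // the agent dies when its energy drops to zero
        }
    }
}
```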
Overall, the simulation we built was rather unsatisfying. In this post I'd like to dwell on the reasons why.
We ended up having a lot of parameters (constants), especially for our flocking behavior. How much are agents attracted to food? How much are they repelled by sick agents? How much randomness do we add to their motion? We had about 7 parameters just for the agents' motion, not counting simulation-level parameters such as disease type and observability. With this many parameters, there was no systematic way to find good settings for all of them. Instead, we ended up tweaking them until the simulation "looked right". It felt like groping around in the dark, with no clear idea of what we were looking for and no way to tell which parameter values were better. Often, adjusting one parameter produced a behavior we wanted but broke other behaviors that had been working before.
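For a sense of what this tuning problem looks like in code, here is a sketch of the kind of weighted steering rule we were adjusting by hand. The weights stand in for the roughly seven motion parameters; all names and values are illustrative, not the settings we actually used:

```java
import java.util.Random;

// A sketch of the kind of hand-tuned steering rule described above.
// The constants stand in for the ~7 motion parameters; names and values
// are illustrative only.
public class Steering {
    // Hand-tuned constants -- the parameters that were so painful to set.
    static final double FOOD_ATTRACTION = 1.5;
    static final double SICK_REPULSION  = 2.0;
    static final double FLOCK_COHESION  = 0.5;
    static final double RANDOM_JITTER   = 0.3;

    static final Random rng = new Random();

    /** Combine the competing pulls into a single (normalized) movement vector. */
    static double[] chooseDirection(double[] toFood, double[] awayFromSick, double[] toFlockCenter) {
        double dx = FOOD_ATTRACTION * toFood[0]
                  + SICK_REPULSION  * awayFromSick[0]
                  + FLOCK_COHESION  * toFlockCenter[0]
                  + RANDOM_JITTER   * (rng.nextDouble() * 2 - 1);
        double dy = FOOD_ATTRACTION * toFood[1]
                  + SICK_REPULSION  * awayFromSick[1]
                  + FLOCK_COHESION  * toFlockCenter[1]
                  + RANDOM_JITTER   * (rng.nextDouble() * 2 - 1);
        double len = Math.hypot(dx, dy);
        return len == 0 ? new double[] {0, 0} : new double[] {dx / len, dy / len};
    }
}
```

Every one of those constants interacts with all the others, which is why changing one to fix a behavior tended to break something else.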
This explosion in the number of parameters seems unavoidable in any non-trivial simulation. How might we overcome it? When we don't know the value of a parameter, there should be a way to find a good value automatically. This is where machine learning and search (including evolutionary algorithms) come in. But in order to use any optimization technique, we would need to define what "good" means -- what effect the optimizer should strive to produce.

Could we take inspiration from nature and try to maximize some evolutionary criterion, like the number of offspring an agent has? That would require giving our agents the ability to reproduce. And even in nature, it isn't clear what the "fitness function" is. For example, it is not clear that sexual reproduction is better than asexual reproduction if we are trying to maximize fitness, yet advanced organisms overwhelmingly use sexual rather than asexual reproduction. Another example (one Dan Dennett likes to give): going to college actually reduces your "fitness" -- you will have, on average, fewer children than someone who hasn't gone to college. Yet most of us think that going to college is a good idea ;-) To sum up: nature is complicated, and it is not clear what "fitness function" evolution is optimizing.

Going back to our simulation, we could simply optimize some criterion of our own choosing instead of trying to emulate nature. But then we might miss out on whatever interesting behaviors would have emerged had we chosen a different criterion.
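Still, if we did commit to some fitness criterion, even a crude search could take over the hand tweaking. Below is a sketch of the kind of loop I have in mind, a simple hill climber over the parameter vector. Everything here is hypothetical: runSimulation() is a placeholder for running the model and scoring the outcome, not code from our project.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of a hill-climbing search over the simulation's parameters.
// runSimulation() is a placeholder for "run the MASON model with these
// parameters and score the result"; the objective below is a dummy.
public class ParameterSearch {
    static final Random rng = new Random();

    /** Placeholder fitness: run the simulation and return a score (higher is better). */
    static double runSimulation(double[] params) {
        // e.g. number of agents still alive after N steps, fraction never infected, ...
        return -Arrays.stream(params).map(p -> (p - 1.0) * (p - 1.0)).sum();  // dummy objective
    }

    public static void main(String[] args) {
        double[] best = new double[7];   // the ~7 motion parameters
        Arrays.fill(best, 0.5);          // some initial guess
        double bestScore = runSimulation(best);

        for (int i = 0; i < 1000; i++) {
            // Nudge one parameter at random and keep the change only if it helps.
            double[] candidate = best.clone();
            int j = rng.nextInt(candidate.length);
            candidate[j] += rng.nextGaussian() * 0.1;
            double score = runSimulation(candidate);
            if (score > bestScore) {
                best = candidate;
                bestScore = score;
            }
        }
        System.out.println("best parameters: " + Arrays.toString(best));
    }
}
```

The mechanics are easy; the hard part is exactly the question above, namely what runSimulation() should measure.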
Another pet peeve of mine is that we had to hard-code the agents' desires, such as the desire to look for food. It would have been more satisfying to give the agents one basic desire (survive!) and let them figure out what to do to achieve it. But how? It seems impossible without giving the agents a general ability to learn, which is beyond what machine learning can do today. In nature, we have basic needs like thirst and hunger, and we also have more complicated drives, like ambition. These needs and drives are not computed in a tidy "needs-and-drives" module in the brain -- they are the result of a complicated interplay of forces throughout our bodies. It is very hard to tell what should be hard-coded in an agent's architecture and what should be left for the agent to learn on its own.
Just as we hard-coded desires, we also hard-coded living costs, such as how much energy the agents burned in each time step. It would have been more satisfying if the agents consumed exactly as much energy as the actions they performed required. Computation in nature has its costs: a bigger brain needs more energy to run, so you don't get compute power for free; reflexes bypass the brain, so latency matters. These aspects are hard to capture in a simulation where time moves in discrete steps. I have a vague idea of charging agents for the computations they perform (number of instructions executed, bytes of memory used), but it is far from something I could sit down and implement.
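Here is roughly how that vague idea might look: the agent's decision procedure meters its own work, and the agent then pays an energy bill for the thinking it did. The counting scheme and the exchange rates are made up purely for illustration.

```java
// Sketch of "charging agents for computation": the decision code counts its
// own operations, and the agent pays for them in energy. The counting scheme
// and constants are invented for illustration, not taken from any real system.
public class MeteredAgent {
    static final double ENERGY_PER_OPERATION = 0.001;   // hypothetical exchange rate
    static final double ENERGY_PER_BYTE      = 0.0001;  // hypothetical memory cost

    double energy = 20.0;
    long operations = 0;  // incremented by the agent's own decision code
    long bytesUsed  = 0;  // e.g. size of whatever memory the agent keeps around

    /** The agent's decision-making, instrumented to count its own work. */
    int decideAction(double[] observation) {
        int bestAction = 0;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (int a = 0; a < 4; a++) {        // four possible moves, say
            double value = 0;
            for (double x : observation) {
                value += x * (a + 1);        // stand-in for real action scoring
                operations++;                // meter every unit of work
            }
            if (value > bestValue) {
                bestValue = value;
                bestAction = a;
            }
        }
        return bestAction;
    }

    /** Charge the agent for the thinking it did this step. */
    void payComputeCost() {
        energy -= operations * ENERGY_PER_OPERATION + bytesUsed * ENERGY_PER_BYTE;
        operations = 0;
    }
}
```

An agent with a cheap, reflex-like policy would then outlive a "smarter" one whose deliberation costs more than it earns, which feels closer to the trade-off nature faces.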
So what needs to happen for artificial life simulations to become more than simple toys?
- Find a way to navigate the huge parameter space automatically.
- Figure out a meaningful fitness function.
- Make the costs incurred by agents more realistic.
- Find a more realistic way to represent time.
I'd love to read more about this and see what other people have come up with.