davidrmiller / biosim4

Biological evolution simulator

Confusion about signals

mfcochauxlaberge opened this issue · comments

I watched the video on YouTube and it was great. After a few viewings, I'm trying to re-implement the simulator myself.

I had a question about how signals travel in the neural network. My understanding goes like this:

  1. Signals originate from input neurons when they are activated (for whatever reason). The strength of a signal can be 0, 1, or anything in between.
  2. Signals travel through connections and the strength is multiplied by the connection's strength.
  3. The strength of the signal leaving a neuron is tanh(sum(inputs)).
  4. When a signal reaches an output neuron, the action represented by that neuron has some chance of being executed. The greater the strength, the more likely that action is to be chosen.
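If it helps to see steps 2 and 3 in code, here is a minimal sketch of the propagation rule as I understand it (hypothetical names, not biosim4's actual types):

```cpp
#include <cmath>
#include <vector>

// Hypothetical connection: the signal is scaled by the connection's weight.
struct Connection {
    float weight;      // connection strength
    float inputValue;  // output of the source neuron
};

// Steps 2-3: sum the weighted inputs, then squash with tanh.
// tanh keeps the neuron's output in the range (-1, 1).
float neuronOutput(const std::vector<Connection>& inputs) {
    float sum = 0.0f;
    for (const auto& c : inputs) {
        sum += c.weight * c.inputValue;
    }
    return std::tanh(sum);
}
```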

I currently have the following questions:

  1. How does it work when an internal neuron is connected to itself? In tanh(sum(inputs)), we can't include the self-connection's input, because it does not exist yet. But once the output is computed, a new input becomes available. I don't know how to deal with this recursion.
  2. What does it mean when an output neuron receives a negative signal? Does everything below 0 mean no activation at all?
  3. At around the 15-minute mark in the video, an example neural network is shown (see the image below). It says that the MvE neuron is more likely to be activated when the path forward is free (LPf input). But when it isn't, the other output neuron (Mrn) becomes more likely to be chosen. How can it be active? I don't see anything connected to it; the only input I see goes to MvE, and MvE has no output. The video says the signal comes from the internal neuron, but where does that neuron's signal come from?

(image for question 3)

Thank you!

(I'm still reading the code, so if I find an answer I'll post it here. I'm not comfortable with C++ unfortunately.)

Hi @mfcochauxlaberge, yes you have a good understanding of the existing code. There are many different ways to convert the signal levels into actions. You could devise other, possibly better ways. To answer your questions:

  1. Each internal neuron gets updated once per simulator cycle and its output value is latched and persists until the next simulator cycle. A neuron that feeds itself uses the latched value from the previous cycle as its input, then it computes and latches a new output value.

  2. The simulator version in the repository interprets negative probabilities as zero probability.

  3. I think it might have worked a little differently than what I said in the video. With no external signal driving it, the internal neuron may have become saturated and supplied a constant mid-level drive to both the MvE and Mrn neurons. When the LPf sensor neuron detected an unobstructed path, it increased the level of the drive to the MvE neuron and caused it to be the dominant output action. When the LPf neuron detected an obstacle, it decreased the drive to the MvE neuron to a level below the constant drive to the Mrn neuron.
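Points 1 and 2 above can be sketched roughly like this, assuming a simple latched, two-phase update (hypothetical structure, not the repository's actual code):

```cpp
#include <cmath>

// Point 1: the neuron's output is latched once per simulator cycle.
// A self-connection reads the value latched in the *previous* cycle, so
// there is no recursion: compute the new output from the old one, then latch.
struct Neuron {
    float output = 0.5f;  // latched value, persists until the next cycle
};

// One cycle's update for a neuron that feeds itself with weight selfWeight
// and also receives some external drive. Returns the newly latched output.
float updateSelfFeeding(Neuron& n, float selfWeight, float externalDrive) {
    float sum = selfWeight * n.output + externalDrive;  // last cycle's value
    n.output = std::tanh(sum);                          // latch the new value
    return n.output;
}

// Point 2: a negative action level is treated as zero probability.
float actionProbability(float level) {
    return level > 0.0f ? level : 0.0f;
}
```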

Good luck with your own experiments. Let us know how it goes.

@davidrmiller Thanks a lot for the quick response!

I'll leave this issue open for now as I still have a small question about number 3, but I'll try to get the answer from the code. If I can't get it, I'll ask it here.

I think I get it now.

I was still wondering where that signal could come from if no input is attached to the internal neuron.

But it seems like there is this concept of "driven", where a neuron can be marked as "non-driven" and will be given a constant output that will not change. So far my understanding of "driven" was "is getting a signal".

What I understand from the code is that if a neuron is not attached to any sensor or other neuron, it is given a constant output of 0.5, which is what is happening here.

From genome-neurons.h:

// When a new population is generated and every individual is given a
// neural net, the neuron outputs must be initialized to something:
constexpr float initialNeuronOutput() { return 0.5; }

And from genome.cpp, which is where the driven property is defined:

nnet.neurons.back().driven = (nodeMap[neuronNum].numInputsFromSensorsOrOtherNeurons != 0);
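To paraphrase that behavior in a runnable sketch (hypothetical names, just my reading of the logic):

```cpp
#include <cmath>

constexpr float initialNeuronOutput() { return 0.5f; }

// Sketch of the "driven" concept: a neuron with no inputs keeps its
// initial output forever; a driven neuron recomputes each cycle.
struct Neuron {
    float output = initialNeuronOutput();
    bool driven = false;  // true if it has inputs from sensors or other neurons
};

void update(Neuron& n, float summedInputs) {
    if (n.driven) {
        n.output = std::tanh(summedInputs);
    }
    // Non-driven neurons are left alone: a constant 0.5 source.
}
```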

Then I would simply ask: why force an output on neurons that cannot receive a signal? If a signal is useful, wouldn't evolution simply make the necessary connections?

You got it. At birth, every neuron gets an initial output value by calling initialNeuronOutput(). If a neuron has no inputs, its output value will never change during its lifetime.

The initial neuron output value is defined in a function to make it easier to experiment with different initial values.

In general, artificial neurons are more flexible and trainable if each neuron can sum a constant, weighted bias signal with its other input signals. By giving all newly born neurons a nonzero output value, those with no inputs automatically act as constant bias sources for other neurons, with the expectation that the evolutionary process will adjust the connection weights and retain the connections that are useful.
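A tiny illustration of that bias idea (assumed weights, not from the simulator): a non-driven neuron's constant 0.5 output, scaled by an evolvable connection weight, behaves exactly like a learned bias term in the downstream neuron's sum.

```cpp
#include <cmath>

// output = tanh(weighted inputs + biasWeight * 0.5)
// where 0.5 is the constant output of an input-less (non-driven) neuron.
float biasedOutput(float weightedInputSum, float biasWeight) {
    const float biasSource = 0.5f;  // constant output of a non-driven neuron
    return std::tanh(weightedInputSum + biasWeight * biasSource);
}
```

Evolution can then shift a neuron's operating point just by tuning biasWeight, without needing any sensor activity on that connection.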

Interesting. I'll keep experimenting. Thanks!