SpiNNakerManchester / sPyNNaker8

The PyNN 0.8 interface to sPyNNaker.


Different spike times if population is split.

Christian-B opened this issue

A simple script based on the IntroLab "simple" example:

```python
import spynnaker8 as sim

sim.setup(timestep=1.0)
sim.set_number_of_neurons_per_core(sim.IF_curr_exp, 2)

pop_1 = sim.Population(4, sim.IF_curr_exp(), label="pop_1")
input = sim.Population(4, sim.SpikeSourceArray(spike_times=[0]), label="input")
input_proj = sim.Projection(input, pop_1, sim.OneToOneConnector(),
                            synapse_type=sim.StaticSynapse(weight=5, delay=1))
pop_1.record(["spikes", "v"])
simtime = 10
sim.run(simtime)
neo = pop_1.get_data(variables=["spikes", "v"])
sim.end()
spikes = neo.segments[0].spiketrains
print(spikes)
v = neo.segments[0].filter(name='v')[0]
print(v)
```

--
The neurons in the first partition behave the same as if the population were unpartitioned, i.e. they spike at time 7.
The rest spike one timestep earlier, i.e. at time 6.

Voltage shows the same behavior.

Splitting the SpikeSourceArray across neuron cores does not change the results (i.e. still 7s and then 6s).
Changing the SpikeSourceArray to 1 neuron and using an AllToAllConnector does not change the results either (i.e. still 7s and then 6s).
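To make the reported pattern concrete, here is a standalone sketch (the spike times are the ones quoted above for 4 neurons at 2 per core) that groups the spike times by core and confirms they agree within a core but differ across cores:

```python
# Spike times reported above for the 4-neuron population,
# with 2 neurons per core (values taken from this issue's results).
neurons_per_core = 2
spike_times = [7.0, 7.0, 6.0, 6.0]  # neurons 0..3

# Group the neurons into their cores.
cores = [spike_times[i:i + neurons_per_core]
         for i in range(0, len(spike_times), neurons_per_core)]

# Within each core all neurons spike at the same time...
within_core_consistent = all(len(set(core)) == 1 for core in cores)
# ...but the cores disagree with each other.
across_core_times = sorted({core[0] for core in cores})

print(within_core_consistent)  # True
print(across_core_times)       # [6.0, 7.0]
```

This is just the observation restated as a check: partitioning changes which timestep a neuron spikes on, depending only on which core it lands on.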

--
Is this desirable?

This is because of the way the "random" backoff is assigned (abstract_population_vertex.py lines 380-386). I believe this was done to spread the load out on the first timestep so that each core on a chip was not trying to access the same information at the same time, but it seems as though this leads to the consequences shown in the results from this script, amongst others. Another issue I've discovered while playing with this is that you can run the same script twice and get different answers (try the above script with 1 neuron per core rather than 2, for example).

I think part of the problem might be using 0 as the first backoff value for each population. Editing the code mentioned above so that n_data_specs is incremented before the value is written appears to help a bit, but not in all cases.
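As a rough illustration of the ordering question (this is a hypothetical sketch, not the actual sPyNNaker code; `assign_backoffs_current`, `assign_backoffs_preincrement`, and `BACKOFF_STEP_US` are invented names; the real logic lives in abstract_population_vertex.py lines 380-386), compare writing the backoff before versus after incrementing the counter:

```python
# Hypothetical sketch of per-core startup backoff assignment.
BACKOFF_STEP_US = 10  # assumed spacing between cores' first-timestep starts


def assign_backoffs_current(n_cores):
    """Write the backoff, then increment: the first core always gets 0."""
    backoffs = []
    n_data_specs = 0
    for _ in range(n_cores):
        backoffs.append(n_data_specs * BACKOFF_STEP_US)
        n_data_specs += 1
    return backoffs


def assign_backoffs_preincrement(n_cores):
    """Increment before writing, so no core starts with a backoff of 0."""
    backoffs = []
    n_data_specs = 0
    for _ in range(n_cores):
        n_data_specs += 1
        backoffs.append(n_data_specs * BACKOFF_STEP_US)
    return backoffs


print(assign_backoffs_current(3))       # [0, 10, 20]
print(assign_backoffs_preincrement(3))  # [10, 20, 30]
```

With the pre-increment variant no core gets a zero backoff, which would match the "helps a bit, but not in all cases" observation: the cores are still offset from one another, so their relative timing can still differ.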

It is also true that cases where we run with very small numbers of neurons per core are rare, and that in a large population the fact that some of the neurons spike one timestep too late (or one timestep too early, depending on your viewpoint) might not matter too much: who looks at data at the individual-neuron level in a model with multiple populations and tens of thousands of neurons in each population?

@rowleya any further ideas on this?

"who looks at data on an individual neuron level in a model with multiple populations and tens of thousands of neurons in each population... ?"

I introduce you to Elephant and the column model: the first paper we wrote did exactly that.

Unless an individual neuron was set to do something special, that seems a bit... crazy.

I think this is a known issue (see SpiNNakerManchester/sPyNNaker#619).

You think it's crazy, but from the neuroscience viewpoint it was comparing the spike trains from the NEST run and our run to show that they were equivalent, and they were able to detect when we dropped one packet. They even engineered the Elephant software to do this, so I'd be more inclined to say this is something we're going to face more and more.

and here's the link to the software in question
http://neuralensemble.org/elephant/

I am guessing that the issue with a missing spike was an odd one. I understand that Oliver’s simulation still throws away a few spikes and gives results similar to the microcircuit paper where we dropped none (and I believe they use Elephant for both).

Who knows what the one packet we dropped two or more years ago was. lol.

Closing as a duplicate of SpiNNakerManchester/sPyNNaker#619.