Building networks: neurons
Cell types
In PyNN, the system of equations that defines a neuronal model is encapsulated in a CellType class. PyNN provides a library of “standard” cell types (see Standard models) which work the same across all backend simulators - an example is the IF_cond_exp model, an integrate-and-fire (I&F) neuron with conductance-based, exponential-decay synapses.
For any given simulator, it is also possible to wrap a native model - a NEST or NEURON model, for example - in a CellType class so that it works in PyNN (see documentation for individual Backends).
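For example, with the NEST backend a built-in NEST model can be wrapped and then used just like a standard cell type. The following is only a minimal sketch, assuming NEST and its iaf_psc_alpha model are available:
>>> from pyNN.nest import native_cell_type
>>> native_if = native_cell_type('iaf_psc_alpha')  # wrap the NEST model in a PyNN cell type class
>>> native_if_type = native_if()                   # instantiate with the model's default parameters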
It should be noted that these “cell types” are mathematical cell types. Two or more different biological cell types may be represented by the same mathematical cell type, but with different parameterizations. For example, in the thalamocortical model of Destexhe (2009), thalamocortical relay neurons and cortical neurons are both modelled with the adaptive exponential I&F neuron model (AdExp):
>>> from pyNN.neuron import *                           # or any other PyNN backend, e.g. pyNN.nest
>>> from pyNN.random import RandomDistribution, NumpyRNG
>>> refractory_period = RandomDistribution('uniform', [2.0, 3.0], rng=NumpyRNG(seed=4242))
>>> ctx_parameters = {
... 'cm': 0.25, 'tau_m': 20.0, 'v_rest': -60, 'v_thresh': -50, 'tau_refrac': refractory_period,
... 'v_reset': -60, 'v_spike': -50.0, 'a': 1.0, 'b': 0.005, 'tau_w': 600, 'delta_T': 2.5,
... 'tau_syn_E': 5.0, 'e_rev_E': 0.0, 'tau_syn_I': 10.0, 'e_rev_I': -80 }
>>> tc_parameters = ctx_parameters.copy()
>>> tc_parameters.update({'a': 20.0, 'b': 0.0})
>>> thalamocortical_type = EIF_cond_exp_isfa_ista(**tc_parameters)
>>> cortical_type = EIF_cond_exp_isfa_ista(**ctx_parameters)
(see Model parameters and initial values for more on specifying parameter values). To see the list of parameter names for a given cell type, use the get_parameter_names() method:
>>> IF_cond_exp.get_parameter_names()
['tau_refrac', 'cm', 'tau_syn_E', 'v_rest', 'tau_syn_I', 'tau_m', 'e_rev_E', 'i_offset', 'e_rev_I', 'v_thresh', 'v_reset']
while the default values for the parameters are in the default_parameters attribute:
>>> print(IF_cond_exp.default_parameters)
{'tau_refrac': 0.1, 'cm': 1.0, 'tau_syn_E': 5.0, 'v_rest': -65.0, 'tau_syn_I': 5.0, 'tau_m': 20.0, 'e_rev_E': 0.0, 'i_offset': 0.0, 'e_rev_I': -70.0, 'v_thresh': -50.0, 'v_reset': -65.0}
Note that what we have created here are neuron type objects. These can be regarded as templates, from which we will construct the actual neurons in our network.
Populations
Since PyNN is designed for modelling networks containing many neurons, the default level of abstraction in PyNN is not the single neuron but a population of neurons of a given type, represented by the Population class:
>>> tc_cells = Population(100, thalamocortical_type)
>>> ctx_cells = Population(500, cortical_type)
To create a Population, we need to specify at minimum the number of neurons and the cell type. Three additional arguments may optionally be specified: the spatial structure of the population, initial values for the neuron state variables, and a label.
>>> from pyNN.space import Grid2D, RandomStructure, Sphere
>>> tc_cells = Population(100, thalamocortical_type,
... structure=RandomStructure(boundary=Sphere(radius=200.0)),
... initial_values={'v': -70.0},
... label="Thalamocortical neurons")
>>> from pyNN.random import RandomDistribution
>>> v_init = RandomDistribution('uniform', (-70.0, -60.0))
>>> ctx_cells = Population(500, cortical_type,
... structure=Grid2D(dx=10.0, dy=10.0),
... initial_values={'v': v_init},
... label="Cortical neurons")
(see Representing spatial structure and calculating distances for more detail on spatial structure and Model parameters and initial values for more on specifying initial values.)
For backwards compatibility and for ease of transitioning from other simulator languages, the create() function is available as an alias for Population. The following two lines are equivalent:
>>> cells = create(my_cell_type, n=100)
>>> cells = Population(100, my_cell_type)
(Note the different argument order).
Views
It is common to work with only a subset of the neurons in a Population - to modify their parameters, make connections or record from them. Any subset of neurons in a population may be addressed using the usual Python indexing and slicing notation, for example:
>>> id = ctx_cells[47] # the 48th neuron in a Population
>>> view = ctx_cells[:80] # the first eighty neurons
>>> view = ctx_cells[::2] # every second neuron
>>> view = ctx_cells[45, 91, 7] # a specific set of neurons
It is also possible to address a random sample of neurons within a population using the sample() method:
>>> view = ctx_cells.sample(50, rng=NumpyRNG(seed=6538)) # select 50 neurons at random
In the first of these examples, the object that is returned is an ID object, representing a single neuron. ID objects are discussed below. In all of these examples except the first, the object that is returned is a PopulationView. A PopulationView holds references to a subset of neurons in a Population, which means that any changes in the view are also reflected in the real population (and vice versa).
PopulationView objects behave in most ways as real Population objects; notably, they can be used in a Projection (see Building networks: connections) and combined with other Population or PopulationView objects to create an Assembly.
The parent attribute of a PopulationView has a reference to the Population that is being viewed, and the mask attribute contains the indices of the neurons that are in the view.
>>> view.parent.label
'Cortical neurons'
>>> view.mask
array([150, 181, 53, 149, 496, 499, 240, 444, 13, 100, 28, 19, 101,
122, 143, 486, 467, 492, 406, 90, 136, 173, 8, 341, 5, 348,
188, 63, 129, 416, 307, 298, 60, 180, 382, 47, 484, 370, 223,
147, 72, 32, 261, 193, 249, 212, 58, 87, 86, 456])
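Because a PopulationView holds references rather than copies, a change made through the view is immediately visible in the parent population. A minimal sketch, using the tc_cells population created earlier (the set() and get() methods are described in more detail below):
>>> first_ten = tc_cells[:10]
>>> first_ten.set(tau_m=15.0)   # modify a parameter through the view ...
>>> tc_cells.get('tau_m')       # ... and the parent population reflects the change
array([ 15., 15., 15., 15., 15., 15., 15., 15., 15., 15., 20., 20., ...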
Assemblies
As discussed above, a Population is a homogeneous collection of neurons, in the sense that all neurons have the same cell type. An Assembly is an aggregate of Population and PopulationView objects, and as such can represent a heterogeneous collection of neurons, of multiple cell types.
An Assembly can be created by adding together Population and PopulationView objects:
>>> all_cells = tc_cells + ctx_cells
>>> cells_for_plotting = tc_cells[:10] + ctx_cells[:50]
or by using the Assembly constructor:
>>> all_cells = Assembly(tc_cells, ctx_cells)
An assembly behaves in most ways like a Population, e.g. for setting and retrieving parameters, specifying which neurons to record from, etc. It can also be specified as the source or target of a Projection. In this case, all the neurons in the component populations are treated as identical for the purposes of the connection algorithm (note that if the post-synaptic receptor type is specified with the receptor_type argument, an Exception will be raised if not all component neuron types possess that receptor type).
Individual populations within an Assembly may be accessed via their labels, e.g.:
>>> all_cells.get_population("Thalamocortical neurons")
Population(100, EIF_cond_exp_isfa_ista(<parameters>), structure=RandomStructure(origin=(0.0, 0.0, 0.0), boundary=Sphere(radius=200.0), rng=NumpyRNG(seed=None)), label='Thalamocortical neurons')
Iterating over an assembly returns individual IDs, ordered by population. Similarly, the size attribute of an Assembly gives the total number of neurons it contains. To iterate over or count populations, use the populations attribute:
>>> for p in all_cells.populations:
... print("%-23s %4d %s" % (p.label, p.size, p.celltype.__class__.__name__))
Thalamocortical neurons 100 EIF_cond_exp_isfa_ista
Cortical neurons 500 EIF_cond_exp_isfa_ista
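The total size is simply the sum of the component sizes; for the assembly built above from the 100 thalamocortical and 500 cortical cells:
>>> all_cells.size
600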
Inspecting and modifying parameter values and initial conditions
Although both parameter values and initial conditions may be specified when creating a Population (and this is generally the most efficient place to do it), it is also possible to modify them later.
The get() method of Population, PopulationView and Assembly returns the current value(s) of one or more parameters:
>>> ctx_cells.get('tau_m')
20.0
>>> all_cells[0:10].get('v_reset')
-60.0
If the parameter is homogeneous across the group, a single number will be returned; otherwise get() will return a NumPy array containing the parameter values for all neurons:
>>> ctx_cells.get('tau_refrac')
array([ 2.64655001, 2.15914942, 2.53500179, ...
It is also possible to ask for multiple parameters at once, in which case a list of values will be returned, in the same order as the list of parameter names:
>>> ctx_cells.get(['tau_m', 'cm'])
[20.0, 0.25]
When running a distributed simulation using MPI, get() will by default return values for only those neurons that exist on the current MPI node. To get the values for all neurons, use get(parameter_name, gather=True).
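For example (a sketch; in a single-process simulation, gather makes no visible difference):
>>> v_thresh_all = ctx_cells.get('v_thresh', gather=True)  # collect values from all MPI processes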
To modify parameter values, use the set() method. To set the same value for all neurons, pass a single number as the parameter value:
>>> ctx_cells.set(a=2.0, b=0.2)
To set different values for different neurons there are several options - see Model parameters and initial values for more details.
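For example, the value passed to set() may be a RandomDistribution object or a sequence containing one value per neuron, rather than a single number. A brief sketch, using objects already defined above:
>>> ctx_cells.set(tau_refrac=RandomDistribution('uniform', [2.0, 3.0]),      # a random value per neuron
...               v_thresh=[-50.0 + 0.01*i for i in range(ctx_cells.size)])  # an explicit value per neuron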
To modify the initial values of model variables, use the initialize() method:
>>> ctx_cells.initialize(v=RandomDistribution('normal', (-65.0, 2.0)),
... w=0.0)
The default initial values may be inspected as follows:
>>> ctx_cells.celltype.default_initial_values
{'gsyn_exc': 0.0, 'gsyn_inh': 0.0, 'w': 0.0, 'v': -70.6}
Injecting current into neurons
Static or time-varying currents may be injected into neurons using either the inject_into() method of the CurrentSource:
>>> pulse = DCSource(amplitude=0.5, start=20.0, stop=80.0)
>>> pulse.inject_into(tc_cells)
or the inject() method of the Population, PopulationView or Assembly:
>>> import numpy
>>> times = numpy.arange(0.0, 100.0, 1.0)
>>> amplitudes = 0.1*numpy.sin(times*numpy.pi/100.0)
>>> sine_wave = StepCurrentSource(times=times, amplitudes=amplitudes)
>>> ctx_cells[80:90].inject(sine_wave)
See Injecting current for more about injecting currents.
Recording variables and retrieving recorded data
Just as each cell type has a well-defined set of parameters (whose values are constant over time), so it has a well-defined set of state variables, such as the membrane potential, whose values change over the course of a simulation. The recordable attribute of a CellType class contains a list of these variables, as well as the ‘spikes’ variable, which is used to record the times of action potentials:
>>> ctx_cells.celltype.recordable
['spikes', 'v', 'w', 'gsyn_exc', 'gsyn_inh']
The record() method specifies which variables should be recorded:
>>> all_cells.record('spikes')
>>> ctx_cells.sample(10).record(('v', 'w'), sampling_interval=0.2)
Note that the sampling interval must be an integer multiple of the simulation time step (except for simulators which allow use of variable time-step integration methods).
At the end of a simulation, the recorded data can be retrieved using the get_data() method:
>>> t = run(0.2)
>>> data_block = all_cells.get_data()
or written to file using write_data():
>>> from neo.io import NeoHdf5IO
>>> h5file = NeoHdf5IO("my_data.h5")
>>> ctx_cells.write_data(h5file)
>>> h5file.close()
get_data() returns a Neo Block object. For more information on Neo see the documentation at http://packages.python.org/neo. Here, it will suffice to note that a Block is the top-level container, and contains one or more Segments. Each Segment is a container for data that share a common time basis, and can contain lists of AnalogSignal and SpikeTrain objects. These data objects inherit from the NumPy array class, and so can be treated in further processing (analysis, visualization, etc.) in exactly the same way as plain arrays, but in addition they carry metadata about units, sampling interval, etc.
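As a brief illustration of navigating this hierarchy (a sketch, assuming the data_block retrieved above and that the membrane potential ‘v’ was among the recorded variables):
>>> segment = data_block.segments[0]        # data from the first (and here only) run
>>> vm = segment.filter(name='v')[0]        # an AnalogSignal holding the recorded membrane potentials
>>> spiketrains = segment.spiketrains       # a list of SpikeTrain objects, one per recorded neuron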
write_data() also makes use of Neo, and allows writing to any of the several output file formats supported by Neo. Note that as a short-cut, you can just give a filename to write_data(); the output format will then be determined based on the filename extension (‘.h5’ for HDF5, ‘.txt’ for ASCII, etc.) if possible, otherwise the default file format (determined by the value of pyNN.recording.DEFAULT_FILE_FORMAT) will be used.
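A quick sketch of the short-cut form (assuming HDF5 support is installed, so that the ‘.h5’ extension maps to the HDF5 format as described above):
>>> ctx_cells.write_data("cortical_data.h5")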
For more details, see Data handling.
Working with individual neurons
Although it is usually more convenient and more efficient to work with populations of neurons, it is occasionally useful to work with individual neurons, represented as an ID object:
>>> tc_cells[47]
709
For the simulator backends shipped with PyNN, the ID class is a subclass of int, and in the case of NEURON and NEST matches the global ID (gid) used internally by the simulator. There is no requirement that IDs be integers, however, nor that they have the same value across different simulators.
The parent attribute contains a reference to the parent population:
>>> a_cell = tc_cells[47]
>>> a_cell.parent.label
'Thalamocortical neurons'
To recover the index of a neuron within its parent given the ID, use Population.id_to_index(), e.g.:
>>> tc_cells.id_to_index(a_cell)
47
The ID object allows direct access to the parameters of individual neurons, e.g.:
>>> a_cell.tau_m
20.0
To change several parameters at once for a single neuron, use the set_parameters() method:
>>> a_cell.set_parameters(tau_m=10.0, cm=0.5)
>>> a_cell.tau_m
10.0
>>> a_cell.cm
0.5