Instructions for Psychophysicists
making experiments using MotionClouds¶
making psychophysics using PsychoPy¶
installation (preparation 3 min, cooking / downloading 10 min)¶
- It is easy to create a simple psychophysical experiment by combining PsychoPy and MotionClouds. Simply download the standalone PsychoPy application for your computer and load the example script psychopy_competing.py.
- For this, you may:
- clone / download the MotionClouds repository on GitHub,
- download PsychoPy,
- then open the psychopy_competing.py file in the editor (the so-called "PsychoPy Coder").
running¶
- Hit the run button (the green round button with a runner on it); the script will then:
- prompt you for some configuration parameters (be sure to enter the correct size of your screen),
- pre-generate the stimuli,
- run the experiment, that is, present a sequence of trials consisting of:
- a movie presentation,
- a blank screen where the program waits for a key press: the UP arrow if you saw the movie going up, DOWN if it went down, or Q / Esc to exit (a sketch of such a trial is given after this list).
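For reference, the core of one such trial could look roughly like the following minimal sketch (placeholder size and parameters, not the actual code of psychopy_competing.py):

import MotionClouds as mc
from psychopy import visual, event

# pre-generate one upward-drifting MotionCloud (placeholder parameters)
fx, fy, ft = mc.get_grids(256, 256, 64)
movie = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0.5)))

win = visual.Window(fullscr=True, units='pix')
stim = visual.ImageStim(win, size=(256, 256))
for i_frame in range(movie.shape[-1]):         # movie presentation
    stim.image = 2 * movie[:, :, i_frame] - 1  # rescale [0, 1] to PsychoPy's [-1, 1] range
    stim.draw()
    win.flip()
win.flip()                                     # blank screen: wait for the observer's response
keys = event.waitKeys(keyList=['up', 'down', 'q', 'escape'])
win.close()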
results¶
- the results are displayed at the end, showing that:
- the movies consisted of two MotionClouds drifting in opposite directions (UP and DOWN) added with different contrasts,
- the reported percept (up or down, on the y-axis) varied as a function of this contrast (x-axis),
- it seems I have a slight bias for UP movements, as the contrast at which both percepts are equally probable appears to be below 1/2 (a proper analysis would require a fit, as sketched below).
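For that fit, a minimal sketch with scipy (the contrast values and response proportions below are made up for illustration):

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# hypothetical data: relative contrast of the UP cloud vs proportion of 'UP' responses
contrasts = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
p_up = np.array([0.15, 0.35, 0.60, 0.90, 0.95])

def psychometric(c, mu, sigma):
    # cumulative Gaussian: mu is the point of subjective equality, sigma sets the slope
    return norm.cdf(c, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, contrasts, p_up, p0=[0.5, 0.2])
print(mu, sigma)  # a mu below 0.5 would quantify the bias for UP movements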
TODO: Frequently asked questions¶
Experimental protocol¶
These instructions were written for a VSG system but should work on similar platforms:
- Measure your experimental settings (screen size, frame rate, resolution, viewing distance), as these values will constrain your spatial and temporal sampling frequencies.
- Calibrate your screen (gamma / luminance).
- Experimental constants (with some default values):
a. $D$: viewing distance = 570 mm
b. $S$: screen size = 390 mm
c. $dt$: max frame period = 0.01 s
d. $\text{VA}$: stimulus width in degrees of visual angle at viewing distance $D$ = 38.0958
e. $f_{d}$: desired spatial frequency in cycles per degree (cpd)
f. $V_{drift}$: drifting velocity in degrees per second (deg/s)
g. $X$, $Y$: sampling steps in degrees
h. $F_X$, $F_Y$: spatial sampling rates in cpd
i. $T$: stimulus duration in seconds
j. $\alpha$: mean orientation of the frequency component (called $\theta$ in the scripts)
k. $\theta$: direction of image motion
- Parameters for the stimulus:
a. ($N_X$, $N_Y$): frame size of the stimulus in pixels, as generated in Fourier space
b. ($n_X$, $n_Y$): screen resolution = 640 x 480 px (approx. 0.06 deg/px)
c. $N_{frame}$: number of movie frames, as generated in Fourier space
d. $f_{pref}$: central frequency, normalized in Fourier space
e. $V_X$: leftward / rightward motion
f. $V_Y$: upward / downward motion
g. $W_X$, $W_Y$: spatial sampling frequencies in Fourier space
How do I compute a visual angle?¶
$$VA = 2 \arctan\left( \frac{S}{2D} \right)$$ where $S$ is the stimulus size on the screen (along X or Y) and $D$ is the viewing distance.
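In code, using the default constants listed above (a direct transcription of the formula):

import numpy as np

S = 390.   # screen size in mm
D = 570.   # viewing distance in mm
VA = 2 * np.degrees(np.arctan(S / (2 * D)))
print(VA)  # approximately 37.8 degrees of visual angle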
How do I set the spatial sampling?¶
$X = \frac{\text{VA}}{n_{x}}$ is the spatial sampling step along the x-axis; the sampling step along the y-axis, $Y$, is computed in the same way.
In practice, the spatial sampling step ($X$ and/or $Y$) cannot be larger than the semi-period of the finest spatial frequency ($F_X$ and/or $F_Y$) represented in the image: the image cannot contain frequency components higher than the Nyquist frequency along either axis.
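A sketch of the corresponding computation, using the visual angle and screen resolution given above:

VA = 38.0958              # stimulus width in degrees of visual angle
n_x = 640                 # horizontal screen resolution in pixels
X = VA / n_x              # spatial sampling step, approximately 0.06 deg/px
F_Nyquist = 1. / (2 * X)  # finest representable spatial frequency, approximately 8.4 cpd
print(X, F_Nyquist)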
What is the unit for frequencies?¶
The frequency distribution is periodic with period $F_S$. The normalized frequency ranges from 0.0 to 1.0, which corresponds to a real frequency range of 0 to the sampling frequency $F_S$.
The normalized frequency also wraps around 1.0 so a normalized frequency of 1.1 is equivalent to 0.1.
When the actual frequency, $f_t$, has units of Hz, for example, the normalized frequencies, also denoted by $f_t$, have units of cycles per sample, and the periodicity of the normalized distribution is 1.
What is the maximal frequency?¶
If a sampled signal is real-valued the periodicity of the frequency distribution is still $F_S$. But due to symmetry, it is completely defined by the content within a span of just $F_S/2$ (Nyquist frequency). Accordingly, some applications use that as the normalization reference (and the resulting units are half-cycles per sample).
Normalization produces a distribution that is independent of the sample-rate, and thus one plot is sufficient for all possible sample-rates.
fx, fy, ft = np.mgrid[(-N_X//2):((N_X-1)//2 + 1),
                      (-N_Y//2):((N_Y-1)//2 + 1),
                      (-N_frame//2):((N_frame-1)//2 + 1)]
fx, fy, ft = fx*1./N_X, fy*1./N_Y, ft*1./N_frame
The range is always $(-0.5, 0.5)$ which corresponds to the maximum frequency we can represent.
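This is exactly the grid returned by mc.get_grids, so the range can be checked directly (assuming the MotionClouds module is importable, as in the scripts below):

import MotionClouds as mc
fx, fy, ft = mc.get_grids(512, 512, 128)
print(fx.min(), fx.max())  # -0.5 and just below +0.5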
Example: with N_X = N_Y = 512 and N_frame = 128:
the Nyquist limit is $2^{-1} W$ cyc/width, where $W$ is the width of the stimulus (in pixels),
the highest frequency band considered will be centered at $2^{-2} W$ cyc/width,
the blobs change spatial frequency in octave steps, so the scale-$k$ band has a spatial frequency pass band centered at $2^{-(k+2)} W$ cyc/width.
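As a quick check of these band centers (a sketch; k indexes the octave-spaced scales):

W = 512                                      # stimulus width in pixels (N_X in the example)
print('Nyquist limit:', W / 2, 'cyc/width')  # 256 cyc/width
for k in range(4):
    print('scale', k, 'band centered at', 2.**-(k + 2) * W, 'cyc/width')  # 128, 64, 32, 16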
Easy: it's just a design choice; we could have used B_tf, but defining B_sf and then slicing the envelope with a B_V envelope was more intuitive.
Convention used in DFT¶
From numpy.fft.ifftn(a, s=None, axes=None)¶
From http://docs.scipy.org/doc/numpy/reference/routines.fft.html:
The values in the result follow so-called “standard” order: If A = fft(a,n), then A[0] contains the zero-frequency term (the mean of the signal), which is always purely real for real inputs. Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms, in order of decreasingly negative frequency.
For an even number of input points, A[n/2] represents both positive and negative Nyquist frequency (+/-0.5), and is also purely real for real input.
For an odd number of input points, A[(n-1)/2] contains the largest positive frequency, while A[(n+1)/2] contains the largest negative frequency. The routine np.fft.fftfreq(n) returns an array giving the frequencies of corresponding elements in the output.
The routine np.fft.fftshift(A) shifts transforms and their frequencies to put the zero-frequency components in the middle, and np.fft.ifftshift(A) undoes that shift. (...)
When the input is purely real, its transform is Hermitian, i.e., the component at frequency $f_k$ is the complex conjugate of the component at frequency $-f_k$, which means that for real inputs there is no information in the negative frequency components that is not already available from the positive frequency components. The family of rfft functions is designed to operate on real inputs, and exploits this symmetry by computing only the positive frequency components, up to and including the Nyquist frequency. Thus, n input points produce n/2+1 complex output points. The inverses of this family assume the same symmetry of their input, and for an output of n points use n/2+1 input points.
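A quick illustration of this ordering with numpy:

import numpy as np

freqs = np.fft.fftfreq(8)      # [ 0.  0.125  0.25  0.375 -0.5  -0.375 -0.25 -0.125]
print(freqs)
print(np.fft.fftshift(freqs))  # zero frequency moved to the middle: [-0.5 ... 0.375]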
Theoretical background: Nyquist–Shannon sampling theorem¶
Our function is band-limited to $[-F_B, F_B]$, where $B$ stands for bandwidth.
The Nyquist sampling theorem provides a prescription for the nominal sampling interval required to avoid aliasing and to perfectly reconstruct the signal:
The sampling frequency should be at least twice the highest frequency contained in the signal.
$F_S \geq 2 F_B$
$F_N = \frac{F_S}{2}$ is called the Nyquist frequency.
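For instance, with the 50 Hz screen refresh rate used in the script below:

F_S = 50.       # sampling (refresh) rate in Hz
F_N = F_S / 2.  # Nyquist frequency: the highest temporal frequency that can be displayed
print(F_N)      # 25.0 Hz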
Why should we use polar coordinates for frequencies?¶
For the $f_x$ and $f_y$ axes the units are cycles per pixel, and for the $f_t$ axis they are cycles per frame.
Example: $f_x = 0.25$ cpd (vertical grating)
$f_d = f_s = \sqrt{f_x^2 + f_y^2}$ is the magnitude of the spatial frequency in polar coordinates. The direction $$\theta = \arctan(f_x / f_y)$$ (w.r.t. Cartesian coordinates) can be seen as the rotation of the Gaussian envelope.
$f_{x_{\text{norm}}} = f_s \cos\theta$
$f_{y_{\text{norm}}} = f_s \sin\theta$
$f_{\text{spatial}}\ [\text{cyc/pix}] = (f_{\text{pref}}\ [\text{cpd}] \times VA) / N_X$
$V_X = \frac{V_{\text{drift}} \times T}{N_f}$
$f_t\ [\text{cyc/frame}] = f_{\text{spatial}} \times V_X$ (see the worked example below)
It's not a deformation: projections (marginalizations, I would say) of Gaussians are still Gaussians, hence
B_tf = B_sf*V
$V_{\max} = \frac{VA}{n_f \times dt} = \frac{VA}{T}$, i.e., one stimulus width per movie duration
$V_{X,\text{norm}} = (V_{\text{drift}} \times T) / VA_x$
(use $VA_x$ in degrees or $N_X$ in pixels for the horizontal displacement)
$V_{\max} = \text{width} / \text{duration}$, in deg/s
Example: $V_{\max} = 30\ \text{deg} / (64\ \text{frames} \times 0.02\ \text{s/frame}) \approx 23\ \text{deg/s}$
$V_x = 16$ deg/s
$V_{x,\text{norm}} = 0.68$
$f_t = V_{x,\text{norm}} \times f_{s,\text{norm}}$ (case of a drifting grating, no rotation in the x-y plane)
$f_t = 0.68 \times 0.03152 \approx 0.02$
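The same worked example, written out in code (a sketch using the values above; the normalized spatial frequency 0.03152 is taken as given):

VA = 30.                    # stimulus width in degrees of visual angle
N_frame = 64                # number of frames
dt = 0.02                   # frame period in seconds
T = N_frame * dt            # stimulus duration: 1.28 s

V_max = VA / T              # about 23 deg/s: one stimulus width per movie duration
V_drift = 16.               # desired drifting speed in deg/s
V_X_norm = V_drift / V_max  # about 0.68 (equivalently V_drift * T / VA)

f_s_norm = 0.03152          # normalized spatial frequency of the example grating
f_t = V_X_norm * f_s_norm   # about 0.02 cycles per frame (drifting grating, no rotation)
print(V_X_norm, f_t)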
Log-Gabor filter construction process in the Fourier domain (Hanssen and Hess, 2006, Journal of Vision)
an example physiology experiment using VSDI¶
%%writefile ../files/experiment_VSDI.py
#!/usr/bin/env python
"""
experiment_VSDI.py
Experiment designed for optical imaging, showing a conversion of sizes from Fourier
to screen coordinates and an export to a zipped file containing BMP files (as the
video card has limited memory).
(c) Paula Sanz Leon - INT/CNRS
"""
import os
import scipy
import MotionClouds as mc
import numpy as np
# uncomment to preview movies
# vext, display = None, True
#------------------------- Zipped grating ------------------------------- #
name = 'zipped_grating'
#initialize -
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
sf_0 = 0.2
B_sf = sf_0 / 4.
alpha = 1.0
V_X = 0.5
B_V = V_X / 10.
name_ = mc.figpath + name
for seed in range(424242, 424242+8):
    name_ = mc.figpath + name + '-seed-' + str(seed)
    mc.figures_MC(fx, fy, ft, name_, B_V=B_V, sf_0=sf_0, B_sf=B_sf, V_X=V_X, theta=np.pi/4., alpha=alpha, seed=seed, vext='.zip')
#-------------------- Narrowband vs Broadband experiment ---------------- #
vext = '.mpg'
#vext = '.mat'
#vext = '.zip'
#display = False
# Initialize frequency cube
N_X = 640
N_Y = 480
N_frame = 30 # a full period in time frames
fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)
# Experimental constants
contrast = 0.5
seeds = 1
VA = 38.0958 # VSG constant for a viewing distance of 570 mm.
framerate = 50. # Refreshing rate in [Hz]
T = 0.6 # Stimulus duration [s]
f_d = 0.5 # Desired spatial frequency [cpd]
# Clouds parameters
B_V = 0.2 # BW temporal frequency (speed plane thickness)
B_sf = 0.15 # BW spatial frequency
theta = 0.0 # Central orientation
B_theta = np.pi/12 # BW orientation
verbose = False
alpha = 1.0
# Get normalised units
sf_0=0.1
V_X=0.5
def physicalUnits2discreteUnits(sf0_cpd, Bsf_cpd, v_dps, B_dps, pixelPitch, viewingDistance, frameRate):
    """Convert stimulus parameters from physical units to machine (discrete) units.

    Example values for a laptop monitor:
        pixelPitch = 0.22/10 # in cm
        viewingDistance = 50 # in cm
        sf0_cpd = 4 # spatial frequency in cycles per degree
        Bsf_cpd = 1 # spatial frequency bandwidth in cpd
        v_dps = 4 # speed in degrees per second
        B_dps = 1 # speed bandwidth in degrees per second
        frameRate = 20 # in Hz
    """
    cmPerDegree = 2*viewingDistance*np.tan(np.radians(0.5)) # extent of one degree of visual angle on the screen
    pxPerDegree = cmPerDegree/pixelPitch
    sf0 = sf0_cpd/pxPerDegree # cycles per pixel
    Bsf = Bsf_cpd/pxPerDegree
    v = v_dps/frameRate*pxPerDegree # pixels per frame
    Bv = B_dps/frameRate*pxPerDegree
    return sf0, Bsf, v, Bv
# take monitor parameters
# planar display
monitorRefresh = 60
pixelPitch = 0.2865/10 # pixel size in cm
# experiment parameters
viewingDistance = 70 # in cm
frameRefresh = monitorRefresh/3
stimModRate = 1
numberOfReps = 12
framesPerMod = frameRefresh/stimModRate/2
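# A possible way to feed these monitor parameters into physicalUnits2discreteUnits
# (a sketch, not part of the original script, which keeps sf_0 and V_X hard-coded above;
#  the cpd and deg-per-second values here are placeholders):
# sf0, Bsf, v, Bv = physicalUnits2discreteUnits(sf0_cpd=f_d, Bsf_cpd=0.15, v_dps=4.,
#                                               B_dps=1., pixelPitch=pixelPitch,
#                                               viewingDistance=viewingDistance,
#                                               frameRate=frameRefresh)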
# Masks
# gaussian mask
sigma_mask_x = 0.15
sigma_mask_y = 0.2
x, y, t = mc.get_grids(N_X, N_Y, N_frame)
n_x, n_y = N_X, N_Y
gauss = np.exp(-(((x-172./n_x)**2/(2*sigma_mask_x**2)) + (((y-108./n_y)**2)/(2*sigma_mask_y**2))))
def tukey(n, r=0.5):
    '''The Tukey window, also known as the tapered cosine window, can be regarded as a
    cosine lobe of width r * N / 2 that is convolved with a rectangular window of width
    (1 - r / 2). At r = 0 it becomes rectangular, and at r = 1 it becomes a Hann window.
    http://www.mathworks.com/access/helpdesk/help/toolbox/signal/tukeywin.html
    '''
    # special cases
    if r <= 0:
        return np.ones(n) # rectangular window
    elif r >= 1:
        return np.hanning(n)
    # normal case
    x = np.linspace(0, 1, n)
    w = np.ones(x.shape)
    # first condition: 0 <= x < r/2
    first_condition = x < r/2
    w[first_condition] = 0.5 * (1 + np.cos(2*np.pi/r * (x[first_condition] - r/2)))
    # second condition (r/2 <= x < 1 - r/2): w stays equal to 1
    # third condition: 1 - r/2 <= x <= 1
    third_condition = x >= (1 - r/2)
    w[third_condition] = 0.5 * (1 + np.cos(2*np.pi/r * (x[third_condition] - 1 + r/2)))
    return w
# Tukey mask - fading effect
tw_x = tukey(n=n_x, r=0.15)
tw_y = tukey(n=n_y, r=0.15)
w = np.tile(((np.outer(tw_y,tw_x))), (N_frame,1,1))
tukey_mask = w.T
# Get Random Clouds
name_ = mc.figpath + name
for seed in [123456 + step for step in range(seeds)]:
    name__ = mc.figpath + name + '-seed-' + str(seed) + '-sf0-' + str(sf_0).replace('.', '_') + '-V_X-' + str(V_X).replace('.', '_')
    # broadband
    z = mc.envelope_gabor(fx, fy, ft, B_sf=B_sf, sf_0=sf_0, theta=theta, B_V=B_V, B_theta=B_theta, alpha=alpha)
    movie = mc.random_cloud(z, seed=seed)
    for label, mask in zip(['_mask', '_tukey_mask'], [gauss, tukey_mask]):
        name_ = name__ + '-cloud-' + label
        if mc.anim_exist(name_):
            masked = mc.rectif(movie*mask)
            mc.anim_save(masked, name_, display=False, vext=vext)
    # narrowband
    z = mc.envelope_gabor(fx, fy, ft, B_sf=B_sf/10., sf_0=sf_0, theta=theta, B_V=B_V, B_theta=B_theta, alpha=alpha)
    movie = mc.random_cloud(z, seed=seed)
    for label, mask in zip(['_mask', '_tukey_mask'], [gauss, tukey_mask]):
        name_ = name__ + '-blob-' + label
        if mc.anim_exist(name_):
            masked = mc.rectif(movie*mask)
            mc.anim_save(masked, name_, display=False, vext=vext)