pmwg is a package that implements an efficient and flexible Particle Metropolis within Gibbs sampler as outlined in the paper (Gunawan et al. 2020). The sampler estimates group-level parameter values, the covariance matrix relating the parameters to each other, and the individual-level parameter estimates (random effects) in a two-step process. Each iteration of the sampler consists of a Gibbs sampling step for the group-level/covariate estimates followed by a particle Metropolis step for the random effects.
The sampling proceeds through three stages: burn-in, adaptation, and a final sampling stage. Burn-in can be relatively short and should move the sample estimates from the start values to a plausible region of the parameter space quickly. The adaptation stage works the same way as burn-in; however, samples from this stage are used to generate a proposal distribution that creates samples more efficiently in the final sampling stage.
First we will load the pmwg package and explore the included dataset.
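A minimal way to do this, assuming the dataset is exported by the package as forstmann (the name used when creating the sampler below), is:

library(pmwg)
head(forstmann)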
##      subject condition stim resp     rt
## 1115       1         1    2    2 0.4319
## 1116       1         3    2    2 0.5015
## 1117       1         3    1    1 0.3104
## 1118       1         1    2    2 0.4809
## 1119       1         1    1    1 0.3415
## 1120       1         2    1    1 0.3465
The forstmann dataset is from (Forstmann et al. 2008) and is the cleaned data from an experiment in which participants performed a random dot motion task under one of three speed-accuracy manipulations (respond-accurately instructions, neutral instructions and respond-quickly instructions).
We can see that there are 5 columns:

- subject - which gives an ID for each of the 19 participants
- condition - the speed accuracy instructions for the trial
- stim - whether the dots were moving left or right
- resp - whether the participant responded left or right
- rt - the response time for the trial

We will use the rtdists package to look at threshold differences between the three conditions, to see whether there is evidence for different thresholds (the evidence required to trigger a response) in each of the three conditions. For a full explanation of the model please see the full documentation in Chapter 3.
Now that we have the data we will need to specify the log-likelihood function.
lba_loglike <- function(x, data) {
  # Parameters are sampled on the log scale, so transform back first
  x <- exp(x)
  # Non-decision time cannot exceed any observed response time
  if (any(data$rt < x["t0"])) {
    return(-1e10)
  }
  # Look up the threshold for each trial by condition
  # (indexing a named vector is faster than building names with "paste")
  bs <- x["A"] + x[c("b1", "b2", "b3")][data$condition]
  out <- rtdists::dLBA(
    rt = data$rt, # nolint
    response = data$stim,
    A = x["A"],
    b = bs,
    t0 = x["t0"],
    mean_v = x[c("v1", "v2")],
    sd_v = c(1, 1),
    distribution = "norm",
    silent = TRUE
  )
  # Guard against zero or non-finite densities before taking logs
  bad <- (out < 1e-10) | (!is.finite(out))
  out[bad] <- 1e-10
  out <- sum(log(out))
  out
}
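As a quick illustration of how this function will be called, the sketch below evaluates it for one subject's data with arbitrary parameter values (the values are purely illustrative and assume the rtdists package is installed):

# Illustrative only: arbitrary parameter values, passed on the log scale
test_x <- log(c(b1 = 1, b2 = 1, b3 = 1, A = 0.5, v1 = 2, v2 = 1, t0 = 0.2))
lba_loglike(test_x, data = forstmann[forstmann$subject == 1, ])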
This function contains the primary implementation of our model for the data. The pmwgs object that we run later in this example will pass two things:

- x - the named vector of proposed parameter values that we will be assessing.
- data (in our case a subset of the forstmann data) - the data for one subject.

Our job in the function is then to take the parameter values and calculate the log-likelihood of the data given those parameter values.
The overall process is as follows:

- Transform the sampled (log scale) parameter values back to the natural scale.
- Return a very low log-likelihood if the non-decision time exceeds any observed response time.
- Compute the threshold for each trial from its condition.
- Use the rtdists dLBA function to calculate the likelihood of the data given our parameter estimates.
- Replace zero or non-finite densities with a small value and return the sum of the log-likelihoods.

There are a couple of other necessary components to running the sampler: a list of parameter names and a prior for the group-level parameter values.
In this case the parameters consist of threshold parameters for each of the three conditions (b1, b2 and b3), an upper limit on the range for accumulator start points (A), drift rates for evidence accumulation for the two stimulus types (v1 and v2) and non-decision time (t0).
The prior for this model is uninformative: a mean of 0 for each group-level parameter and a standard deviation of 1 for each parameter.
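A sketch of how these two components might be defined is below. The parameter names match those used in lba_loglike; the prior element names theta_mu_mean and theta_mu_var reflect our reading of the pmwg interface, so check the package documentation if your version differs.

# Parameter names matching those used in lba_loglike
pars <- c("b1", "b2", "b3", "A", "v1", "v2", "t0")

# Uninformative group-level prior: mean 0 and variance 1 (sd 1) per parameter
# (element names assumed from the pmwg documentation)
priors <- list(
  theta_mu_mean = rep(0, length(pars)),
  theta_mu_var = diag(rep(1, length(pars)))
)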
Once we have these components we are ready to create and run the sampler.
# Create the Particle Metropolis within Gibbs sampler object
sampler <- pmwgs(
  data = forstmann,
  pars = pars,
  ll_func = lba_loglike,
  prior = priors
)
Creation of the sampler object is done through a call to the pmwgs function, which creates the object and precomputes a number of values based on the number of parameters and the other function arguments. The next steps are to initialise the sampler with start values for the random effects and then run the three stages of sampling.
The stages can be long-running, but once they are complete the sampler object will contain the posterior samples.
# Initialise (generate first random effects for sampler)
sampler <- init(sampler) # Can also pass start points for sampler
# Run each stage of the sampler, can adjust number of particles on each
sampler <- run_stage(sampler, stage = "burn")
sampler <- run_stage(sampler, stage = "adapt")
sampler <- run_stage(sampler, stage = "sample")
The run_stage command can also be passed other arguments, such as iter for the number of iterations and particles for the number of particles, among others. For a full list see the description in the PMwG Tutorial Book.
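For example, a burn-in run with explicitly chosen settings might look like the sketch below (the iteration and particle counts here are arbitrary):

# Illustrative only: explicitly set iterations and particles for burn in
sampler <- run_stage(sampler, stage = "burn", iter = 500, particles = 100)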
Forstmann, Birte U., Gilles Dutilh, Scott Brown, Jane Neumann, D. Yves von Cramon, K. Richard Ridderinkhof, and Eric-Jan Wagenmakers. 2008. “Striatum and Pre-SMA Facilitate Decision-Making Under Time Pressure.” Proceedings of the National Academy of Sciences 105 (45): 17538–42. https://doi.org/10.1073/pnas.0805903105.
Gunawan, D., G. E. Hawkins, M.-N. Tran, R. Kohn, and S. D. Brown. 2020. “New Estimation Approaches for the Hierarchical Linear Ballistic Accumulator Model.” Journal of Mathematical Psychology 96: 102368. https://doi.org/10.1016/j.jmp.2020.102368.