One key idea behind the development of the multifear package is that the user has to do as little as possible before getting a result. This, however, means that the user has to inspect the code in order to see what exactly is going on – a common problem when trying to create user-friendly software. As we realize that not all users are familiar with R, or simply do not have the time to go through thousands of lines of code, here we present the key points of what happens when the code is run, separately for each function. Similar information is provided in the corresponding help files, but here everything is in one place. The functions are presented in the order in which they are called – i.e., from data preparation, to the different tests, to wrapper functions – and not alphabetically.
This function is used internally to separate the conditioned responses into blocks of trials. This means that although the user provides the conditioned responses for each trial separately (e.g., trial 1, trial 2, … trial X), the function separates the trials according to how conditioned responses are commonly aggregated in the literature. Specifically, we have the following columns:
“id” = id column
“top10per” = mean of the top 10% of trials
“bottom10per” = mean of the bottom 10% of trials
“minfirst” = mean of all trials excluding the first one
“all” = mean of all trials
“t33per” = mean of the top 33% of trials
“m33per” = mean of the middle 33% of trials
“b33per” = mean of the bottom 33% of trials
“fhalf” = mean of the first half of the trials
“lhalf” = mean of the last half of the trials
“ftrial” = first trial
“ltrial” = last trial
“t20per” = mean of the top 20% of trials
“b20per” = mean of the bottom 20% of trials
“f2trial” = mean of the first two trials
“l2trial” = mean of the last two trials
“per2_X” = mean per two trials (i.e., per block), with X referring to the block number.
One question is what happens when the trials cannot be split into equally sized (‘non-perfect’) blocks (e.g., when we separate 9 trials into 2 blocks). In these cases, the last block gets the extra trial.
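As a small, hypothetical illustration of this rule (this is not the package’s internal code), splitting 9 trials into 2 blocks leaves the extra trial in the last block:

```r
trials   <- paste0("trial", 1:9)                 # 9 trials
n_blocks <- 2
base_size <- floor(length(trials) / n_blocks)    # 4 trials per block
sizes <- c(rep(base_size, n_blocks - 1),
           length(trials) - base_size * (n_blocks - 1))   # last block gets 5

blocks <- split(trials, rep(seq_len(n_blocks), times = sizes))
blocks
# $`1`: trial1 ... trial4
# $`2`: trial5 ... trial9  (the last block holds the extra trial)
```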
This is just a wrapper function for running chop_cs() twice – once for cs1 and once for cs2.
This is just a small function computing the row means of a data frame, where in our case the data frame includes the conditioned responses.
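A toy illustration of the row-means idea (the column names here are hypothetical, not the package’s):

```r
# One row per participant, one column per trial of conditioned responses
cr_data <- data.frame(trial1 = c(2, 3), trial2 = c(4, 5), trial3 = c(6, 7))
rowMeans(cr_data)   # one mean conditioned response per row (participant)
```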
At this point this function is not used extensively in the package, nor does it have any effect when it is used; it was created for future use, where exclusion criteria will be implemented in the package.
This function is used for running Bayesian t-tests. The workhorse of this function is the ttestBF() function in the BayesFactor package. The reader is advised to also consult the help file of that function. Specifically, the function begins by ‘deciding’ whether to run an independent-samples or a paired t-test. This is done based on the group parameter: if it is set to NULL, the function will run a paired-samples t-test; if it has a value, the function will run an independent-samples t-test.
Then, the function will run three separate t-tests, one for each prior null interval: a) from -Inf to +Inf, b) from -Inf to 0, and c) from 0 to +Inf. For more information on that, check the nullInterval argument of the ttestBF() function. The rscale argument – the scale factor of the Cauchy prior – is set to 0.707; see the help file for alternatives.
After running the tests, the function selects the two t-tests that are most relevant for the particular phase: options a and c for the acquisition phase, and options a and b for the extinction phase.
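As a rough sketch (not the package’s exact code), the three priors could be requested from BayesFactor::ttestBF() as follows; the data are simulated and the object names are illustrative. For an independent-samples test one would supply the two groups and drop paired = TRUE.

```r
library(BayesFactor)

# Hypothetical mean conditioned responses for CS+ and CS- (30 participants)
cs1 <- rnorm(30, mean = 5)
cs2 <- rnorm(30, mean = 4)

# a) two-sided prior: null interval from -Inf to +Inf (the default)
bf_a <- ttestBF(x = cs1, y = cs2, paired = TRUE, rscale = 0.707)

# b) and c) one-sided priors: from -Inf to 0, and from 0 to +Inf
bf_b <- ttestBF(x = cs1, y = cs2, paired = TRUE, rscale = 0.707,
                nullInterval = c(-Inf, 0))
bf_c <- ttestBF(x = cs1, y = cs2, paired = TRUE, rscale = 0.707,
                nullInterval = c(0, Inf))
```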
The logic followed is largely the same as in the bt_test_mf() function, but this time we do not consider any priors or scale factors. The t-tests are run with the stats::t.test() function, and the effect size (Cohen’s d) is computed with a separate effect-size function – see the help file for details. Please note that if we let R decide which test to run – as is done by default in stats::t.test() – some comparisons would be run as a Student t-test whereas others would not.
There are two different ways to compute the meta-analytic effect sizes, and the results may differ. With the “t_to_eta2” option, eta squared is computed from the t values, whereas with the “d_to_eta2” option, eta squared is computed from the Cohen’s d value.
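A minimal sketch of the two conversion routes (illustrative values; this is not necessarily the exact route the package takes internally):

```r
library(effectsize)

t_value  <- 2.50   # illustrative t statistic
df_error <- 29     # illustrative error degrees of freedom

# "t_to_eta2"-style conversion: eta squared directly from the t value
t_to_eta2(t = t_value, df_error = df_error)

# "d_to_eta2"-style conversion: via Cohen's d; for two equal-sized groups
# d = 2t / sqrt(df) and eta^2 = d^2 / (d^2 + 4)
d <- 2 * t_value / sqrt(df_error)
d^2 / (d^2 + 4)
```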
The main function of the repeated measures ANOVA. The function runs full ANOVAs using all possible factors. This means that if the group argument is defined, the group factor is taken into account. If the time argument is defined, then time is included in the ANOVA. If neither of these arguments is defined, then the only factor included is the cs factor with two levels (cs1 vs. cs2).
The interesting part is the meta-analytic effect size. This is the omega squared effect size together with its confidence intervals. This is what the package does at the moment; in future versions, where there will be much more flexibility in the factors, this effect size may change.
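A hedged sketch of such an ANOVA and its omega squared, using afex and effectsize with simulated data and assumed column names (id, cr, cs, time); the package may organise this differently internally:

```r
library(afex)        # aov_ez() for the repeated measures ANOVA
library(effectsize)  # omega_squared() for the meta-analytic effect size

# Hypothetical long-format data: 20 participants x 2 CSs x 2 time points
long_data <- expand.grid(id   = factor(1:20),
                         cs   = c("cs1", "cs2"),
                         time = c("t1", "t2"))
long_data$cr <- rnorm(nrow(long_data))

fit <- aov_ez(id = "id", dv = "cr", data = long_data,
              within = c("cs", "time"))
omega_squared(fit, ci = 0.95)   # omega squared with 95% confidence intervals
```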
This function starts like the rm_anova_mf() function, by determining the different factors of the ANOVA. Then, it just runs a simple Bayesian ANOVA with the BayesFactor package.
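A minimal sketch of such a Bayesian ANOVA via BayesFactor::anovaBF(), again with simulated data and assumed column names:

```r
library(BayesFactor)

# Hypothetical long-format data (columns: id, cs, time, cr)
long_data <- expand.grid(id   = factor(1:20),
                         cs   = factor(c("cs1", "cs2")),
                         time = factor(c("t1", "t2")))
long_data$cr <- rnorm(nrow(long_data))

# Bayesian repeated measures ANOVA with Subject as a random (nuisance) factor
anovaBF(cr ~ cs * time + id, data = long_data, whichRandom = "id")
```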
This is the most complicated function of the whole package. The goal of the function is to run mixed-effects models for the multiverse analysis: it fits different mixed-effects models and performs model comparisons. The function starts with the preparation of the data. Then, as also mentioned in the help files, the function by default performs two standardizations of the dependent variable, one per subject and one without taking subject into account. In case time is included, the function places the intercept – i.e., the 0 point – at the middle of the time sequence.
The following models are then run and compared: a) an intercept-only model, b) an intercept plus CS model, and c) an intercept plus CS × Time interaction model. Separate models are run with Subject as a random factor, as well as with Subject and Time as random factors. Each model is fit by maximizing the log-likelihood (i.e., method = “ML” in nlme::lme), and the model comparison is done using BIC.
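A rough sketch, under assumed column names and simulated data, of how such a set of nlme::lme() models could be fit and compared; the package’s internal code is more involved:

```r
library(nlme)

# Hypothetical long-format data (columns: id, cs, time, cr); time is centred so
# that 0 falls in the middle of the trial sequence, as described above.
set.seed(1)
long_data <- expand.grid(id = factor(1:20), cs = factor(c("cs1", "cs2")),
                         trial = 1:10)
long_data$time <- long_data$trial - mean(long_data$trial)
long_data$cr   <- rnorm(20)[long_data$id] +                        # subject-specific level
                  rnorm(20, 0, 0.2)[long_data$id] * long_data$time + # subject-specific trend
                  rnorm(nrow(long_data))                           # residual noise

# Competing fixed-effect structures with Subject as a random factor, fit with ML
m0 <- lme(cr ~ 1,         random = ~ 1 | id, data = long_data, method = "ML")
m1 <- lme(cr ~ cs,        random = ~ 1 | id, data = long_data, method = "ML")
m2 <- lme(cr ~ cs * time, random = ~ 1 | id, data = long_data, method = "ML")

# The same fixed effects with Subject and Time as random factors
m3 <- lme(cr ~ cs * time, random = ~ time | id, data = long_data, method = "ML")

BIC(m0, m1, m2, m3)   # model comparison via BIC
```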
This is the main function for running the multiverse. It is actually just a wrapper around all the different functions described above. How it works: the function starts by determining whether there are any groups that need to be accounted for (this is based on the group argument of the function). Based on that, it then decides what types of ANOVAs, t-tests, and mixed models should be run – in both a frequentist and a Bayesian way. As such, it is just a wrapper of all the functions for running all tests.
Importantly, for the ANOVAs it will run all possible interactions and will report the effects of the highest-order interaction – if there are interactions in the data in the first place.
This function just runs multiple instances of universe_cs(), but this time using different data sets, each resulting from a different exclusion criterion. Think of it as a loop across the different data sets created by the chop_cs() function.
Just a forest plot showing all effect sizes and their confidence intervals. The effect sizes are calculated with the universe_mf or multiverse_mf function.
This is a simple function that tries to summarize the findings in terms of proportions and averages of p-values/Bayes factors that fall above or below a specific cutoff. This is done with simple maths – proportions and averages.
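A toy illustration of the summary idea (hypothetical numbers and cutoffs, not package output):

```r
p_values      <- c(0.01, 0.04, 0.20, 0.03, 0.08)
bayes_factors <- c(12.1, 3.4, 0.8, 6.7, 1.2)

mean(p_values < 0.05)       # proportion of p-values below .05
mean(bayes_factors > 3)     # proportion of Bayes factors above 3
mean(p_values)              # average p-value
mean(bayes_factors)         # average Bayes factor
```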
This function just plots the results of the
inference_cs()
function.
This is just an example data set that you can use for playing with the package.