This vignette shows the package-native route from a fitted many-facet Rasch model to manuscript-oriented prose, tables, figure notes, and revision checks.
The reporting stack in mfrmr is organized around four objects:

- fit: the fitted model from fit_mfrm()
- diag: diagnostics from diagnose_mfrm()
- chk: the revision guide from reporting_checklist()
- apa: structured manuscript outputs from build_apa_outputs()

For a broader workflow view, see
vignette("mfrmr-workflow", package = "mfrmr"). For a
plot-first route, see
vignette("mfrmr-visual-diagnostics", package = "mfrmr").
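The examples below assume the package is loaded and that fit and diag already exist. A minimal setup sketch, using the bundled example_bias dataset and the fitting arguments shown later in this vignette (your own data and column names will differ):

```r
library(mfrmr)

# Illustrative setup only: the vignette's `fit` and `diag` objects are
# assumed to come from calls of this shape.
dat <- load_mfrmr_data("example_bias")
fit <- fit_mfrm(
  dat,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score  = "Score",
  method = "MML",
  model  = "RSM"
)
diag <- diagnose_mfrm(fit)
```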
Use reporting_checklist() first when the question is
“what is still missing?” rather than “how do I phrase the results?”
chk <- reporting_checklist(fit, diagnostics = diag)
head(
chk$checklist[, c("Section", "Item", "DraftReady", "Priority", "NextAction")],
10
)
#> Section Item DraftReady Priority
#> 1 Method Section Model specification TRUE ready
#> 2 Method Section Data description TRUE ready
#> 3 Method Section Precision basis TRUE ready
#> 4 Method Section Convergence TRUE ready
#> 5 Method Section Connectivity assessed TRUE ready
#> 6 Global Fit Standardized residuals TRUE ready
#> 7 Global Fit PCA of residuals FALSE medium
#> 8 Facet-Level Statistics Separation / strata / reliability TRUE ready
#> 9 Facet-Level Statistics Fixed/random variability summary TRUE ready
#> 10 Facet-Level Statistics RMSE and true SD TRUE ready
#> NextAction
#> 1 Available; adapt this evidence into the manuscript draft after methodological review.
#> 2 Available; adapt this evidence into the manuscript draft after methodological review.
#> 3 Report the precision tier as model-based in the APA narrative.
#> 4 Available; adapt this evidence into the manuscript draft after methodological review.
#> 5 Document the connectivity result before making common-scale or linking claims.
#> 6 Use standardized residuals as screening diagnostics, not as standalone proof of model adequacy.
#> 7 Run residual PCA if you want to comment on unexplained residual structure.
#> 8 Report facet reliability/separation directly in the APA results section.
#> 9 Use the fixed/random variability summary in the results text or table notes.
#> 10                          Available; adapt this evidence into the manuscript draft after methodological review.

Interpretation:

- DraftReady flags whether the current objects already support a section for drafting with the package's documented caveats.
- Priority shows what to resolve first.
- NextAction is the shortest package-native instruction for closing the gap.

mfrmr intentionally distinguishes model_based,
hybrid, and exploratory precision tiers.
prec <- precision_audit_report(fit, diagnostics = diag)
prec$profile
#> Method Converged PrecisionTier SupportsFormalInference HasFallbackSE
#> 1 MML TRUE model_based TRUE FALSE
#> PersonSEBasis NonPersonSEBasis
#> 1 Posterior SD (EAP) Observed information (MML)
#> CIBasis
#> 1 Normal interval from model-based SE
#> ReliabilityBasis
#> 1 Observed variance with model-based and fit-adjusted error bounds
#> HasFitAdjustedSE HasSamplePopulationCoverage
#> 1 TRUE TRUE
#> RecommendedUse
#> 1 Use for primary reporting of SE, CI, and reliability in this package.
prec$checks
#> Check Status
#> 1 Precision tier pass
#> 2 Optimizer convergence pass
#> 3 ModelSE availability pass
#> 4 Fit-adjusted SE ordering pass
#> 5 Reliability ordering pass
#> 6 Facet precision coverage pass
#> 7 SE source labels pass
#> Detail
#> 1 This run uses the package's model-based precision path.
#> 2 The optimizer reported convergence.
#> 3 Finite ModelSE values were available for 100.0% of rows.
#> 4 Fit-adjusted SE values were not smaller than their paired ModelSE values.
#> 5 Conservative reliability values were not larger than the model-based values.
#> 6 Each facet had sample/population summaries for both model and fit-adjusted SE modes.
#> 7                                  Person and non-person SE labels match the MML precision path.

Interpretation:

- This run sits in the model_based tier, which supports formal inference.
- Treat hybrid and exploratory outputs more conservatively, especially for SE-, CI-, and reliability-heavy prose.

build_apa_outputs() is the writing engine. It returns
report text plus a section map, note map, and caption map that all share
the same internal contract.
apa <- build_apa_outputs(
fit,
diagnostics = diag,
context = list(
assessment = "Writing assessment",
setting = "Local scoring study",
scale_desc = "0-4 rubric scale",
rater_facet = "Rater"
)
)
cat(apa$report_text)
#> Method.
#>
#> Design and data.
#> The analysis focused on Writing assessment in Local scoring study. A many-facet Rasch model
#> (MFRM) was fit to 768 observations from 48 persons scored on a 4-category scale (1-4). The
#> design included facets for Rater (n = 4), Criterion (n = 4). The rating scale was described
#> as 0-4 rubric scale.
#>
#> Estimation settings.
#> The RSM specification was estimated using MML in the native R MFRM package. Model-based
#> precision summaries were available for this run. Recommended use for this precision
#> profile: Use for primary reporting of SE, CI, and reliability in this package..
#> Optimization converged after 51 function evaluations (LogLik = -903.081, AIC = 1824.161,
#> BIC = 1865.955). Constraint settings: noncenter facet = Person; anchored levels = 0
#> (facets: none); group anchors = 0 (facets: none); dummy facets = none.
#>
#> Results.
#>
#> Scale functioning.
#> Category usage was adequate (unused categories = 0, low-count categories = 0), and
#> thresholds were ordered. Step/threshold summary: 3 step(s); estimate range = -1.30 to 1.35
#> logits; no disordered steps.
#>
#> Facet measures.
#> Person measures ranged from -2.02 to 2.33 logits (M = 0.03, SD = 1.01). Rater measures
#> ranged from -0.32 to 0.33 logits (M = -0.00, SD = 0.31). Criterion measures ranged from
#> -0.41 to 0.24 logits (M = 0.00, SD = 0.28).
#>
#> Fit and precision.
#> Overall fit was acceptable (infit MnSq = 0.99, outfit MnSq = 1.00). 1 of 56 elements
#> exceeded the 0.5-1.5 fit range. Largest misfit signals: Person:P023 (|metric| = 2.09);
#> Criterion:Organization (|metric| = 1.69); Person:P018 (|metric| = 1.45). Criterion
#> reliability = 0.91 (separation = 3.21). Person reliability = 0.90 (separation = 3.06).
#> Rater reliability = 0.93 (separation = 3.51). For Rater, exact agreement = 0.36, expected
#> exact agreement = 0.37, adjacent agreement = 0.83.
#>
#> Residual structure.
#> Exploratory residual PCA (overall standardized residual matrix) showed PC1 eigenvalue =
#> 2.10 (13.2% variance), with PC2 eigenvalue = 1.79. Facet-specific exploratory residual PCA
#> showed the largest first-component signal in Rater (eigenvalue = 1.55, 38.7% variance).
#> Heuristic reference bands: EV >= 1.4 (critical minimum), >= 1.5 (caution), >= 2.0 (common),
#> >= 3.0 (strong); variance >= 5% (minor), >= 10% (caution), >= 20% (strong).

apa$section_map[, c("SectionId", "Heading", "Available")]
#> SectionId Heading Available
#> 1 method_design Design and data TRUE
#> 2 method_estimation Estimation settings TRUE
#> 3 results_scale Scale functioning TRUE
#> 4 results_measures Facet measures TRUE
#> 5 results_fit_precision Fit and precision TRUE
#> 6 results_residual_structure Residual structure TRUE
#> 7 results_bias_screening Bias screening FALSE
#> 8           results_cautions   Reporting cautions     FALSE

Interpretation:

- report_text is the compact narrative output.
- section_map is the machine-readable map of what text blocks are available.

Use apa_table() when you want reproducible handoff
tables without rebuilding captions or notes by hand.
tbl_summary <- apa_table(fit, which = "summary")
tbl_reliability <- apa_table(fit, which = "reliability", diagnostics = diag)
tbl_summary$caption
#> [1] "Table 1\nFacet Summary (Measures, Precision, Fit, Reliability)"
tbl_reliability$note
#> [1] "Separation and reliability are based on observed variance, measurement error, and adjusted true variance. Overall fit: infit MnSq = 0.99, outfit MnSq = 1.00. Rater facet (Rater) reliability = 0.93, separation = 3.51. For Rater, exact agreement = 0.36, expected exact agreement = 0.37, adjacent agreement = 0.83."

The actual table data are stored in tbl_summary$table
and tbl_reliability$table.
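Because the table payloads can be written out with base R alone, handoff needs no extra tooling; a minimal sketch, assuming tbl_summary$table is a data frame and the working directory is writable:

```r
# Export the handoff table, keeping its generated caption alongside it.
# `tbl_summary` is the apa_table() result built above.
write.csv(tbl_summary$table, "facet_summary.csv", row.names = FALSE)
writeLines(tbl_summary$caption, "facet_summary_caption.txt")
```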
For reporting workflows, build_visual_summaries() is the
bridge between statistical results and figure-ready plot payloads.
vis <- build_visual_summaries(
fit,
diagnostics = diag,
threshold_profile = "standard"
)
names(vis)
#> [1] "warning_map" "summary_map" "warning_counts"
#> [4] "summary_counts" "crosswalk" "branch"
#> [7] "style" "threshold_profile"
names(vis$warning_map)
#> [1] "wright_map" "pathway_map" "facet_distribution"
#> [4] "step_thresholds" "category_curves" "observed_expected"
#> [7] "fit_diagnostics" "fit_zstd_distribution" "misfit_levels"
#> [10] "residual_pca_overall"   "residual_pca_by_facet"

When bias or local interaction screens matter, keep the wording conservative. The package treats these outputs as screening-oriented unless the current precision and design evidence justify stronger claims.
bias_df <- load_mfrmr_data("example_bias")
fit_bias <- fit_mfrm(
bias_df,
person = "Person",
facets = c("Rater", "Criterion"),
score = "Score",
method = "MML",
model = "RSM",
quad_points = 7
)
diag_bias <- diagnose_mfrm(fit_bias, residual_pca = "none")
bias <- estimate_bias(fit_bias, diag_bias, facet_a = "Rater", facet_b = "Criterion")
apa_bias <- build_apa_outputs(fit_bias, diagnostics = diag_bias, bias_results = bias)
apa_bias$section_map[, c("SectionId", "Available", "Heading")]
#> SectionId Available Heading
#> 1 method_design TRUE Design and data
#> 2 method_estimation TRUE Estimation settings
#> 3 results_scale TRUE Scale functioning
#> 4 results_measures TRUE Facet measures
#> 5 results_fit_precision TRUE Fit and precision
#> 6 results_residual_structure TRUE Residual structure
#> 7 results_bias_screening TRUE Bias screening
#> 8           results_cautions      TRUE   Reporting cautions

For a compact manuscript-oriented route:

- fit_mfrm()
- diagnose_mfrm()
- precision_audit_report()
- reporting_checklist()
- build_apa_outputs()
- apa_table()
- build_visual_summaries()
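Under the same assumptions as the examples above (a scored long-format data frame with Person, Rater, Criterion, and Score columns), that route can be sketched end to end:

```r
# Compact manuscript-oriented route, using only functions shown in this vignette.
fit  <- fit_mfrm(bias_df, person = "Person",
                 facets = c("Rater", "Criterion"),
                 score = "Score", method = "MML", model = "RSM")
diag <- diagnose_mfrm(fit)
prec <- precision_audit_report(fit, diagnostics = diag)  # precision tier first
chk  <- reporting_checklist(fit, diagnostics = diag)     # what is still missing?
apa  <- build_apa_outputs(fit, diagnostics = diag)       # narrative + maps
tbl  <- apa_table(fit, which = "summary")                # handoff table
vis  <- build_visual_summaries(fit, diagnostics = diag)  # figure payloads
```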