PRA includes an AI agent framework with three routing modes for user input:
| Input type | Route | Example |
|---|---|---|
| `/command` | Deterministic — executes the tool directly, no LLM | `/mcs tasks=[...]` |
| Numerical data for computation | LLM tool call — the model selects and calls the right tool | “Simulate 3 tasks: Normal(10,2)…” |
| Conceptual / explanatory question | RAG — answered from the knowledge base, no tool call | “What is earned value?” |
Three interfaces are available:

- Slash commands (`/mcs`, `/evm`, `/risk`, …): deterministic tool calls that bypass the LLM for instant, reliable results
- `pra_chat()`: a programmatic R chat object, powered by ellmer, in which the LLM selects tools or answers from RAG
- `pra_app()`: a browser-based experience combining all three modes, powered by shinychat

The LLM-backed interfaces use a local model. Download Ollama from https://ollama.com, then pull a model:
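For example, pulling the model used throughout this article (`llama3.2` matches the `pra_chat(model = "llama3.2")` call shown later):

```shell
ollama pull llama3.2
```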
Slash commands provide deterministic tool execution, no LLM required.
Type /help to see all available commands, or
/help <command> for detailed usage with argument
descriptions and examples.
Each command includes argument specifications, defaults, and examples:
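A minimal sketch of looking up one command's help (assuming `/help` accepts a bare command name, as in `/help mcs`):

```r
# Sketch: detailed help for the /mcs command, with arguments and examples
r <- PRA:::execute_command("/help mcs")
cat(r$result)
```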
Run a simulation for a 3-task project directly:
set.seed(42)
r <- PRA:::execute_command(
'/mcs n=10000 tasks=[{"type":"normal","mean":10,"sd":2},{"type":"triangular","a":5,"b":10,"c":15},{"type":"uniform","min":8,"max":12}]'
)
cat(r$result)
#> Monte Carlo Simulation Results (n = 10,000):
#>
#> Summary Statistics:
#> Mean 29.9804
#> SD 3.113
#> Min 19.502
#> Max 41.3892
#>
#> Percentiles:
#> P5 24.8726
#> P10 26.0194
#> P25 27.8881
#> P50 29.9536
#> P75 32.0796
#> P90 33.9873
#> P95 35.1014

# The /mcs command stores results for chaining — visualize them:
result <- PRA:::.pra_agent_env$last_mcs
hist(result$total_distribution,
freq = FALSE, breaks = 50,
main = "Monte Carlo Simulation Results",
xlab = "Total Project Duration/Cost",
col = "#18bc9c80", border = "white"
)
curve(dnorm(x, mean = result$total_mean, sd = result$total_sd),
add = TRUE, col = "#2c3e50", lwd = 2
)
abline(
v = quantile(result$total_distribution, c(0.50, 0.95)),
col = c("#3498db", "#e74c3c"), lty = 2, lwd = 1.5
)
legend("topright",
legend = c("Normal fit", "P50", "P95"),
col = c("#2c3e50", "#3498db", "#e74c3c"),
lty = c(1, 2, 2), lwd = c(2, 1.5, 1.5),
cex = 0.8, bg = "white"
)

After running /mcs, chain to /contingency for the reserve estimate:
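A minimal sketch, assuming `/contingency` reads the stored results from the last `/mcs` run and accepts a confidence-level argument (the argument name `level` is a guess, not confirmed by the package docs; `/help contingency` shows the real signature):

```r
# Sketch: reserve at the 95th percentile from the last /mcs run
# NOTE: the argument name `level` is an assumption
r <- PRA:::execute_command("/contingency level=0.95")
cat(r$result)
```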
Identify which tasks drive the most variance:
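A minimal sketch, assuming `/sensitivity` also operates on the stored results from the last `/mcs` run (as the command table notes) and needs no arguments:

```r
# Sketch: variance contribution per task from the last /mcs run
r <- PRA:::execute_command("/sensitivity")
cat(r$result)
```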
Full EVM analysis with a single command:
r <- PRA:::execute_command(
"/evm bac=500000 schedule=[0.2,0.4,0.6,0.8,1.0] period=3 complete=0.35 costs=[90000,195000,310000]"
)
cat(r$result)
#> Earned Value Management Analysis:
#>
#> Core Metrics:
#> Planned Value (PV) 300,000
#> Earned Value (EV) 175,000
#> Actual Cost (AC) 310,000
#>
#> Variances:
#> Schedule Variance (SV) -125,000
#> Cost Variance (CV) -135,000
#>
#> Performance Indices:
#> Schedule Performance Index (SPI) 0.5833
#> Cost Performance Index (CPI) 0.5645
#>
#> Forecasts:
#> EAC (Typical) 885,714.3
#> EAC (Atypical) 635,000
#> EAC (Combined) 1,296,939
#> Estimate to Complete (ETC) 575,714.3
#> Variance at Completion (VAC) -385,714.3
#> TCPI (to meet BAC) 1.7105

Calculate prior risk from two root causes:
r <- PRA:::execute_command(
"/risk causes=[0.3,0.2] given=[0.8,0.6] not_given=[0.2,0.4]"
)
cat(r$result)
#> Bayesian Risk Analysis (Prior):
#>
#> Risk Probability 0.82
#> Risk Percentage 82%
#> Number of Causes 2

Then update with observations — Cause 1 occurred, Cause 2 unknown:
r <- PRA:::execute_command(
"/risk_post causes=[0.3,0.2] given=[0.8,0.6] not_given=[0.2,0.4] observed=[1,null]"
)
cat(r$result)
#> Bayesian Risk Analysis (Posterior):
#>
#> Posterior Risk Probability 0.6316
#> Posterior Risk Percentage 63.16%
#>
#> Observations:
#> Cause 1: Occurred
#> Cause 2: Unknown

Quick analytical estimate without simulation:
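A minimal sketch of the `/smm` (Second Moment Method) command; the `tasks` argument format is an assumption borrowed from `/mcs`:

```r
# Sketch: analytical mean/variance estimate, no simulation required
# NOTE: the argument format is assumed to match /mcs
r <- PRA:::execute_command(
  '/smm tasks=[{"type":"normal","mean":10,"sd":2},{"type":"uniform","min":8,"max":12}]'
)
cat(r$result)
```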
Missing or invalid arguments produce helpful error messages:
# Missing required arguments
r <- PRA:::execute_command("/risk causes=[0.3]")
cat(r$result)
#> **Missing required argument(s):** given, not_given
#>
#> ### /risk — Bayesian Risk (Prior)
#>
#> Calculate prior risk probability from root causes using Bayes' theorem.
#>
#> **Arguments:**
#> - **causes** *(required)* — JSON array of cause probabilities, e.g. [0.3, 0.2]
#> - **given** *(required)* — JSON array of P(Risk | Cause), e.g. [0.8, 0.6]
#> - **not_given** *(required)* — JSON array of P(Risk | not Cause), e.g. [0.2, 0.4]
#>
#> **Examples:**
#> /risk causes=[0.3,0.2] given=[0.8,0.6] not_given=[0.2,0.4]

# Unknown command
r <- PRA:::execute_command("/simulate")
cat(r$result)
#> Unknown command: **/simulate**
#>
#> Did you mean one of these?
#> - /mcs
#> - /smm
#> - /contingency
#> - /sensitivity
#> - /evm
#> - /risk
#> - /risk_post
#> - /learning
#> - /dsm
#>
#> Type `/help` for a list of all commands.

The chat interface routes queries through the LLM, which decides whether to call a tool or answer from RAG context:
library(PRA)
chat <- pra_chat(model = "llama3.2")
# Tool call: user provides numerical data
chat$chat("Run a Monte Carlo simulation for a 3-task project with
Task A ~ Normal(10, 2), Task B ~ Triangular(5, 10, 15),
Task C ~ Uniform(8, 12). Use 10,000 simulations.")
# RAG: conceptual question, no computation needed
chat$chat("What is the difference between SPI and CPI?")

For guaranteed reliability with computations, use /commands instead. The chat interface is best suited for exploratory questions and interpretation.
For better accuracy with complex queries, supply a pre-configured ellmer chat object:
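A minimal sketch, using the `pra_chat(chat = ...)` form that also appears in the troubleshooting tips below (the model choice is illustrative):

```r
library(PRA)
# Route the agent through a cloud model instead of the local Ollama default
chat <- pra_chat(chat = ellmer::chat_openai(model = "gpt-4o"))
chat$chat("Run MCS then calculate contingency at 95%")
```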
llama3.2 (3B) handles simple single-tool queries; larger models (8B+) are more reliable for multi-step chains. For guaranteed deterministic results, use /commands.

For a browser-based experience with streaming responses and inline visualizations:
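Launch the Shiny app (shown here with defaults):

```r
library(PRA)
pra_app()  # opens the chat app in the browser
```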
The app supports all three input modes in the same chat panel: /commands (deterministic), natural language (LLM tool calls), and conceptual questions (RAG). Type /mcs tasks=[...] for instant deterministic results.

Clicking an example prompt executes the /command instantly and displays rich results with tables and plots.

The agent is enhanced with domain knowledge through retrieval-augmented generation (RAG). When RAG context is retrieved, the agent cites the source files in its response.
| File | Topics |
|---|---|
| `mcs_methods.md` | Distribution selection, correlation, interpreting percentiles |
| `evm_standards.md` | EVM metrics, performance indices, forecasting methods |
| `bayesian_risk.md` | Prior/posterior risk, Bayes’ theorem for root cause analysis |
| `learning_curves.md` | Sigmoidal models (logistic, Gompertz, Pearl), curve fitting |
| `sensitivity_contingency.md` | Variance decomposition, contingency reserves |
| `pra_functions.md` | PRA package function reference |
Citations appear in responses as `[Source: filename]` tags.

| Command | Description |
|---|---|
| `/mcs` | Monte Carlo simulation with task distributions |
| `/smm` | Second Moment Method (analytical estimate) |
| `/contingency` | Contingency reserve from last MCS |
| `/sensitivity` | Variance contribution per task |
| `/evm` | Full Earned Value Management analysis |
| `/risk` | Bayesian prior risk probability |
| `/risk_post` | Bayesian posterior risk after observations |
| `/learning` | Sigmoidal learning curve fit and prediction |
| `/dsm` | Design Structure Matrix |
| `/help` | List all commands or get help for one |
| Module | Tool | Use case |
|---|---|---|
| Simulation | `mcs_tool` | Full Monte Carlo with distributions |
| Analytical | `smm_tool` | Quick mean/variance estimate |
| Post-MCS | `contingency_tool` | Reserve at confidence level |
| Post-MCS | `sensitivity_tool` | Variance contribution per task |
| EVM | `evm_analysis_tool` | All 12 EVM metrics in one call |
| Bayesian | `risk_prob_tool` | Prior risk from root causes |
| Bayesian | `risk_post_prob_tool` | Posterior risk after observations |
| Bayesian | `cost_pdf_tool` | Prior cost distribution |
| Bayesian | `cost_post_pdf_tool` | Posterior cost distribution |
| Learning | `fit_and_predict_sigmoidal_tool` | Pearl/Gompertz/Logistic |
| DSM | `parent_dsm_tool` | Resource-task dependencies |
| DSM | `grandparent_dsm_tool` | Risk-resource-task dependencies |
PRA includes an evaluation framework for measuring LLM tool-calling
accuracy using the vitals
package. The evaluation suite in inst/eval/pra_eval.R tests
15 scenarios across three tiers:
| Tier | Description | Example |
|---|---|---|
| Single-tool | One tool call | “Simulate 3 tasks with distributions…” |
| Multi-tool chain | Sequential tool calls | “Run MCS then calculate contingency at 95%” |
| Open-ended | Requires interpretation | “My project is behind schedule, here’s EVM data…” |
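The suite can presumably be run by sourcing the installed script; a sketch, assuming the script is self-contained once vitals and a model are available (`system.file()` resolves the installed copy of `inst/eval/pra_eval.R`):

```r
# Sketch: locate and run the evaluation suite shipped with the package
eval_script <- system.file("eval", "pra_eval.R", package = "PRA")
source(eval_script)
```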
Small models sometimes describe what they would do rather than
actually calling tools. Use /commands for reliable
deterministic execution:
# Instead of asking the LLM:
chat$chat("Run a Monte Carlo simulation...")
# Use the /command directly in the app:
# /mcs tasks=[{"type":"normal","mean":10,"sd":2}]

Other workarounds for LLM chat:
- Use `llama3.1` (8B) or larger for better tool calling than 3B models
- Supply a cloud model: `pra_chat(chat = ellmer::chat_openai(model = "gpt-4o"))`
- Prefer a smaller model for speed (`llama3.2` is 3B, faster than `llama3.1` 8B)
- Verify the model is loaded (`ollama ps`)
- Disable RAG with `pra_chat(rag = FALSE)`
- Prefer `/commands`, which execute instantly without LLM overhead

If the agent does not cite sources in RAG-enabled responses, try the adjustments above.