Machine learning algorithms have been used to perform single imputation of missing data and, more recently, multiple imputation. However, this is the first attempt to use automated machine learning for both single and multiple imputation. Automated machine learning fine-tunes the model automatically, performing a random search for a model that minimizes error without overfitting the data. The main idea is to let the model tune its own parameters for each imputed variable separately, instead of applying fixed, predefined parameters to all variables in the dataset. Using automated machine learning, the package fine-tunes an Elastic Net (default), Gradient Boosting, Random Forest, Deep Learning, Extreme Gradient Boosting, or Stacked Ensemble model (built from one or a combination of the other supported algorithms) to impute the missing observations. This procedure is implemented for the first time by this package and is expected to outperform imputation packages that do not fine-tune their models.

Multiple imputation is implemented via bootstrapping, without letting the duplicated observations harm the cross-validation procedure by which the imputed variables are evaluated. Most notably, the package implements an automated procedure for imputing imbalanced data (the class rarity problem), which arises when one level of a factor variable is far more prevalent than the other(s). Such imbalance is known to produce biased predictions and hence biased imputation of missing data. The autobalancing procedure ensures that, instead of merely maximizing accuracy (i.e., minimizing classification error) when imputing factor variables, a fairer imputation procedure is applied.
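The following is a minimal sketch of how such an imputation might be run. It assumes the package's main imputation function is mlim() and that mlim.na() is available as a helper for injecting artificial missing values; the exact argument names should be verified against the reference manual (mlim.pdf).

# minimal sketch, not taken verbatim from the package documentation:
# mlim() is assumed to be the main imputation function and mlim.na()
# a helper for adding artificial NAs to a complete dataset
library(mlim)

# inject roughly 10% missing values into iris for demonstration
# (argument names here are assumptions; check the manual)
irisNA <- mlim.na(iris, p = 0.1, seed = 2022)

# single imputation with the default fine-tuned Elastic Net imputer
imputed <- mlim(irisNA, seed = 2022)

summary(imputed)

For multiple imputation via bootstrapping and for selecting other algorithms (e.g., Gradient Boosting or Stacked Ensemble), see the corresponding arguments documented in the reference manual.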
Version: 0.3.0
Depends: R (≥ 3.5.0)
Imports: h2o (≥ 3.34.0.0), curl (≥ 4.3.2), mice, missRanger, memuse, md.log (≥ 0.2.0)
Published: 2022-12-16
DOI: 10.32614/CRAN.package.mlim
Author: E. F. Haghish [aut, cre, cph]
Maintainer: E. F. Haghish <haghish at uio.no>
BugReports: https://github.com/haghish/mlim/issues
License: MIT + file LICENSE
URL: https://github.com/haghish/mlim, https://www.sv.uio.no/psi/english/people/aca/haghish/
NeedsCompilation: no
Materials: README
CRAN checks: mlim results
Reference manual: mlim.pdf
Package source: mlim_0.3.0.tar.gz
Windows binaries: r-devel: mlim_0.3.0.zip, r-release: mlim_0.3.0.zip, r-oldrel: mlim_0.3.0.zip
macOS binaries: r-release (arm64): mlim_0.3.0.tgz, r-oldrel (arm64): mlim_0.3.0.tgz, r-release (x86_64): mlim_0.3.0.tgz, r-oldrel (x86_64): mlim_0.3.0.tgz
Old sources: mlim archive
Please use the canonical form https://CRAN.R-project.org/package=mlim to link to this page.