Meta-Model Builder guides you from raw data to side-by-side model performance in a private, self-contained environment — no IT, no configuration, no data sharing.
Workflow
Four guided steps take you from a raw CSV file to a fully validated model comparison — no expertise required.
Upload your CSV and receive instant data quality feedback — missing values, class imbalance, and unstable predictors flagged automatically before any training begins.
Select your target variable and train multiple classifiers simultaneously using stratified splits, cross-validation, or time-series evaluation strategies.
Review accuracy, precision, recall, F1 score, and ROC AUC per model. Confusion matrices and ROC curves update in real time with cost-optimal threshold calibration.
Compare all models side by side in a sortable table with best values per metric automatically highlighted — then export a production-ready model bundle.
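For readers who want to see what such a workflow looks like in code, here is a minimal sketch assuming a Python/scikit-learn stack (the tool does not claim to use this stack; the synthetic data and the three classifiers shown in the preview are used for illustration only):

```python
# Hedged sketch: compare several classifiers side by side, scikit-learn style.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
# A stratified split keeps the class ratio identical in train and test sets.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
}
results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    results[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred),
        "recall": recall_score(y_te, pred),
        "f1": f1_score(y_te, pred),
        "roc_auc": roc_auc_score(y_te, proba),
    }
for name, m in results.items():
    print(name, {k: round(v, 2) for k, v in m.items()})
```

The comparison table in the app corresponds to printing one such metrics row per trained model.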
Features
Your data is screened and validated before a single model trains. Column names, missing values, class imbalance, and unstable predictors are all flagged with clear, actionable notices.
Train and compare multiple classification approaches in a single session. Best-performing metrics are highlighted automatically so you can identify your strongest model at a glance.
Privacy & Trust
All computation runs in your local session. No data is ever transmitted, stored, or shared externally — making it safe for sensitive datasets in regulated environments.
Audience
Run fast feasibility assessments on your own data without writing code or involving IT. Test the predictive hypothesis yourself before escalating to a full analytical engagement.
Produce rapid, client-demonstrable model comparisons during the scoping phase of engagements — without committing to a full technical build or sharing client data with any third party.
Explore the full binary classification pipeline interactively in a single session — no software installation, no programming required. Ideal for teaching data quality, model evaluation, and the train/test split.
Positioning
The only tool that combines local privacy, no-code access, and multi-model comparison in a single session.
No manual setup, no formulas, no plugins required. From CSV to model comparison in under 5 minutes — Excel can't train or validate a classifier.
Purpose-built for predictive classification, not dashboarding. Includes confusion matrices, ROC curves, and cost-optimal threshold calibration out of the box.
Your data never leaves your machine — no upload, no account, no data sharing agreement. No GDPR risk, no HIPAA exposure, no sovereignty concerns.
Metrics
Every metric is explained in plain language so you can interpret results without statistical training.
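As a plain-language reference, each headline metric reduces to a simple ratio over the four confusion-matrix counts. A quick illustration using the hypothetical counts shown in the interactive preview:

```python
# Core classification metrics from confusion-matrix counts (illustrative numbers).
tp, fn, fp, tn = 82, 19, 13, 86

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # share of all predictions that are correct
precision = tp / (tp + fp)                    # of predicted positives, how many are real
recall    = tp / (tp + fn)                    # of real positives, how many were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(accuracy, 2), round(precision, 2),
      round(recall, 2), round(f1, 2))  # → 0.84 0.86 0.81 0.84
```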
Try the interactive mockup and explore the full pipeline — no installation, no sign-up.
Honest Assessment
This tool is designed for rapid feasibility assessment. Here is what it does — and does not — do.
For Investors
The global credit analytics and AutoML market is large, fragmented, and underserved at the privacy-first, no-code tier.
Our pricing is designed to reflect actual compute costs — clients pay fairly based on the scale of their data, with no surprises and no waste. Detailed pricing methodology is available on request.
Mission
"We believe predictive decisions should be explainable, auditable, and fair. Our platform is built on open science, rigorous validation, and a commitment to responsible AI."
Client data is processed under strict data processing agreements. We never access raw client data without explicit written consent. All model outputs are labelled as indicative and are designed to support — not replace — human judgement.
Pricing
Start for free. Scale when you're ready.
Full pipeline access with no sign-up and no login required. All four workflow steps, all three classifiers, model export included. For individuals, students, and early adopters exploring feasibility.
For organisations needing scale, controlled deployment, support, or compliance assurances. Priced per organisation — not per seat. Includes compliance documentation, SLA, and team access management.
Get in Touch →
The tool is free. Analytical interpretation, validation, methodology review, and follow-on work are commercially scoped. Ideal for professional services contexts where expert guidance is as important as the tool itself.
Discuss a Project →
The People
Meta-Model Builder was built by a small team passionate about making predictive analytics accessible to everyone, regardless of technical background. We combine deep expertise in applied statistics, software engineering, and responsible AI to deliver tools that are rigorous, transparent, and genuinely useful.
Placeholder one-line biography describing background, expertise, and focus area. Replace with real content.
Placeholder one-line biography describing background, expertise, and focus area. Replace with real content.
Placeholder one-line biography describing background, expertise, and focus area. Replace with real content.
Placeholder one-line biography describing background, expertise, and focus area. Replace with real content.
Interactive Preview
Drag & drop your CSV here
or click to browse · Max 100 MB · CSV files only
| Age | Income (€k) | Tenure (yr) | Credit Score | Purchased |
|---|---|---|---|---|
| 34 | 52 | 3 | 720 | Yes |
| 45 | 88 | 7 | 680 | No |
| 29 | 41 | 1 | 611 | Yes |
| 52 | 115 | 12 | 755 | No |
We check your data quality before any training begins. Missing values, unstable predictors, and identifier-like columns are screened automatically.
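Checks like these can be approximated in a few lines of pandas. The sketch below is illustrative only — the thresholds, column names, and notice wording are assumptions, not the tool's actual screening rules:

```python
import pandas as pd

def screen(df: pd.DataFrame, target: str) -> list[str]:
    """Flag common data-quality issues before training (illustrative thresholds)."""
    notices = []
    # Missing values per column.
    for col, frac in df.isna().mean().items():
        if frac > 0:
            notices.append(f"{col}: {frac:.0%} missing values")
    # Identifier-like columns: nearly every value unique.
    for col in df.columns:
        if col != target and df[col].nunique() >= 0.95 * len(df):
            notices.append(f"{col}: looks like an identifier (almost all values unique)")
    # Class imbalance in the target.
    minority = df[target].value_counts(normalize=True).min()
    if minority < 0.2:
        notices.append(f"{target}: minority class is only {minority:.0%} of rows")
    return notices

# Hypothetical demo frame: an ID column and a 90/10 imbalanced target.
demo = pd.DataFrame({"customer_id": range(50),
                     "age": [30] * 50,
                     "purchased": ["yes"] * 45 + ["no"] * 5})
for n in screen(demo, "purchased"):
    print("Notice:", n)
```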
Select which models to compare:
(button active once data is uploaded)
Select which models to compare and how to split your data. Stratified splitting preserves the original class ratio in both the training and test sets.
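Stratified splitting and the cross-validation strategy mentioned earlier are each one line in scikit-learn. A sketch under the same stack assumption as before (not the tool's actual code):

```python
# Stratified 5-fold cross-validation on an imbalanced synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, weights=[0.7, 0.3], random_state=0)

# Each fold preserves the roughly 70/30 class ratio, so every evaluation
# split is representative of the full dataset.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.2f} (+/- {scores.std():.2f})")
```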
Showing: Random Forest · Positive class: Yes
TP: 82 · FN: 19 · FP: 13 · TN: 86
AUC = 0.91 · Threshold = 0.50
Each model gets its own performance breakdown. Adjust the decision threshold using the cost-weighting controls to optimise for your specific false-positive and false-negative costs.
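Cost-weighted threshold calibration can be sketched as a scan over candidate thresholds, keeping the one that minimises the weighted sum of false-positive and false-negative errors. The cost values and function name below are illustrative assumptions:

```python
import numpy as np

def best_threshold(y_true, proba, cost_fp=1.0, cost_fn=5.0):
    """Scan decision thresholds and return the one minimising weighted cost."""
    thresholds = np.linspace(0.0, 1.0, 101)
    costs = []
    for t in thresholds:
        pred = (proba >= t).astype(int)
        fp = np.sum((pred == 1) & (y_true == 0))
        fn = np.sum((pred == 0) & (y_true == 1))
        costs.append(cost_fp * fp + cost_fn * fn)
    return thresholds[int(np.argmin(costs))]

# Tiny demo: with false negatives 5x as costly, a lower threshold wins.
y_true = np.array([0, 0, 1, 1, 1])
proba = np.array([0.2, 0.4, 0.3, 0.7, 0.9])
t = best_threshold(y_true, proba, cost_fp=1.0, cost_fn=5.0)
print(f"cost-optimal threshold: {t:.2f}")
```

The default 0.50 threshold shown in the preview is simply the special case where both error types are weighted equally.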
| Model | Accuracy | Precision | Recall | F1 Score | ROC AUC |
|---|---|---|---|---|---|
| Random Forest | 0.84 | 0.86 | 0.81 | 0.84 | 0.91 |
| Logistic Regression | 0.83 | 0.80 | 0.84 | 0.82 | 0.88 |
| Naive Bayes | 0.79 | 0.76 | 0.78 | 0.77 | 0.85 |
Best values per metric are highlighted automatically so you can identify the strongest model at a glance. Export the best model as a production-ready bundle for use in the Predictor app.
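Exporting a fitted model plus its decision settings as one bundle is a common pattern; a minimal sketch using `joblib` (the tool's actual bundle format is not specified here, and the filename and metadata keys are assumptions):

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Persist the fitted model plus minimal metadata as a single artefact.
bundle = {"model": model, "positive_class": 1, "threshold": 0.5}
joblib.dump(bundle, "model_bundle.joblib")

# Later (e.g. in a downstream predictor app): reload and score new rows.
loaded = joblib.load("model_bundle.joblib")
pred = loaded["model"].predict(X[:5])
```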
Join our pilot cohort or reach out to discuss institutional access and consulting engagements.