🔒 Privacy-First · Local Execution

Discover whether your data can predict outcomes — in minutes, without code.

Meta-Model Builder guides you from raw data to side-by-side model performance in a private, self-contained environment — no IT, no configuration, no data sharing.

No installation required · Runs locally in your environment · Built for analysts, consultants, and educators
View Documentation

How It Works

Four guided steps take you from a raw CSV file to a fully validated model comparison — no expertise required.

Data Preview

Upload your CSV and receive instant data quality feedback — missing values, class imbalance, and unstable predictors flagged automatically before any training begins.
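The screening step can be sketched in a few lines of pandas. This is a sketch of the idea only, not the tool's actual implementation; the `screen` helper and the 10% imbalance cutoff are hypothetical:

```python
import pandas as pd

def screen(df: pd.DataFrame, target: str) -> list[str]:
    """Flag common data quality issues before any model trains."""
    notices = []
    # Missing values per column
    for col, n_missing in df.isna().sum().items():
        if n_missing > 0:
            notices.append(f"{col}: {n_missing} missing values")
    # Class imbalance in the target column (hypothetical 10% cutoff)
    shares = df[target].value_counts(normalize=True)
    if shares.min() < 0.10:
        notices.append(f"{target}: minority class is only {shares.min():.0%}")
    # Identifier-like columns: a distinct value on every row
    for col in df.columns.drop(target):
        if df[col].nunique() == len(df):
            notices.append(f"{col}: looks like an identifier")
    return notices
```

Running this on a small frame with a missing value and an ID column produces one notice per issue.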

Model Training

Select your target variable and train multiple classifiers simultaneously using stratified splits, cross-validation, or time-series evaluation strategies.
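In scikit-learn terms, the stratified-split variant of this step looks roughly like the sketch below. The three classifiers are those named later in the mockup; the synthetic dataset is a stand-in for an uploaded CSV, and the tool's internals may differ:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for an uploaded CSV
X, y = make_classification(n_samples=500, random_state=0)

# Stratified 80/20 split: both sets keep the original class proportions
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
# Fit each model on the training set, score it on the held-out test set
accuracy = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```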

Model Performance

Review accuracy, precision, recall, F1 score, and ROC AUC for each model. Confusion matrices and ROC curves update in real time, with cost-optimal threshold calibration.

Model Comparison

Compare all models side by side in a sortable table with best values per metric automatically highlighted — then export a production-ready model bundle.
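One plausible shape for such an export is a serialised dictionary holding the fitted model plus the metadata needed to interpret it later. The bundle contents here are an assumption for illustration, not the tool's documented format:

```python
import os
import tempfile

import joblib  # ships as a dependency of scikit-learn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical bundle: the fitted model plus what a consumer needs to
# interpret its outputs (positive class, decision threshold)
bundle = {"model": model, "positive_class": 1, "threshold": 0.5}

path = os.path.join(tempfile.mkdtemp(), "model_bundle.joblib")
joblib.dump(bundle, path)
restored = joblib.load(path)  # the round-tripped model predicts identically
```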

What Makes It Different

Instant Data Quality Feedback

Your data is screened and validated before a single model trains. Column names, missing values, class imbalance, and unstable predictors are all flagged with clear, actionable notices.

Multi-Model Comparison

Train and compare multiple classification approaches in a single session. Best-performing metrics are highlighted automatically so you can identify your strongest model at a glance.

Completely Private

All computation runs in your local session. No data is ever transmitted, stored, or shared externally — making it safe for sensitive datasets in regulated environments.

Built for Privacy and Trust

  • Runs entirely on your machine — no server, no cloud infrastructure, no third-party processing. Your data stays where it belongs.
  • No data upload, no storage, no tracking — data is read into memory for your active session only and is never written to disk or transmitted anywhere.
  • Ethical framing built in — outputs are clearly labelled as indicative, not decisions. A persistent advisory reminds users that model results should not be the sole basis for consequential choices.
  • Transparent positive class encoding — the tool explicitly discloses which class is treated as positive and why, so metric interpretation is never ambiguous.
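The last point matters in practice: identical predictions score differently depending on which label counts as positive. A minimal illustration with scikit-learn (the labels are hypothetical):

```python
from sklearn.metrics import precision_score

y_true = ["Yes", "No", "Yes", "Yes", "No"]
y_pred = ["Yes", "Yes", "Yes", "No", "No"]

# Treating "Yes" as positive: 3 positive predictions, 2 correct
p_yes = precision_score(y_true, y_pred, pos_label="Yes")  # 2/3
# Treating "No" as positive: 2 positive predictions, 1 correct
p_no = precision_score(y_true, y_pred, pos_label="No")    # 1/2
```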

Who It's For

Domain Experts

Healthcare, Finance, HR & Marketing

Run fast feasibility assessments on your own data without writing code or involving IT. Test the predictive hypothesis yourself before escalating to a full analytical engagement.

Consultants

Independent Analysts & Freelancers

Produce rapid, client-demonstrable model comparisons during the scoping phase of engagements — without committing to a full technical build or sharing client data with any third party.

Education

Educators & Students

Explore the full binary classification pipeline interactively in a single session — no software installation, no programming required. Ideal for teaching data quality, model evaluation, and the train/test split.

Why Meta-Model Builder?

The only tool that combines local privacy, no-code access, and multi-model comparison in a single session.

vs.

Spreadsheets & Excel

No manual setup, no formulas, no plugins required. From CSV to model comparison in under 5 minutes — Excel can't train or validate a classifier.

vs.

BI Platforms (Tableau, Power BI)

Purpose-built for predictive classification, not dashboarding. Includes confusion matrices, ROC curves, and cost-optimal threshold calibration out of the box.

vs.

Cloud AutoML Tools

Your data never leaves your machine — no upload, no account, no data sharing agreement. No GDPR risk, no HIPAA exposure, no sovereignty concerns.

Understanding Your Results

Every metric is explained in plain language so you can interpret results without statistical training.

Accuracy
How often the model predicts the correct outcome overall
Precision
When the model predicts positive, how often is it right?
Recall
How well the model detects all actual positive cases
F1 Score
A combined measure of precision and recall
ROC AUC
How well the model separates the two outcomes — closer to 1.0 is better
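Four of these metrics follow directly from confusion-matrix counts. Using the example counts shown in the mockup below:

```python
# Confusion-matrix counts (TP, FN, FP, TN) from the example below
tp, fn, fp, tn = 82, 19, 13, 86
total = tp + fn + fp + tn

accuracy = (tp + tn) / total              # 0.84
precision = tp / (tp + fp)                # ~0.86
recall = tp / (tp + fn)                   # ~0.81
f1 = 2 * precision * recall / (precision + recall)  # ~0.84

# ROC AUC is the exception: it needs the model's predicted
# probabilities across all thresholds, not a single set of counts.
```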

Ready to see what your data can do?

Try the interactive mockup and explore the full pipeline — no installation, no sign-up.

What to Expect

This tool is designed for rapid feasibility assessment. Here is what it does — and does not — do.

Current Scope & Limitations

  • Binary outcomes only — the target variable must have exactly 2 classes
  • Clean and preprocessed data required — missing values must be handled before upload
  • No parameter tuning — default model settings are used for all classifiers
  • Designed for rapid feasibility assessment rather than production deployment
  • Results are indicative only and should not be the sole basis for consequential decisions
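The first two constraints are easy to check yourself before upload. A pandas sketch (the `validate_target` helper is hypothetical):

```python
import pandas as pd

def validate_target(df: pd.DataFrame, target: str) -> None:
    """Raise if the target column violates the scope above."""
    if df[target].isna().any():
        raise ValueError("Missing target values must be handled before upload.")
    n_classes = df[target].nunique()
    if n_classes != 2:
        raise ValueError(f"Target has {n_classes} classes; exactly 2 required.")
```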

Market Opportunity

The global credit analytics and AutoML market is large, fragmented, and underserved at the privacy-first, no-code tier.

  • Models validated on held-out test sets
  • Real-time threshold recalibration
  • Explainable AI (SHAP) roadmap
  • Local execution — zero data exposure
  • Compute-efficient pricing
  • Production-ready model export

Our pricing is designed to reflect actual compute costs — clients pay fairly based on the scale of their data, with no surprises and no waste. Detailed pricing methodology is available on request.

Our Commitment

"We believe predictive decisions should be explainable, auditable, and fair. Our platform is built on open science, rigorous validation, and a commitment to responsible AI."

Client data is processed under strict data processing agreements. We never access raw client data without explicit written consent. All model outputs are labelled as indicative and are designed to support — not replace — human judgement.

Simple, Transparent Access

Start for free. Scale when you're ready.

Tier 1
Free
€0 / month

Full pipeline access with no sign-up and no login required. All four workflow steps, all three classifiers, model export included. For individuals, students, and early adopters exploring feasibility.

Tier 3
Consulting
Project-based

The tool is free. Analytical interpretation, validation, methodology review, and follow-on work are commercially scoped. Ideal for professional services contexts where expert guidance is as important as the tool itself.

Discuss a Project →

Our Team

Meta-Model Builder was built by a small team passionate about making predictive analytics accessible to everyone, regardless of technical background. We combine deep expertise in applied statistics, software engineering, and responsible AI to deliver tools that are rigorous, transparent, and genuinely useful.

[Name Placeholder]
Founder & Lead Analyst

Placeholder one-line biography describing background, expertise, and focus area. Replace with real content.

[Name Placeholder]
Data Science Engineer

Placeholder one-line biography describing background, expertise, and focus area. Replace with real content.

[Name Placeholder]
UX & Product Design

Placeholder one-line biography describing background, expertise, and focus area. Replace with real content.

[Name Placeholder]
Research & Validation

Placeholder one-line biography describing background, expertise, and focus area. Replace with real content.

Try Meta-Model Builder

Live tool coming soon. The application will be embedded here once it is complete. The interactive mockup below reflects exactly what you will be working with.

Drag & drop your CSV here

or click to browse · Max 100 MB · CSV files only

2 classes detected in target column: Yes / No. Target distribution: Yes 52% · No 48% — class balance looks good.
Age  Income (€k)  Tenure (yr)  Credit Score  Purchased
34   52           3            720           Yes
45   88           7            680           No
29   41           1            611           Yes
52   115          12           755           No

We check your data quality before any training begins. Missing values, unstable predictors, and identifier-like columns are screened automatically.

Select which models to compare:

Train / Test Split 80% training · 20% test

(button active once data is uploaded)

Logistic Regression trained successfully.
Random Forest trained successfully.
Naive Bayes trained successfully.

Select which models to compare and how to split your data. Stratified splitting ensures class balance in both sets.

Showing: Random Forest · Positive class: Yes

0.84
Accuracy
0.86
Precision
0.81
Recall
0.84
F1 Score
0.91
ROC AUC
Confusion Matrix

TP: 82 · FN: 19 · FP: 13 · TN: 86

ROC Curve

AUC = 0.91 · Threshold = 0.50

Each model gets its own performance breakdown. Adjust the decision threshold using the cost-weighting controls to optimise for your specific false-positive and false-negative costs.

Model                Accuracy  Precision  Recall  F1 Score  ROC AUC
Random Forest        0.84      0.86       0.81    0.84      0.91
Logistic Regression  0.83      0.80       0.84    0.82      0.88
Naive Bayes          0.79      0.76       0.78    0.77      0.85

Best values per metric are highlighted automatically so you can identify the strongest model at a glance. Export the best model as a production-ready bundle for use in the Predictor app.
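The cost-weighted threshold adjustment mentioned above can be sketched as a grid search over candidate thresholds. The 5:1 cost ratio and the `total_cost` helper are hypothetical illustrations; this is one simple approach, not necessarily the tool's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Hypothetical costs: a missed positive (FN) hurts 5x more than a false alarm (FP)
COST_FP, COST_FN = 1.0, 5.0

def total_cost(threshold: float) -> float:
    pred = proba >= threshold
    n_fp = np.sum(pred & (y_te == 0))   # false alarms at this threshold
    n_fn = np.sum(~pred & (y_te == 1))  # missed positives at this threshold
    return COST_FP * n_fp + COST_FN * n_fn

# Pick the candidate threshold with the lowest total expected cost
thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=total_cost)
```

With false negatives costed more heavily, the selected threshold typically drops below 0.50, trading extra false alarms for fewer misses.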

Want early access? Get in touch.

Join our pilot cohort or reach out to discuss institutional access and consulting engagements.