AI Ethics Auditor is an open-source toolkit designed to identify, explain, and mitigate bias in AI/ML models and datasets. It analyzes fairness, generates SHAP/LIME explanations for predictions, and suggests reweighting strategies for biased datasets. This MVP is a local web app, with planned expansion into plugins for Jupyter, VS Code, and ML platforms.
Bias Detection: Analyze datasets and models for fairness metrics such as demographic parity (see the sketch after this list).
Explainability: Generate SHAP/LIME explanations for flagged biases.
Mitigation: Apply reweighting or resampling to reduce detected biases.
Local & Modular: All processing is done locally, and the core functionality is designed for easy extension into popular ML environments.
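To make the demographic parity check concrete, here is a minimal sketch of how such a metric can be computed with pandas; the column names `y_pred` and `gender` are illustrative, not part of the toolkit's API:

```python
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame, pred_col: str, group_col: str) -> float:
    """Difference between the highest and lowest positive-prediction
    rates across the groups defined by `group_col`. 0 means parity."""
    rates = df.groupby(group_col)[pred_col].mean()  # P(y_hat = 1 | group)
    return float(rates.max() - rates.min())

# Toy example with hypothetical columns: larger values indicate more bias.
df = pd.DataFrame({"y_pred": [1, 0, 1, 1, 0, 0],
                   "gender": ["F", "F", "F", "M", "M", "M"]})
print(demographic_parity_difference(df, "y_pred", "gender"))  # 0.333...
```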
Fairness and bias analysis for models and datasets
Interactive charts and explainability visualizations
Reweighting strategies and fairness-aware algorithm recommendations
The toolkit follows established AI ethics guidelines and frameworks, including:
EU AI Act
OECD AI Principles
Fairness Indicators by Google
IBM AI Fairness 360 Toolkit
Analyze a Dataset:
Upload your dataset (CSV format) via the UI.
Select sensitive attributes (e.g., gender, race).
View fairness metrics and bias scores in visual charts.
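As an illustration of the kind of bias score shown in those charts, the sketch below computes per-group selection rates and the disparate impact ratio with pandas; the file name and column names are hypothetical:

```python
import pandas as pd

df = pd.read_csv("applicants.csv")    # hypothetical uploaded dataset
sensitive, outcome = "race", "hired"  # attribute chosen in the UI, binary outcome column

rates = df.groupby(sensitive)[outcome].mean()  # selection rate per group
disparate_impact = rates.min() / rates.max()   # < 0.8 is a common red flag ("80% rule")
print(rates.to_string())
print(f"disparate impact ratio: {disparate_impact:.2f}")
```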
Audit a Model:
Load a pretrained model (PyTorch or TensorFlow).
Run predictions on test data.
Generate SHAP explanations for any flagged biases.
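A rough sketch of this step using the `shap` package with a PyTorch model; the toy network and random tensors below are placeholders for your pretrained classifier and the flagged test rows:

```python
import torch
import torch.nn as nn
import shap

# Toy stand-in for a pretrained classifier; in practice, load your own model.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

background = torch.randn(100, 20)  # background sample the explainer integrates over
x_flagged = torch.randn(10, 20)    # rows flagged by the fairness scan (placeholder data)

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(x_flagged)     # per-feature attributions per class
shap.summary_plot(shap_values, x_flagged.numpy())  # which features drive the flagged predictions
```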
Mitigate Bias:
Apply reweighting or resampling techniques to the biased data.
Download the debiased dataset or model.
Export a detailed PDF report for compliance purposes.
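One widely used reweighting scheme (the reweighing approach implemented in IBM AI Fairness 360) assigns each row the weight P(group) * P(label) / P(group, label), which makes the sensitive attribute and the label statistically independent under the weighted distribution. Below is a minimal from-scratch sketch with pandas; the file and column names are hypothetical:

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Per-row weights that make `group_col` and `label_col` independent."""
    n = len(df)
    p_group = df[group_col].value_counts() / n                # P(g)
    p_label = df[label_col].value_counts() / n                # P(y)
    p_joint = df.groupby([group_col, label_col]).size() / n   # P(g, y)
    expected = p_group.loc[df[group_col]].values * p_label.loc[df[label_col]].values
    observed = p_joint.loc[list(zip(df[group_col], df[label_col]))].values
    return pd.Series(expected / observed, index=df.index, name="weight")

df = pd.read_csv("applicants.csv")              # hypothetical dataset
df["weight"] = reweigh(df, "gender", "hired")   # pass as sample_weight when retraining
```

Underrepresented (group, label) combinations receive weights above 1, so a model retrained with these sample weights sees a distribution in which group membership no longer predicts the outcome.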