AI-Ethics-Auditor
A free and open-source toolkit to detect, explain, and mitigate bias in AI/ML models and datasets. Built for developers and researchers committed to ethical AI practices.

AI Ethics Auditor analyzes datasets and models for fairness, generates SHAP/LIME explanations for predictions, and suggests reweighting strategies for biased datasets. This MVP ships as a local web app, with plans to expand into plugins for Jupyter, VS Code, and other ML platforms.
🚀 Overview
Bias Detection: Analyze datasets and models for fairness metrics like demographic parity.
Explainability: Generate SHAP/LIME explanations for flagged biases.
Mitigation: Apply reweighting or resampling to mitigate detected biases.
Local & Modular: All processing is done locally, and the core functionality is designed for easy extension into popular ML environments.
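As a concrete illustration of the kind of fairness metric listed above, the sketch below computes the demographic parity difference (the gap in positive-prediction rates between groups). The function name and inputs are illustrative, not the toolkit's actual API.

```python
# Minimal sketch of a demographic-parity check, assuming binary
# predictions and a single sensitive attribute. Names here are
# illustrative only.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap between the highest and lowest positive-prediction rate
    across groups of the sensitive attribute (0 = perfect parity)."""
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```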
🛠️ Key Features
Fairness and bias analysis for models and datasets
Interactive charts and explainability visualizations
Reweighting strategies and fairness-aware algorithm recommendations
Ethical AI Principles & Predefined Criteria
The toolkit's predefined criteria draw on established AI ethics guidelines:
EU AI Act
OECD AI Principles
Fairness Indicators by Google
IBM AI Fairness 360 Toolkit
📱 Usage
Analyze a Dataset:
Upload your dataset (CSV format) via the UI.
Select sensitive attributes (e.g., gender, race).
View fairness metrics and bias scores in visual charts.
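The dataset-analysis steps above boil down to grouping by the selected sensitive attribute and comparing outcome rates. Here is a minimal pandas sketch of that flow; the column names (`gender`, `outcome`) and the inline CSV are assumptions for the example.

```python
# Hedged sketch of the "analyze a dataset" step: load a CSV, pick a
# sensitive attribute, and compare positive-outcome rates per group.
# Column names are assumed for illustration.
import io
import pandas as pd

csv_data = io.StringIO(
    "gender,outcome\n"
    "female,1\n"
    "female,0\n"
    "male,1\n"
    "male,1\n"
)
df = pd.read_csv(csv_data)

# Positive-outcome rate per group of the selected sensitive attribute.
group_rates = df.groupby("gender")["outcome"].mean()
print(group_rates)
```

A large gap between the per-group rates is what the UI would surface as a bias score in the charts.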
Audit a Model:
Load a pretrained model (PyTorch or TensorFlow).
Run predictions on test data.
Generate SHAP explanations for any flagged biases.
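To show what a SHAP explanation reports without depending on the `shap` package, the sketch below computes exact Shapley values by brute force for a tiny 3-feature toy model. A real audit would use `shap` on the loaded PyTorch/TensorFlow model; this is only the underlying definition, with all names and values made up for illustration.

```python
# Brute-force exact Shapley values for a toy linear model, to show the
# quantity SHAP approximates. Real audits should use the shap package;
# this enumeration is exponential in the number of features.
from itertools import combinations
from math import factorial

def toy_model(x):
    # Stand-in for a trained classifier's score on one sample.
    return 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]

def shapley_values(model, x, baseline):
    """phi[i] = weighted average marginal contribution of feature i,
    filling absent features with the baseline value."""
    n = len(x)
    values = []
    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                present = set(subset)
                with_i = [x[j] if j in present or j == i else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in present else baseline[j]
                             for j in range(n)]
                phi += weight * (model(with_i) - model(without_i))
        values.append(phi)
    return values

phi = shapley_values(toy_model, x=[1.0, 2.0, 0.5], baseline=[0.0, 0.0, 0.0])
# For a linear model, phi[i] = w_i * (x_i - baseline_i): [2.0, 2.0, -1.5]
```

The values sum to the difference between the model's output on the sample and on the baseline, which is the additivity property SHAP plots rely on.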
Mitigate Bias:
Apply reweighting or resampling techniques to the biased data.
Download the debiased dataset or model.
Export a detailed PDF report for compliance purposes.
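The reweighting step above can be sketched with the classic reweighing scheme (in the style of Kamiran & Calders, as used in AI Fairness 360): each (group, label) cell receives weight P(group) x P(label) / P(group, label), so that group membership and outcome become statistically independent under the weighted distribution. The function below is an illustrative sketch, not the toolkit's actual API.

```python
# Hedged sketch of dataset reweighing: assign each sample the weight
# P(group) * P(label) / P(group, label), removing the statistical
# dependence between the sensitive attribute and the outcome.
from collections import Counter

def reweigh(groups, labels):
    n = len(groups)
    count_g = Counter(groups)
    count_y = Counter(labels)
    count_gy = Counter(zip(groups, labels))
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
# Over-represented cells like (a, 1) get weight < 1 (here 0.75);
# under-represented cells like (a, 0) get weight > 1 (here 1.5).
```

These weights can then be passed to a learner's `sample_weight` argument or used to resample the dataset before retraining.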