Local-first resume parsing and scoring system built with Streamlit, PostgreSQL, and FAISS. The implemented scope is Option A: the parser plus an explainable scoring engine.
Introduction
AI Resume Assistant helps recruiters and hiring teams quickly evaluate resumes against a job description.
It takes uploaded resumes, converts them into structured candidate profiles, and scores each candidate across four dimensions: Exact Match, Semantic Similarity, Achievement, and Ownership. The system also applies strict rejection checks (experience and degree constraints) and produces recruiter-friendly explainability output for every score.
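The weighted scoring with strict rejection checks described above can be sketched as follows. The dimension weights, field names, and threshold values here are illustrative assumptions, not the project's actual configuration:

```python
from dataclasses import dataclass, field

# Hypothetical dimension weights -- the real values live in the project's config.
WEIGHTS = {
    "exact_match": 0.35,
    "semantic_similarity": 0.30,
    "achievement": 0.20,
    "ownership": 0.15,
}

@dataclass
class ScoreResult:
    total: float                      # weighted score in [0, 100]
    rejected: bool                    # True if a strict constraint failed
    reasons: list = field(default_factory=list)  # recruiter-facing explanation lines

def score_candidate(dims: dict, years_exp: float, min_years: float,
                    has_required_degree: bool) -> ScoreResult:
    """Combine per-dimension scores (each 0-100) into one weighted total,
    applying strict rejection checks (experience, degree) first."""
    # Strict rejection: constraint failures reject outright, regardless of score.
    if years_exp < min_years:
        return ScoreResult(0.0, True,
                           [f"Rejected: {years_exp} yrs experience < required {min_years}"])
    if not has_required_degree:
        return ScoreResult(0.0, True, ["Rejected: required degree not found"])

    total = 0.0
    reasons = []
    for dim, weight in WEIGHTS.items():
        contribution = weight * dims.get(dim, 0.0)
        total += contribution
        # Each line shows score x weight = contribution, for explainability.
        reasons.append(f"{dim}: {dims.get(dim, 0.0):.0f} x {weight:.2f} = {contribution:.1f}")
    reasons.append(f"Total: {total:.1f}")
    return ScoreResult(round(total, 1), False, reasons)
```

For example, a candidate scoring 80/70/60/50 on the four dimensions with 5 years of experience against a 3-year minimum would receive a weighted total of 68.5, with one explanation line per dimension.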
The project is designed to run locally with low infrastructure complexity. Deterministic, code-first parsing is used by default; LLM fallback can optionally be enabled for difficult resume layouts.
What This Project Does
Ingest resumes in .pdf, .docx, and .doc formats.
Ingest job descriptions as text or uploaded .pdf/.docx/.doc/.txt.
Parse resumes with a deterministic code-first parser.
Optionally use LLM fallback (OpenAI or Anthropic) when parse quality is low.
Persist parsed data and scoring outputs in PostgreSQL.
Store a local vector index in FAISS for semantic search workflows.
Rank candidates with weighted multi-dimensional scoring.
Generate recruiter-facing explainability output for each score.
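The "LLM fallback when parse quality is low" step above implies a quality gate on the deterministic parser's output. A minimal sketch of such a gate, assuming a simple field-completeness heuristic (the field list and threshold are hypothetical, not the project's actual schema):

```python
def parse_quality(profile: dict) -> float:
    """Crude completeness heuristic in [0.0, 1.0]: the fraction of
    expected fields the deterministic parser managed to fill.
    The field list here is illustrative only."""
    expected = ("name", "email", "skills", "experience", "education")
    filled = sum(1 for f in expected if profile.get(f))
    return filled / len(expected)

def needs_llm_fallback(profile: dict, threshold: float = 0.6) -> bool:
    """Route to the optional LLM parser (OpenAI or Anthropic) only
    when deterministic parse quality drops below the threshold."""
    return parse_quality(profile) < threshold
```

This keeps the default path fully deterministic and local; the LLM is only consulted for resumes the code-first parser handles poorly.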
Tech Stack
UI: Streamlit
Parsing: pdfplumber, python-docx, optional textract for .doc
Data validation: Pydantic v2
Storage: PostgreSQL (SQLAlchemy)
Vector index: FAISS (local file index)
Embeddings: sentence-transformers (all-MiniLM-L6-v2 by default)
Optional LLM fallback: OpenAI SDK and Anthropic HTTP API
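For the semantic-similarity dimension, sentence-transformers' all-MiniLM-L6-v2 produces 384-dimensional embeddings, and a common pattern (assumed here, not confirmed by this README) is to L2-normalize them so that FAISS's inner-product index (IndexFlatIP) returns cosine similarity. The underlying measure, in plain Python:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors. With vectors
    L2-normalized up front, an inner-product FAISS index returns
    exactly this value for each neighbor."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical vectors score 1.0, orthogonal vectors 0.0, so the raw similarity maps naturally onto a 0-100 dimension score.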