AI-powered open-source platform that simulates real technical interviews, evaluates answers step-by-step, scores performance, and recommends personalized learning resources.
InterviewSim is a fully open-source AI-powered technical interview simulator designed to replicate real-world coding interviews.
Unlike traditional coding platforms that simply check for correct outputs, InterviewSim:
Acts as a live interviewer
Asks clarification questions
Evaluates reasoning step-by-step
Scores performance across multiple dimensions
Suggests improvement paths
Recommends related problems
Suggests curated learning resources
Optionally pushes results to GitHub
The goal is structured interview preparation, not just problem solving.
Many students preparing for technical interviews face challenges:
No realistic interview simulation
Lack of structured feedback
No communication evaluation
No guided improvement roadmap
No workflow integration for tracking preparation
Existing platforms focus on solving problems but do not simulate the interviewer mindset.
InterviewSim addresses this gap.
The user pastes a full problem statement.
The system then switches into interviewer mode.
AI asks:
Clarification questions
Edge case discussions
Approach validation
Complexity analysis
Optimization follow-ups
It behaves like a real human interviewer.
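One way to picture this flow is a fixed phase sequence that mirrors the question list above. The `InterviewFlow` class and its phase ordering below are an illustrative sketch, not the project's actual API:

```java
import java.util.List;

class InterviewFlow {
    enum Phase { CLARIFICATION, EDGE_CASES, APPROACH, COMPLEXITY, OPTIMIZATION, DONE }

    // The interviewer walks through these phases in order.
    private static final List<Phase> ORDER = List.of(
            Phase.CLARIFICATION, Phase.EDGE_CASES, Phase.APPROACH,
            Phase.COMPLEXITY, Phase.OPTIMIZATION, Phase.DONE);

    private int index = 0;

    Phase current() { return ORDER.get(index); }

    // Move to the next interviewer phase once the candidate has answered.
    Phase advance() {
        if (index < ORDER.size() - 1) index++;
        return ORDER.get(index);
    }
}
```

In practice the LLM would generate the actual questions for each phase; this skeleton only fixes the order in which they are asked.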
Each session is evaluated across:
Problem Understanding
Logical Approach
Edge Case Handling
Time & Space Complexity
Communication Clarity
Final score: 0–100%
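The five dimensions above could feed a simple average. The sketch below assumes each dimension is rated 0–10 and weighted equally; the actual rating scale and weights are not specified here:

```java
import java.util.Map;

class ScoreCalculator {
    // Each dimension is rated 0-10; the final score is the average rescaled
    // to 0-100. Equal weighting is an assumption for illustration.
    static int finalScore(Map<String, Integer> ratings) {
        double sum = ratings.values().stream().mapToInt(Integer::intValue).sum();
        return (int) Math.round(sum / ratings.size() * 10);
    }
}
```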
Structured feedback is generated.
The system:
Highlights weak reasoning
Identifies missed optimizations
Explains better approaches
Provides improvement suggestions
Based on the detected pattern (e.g., Sliding Window, Two Pointers, DP), the system:
Suggests an easier practice problem
Suggests a similar-level problem
Suggests a harder follow-up
This builds a progressive learning path.
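The easier/similar/harder ladder can be sketched as a lookup per pattern. The problem titles below are placeholders, not actual data from the project:

```java
import java.util.List;
import java.util.Map;

class Recommender {
    // Placeholder ladder: easier -> similar -> harder for each pattern.
    private static final Map<String, List<String>> LADDER = Map.of(
            "Sliding Window", List.of(
                    "Max Sum Subarray of Size K",        // easier
                    "Longest Substring Without Repeats", // similar level
                    "Minimum Window Substring"),         // harder follow-up
            "Two Pointers", List.of(
                    "Two Sum II", "3Sum", "Trapping Rain Water"));

    static List<String> recommend(String pattern) {
        return LADDER.getOrDefault(pattern, List.of());
    }
}
```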
If performance is low:
Curated YouTube learning resources are suggested
Concept summary notes are generated
Pattern explanation is provided
This converts failure into structured improvement.
After completing a session:
Users can choose to:
Push the solution and feedback to their GitHub repository
Or skip
If enabled, it stores:
Problem
Final solution
Score
Feedback
Notes
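The push can use the GitHub REST API's "create or update file contents" endpoint (`PUT /repos/{owner}/{repo}/contents/{path}`, which expects base64-encoded content). The repository name, file path, and commit message below are placeholders, and token handling is simplified:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class GitHubPusher {
    // Builds (but does not send) a "create or update file contents" request.
    static HttpRequest buildCommitRequest(String token, String markdown) {
        String encoded = Base64.getEncoder()
                .encodeToString(markdown.getBytes(StandardCharsets.UTF_8));
        String body = """
                {"message": "Add interview session", "content": "%s"}""".formatted(encoded);
        return HttpRequest.newBuilder()
                .uri(URI.create(
                        "https://api.github.com/repos/me/interview-log/contents/sessions/session-1.md"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/vnd.github+json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```

Sending the request with `HttpClient` and checking for a 201 response would complete the flow.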
If a user stops midway:
The session is saved
The partial solution is stored
The session can be resumed later
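The project persists sessions via JPA + PostgreSQL; the in-memory store below stands in for that table purely to show the save-and-resume semantics (field names are assumptions about the schema):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class SessionStore {
    record Session(long id, String problem, String partialSolution, boolean completed) {}

    // Stand-in for the PostgreSQL sessions table.
    private final Map<Long, Session> db = new HashMap<>();

    void save(Session s) { db.put(s.id(), s); }

    // Resume returns the stored partial state only for unfinished sessions.
    Optional<Session> resume(long id) {
        return Optional.ofNullable(db.get(id)).filter(s -> !s.completed());
    }
}
```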
Frontend:
React
Backend:
Spring Boot
REST APIs
Service-layer architecture
JPA + PostgreSQL
AI Layer:
Ollama (local LLM runtime)
Open-source LLM models
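Ollama exposes a local HTTP API, so the backend can talk to it with a plain `POST /api/generate` call carrying a JSON body of model, prompt, and stream flag. The model name `llama3` and the minimal string escaping below are example choices, not project requirements:

```java
import java.net.URI;
import java.net.http.HttpRequest;

class OllamaClient {
    // Builds (but does not send) a request to Ollama's local generate endpoint.
    static HttpRequest buildRequest(String prompt) {
        String body = """
                {"model": "llama3", "prompt": "%s", "stream": false}"""
                .formatted(prompt.replace("\"", "\\\""));
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```

Because Ollama runs entirely on localhost, this keeps the AI layer free of proprietary API dependencies, matching the open-source goals below.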
Integrations:
GitHub REST API
Notion API
Fully self-hostable and open-source.
Uses only open-source technologies
No proprietary AI APIs
Self-hostable architecture
MIT License
Public GitHub repository
Clear contribution guidelines
Week 1:
Architecture setup
Spring Boot base project
Basic interview API
Week 2:
LLM integration
Interview flow logic
Week 3:
Scoring engine
Recommendation engine
Week 4:
UI polishing
Testing
Documentation
Demo recording
All development progress is tracked through consistent GitHub commits.
InterviewSim helps:
Students preparing for placements
Self-learners without mentorship
Colleges running mock interviews
Developers tracking structured growth
It transforms interview preparation into a measurable, guided, and repeatable process.
Future enhancements:
Voice-based interview mode
Real-time code editor
Peer interview simulation
Performance analytics dashboard
College placement integration
InterviewSim aims to democratize technical interview preparation through open-source, AI-driven, structured simulation.