Bridges the gap between valid code and usable design by detecting real-world accessibility issues and generating automated fixes.
The Accessibility Gap: More than 1 billion people worldwide live with disabilities, yet the majority of the digital landscape remains inaccessible to them.
Limitations of Traditional Tooling: Standard scanners like Lighthouse and axe-core are built for code compliance rather than the actual human experience. They are often "blind" to:
Confusing or unintuitive visual layouts.
Insufficient contrast in dynamic or interactive states.
Non-descriptive content that blocks user navigation.
AccessLens closes these gaps by evaluating how a website actually feels to real users, not just how the code is written.
How It Works: We combine standard code checks with AI vision to:
Find usability problems that regular tools often miss.
Spot issues in modern, moving parts of a website (dynamic UI).
Provide the exact code fixes you need to solve the problem instantly.
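A minimal sketch of how this hybrid analysis might combine findings from a rule-based engine and a vision model into one prioritized report. The `Finding` class, `merge_findings` function, and confidence-boosting heuristic are illustrative assumptions, not the actual AccessLens API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str       # e.g. "axe-core" or "vision" (illustrative labels)
    selector: str     # CSS selector of the affected element
    issue: str
    confidence: float

def merge_findings(rule_findings, vision_findings):
    """Combine findings from both engines; when they flag the same
    element for the same issue, treat that as corroboration and
    slightly boost confidence (a hypothetical heuristic)."""
    merged = {}
    for f in rule_findings + vision_findings:
        key = (f.selector, f.issue)
        if key in merged:
            prev = merged[key]
            prev.confidence = min(1.0, max(prev.confidence, f.confidence) + 0.1)
            prev.source = "both"
        else:
            merged[key] = f
    # Highest-confidence issues first, so critical barriers surface on top.
    return sorted(merged.values(), key=lambda f: f.confidence, reverse=True)

rule = [Finding("axe-core", "button#buy", "low-contrast", 0.8)]
vision = [Finding("vision", "button#buy", "low-contrast", 0.7),
          Finding("vision", "nav", "confusing-layout", 0.6)]
report = merge_findings(rule, vision)
```

Issues that only a vision model can see (like the confusing layout above) still make it into the report, just with a lower standalone confidence than corroborated findings.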
Real-time Orchestration: Parallel execution across multiple accessibility engines.
Perceptual Analysis: Visual and contextual detection of UX-level barriers.
Automated Remediation: Instant fix suggestions with side-by-side code diffs.
Cyber-HUD Dashboard: A real-time spatial HUD to map issues directly onto the live UI.
Confidence Scoring: AI-backed prioritization of the most critical accessibility issues.
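The parallel orchestration described above could be sketched with `asyncio.gather`, which is a natural fit for the FastAPI backend. The engine stubs here are placeholders standing in for real scanners, not AccessLens internals:

```python
import asyncio

# Hypothetical engine stubs; real engines would drive axe-core,
# contrast analysis, vision models, etc.
async def run_axe(url: str) -> dict:
    await asyncio.sleep(0.01)          # simulate scan latency
    return {"engine": "axe-core", "issues": 3}

async def run_contrast(url: str) -> dict:
    await asyncio.sleep(0.01)
    return {"engine": "contrast", "issues": 1}

async def orchestrate(url: str) -> list[dict]:
    """Run all engines concurrently; total wall time is roughly the
    slowest engine, not the sum of all of them."""
    return await asyncio.gather(run_axe(url), run_contrast(url))

results = asyncio.run(orchestrate("https://example.com"))
```

Because `asyncio.gather` preserves argument order, each engine's result lands at a predictable index, which makes downstream aggregation straightforward.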
Frontend: Next.js, Tailwind CSS, Framer Motion
Backend: FastAPI, Playwright, Redis
Intelligence: LLaVA (Vision), Mistral 7B (Remediation)
Infrastructure: SQLite, Redis
Project Links
Source Code: AccessLens on GitHub
Live Platform: accesslens-azure.vercel.app
License: MIT
AccessLens is an open-source project. We welcome contributions in:
Accessibility Engines: Improving rule sets and heuristics.
AI Integration: Refining vision prompts and remediation models.
UI/UX: Enhancing the Cyber-HUD and dashboard experience.