As software engineers, most of us have solved at least a few DSA questions on LeetCode, CodeChef, GeeksforGeeks, InterviewBit, Scaler, or some other platform.
But what powers those platforms? A backend service that "judges" code submissions for DSA (Data Structures & Algorithms) problems.
When a user submits a solution, this service evaluates it by running the code against hundreds of test cases to check correctness, performance, and constraint compliance.
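To make that concrete, here is a minimal sketch of the core judging loop: run the submission against each test case, and stop at the first failure with a verdict. The function name `judge`, the verdict strings, and the direct function call are all illustrative assumptions; a real judge executes untrusted code in a sandboxed process, not in-process like this.

```python
import time

def judge(solution, testcases, time_limit_s=1.0):
    """Return a verdict for `solution` over `testcases`.

    Each test case is an (input, expected_output) pair. This is a
    simplified, hypothetical sketch: real judges run submissions in an
    isolated sandbox and measure resources at the OS level.
    """
    for i, (inp, expected) in enumerate(testcases, start=1):
        start = time.perf_counter()
        try:
            actual = solution(inp)
        except Exception:
            return f"Runtime Error on test {i}"
        elapsed = time.perf_counter() - start
        if elapsed > time_limit_s:
            return f"Time Limit Exceeded on test {i}"
        if actual != expected:
            return f"Wrong Answer on test {i}"
    return "Accepted"

# Example: judging a submission that squares its input.
verdict = judge(lambda x: x * x, [(2, 4), (3, 9), (10, 100)])
```

Everything interesting at scale lives around this loop: where the test cases come from, how they are cached, and how the work is distributed.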
In this post I'll go behind the scenes of large-scale code judging systems, using Scaler as an example. From how problems and test cases are stored, to how caching, file storage, and load balancing work together, I'll break down the backend components that power code evaluation at scale:
>How millions of test cases (sometimes >1 GB per problem) are stored and accessed
>Why local caching beats global caching in this scenario
>How backend services fetch user, problem, and test-case info with minimal latency
>Invalidation logic: what happens when test cases change after deployment
>Design tradeoffs in building scalable, distributed judge systems