Talk
Intermediate
First Talk

Is That Even Tested? The Hidden World of Quality in FOSS Projects

Rejected

Session Description

Open source software today powers everything from browsers to satellites. People use it, build on top of it, and trust it to run their systems. But behind that trust lies a simple question that many forget to ask: is it even tested?

In open source, code speaks loudest. Features get built, bugs get fixed, and pull requests keep coming. But testing often sits quietly in the background, waiting for attention. There are no dedicated QA teams in most open-source projects. There is no formal test plan or checklist. And yet, some of the most reliable tools we use come from these very communities. How does that happen? And when does it fall apart?

This talk explores the hidden world of quality in open-source software. It looks at how testing happens, who does it, and why it often remains invisible. It also reflects on what is missed when testing is treated as an afterthought.

I have seen contributors who write flawless documentation but feel unsure about testing. I have seen projects with great potential stumble because no one tested how their tool behaves in real-world scenarios. And I have seen others flourish simply because someone cared enough to ask basic questions: what if this fails? What if the user does not do what we expect?

We will walk through real examples of how testing looks in open source: from handwritten unit tests to community-driven issue reports, from CI pipelines held together with duct tape to volunteers running manual tests at odd hours. We will also talk about the emotional side of it: the patience needed to report bugs that may never get a reply, and the quiet joy when a simple test saves a big release.

The goal is not to point fingers but to open a conversation. How can we make testing more visible in our projects? What can maintainers do to create space for testers? And how can contributors who are not confident in writing code still add value by thinking about quality?

Testing in open source is not just about tools or scripts. It is about trust. It is about the quiet effort that keeps the lights on and the software usable. This talk is a small attempt to give that effort the spotlight it deserves.

Whether you are a maintainer, a tester, a curious contributor, or someone who simply uses open-source tools every day, this session will leave you with stories, questions, and ideas on how to build better, together.

Key Takeaways

  • Many open-source projects silently carry the weight of quality without making it visible. This talk brings that hidden world to light.

  • Testing in open source is often inconsistent because there is no formal process or dedicated team. Understanding this helps us approach contributions more mindfully.

  • Quality is not just about writing test cases. It is also about thinking from the user’s perspective, spotting the untested corners, and asking the right questions.

  • Testing in open source is not always celebrated like code contributions, but it plays a crucial role in building trust and stability.

  • Contributors can add value not just by writing features, but by reviewing test coverage, writing test documentation, or suggesting better test strategies.

  • Project maintainers can create a culture where quality is everyone’s responsibility, not just something done at the end.

  • This session helps the audience see testing not as a task, but as a mindset that strengthens every pull request, every issue, and every release.

References

Session Categories

Contributing to FOSS
Which track are you applying for?
Main track

Speakers

Ujjwal Kumar Singh
SDET Skeps
https://www.linkedin.com/in/ujjwal-k-singh/
Ujjwal Kumar Singh

Reviews

Approvability: 0%
Approvals: 0
Rejections: 3
Not Sure: 0

This is an interesting topic that isn't discussed often enough at OSS conferences. However, the proposal lacks any case studies or data metrics. Furthermore, the linked blog post looks AI-generated.

Reviewer #1
Rejected

Not enough references to evaluate this proposal. And please cut down on the usage of AI.

Reviewer #2
Rejected

The topic is interesting, but we felt the proposal lacked specific case studies or data metrics to support the claims. We encourage you to resubmit a proposal in the future with original content and concrete examples from your experience in FOSS projects.

Reviewer #3
Rejected