The democratization of media creation and sharing on social media platforms has called into question the authenticity of content seen online. The crisis has deepened with the advent of generative AI tools. Using our work with the Deepfakes Analysis Unit as the fulcrum, we will talk about the critical need for FOSS tools, as well as practices derived from FOSS culture, in addressing misinformation and other online harms.
The [Deepfakes Analysis Unit](https://www.dau.mcaindia.in/) (DAU) is a collaborative space we built for fact-checkers, journalists, forensic experts, and machine learning researchers to evaluate media items and flag the presence of AI manipulation in them. Launched ahead of the Indian general elections, the DAU has analyzed more than 500 audio and video files to provide timely information on the accuracy of content.
We will describe how our prior FOSS tools, in particular Feluda, enabled us to build the technology for the DAU's tipline in less than three months. We will also discuss the processes we undertook to increase participation in a problem that is fundamentally one of trust, and the need to bring openness to a space that, counterintuitively, is filled with closely guarded processes and black-box ML models.
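As a purely illustrative sketch of the operator-pipeline pattern that tools like Feluda encourage, a tipline backend might chain independent analysis steps over each incoming media file and collect their verdicts. The names below (`Operator`, `MediaItem`, `analyze`, `FileMetadataCheck`) are hypothetical and do not reflect Feluda's actual API or the DAU's production stack.

```python
# Hypothetical sketch of an operator-style media analysis pipeline.
# None of these classes mirror Feluda's real API; they only illustrate
# how independent FOSS analysis components can be composed for a tipline.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class MediaItem:
    """A file submitted to the tipline."""
    path: str
    media_type: str  # e.g. "audio" or "video"


@dataclass
class Verdict:
    """Output of a single analysis operator."""
    operator: str
    manipulation_score: float  # 0.0 = likely authentic, 1.0 = likely manipulated
    notes: str = ""


class Operator(Protocol):
    """Any pluggable analysis step: an ML model, a forensic heuristic, or a hook for human review."""
    name: str

    def supports(self, item: MediaItem) -> bool: ...
    def run(self, item: MediaItem) -> Verdict: ...


class FileMetadataCheck:
    """Toy operator: flags nothing, just records that metadata was inspected."""
    name = "file_metadata_check"

    def supports(self, item: MediaItem) -> bool:
        return True

    def run(self, item: MediaItem) -> Verdict:
        return Verdict(self.name, manipulation_score=0.0, notes=f"inspected {item.path}")


def analyze(item: MediaItem, operators: list[Operator]) -> list[Verdict]:
    """Run every operator that supports this media type and collect the verdicts."""
    return [op.run(item) for op in operators if op.supports(item)]


if __name__ == "__main__":
    submission = MediaItem(path="viral_clip.mp4", media_type="video")
    for verdict in analyze(submission, [FileMetadataCheck()]):
        print(verdict)
```

The point of such a structure is that each analysis step can be inspected, swapped, or audited independently, which is the kind of openness the talk argues for in contrast to closely guarded processes and black-box models.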