Talk
Intermediate
First Talk

Trust, but verify: The critical battleground of AI Safety and Security

Rejected

In this session, we explore key challenges in ensuring the safety and security of Large Language Models (LLMs). Topics include bias and fairness in AI systems, model interpretability, watermarking for provenance tracking, and regulatory considerations like GDPR. We will also discuss robustness against adversarial inputs, the role of human oversight, and best practices for fine-tuning models. Lastly, we address the risks of black-box AI and tools for enhancing transparency. Join us for an insightful discussion on securing the future of AI.

None
FOSS

Jay Thakkar
Research Lead, Truxt AI

Approvability: 0%
Approvals: 0
Rejections: 2
Not Sure: 0
Reviewer #1
Rejected
Not relevant. Please refer to the guidelines: https://forum.fossunited.org/t/talk-proposal-guidelines-for-a-foss-conference-meetup/1923
Reviewer #2
Rejected