Talk
Intermediate

Enough of RAG, let's learn RIG

Rejected

Session Description

Large Language Models (LLMs) have revolutionized how we interact with information, but grounding their responses in verifiable facts remains a fundamental challenge. This is compounded by the fact that real-world knowledge is often scattered across numerous sources, each with its own data formats, schemas, and APIs, making it difficult to access and integrate. Lack of grounding can lead to hallucinations — instances where the model generates incorrect or misleading information. Building responsible and trustworthy AI systems is a core focus of our research, and addressing the challenge of hallucination in LLMs is crucial to achieving this goal.
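The core idea the description gestures at, answering only from retrieved, verifiable evidence rather than from the model's parametric memory, can be sketched in a few lines. The corpus, function names, and keyword-overlap scoring below are illustrative assumptions, not part of the proposal; a real system would use an embedding index and an actual LLM call.

```python
# Toy sketch of retrieval-grounded generation (the idea behind RAG/RIG):
# retrieve evidence, then constrain the model's answer to that evidence.

CORPUS = {
    "doc1": "The Eiffel Tower is 330 metres tall and located in Paris.",
    "doc2": "Python 3.12 was released in October 2023.",
    "doc3": "RAG retrieves documents once up front; RIG interleaves retrieval with generation.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query (illustrative only)."""
    q_words = set(query.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt instructing the model to answer only from the evidence."""
    evidence = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the evidence below; say 'unknown' otherwise.\n"
        f"Evidence:\n{evidence}\n"
        f"Question: {question}"
    )

print(grounded_prompt("How tall is the Eiffel Tower?"))
```

In this sketch the retrieved passage is injected into the prompt with an explicit "answer only from evidence" instruction, which is the simplest way grounding curbs hallucination; RIG-style systems repeat the retrieve step between generation steps instead of once up front.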

Key Takeaways

None

References

Session Categories

FOSS

Speakers

Jayita Bhattacharyya
AI Evangelist

Reviews

Approvability: 0 %
Approvals: 0
Rejections: 3
Not Sure: 0
This just seems like the average buzzwords talk, without much relation to FOSS in general.
Reviewer #1
Rejected
Also very vague CFP.
Reviewer #2
Rejected
Reviewer #3
Rejected