Large Language Models (LLMs) have revolutionized how we interact with information, but grounding their responses in verifiable facts remains a fundamental challenge. A lack of grounding can lead to hallucinations: instances where the model generates incorrect or misleading information. The problem is compounded by the fact that real-world knowledge is often scattered across numerous sources, each with its own data formats, schemas, and APIs, making it difficult to access and integrate. Building responsible and trustworthy AI systems is a core focus of our research, and addressing hallucination in LLMs is crucial to achieving this goal.