As businesses increasingly rely on LLM applications for critical functions, strong security measures are essential to protect sensitive information and keep operations running smoothly. This session shows how to build a zero-trust security architecture for AI workloads using cloud native patterns. We'll explore how to implement AI Gateways with strong authentication, authorization, and audit logging; meet compliance and governance requirements; secure model artifacts; implement runtime security; and protect against prompt injection attacks.
Think of an AI Gateway as your smart security guard - it's the first line of defense for your AI applications. Just like you wouldn't let strangers into your house, the gateway makes sure only authorized users and applications can access your AI models. It checks IDs (authentication), verifies permissions (authorization), and keeps detailed records of who's doing what (audit logging).
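The security-guard analogy can be sketched in a few lines of Python. Everything here is illustrative: the key store, permission map, and function names are invented for this sketch, and a production gateway would store hashed keys and ship audit events to a logging pipeline rather than an in-memory list.

```python
import json
import time

VALID_API_KEYS = {"demo-key": "team-ml"}                   # authentication store (real gateways store hashed keys)
PERMISSIONS = {"team-ml": {"gpt-internal", "embeddings"}}  # authorization: which teams may call which models
AUDIT_LOG = []                                             # audit trail of every decision

def handle_request(api_key: str, model: str, prompt: str):
    """Check ID, verify permission, and record the request before proxying."""
    caller = VALID_API_KEYS.get(api_key)
    allowed = caller is not None and model in PERMISSIONS.get(caller, set())
    AUDIT_LOG.append(json.dumps({  # who did what, when, and the outcome
        "ts": time.time(), "caller": caller, "model": model, "allowed": allowed,
    }))
    if caller is None:
        return 401, "unknown API key"        # authentication failed
    if not allowed:
        return 403, "model not permitted"    # authorization failed
    return 200, f"forwarding prompt to {model}"  # hand off to the model backend
```

Note that the audit entry is written before the deny branches, so rejected requests are recorded too.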
This talk walks through the architecture needed to secure AI workloads end to end.
The audience can be security professionals or AI/ML practitioners. This talk will help you understand the complex security architecture of LLMs, and we'll explain best practices for integrating AI with existing security tools while meeting compliance and governance requirements. We will also demonstrate practical security implementations such as RBAC and authentication using open-source cloud native stacks like the Kubernetes API gateway, Istio service mesh, and an AI gateway.
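As one flavor of the RBAC demo mentioned above, an Istio `AuthorizationPolicy` can restrict which workloads may call a model-serving endpoint. This is a sketch, not the talk's actual demo config; the names, namespaces, and service account below are illustrative.

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: llm-inference-rbac
  namespace: ai-serving
spec:
  selector:
    matchLabels:
      app: model-server      # applies to the model-serving pods only
  action: ALLOW
  rules:
  - from:
    - source:
        # only the chat frontend's service account may call the model
        principals: ["cluster.local/ns/apps/sa/chat-frontend"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/v1/completions"]
```

With an ALLOW policy in place, Istio denies any request to the selected workload that matches no rule, so unlisted callers are rejected by default.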
The cool part about using cloud-native patterns is that they're built for today's dynamic business environment. You can scale security up or down as needed, and everything works together smoothly. It's like having a well-coordinated security team that adapts to different situations.
One of the biggest concerns is protecting sensitive data. You don't want your company's secrets leaking through AI interactions. That's where prompt injection protection comes in - it's like having a filter that catches suspicious requests before they can do any harm.
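A minimal illustration of such a filter, assuming a simple deny-list approach; the patterns and function name are invented for this sketch, and real defenses layer classifiers, output checks, and privilege separation on top of pattern matching.

```python
import re

# Illustrative deny-list of common injection phrasings; a production
# filter would be far broader and continuously updated.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?(system|hidden) prompt", re.IGNORECASE),
    re.compile(r"you are now (dan|in developer mode)", re.IGNORECASE),
]

def is_suspicious(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    return any(p.search(prompt) for p in INJECTION_PATTERNS)
```

The gateway can run this check before a request ever reaches the model, blocking or flagging suspicious prompts for review.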
Compliance is another crucial piece. Different industries have different rules about data handling and AI use. The zero-trust architecture helps you stay within these guidelines while still getting the most out of your AI tools.
The best part? You can implement all this without sacrificing performance. Modern security tools are smart enough to protect your systems while keeping things running smoothly.
Securing AI isn't just about protecting against threats - it's about building trust. When employees and customers know their data is safe, they're more confident using AI-powered tools, which leads to better adoption and results.
The topic is interesting, but in my humble opinion, not particularly relevant to the IndiaFOSS main track. In my understanding, the conference rarely highlights purely conceptual/architectural talks.
Thank you for submitting your proposal for IndiaFOSS 2025. Your submission was well-received and progressed to our final review stages.
Unfortunately, due to the high volume of excellent proposals this year, we were unable to select your talk for the final program. We appreciate the effort you put into your submission and encourage you to apply again for future events.