Skip to Main Content
Lightning Talk Intermediate Na

Beyond the API Key: Real AI Privacy & Security for LLM Systems

Rejected
Session Description

Indian startups are rapidly shipping LLM-powered features—often by simply plugging in an API and pushing user data into prompts. This approach creates serious privacy, security, and reliability risks, especially in regulated domains like MedTech and FinTech.

This talk focuses on actual AI privacy and safety issues in real LLM systems, not policy slides. It breaks down how data leaks happen, why hallucinations are a security problem (not just a UX bug), and how prompt injection can silently compromise systems.

Using real system patterns seen in production startups, the talk explains:

  • Where sensitive data leaks in LLM pipelines

  • Why “we don’t store prompts” is a myth

  • How hallucinations and prompt injection become data exfiltration vectors

  • What open-source and architectural controls startups must adopt before scaling

This session is a wake-up call for Indian startups building AI products without understanding the underlying risks.

Most Indian AI startups today:

  • Treat LLMs as black-box APIs

  • Push raw user data into prompts

  • Have no threat model for AI systems

  • Confuse “model accuracy” with “system safety”

This creates:

  • Silent PII/PHI leakage

  • Compliance violations (DPDP Act, HIPAA-style requirements)

  • Hallucinated outputs used as source-of-truth

  • Prompt injection attacks that bypass business logic

Why This Talk is Important ?

  • DPDP Act + upcoming AI regulations

  • Cost pressure pushing startups toward shortcuts

  • Increasing use of AI in health, finance, and governance

  • Lack of practical AI security education in the ecosystem

Key Takeaways
  • AI privacy is a system-design problem, not a legal checkbox
    Real privacy failures happen in prompts, logs, embeddings, retries, eval pipelines, and fine-tuning—not just in databases.

  • Hallucinations are a security and safety vulnerability
    In domains like MedTech and FinTech, hallucinated outputs can cause harm, leak context, and break trust boundaries.

  • Prompt injection is an architectural flaw, not an edge case
    User inputs can override system instructions, abuse tools, and extract sensitive data if trust boundaries are not explicitly designed.

  • “We don’t store prompts” is a myth in production systems
    Prompts and outputs are often persisted indirectly through logs, monitoring, vector stores, and analytics pipelines.

  • RAG and guardrails do not guarantee safety
    Retrieval and prompt rules reduce risk but do not eliminate hallucinations, leakage, or injection attacks.

  • Open-source LLMs enable real privacy and auditability
    Self-hosted and OSS models provide control over data flow, inference, and compliance that closed APIs cannot.

  • Data minimization before prompting is mandatory
    Never send raw PII/PHI to an LLM—redaction, transformation, and scoping must happen upstream.

  • AI systems need explicit trust boundaries
    LLM outputs must never be treated as source-of-truth without validation, confidence checks, and human oversight.

  • Indian startups must adopt AI threat modeling early
    Waiting until scale guarantees privacy incidents, regulatory exposure, and loss of user trust.

References

Session Categories

Technology architecture
Engineering practice - productivity, debugging
Community
Talk License: Na

Speakers

Akshay Kumar U
Founder & CTO | ABHYUDAYA SOFTECH LLP

Akshay Kumar U is the Founder and CTO of Abhyudaya Softech, a technology company building AI-enabled, privacy-sensitive software solutions and scalable applications. With over 5+ years of hands-on experience in full-stack engineering, applied AI systems, and open-source technologies, Akshay has delivered real-world products spanning MedTech, EdTech, and intelligent backend platforms.

He has experience with Flutter, Firebase, native Android, and deep learning systems, and regularly shares his knowledge through community tech talks, workshops, and mentorship. Akshay has spoken at major developer events including DevFest and Flutter Forward Extended across multiple cities.

His current focus is on designing privacy-aware AI systems—especially around large language models (LLMs), responsible model deployment, and safety engineering—so that emerging Indian startups can build AI products that are secure, compliant, and user-trust centric.

https://www.linkedin.com/in/akshay-kumar-u-a45a1316b/
Akshay Kumar U

Reviews

Not sure how this fits at a FOSS conference. Please go through the proposal guidelines

Reviewer #1 Rejected