AI agents are like gifted toddlers: scarily smart, wildly curious, and absolutely lacking in basic safety awareness. Left unchecked, they’ll click strange links, access things they shouldn’t, and potentially run wild in your infrastructure. And while we’re all excited about the potential of autonomous agents in cloud-native workflows, one truth is clear: they need a security blanket — soft, layered, and thoughtfully designed.
In this talk, we’ll explore what it takes to build a cozy but resilient security perimeter around AI agents operating in production environments. These aren't static models serving predictions; we're talking action-oriented, API-hitting, shell-executing, multi-step agents that can be both magical and malicious — often at the same time.
We’ll walk through recent projects in the open-source ecosystem, such as OpenLLM and the Inference Gateway, and look at how well they fit together. We’ll also examine patterns like tool whitelisting, rate limiting, and context scoping that help AI agents stay useful without becoming unhinged. Finally, we’ll see what a “security blanket” actually looks like in practice: fine-grained sandboxing, identity isolation, audit trails, and execution boundaries.
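To give a flavor of the tool-whitelisting and execution-boundary patterns the talk covers, here is a minimal sketch in Python. All names here (ALLOWED_TOOLS, guarded_call) are illustrative assumptions for this proposal, not APIs from OpenLLM or the Inference Gateway.

```python
# A minimal sketch of tool whitelisting for a shell-executing agent.
# Everything an agent asks to run is checked against an allowlist
# before it ever reaches a shell.

import shlex
import subprocess

# Only binaries on this allowlist may be invoked by the agent.
ALLOWED_TOOLS = {"ls", "cat", "grep"}

def guarded_call(command: str, timeout: int = 5) -> str:
    """Run an agent-requested command only if its binary is allowlisted."""
    argv = shlex.split(command)
    name = argv[0] if argv else ""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    # Execution boundary: no shell interpolation, a hard timeout,
    # and captured output that can feed an audit trail.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    return result.stdout

if __name__ == "__main__":
    print(guarded_call("ls -la"))         # permitted
    try:
        guarded_call("rm -rf /tmp/demo")  # blocked by the allowlist
    except PermissionError as exc:
        print(exc)
```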
The audience will learn how to safely wrap AI agents with guardrails, sandboxes, and scoped permissions.
Not finding much FOSS relevance
Not relevant to a FOSS conference. Please go through the guidelines - https://forum.fossunited.org/t/talk-proposal-guidelines-for-a-foss-conference-meetup/1923
The reviewers felt that while the topic is interesting, it lacks a strong connection to FOSS. The proposal was deemed not relevant to the conference and the reviewers suggested you take a closer look at the proposal guidelines for future submissions.