Lightning Talk
Beginner

From Weights to Weaknesses: Securing the Open-Source LLM Stack

Review Pending

As open-source LLMs become increasingly accessible and integrated into real-world systems, their security implications are often overlooked in the race to build. This talk explores the full stack of open-source LLM usage from pre-trained weights to deployment pipelines highlighting the risks, attack surfaces, and threat models unique to this ecosystem. We’ll examine real-world vulnerabilities, prompt-based exploits, supply chain threats, and insecure model-serving practices. Most importantly, you'll leave with concrete mitigation plans and an understanding of why open-source LLMs need to be secured not because it's a good idea, but because it needs to be.

Whether you're an open-source developer, ML engineer, or security specialist, this session will make you rethink what "secure by default" means in the age of generative AI.

1. Open-source LLMs pose novel security threats

In contrast to closed models, open-source LLMs reveal their full stack—weights, code, pipelines—leaving them more accessible to audit and attack.

2. Prompt-based attacks are possible and exploitable

Prompt injection, jailbreaking, and data exfiltration through outputs are actual attacks that can breach systems and spill confidential data.

3. Supply chain risks frequently invisible but essential

Pretrained models, fine-tuning data sets, and Python dependencies all potentially have hidden threats unless they are checked and validated.

4. Deployment is the new risk zone

Self-hosting LLMs without isolation, access control, or output filtering exposes attack surfaces.

5. Mitigation is not trivial, but it is possible

Solutions such as sandboxing, red-teaming, model provenance tracing, and structured prompts can substantially lower risk.

6. Security has to be a first-class consideration in open-source AI development

The "move fast and prompt things" ethos must reverse—security can't be an afterthought when adopting LLMs.

Other
AI Security
LLM security

0 %
Approvability
0
Approvals
1
Rejections
0
Not Sure

This isn't a good fit for the open-data devroom.

Reviewer #1
Rejected