Talk
Beginner

Own Your LLM: The State of Open Source LLMs and How to Run and Fine-Tune

Withdrawn

Session Description

Large Language Models are no longer limited to closed platforms or proprietary APIs. The open-weight ecosystem has matured to the point where anyone can build, fine-tune, and deploy their own model while keeping full control over data and infrastructure. In India, the large-scale distribution of LLMs through platforms such as the Gemini Student Offer, the Gemini Jio Offer, OpenAI's free ChatGPT Go, and the Perplexity-Airtel partnership shows how centralized systems increasingly use user data to train and refine their models. This trend underlines the growing importance of owning and operating local, privacy-focused systems that keep data under user control while offering similar capabilities.

This talk explores the current state of open-weight LLMs and how to use them effectively in real-world applications. It covers major open releases from Meta (Llama 4), Microsoft (Phi-4), Google (Gemma 3), IBM (Granite 4.0), Alibaba (Qwen), and Zhipu (GLM).

The session takes a hands-on look at running models locally using frameworks such as vLLM, Llama.cpp, and Ollama. These tools make it possible to run powerful open-weight models efficiently on a variety of hardware, including CPUs, GPUs, and Apple silicon (via MLX).
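As a taste of local serving, the sketch below builds a request for Ollama's local REST API (`/api/generate`, which by default listens on `localhost:11434`). The model name `llama3` is just an example; actually sending the request requires a running Ollama server with that model pulled, so only the payload construction is shown here.

```python
import json

# Ollama's local endpoint (default port). Assumes `ollama serve` is running
# and the model has been pulled, e.g. `ollama pull llama3`.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> str:
    """Return the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

body = build_generate_request("llama3", "Summarize open-weight LLMs in one line.")
print(body)
# To actually send it:
# req = urllib.request.Request(OLLAMA_URL, body.encode(),
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

vLLM exposes a similar OpenAI-compatible HTTP server, so the same pattern (build a JSON chat payload, POST it to a local port) carries over across backends.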

Tooling will be a strong focus, with examples of how open tools such as OpenCode and RooCode can boost developer productivity, and how Unsloth can be used to train and fine-tune models for specific use cases without closed dependencies. The talk will also cover tool calling and how integrating LLMs with external tools, APIs, and automation workflows (through n8n) can enrich data and enable more capable, context-aware systems.
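The core of tool calling is a simple loop: the model emits a structured call naming a tool and its arguments, and the host application looks the tool up and executes it. A minimal dispatcher sketch, with illustrative tool names (the `get_weather` stub stands in for a real integration, e.g. an n8n webhook):

```python
import json

def get_weather(city: str) -> str:
    # Stub standing in for a real external call (e.g. an n8n webhook).
    return f"Sunny in {city}"

def add(a: float, b: float) -> float:
    return a + b

# Registry mapping tool names the model may emit to local functions.
TOOLS = {"get_weather": get_weather, "add": add}

def dispatch(tool_call_json: str):
    """Parse a model-emitted tool call and run the matching local function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Example: pretend the model emitted this tool call.
result = dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}')
print(result)  # prints 5
```

The same pattern underlies the tool-calling support in local runtimes: the result is fed back to the model as a new message so it can continue the conversation with real data in context.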

A segment of the talk will focus on benchmarking and evaluation, examining how open-weight models perform against closed ones and how the performance gap has narrowed to a level where open models can now handle most daily tasks effectively. The session will also discuss small language models (SLMs) and their growing importance for privacy-preserving, efficient, and edge-focused deployments.
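Systematic evaluation does not require heavy infrastructure: at its simplest it is a loop that runs prompts through a model callable and scores the outputs against references. A minimal exact-match harness, where `stub_model` is a hypothetical placeholder for real local inference:

```python
def exact_match_accuracy(model, dataset):
    """Score a model callable on (prompt, expected_answer) pairs.

    Uses case-insensitive exact match; real benchmarks add normalization,
    multiple references, or LLM-as-judge scoring on top of this loop.
    """
    correct = sum(
        1 for prompt, expected in dataset
        if model(prompt).strip().lower() == expected.strip().lower()
    )
    return correct / len(dataset)

def stub_model(prompt: str) -> str:
    # Canned answers standing in for an actual open-weight model.
    answers = {"capital of France?": "Paris", "2 + 2 = ?": "4"}
    return answers.get(prompt, "")

data = [
    ("capital of France?", "Paris"),
    ("2 + 2 = ?", "4"),
    ("color of the sky?", "blue"),
]
print(exact_match_accuracy(stub_model, data))  # 2 of 3 correct -> 0.666...
```

Swapping `stub_model` for a function that calls a local server lets the same harness compare open-weight and closed models on identical data.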

The goal is to give attendees a clear technical understanding of the open-weight LLM ecosystem, associated tooling, and fine-tuning workflows. By the end of the talk, the audience will know how to set up an open LLM environment locally, fine-tune it for their data, integrate it with automation pipelines, and evaluate it systematically. The session is designed for engineers and practitioners who want to move beyond closed APIs and build their own privacy-preserving, open, and reproducible air-gapped AI systems.

Key Takeaways

Attendees will gain a clear understanding of the open-weight LLM landscape, how it is closing the gap with proprietary systems, and how it lets them retain ownership of their data.

The talk will show them how to run open-weight models locally and how to fine-tune them for specific use cases.

It will also cover how developer tooling such as OpenCode, RooCode, and n8n can enhance productivity, enable tool calling, and integrate LLMs into applications.

By the end, participants will know how to build, adapt, and operate privacy-preserving AI systems entirely with open weights and open tooling.

Session Categories

Knowledge Commons (Open Hardware, Open Science, Open Data etc.)
Tutorial about using a FOSS project
Community

Speakers

Anas Khan
Software Engineer, HackerRank
https://www.linkedin.com/in/anxkhn

Reviews

Approvability: 0%
Approvals: 0
Rejections: 2
Not Sure: 0

"Open source LLMs" is still a grey area. Well written proposal but I can't find anything novel here that has not been covered in so many online resources

Reviewer #1
Rejected

Seems like yet another generic "open source AI" talk

Reviewer #2
Rejected