Open-source small language models (SLMs) like DeepSeek, Phi-3, and Mistral are game-changers in AI accessibility. But how do you get them to work specifically for your use case? Whether you're building a chatbot, a coding assistant, or a niche domain expert, fine-tuning these models can make a world of difference.
In this session, we’ll demystify the process of fine-tuning SLMs on consumer hardware and show how to optimize them for real-world applications.
We’ll discuss:
This talk is designed for AI enthusiasts, developers, and researchers who want to take control of open-source AI models without burning through their cloud credits. Expect a fast-paced, engaging session with a live demo, practical insights, and a few AI-generated surprises!
By the end of this talk, attendees will know:
✅ How to pick the right open-source model for their task
✅ The simplest ways to fine-tune a model with minimal compute
✅ How to evaluate and deploy their custom-tuned models effectively
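As a taste of the "minimal compute" theme, here is a sketch of the core idea behind LoRA (Low-Rank Adaptation), one common parameter-efficient fine-tuning technique. The choice of LoRA, the layer shapes, and the numbers below are illustrative assumptions, not details taken from the talk itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4  # hypothetical layer sizes for illustration

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank adapters. B starts at zero, so the adapted
# layer initially behaves exactly like the pretrained one.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))
alpha = 8  # LoRA scaling factor

def adapted_forward(x):
    # y = W x + (alpha / rank) * B A x  -- only A and B are trained.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapter adds nothing: output matches the frozen layer.
assert np.allclose(adapted_forward(x), W @ x)

full_params = W.size            # parameters touched by full fine-tuning
lora_params = A.size + B.size   # parameters LoRA actually trains
print(f"trainable: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.1%})")
```

The point of the sketch: instead of updating all of `W`, you train two small matrices whose product perturbs it, which is why this style of fine-tuning fits on consumer GPUs.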
Target Audience: College students and working professionals interested in AI, open-source, and machine learning.