Here we will discuss strategies for getting local LLMs working and for using local AI to build beautiful, functional products. The current stack involves building simple Tauri/Electron apps that use Ollama to perform inference and produce outputs locally.
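As a minimal sketch of the stack above: a Tauri/Electron frontend can talk to Ollama over its local REST API. This assumes Ollama is running on its default port (11434) and the model has already been pulled; the helper names here are illustrative, not part of any library.

```typescript
// Shape of a request to Ollama's /api/generate endpoint.
interface GenerateRequest {
  url: string;
  body: { model: string; prompt: string; stream: boolean };
}

// Build the request payload. Assumes Ollama's default local port;
// the model tag is just an example.
function buildGenerateRequest(model: string, prompt: string): GenerateRequest {
  return {
    url: "http://localhost:11434/api/generate",
    body: { model, prompt, stream: false },
  };
}

// Fire the request with fetch (available in Node 18+, Electron, and
// Tauri webviews). Ollama returns the completion in the `response` field.
async function generate(model: string, prompt: string): Promise<string> {
  const req = buildGenerateRequest(model, prompt);
  const res = await fetch(req.url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req.body),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}
```

Because all inference happens against localhost, the app works fully offline once the model weights are downloaded.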
We will discuss voice assistants and real-time chatbots, and build and show a quick demo of each. To play around, we will use tiny models such as Qwen 0.5B and Moondream 0.5B, along with other local models you can run daily.
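For the real-time chatbot feel, Ollama can stream its output as newline-delimited JSON chunks (set `stream: true` in the request). A small sketch of accumulating those chunks into displayable text; the function name is ours, and the chunk shape shown is the one Ollama's generate endpoint emits.

```typescript
// Each streamed line from /api/generate (with stream: true) is a JSON
// object like {"response":"Hel","done":false}. To render text as it
// arrives, concatenate the `response` fields in order.
function accumulateChunks(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as { response?: string })
    .map((chunk) => chunk.response ?? "")
    .join("");
}
```

In a UI you would call this incrementally on each network chunk rather than on the whole buffer, appending the new text to the chat view so tokens appear as the model generates them.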