Workshop
Beginner

Running LLaMA Locally: Unlocking Offline LLM Power on Your Laptop

Review Pending

Abstract:

Imagine running a powerful Large Language Model like Meta’s LLaMA completely offline — no internet, no cloud, just your laptop. In this session, we'll demystify how to run LLaMA models locally, turning your laptop into a fully autonomous AI powerhouse.

We'll cover model setup, optimization for low-resource hardware, and how to expose the model via a local API for use in your own apps, including how to run a chatbot in React Native without any cloud dependency.

Whether you're a developer exploring privacy-first AI or an enthusiast building offline assistants, this talk will give you the roadmap to deploy, serve, and chat with LLaMA offline.

Key Takeaways:

1] What is LLaMA and why run it locally?

2] Model formats: GGUF, quantization, and performance benchmarks

3] Step-by-step: Running LLaMA on macOS/Windows/Linux using llama.cpp (see the first sketch after this list)

4] Serving LLaMA as a local API with Node.js (see the API sketch below)

5] Building an offline chatbot in React Native (see the chatbot sketch below)

6] Tips for memory-efficient inference (no GPU required!)

7] Packaging and deploying the solution to desktop and mobile app stores
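
Illustrative Sketches:

To make the llama.cpp step concrete, here is a minimal sketch of driving a local llama.cpp build from Node.js. It assumes the CLI binary is named llama-cli and is on the PATH; the GGUF model path is a placeholder. Quantization is what makes this feasible on a laptop: weight memory scales with bits per parameter, so a 7B model that needs about 14 GB at FP16 fits in roughly 4 GB at 4-bit.

```typescript
// Sketch: invoking a local llama.cpp CLI build from Node.js.
// Assumes llama.cpp is compiled locally and its binary, "llama-cli",
// is on the PATH; the model path below is a placeholder.
import { spawn } from "node:child_process";

// Hypothetical path; a Q4_K_M quantization shrinks a 7B model to ~4 GB.
const MODEL_PATH = "./models/llama-7b.Q4_K_M.gguf";

function runPrompt(prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const proc = spawn("llama-cli", [
      "-m", MODEL_PATH, // quantized GGUF model to load
      "-p", prompt,     // prompt text
      "-n", "128",      // max tokens to generate
    ]);
    let out = "";
    proc.stdout.on("data", (chunk) => (out += chunk));
    proc.on("error", reject);
    proc.on("close", (code) =>
      code === 0 ? resolve(out) : reject(new Error(`llama-cli exited with ${code}`))
    );
  });
}

runPrompt("Explain GGUF in one sentence.").then(console.log).catch(console.error);
```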
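
For the API step, a thin Node.js wrapper can sit in front of llama.cpp's bundled HTTP server, which exposes an OpenAI-compatible chat completions endpoint. The sketch below assumes llama-server is already running locally (port 8080 is its usual default); the /chat route and port 3000 are choices made for illustration, not fixed APIs.

```typescript
// Sketch: a thin local API in front of llama.cpp's bundled server.
// Assumes `llama-server -m <model.gguf> --port 8080` is already running;
// it serves an OpenAI-compatible /v1/chat/completions endpoint.
import { createServer } from "node:http";

const LLAMA_URL = "http://127.0.0.1:8080/v1/chat/completions";

const server = createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/chat") {
    res.writeHead(404).end();
    return;
  }
  // Read the request body: expected shape is { "message": "user text" }.
  let body = "";
  for await (const chunk of req) body += chunk;
  const { message } = JSON.parse(body);

  // Forward the user's message to the local model in OpenAI chat format.
  const upstream = await fetch(LLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: message }] }),
  });
  const data = await upstream.json();

  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ reply: data.choices[0].message.content }));
});

server.listen(3000, () => console.log("Offline chat API on http://localhost:3000"));
```

Nothing here ever leaves the machine: the wrapper and the model both listen on loopback, and any client (the React Native app below, a desktop app, or curl) can talk to the same endpoint.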
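
And for the chatbot, a minimal React Native screen only needs to POST the user's message to that local API and render the reply. The endpoint URL below is a placeholder: on a physical device it would point at the host machine's LAN address, or the model would run on-device instead.

```tsx
// Sketch: a minimal offline chat screen in React Native (TypeScript).
// Assumes the Node.js API from the previous sketch is reachable from the
// device; API_URL is a placeholder, not a fixed endpoint.
import React, { useState } from "react";
import { Button, FlatList, Text, TextInput, View } from "react-native";

const API_URL = "http://localhost:3000/chat"; // placeholder endpoint

type Msg = { id: string; from: "user" | "llama"; text: string };

export default function ChatScreen() {
  const [input, setInput] = useState("");
  const [messages, setMessages] = useState<Msg[]>([]);

  async function send() {
    const text = input.trim();
    if (!text) return;
    setInput("");
    setMessages((m) => [...m, { id: String(m.length), from: "user", text }]);

    // Ask the local model for a reply; no cloud round-trip involved.
    const res = await fetch(API_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: text }),
    });
    const { reply } = await res.json();
    setMessages((m) => [...m, { id: String(m.length), from: "llama", text: reply }]);
  }

  return (
    <View style={{ flex: 1, padding: 16 }}>
      <FlatList
        data={messages}
        keyExtractor={(msg) => msg.id}
        renderItem={({ item }) => (
          <Text>{item.from === "user" ? "You: " : "LLaMA: "}{item.text}</Text>
        )}
      />
      <TextInput value={input} onChangeText={setInput} placeholder="Ask something" />
      <Button title="Send" onPress={send} />
    </View>
  );
}
```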

Topics:
Knowledge Commons (Open Hardware, Open Science, Open Data, etc.)
Technology / FOSS licenses, policy
Technology architecture

Which track are you applying for?
Main track


Approvability: 0%
Approvals: 0
Rejections: 2
Not Sure: 1

Reviewer #1: Not Sure

This seems like a fairly simple, straightforward talk that I'm not sure many people will benefit from. Most people who want to self-host already do, and those who don't have little reason to, since many of the cloud-based tools are free.

Reviewer #2: Rejected

Not enough content to showcase. Someone can simply follow an online tutorial.

Reviewer #3: Rejected

A dedicated workshop on specific kinds of optimization might have added value, but this proposal is very generic.

This is quite commonly done already. While many could benefit from it, much of the material is well covered online in a large number of tutorials. The novelty factor is low, as people who are interested are likely to look it up online and do it themselves.