In this talk, we’ll see how LCMs represent a major evolution from Large Language Models. While LLMs predict text by chaining together tokens, LCMs operate at a deeper level—mapping and reasoning with abstract concepts like regret, curiosity, morality, and purpose. I’ll walk through how these models decompose high-level human ideas, show how tools like Meta’s SONAR probe conceptual depth, and demonstrate how LCMs can summarize, generate, and even empathize.
The objective is to introduce, explain, and demonstrate how LCMs go beyond traditional language models by understanding and reasoning about human concepts—such as emotions, intentions, causality, and abstract ideas—paving the way for more intelligent, intuitive, and human-aligned AI systems.
Slides:
Code:
Recent advances in large language models (LLMs) have demonstrated impressive abilities in text generation, summarization, and translation. But these models often struggle to move beyond surface-level understanding. In this talk, we introduce Large Concept Models (LCMs)—a new class of models developed by Meta AI, designed to go beyond predicting words and instead reason with high-level human concepts like emotion, intent, causality, morality, and abstract ideas. LCMs represent a shift from token-based prediction to concept-based reasoning. We will explore how LCMs internally decompose complex sentences, group related meanings, and maintain stable conceptual representations across different inputs. Meta's SONAR will be used to demonstrate how concept understanding can measure semantic nearness between sentences and predict the next sentence.
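To make the "concept-based reasoning" idea concrete: LCMs operate on sentence-level embeddings rather than tokens, and "nearness" between concepts can be scored as distance in that embedding space. The sketch below is illustrative only—it does not use the real SONAR API, and the vectors are hand-made stand-ins for what an encoder such as SONAR would produce. It shows how a candidate next sentence could be chosen by cosine similarity to the context embedding.

```python
# Illustrative sketch (not the SONAR API): score candidate next sentences
# by cosine similarity between sentence embeddings. A real pipeline would
# obtain the embeddings from an encoder such as SONAR; the vectors below
# are hand-made toy values for demonstration.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embedding of the context sentence "She missed the train."
context = np.array([0.9, 0.1, 0.0])

# Hypothetical embeddings of two candidate next sentences.
candidates = {
    "She waited for the next one.": np.array([0.85, 0.20, 0.05]),
    "Bananas are rich in potassium.": np.array([0.00, 0.10, 0.95]),
}

# Pick the candidate whose embedding is nearest to the context embedding.
best = max(candidates, key=lambda s: cosine(context, candidates[s]))
print(best)  # → "She waited for the next one."
```

The same nearest-neighbour idea underlies the demo: the conceptually related continuation scores far higher than the unrelated one, so concept-level similarity alone is enough to rank plausible next sentences.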
Proposal agenda
1. Introduction
2. How LLMs work: the token-prediction flow
3. What is an LCM?
4. LCMs in depth
5. Formal definition
6. Demonstration: using SONAR to predict the next sentence
The target audience includes AI/ML developers, NLP researchers, cognitive scientists, product designers, educators, and curious students.
Interesting topic, but we are having a hard time deciding which AI talks to approve; there are many.
In my humble opinion, the proposal doesn't fit into the IndiaFOSS agenda/track—it isn't meant for a general FOSS audience, and all but one of the takeaways from the talk are conceptual in nature and unrelated to FOSS.
The topic is interesting but I don't immediately see a useful open source project or takeaway for the audience. This could be brought up in another format as a discussion topic to a more niche AI-focused audience and I see it being interesting there. Perhaps a BoF?
Reviewers noted that the proposal's content was not relevant to the IndiaFOSS agenda. While the topic was found to be interesting, reviewers felt that the key takeaways were too conceptual and lacked a direct connection to a specific open-source project or practical FOSS contribution. For future submissions, we recommend that you clearly articulate the FOSS angle of your talk and provide a more direct and actionable takeaway for the audience, perhaps by focusing on a specific open-source project or a problem you've solved using FOSS tools.