Talk
Intermediate

Beyond Words: Exploring the Minds of Large Concept Models

Rejected

Session Description

In this talk, we'll see how LCMs represent a major evolution from Large Language Models. While LLMs predict text one token at a time, LCMs operate at a higher level of abstraction: they map whole sentences into a concept space and reason over abstract ideas such as regret, curiosity, morality, and purpose. I'll walk through how these models decompose high-level human ideas, show how Meta's SONAR embedding space provides the language-agnostic representations they build on, and demonstrate how LCMs can summarize, generate, and even empathize.

The objective is to introduce, explain, and demonstrate how LCMs go beyond traditional language models by understanding and reasoning about human concepts—such as emotions, intentions, causality, and abstract ideas—paving the way for more intelligent, intuitive, and human-aligned AI systems.

Slides:
https://docs.google.com/presentation/d/1gJCDgS_SmJy3M57f7fDvRCw5zqUgH5ldrfAB-XfVOmQ/edit?usp=drive_link

Code:
https://github.com/virajsharma2000/v-lcm-demo

Key Takeaways

  • LCMs vs. LLMs
    Understand the fundamental difference between Large Concept Models (LCMs) and Large Language Models (LLMs)—shifting from word-level prediction to concept-level reasoning.

  • Core Capabilities of LCMs
    Learn how LCMs can understand and reason about abstract human concepts such as emotion, intent, morality, causality, and goals.

  • Multimodal and Language-Agnostic Design
    Discover how LCMs work across languages and modalities (text, images, etc.), enabling broader and more intuitive AI applications.

  • Concept Extraction Process
    Get an inside look at how LCMs break down inputs into concepts, identify emotions or causes, and generalize across scenarios.

  • Evaluation Using SONAR
    See how conceptual understanding is measured using embedding-based benchmarks built on Meta's SONAR representation space.

  • Real-World Applications
    Explore how LCMs can be applied to storytelling, summarization, emotional analysis, education, accessibility, and more.

  • Future of Human-Aligned AI
    Reflect on how LCMs bring us closer to building AI that thinks more like humans—not just fluently, but thoughtfully.
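The concept-level reasoning described in these takeaways can be illustrated with a small toy example. The vectors below are hand-crafted stand-ins for the sentence embeddings a real encoder such as Meta's SONAR would produce; the specific numbers and the `cosine` helper are illustrative assumptions, not part of any LCM implementation.

```python
import math

# Hand-crafted 3-d "concept embeddings" (illustrative values only).
# A real LCM would obtain these from a sentence encoder such as SONAR.
concepts = {
    "regret":    [0.9, 0.1, 0.0],
    "remorse":   [0.8, 0.2, 0.1],
    "curiosity": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity: closeness of two points in concept space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Related abstract ideas sit close together in concept space,
# regardless of how they were worded at the token level.
sim_related = cosine(concepts["regret"], concepts["remorse"])
sim_unrelated = cosine(concepts["regret"], concepts["curiosity"])
print(sim_related, sim_unrelated)
```

Reasoning over vectors like these, rather than over individual tokens, is what lets a concept model treat differently worded (or differently languaged) expressions of the same idea as a single concept.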

Session Categories

Knowledge Commons (Open Hardware, Open Science, Open Data etc.)
Tutorial about using a FOSS project

Speakers

Viraj Sharma
Student, Presidium Indirapuram, Delhi
https://sharmaviraj.com

Reviews

100 % Approvability
1 Approval
0 Rejections
0 Not Sure

If you can explain adequately and demo, I'm sure it'll be a good talk

Reviewer #1
Approved