ChatBuddy is an AI-powered chatbot built with LangChain, LLAMA3, and Streamlit for intelligent, real-time query responses.
ChatBuddy is an interactive AI chatbot powered by LangChain and LLAMA3, built using Streamlit for a user-friendly experience. This project leverages the power of large language models (LLMs) to provide intelligent responses to user queries. It integrates Ollama as the LLM provider and ensures efficient interaction through LangChain's prompt management system.
- Conversational AI – ChatBuddy acts as an AI assistant, responding to user queries in a structured and meaningful way.
- LLAMA3 Integration – uses the LLAMA3 model via Ollama to generate natural language responses.
- LangChain Framework – implements LangChain's ChatPromptTemplate for better prompt management and structured responses.
- Streamlit UI – provides a clean and intuitive interface for user interaction.
- Dynamic Input Handling – accepts user queries and generates responses in real time.
- Environment Configuration – uses dotenv to load the LangChain API key securely.
- Error Handling – catches exceptions and notifies users about issues such as a missing API key or an incorrect installation.
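The environment configuration above expects a `.env` file in the project root. A minimal sketch (the variable name `LANGCHAIN_API_KEY` is LangChain's conventional name and is assumed here; the value is a placeholder):

```env
# .env — keep this file out of version control
LANGCHAIN_API_KEY=your-langchain-api-key
```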
- Python – core programming language.
- LangChain – framework for integrating LLMs effectively.
- LLAMA3 (via Ollama) – the AI model used to generate responses.
- Streamlit – UI framework for creating interactive web applications.
- Dotenv – for managing environment variables securely.
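A setup sketch for the stack above (package names assumed to be the standard PyPI distributions; Ollama itself is installed separately from its website):

```shell
# Install the Python dependencies (assumed package names)
pip install streamlit langchain langchain-community python-dotenv

# Ollama is a separate install; once it is running, pull the model locally
ollama pull llama3
```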
1. Load API Key: the app retrieves the LangChain API key from a .env file.
2. Initialize LangChain: a structured prompt template is created using ChatPromptTemplate.
3. Set Up the LLM: the LLAMA3 model is initialized using Ollama().
4. User Input: users enter a question into the Streamlit text input field.
5. Process Query: the input is passed through the LangChain pipeline, which formats it and sends it to the model.
6. Generate Response: the model produces an answer, which is then displayed on the Streamlit interface.
7. Error Handling: if any issue arises (e.g., missing API key or Ollama not running), the app notifies the user.
- Fast & Efficient: uses lightweight models optimized for performance.
- Easy to Deploy: can be run locally or hosted on a cloud platform.
- Beginner-Friendly: simple, structured code that is easy to understand and extend.
- Secure: manages API keys securely using dotenv.
- Multi-Model Support: allow users to select different AI models dynamically.
- Voice Input & Output: integrate speech recognition for a hands-free experience.
- Persistent Chat History: store and retrieve previous conversations.
- Theming & Customization: provide options for a light/dark mode UI.