RAG ChatBot
This demo showcases Taipy's ability to let end users run inference with LLMs. It uses LangChain to query a Mistral model hosted on Hugging Face and answer questions about PDF files using RAG.
Understanding the Application
This application uses RAG (Retrieval-Augmented Generation) to answer questions about PDF files. Users place their PDF files in a dedicated folder and ask questions about them through the Taipy interface. The project uses LangChain to query a Mistral model hosted on Hugging Face, but it can easily be adapted to other models or APIs.
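To make the retrieve-then-generate flow concrete, here is a minimal, self-contained sketch of the RAG idea in plain Python. The chunking, overlap-based retriever, and prompt format below are simplified stand-ins, not the demo's actual LangChain/Mistral pipeline; in the real application, the retrieved context would be sent to the hosted LLM instead of printed.

```python
def split_into_chunks(text: str, chunk_size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks (a toy splitter)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(question: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Rank chunks by word overlap with the question (a toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

# Hypothetical document text standing in for extracted PDF content.
document = "Taipy is a Python library for building data-driven web applications."
chunks = split_into_chunks(document, chunk_size=5)
top = retrieve("What is Taipy?", chunks)
print(build_prompt("What is Taipy?", top))
```

A production pipeline would replace the word-overlap scoring with embedding similarity over a vector store, but the overall shape (split, retrieve, augment the prompt, then generate) is the same.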
A tutorial on how to write similar LLM inference applications is available here.