Creating an LLM Chatbot

This demo showcases Taipy's ability to let end users run inference with LLMs. Here, we use GPT-3 to create a chatbot and display the conversation in an interactive chat interface.

Understanding the Application

This application lets the user chat with GPT-3 by sending their input to the OpenAI API and displaying the conversation in a chat window. The user can also return to a previous conversation and continue it.
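The request/response loop can be sketched as below. This is a minimal illustration, not the demo's actual code: it assumes the `openai` Python package (v1+) with an `OPENAI_API_KEY` environment variable, and the helper names (`build_messages`, `ask`) are hypothetical.

```python
def build_messages(history, user_input):
    """Convert a list of (user, assistant) turn pairs into the
    message format expected by the OpenAI chat completions API."""
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_input})
    return messages

def ask(history, user_input, model="gpt-3.5-turbo"):
    """Send the running conversation plus the new input to the API,
    append the answer to the history, and return it."""
    from openai import OpenAI  # assumption: `openai` package installed
    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(history, user_input),
    )
    answer = response.choices[0].message.content
    history.append((user_input, answer))
    return answer
```

Because each conversation is just a list of turn pairs, resuming a previous conversation amounts to loading its history list and passing it back into `ask`.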


A tutorial on how to write this application, and similar LLM inference applications, is available here.