AI Chat
Configuring the Repository for AI chat:
The current example implements RAG using LangChain, with Qdrant as the vector store. To configure the repository, add the following environment variables.
Note: you can use any LLM or vector database supported by LangChain.
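As a rough sketch, the environment file might contain entries along these lines. The variable names here are illustrative, not taken from this repository; use whatever names your settings module actually reads:

```env
# Hypothetical example — adjust names and values to match your settings module.
OPENAI_API_KEY=sk-...            # API key for your chosen LLM provider
QDRANT_URL=http://localhost:6333 # URL of your Qdrant instance
QDRANT_API_KEY=...               # Only needed for Qdrant Cloud / secured deployments
QDRANT_COLLECTION=ai_chat_docs   # Collection used as the RAG vector store
```

If you swap in a different LLM or vector database supported by LangChain, replace these with the credentials that integration expects.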
Running the server
Because the chat streams responses over Server-Sent Events (SSE) on an asynchronous connection, Django must be served through ASGI rather than the usual WSGI. To run the server, use the following command:
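A typical ASGI invocation, assuming Uvicorn is installed and the project's ASGI entrypoint lives at `myproject/asgi.py` (replace `myproject` with your actual project package):

```shell
# Run Django through ASGI with Uvicorn (development settings shown).
# "myproject" is a placeholder for your Django project package.
python -m uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000
```

Other ASGI servers such as Daphne or Hypercorn work equally well; the important part is that the app is served via `asgi.py`, not `wsgi.py`.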
Configuring Nginx
If you're running Django behind a reverse proxy such as Nginx, you'll need to disable proxy buffering and caching so Server-Sent Events are streamed to the client as they are produced; otherwise the chat feature won't work. Your Nginx configuration should look something like this:
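A minimal sketch of the relevant `location` block, assuming the ASGI server listens on `localhost:8000` (the upstream address and paths are placeholders):

```nginx
location / {
    proxy_pass http://localhost:8000;  # Placeholder: your ASGI server's address

    # Required for SSE: stream the response instead of buffering/caching it.
    proxy_buffering off;
    proxy_cache off;

    # Keep the upstream connection open for long-lived event streams.
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_read_timeout 3600s;

    # Standard reverse-proxy headers.
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

The essential directives for SSE are `proxy_buffering off` and `proxy_cache off`; without them Nginx holds the event stream until the response completes.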