
AI Chat

Configuring the Repository for AI Chat

The current example implements RAG (Retrieval-Augmented Generation) using LangChain, with Qdrant as the vector store. To configure the repository, add the following environment variables:

PS: you can use any LLM or vector database supported by LangChain.

QDRANT_API_KEY="QDRANT_API_KEY"
QDRANT_HOST="QDRANT_HOST"
QDRANT_COLLECTION_NAME="QDRANT_COLLECTION_NAME"
OPENAI_API_KEY="OPENAI_API_KEY"
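
These variables are consumed by the RAG pipeline. As a rough illustration only, here is a minimal sketch of how they might be wired into LangChain, assuming the langchain-openai, langchain-qdrant, and qdrant-client packages are installed (the actual module and function names in the repository may differ):

import os

from langchain_openai import OpenAIEmbeddings
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

# OpenAIEmbeddings reads OPENAI_API_KEY from the environment.
embeddings = OpenAIEmbeddings()

# Connect to the Qdrant instance described by the variables above.
client = QdrantClient(
    url=os.environ["QDRANT_HOST"],
    api_key=os.environ["QDRANT_API_KEY"],
)

vector_store = QdrantVectorStore(
    client=client,
    collection_name=os.environ["QDRANT_COLLECTION_NAME"],
    embedding=embeddings,
)

# Retrieve the documents most relevant to a (hypothetical) user question.
docs = vector_store.similarity_search("How do I reset my password?", k=4)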

Running the server

Since the chat feature streams responses over Server-Sent Events (SSE), which need an asynchronous connection, Django has to be served with ASGI instead of the usual WSGI. To run the server, use the following command:

gunicorn -c gunicorn.conf.py thedevstarter_backend.asgi
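
The repository ships its own gunicorn.conf.py. If you need to write one yourself, a minimal ASGI-capable config might look like the sketch below, assuming uvicorn is installed to provide the worker class (gunicorn on its own only speaks WSGI):

# gunicorn.conf.py: minimal sketch, tune for your deployment.
bind = "0.0.0.0:8000"
workers = 2

# Gunicorn needs an ASGI-capable worker class to serve an ASGI app;
# uvicorn provides one.
worker_class = "uvicorn.workers.UvicornWorker"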

Configuring Nginx

If you’re running Django behind a reverse proxy like Nginx, you’ll need to disable response buffering and caching so that Server-Sent Events can stream through to the client; without this, the chat feature won’t work.

Your Nginx configuration should look something like this:

server {
    listen 80;
    server_name backend.thedevstarter.com;
    client_max_body_size 5M;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;

        # Required for Server-Sent Events: stream responses to the
        # client instead of buffering or caching them.
        proxy_set_header Connection '';
        proxy_http_version 1.1;
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;

        proxy_pass http://0.0.0.0:8000;
    }
}
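
The important directives are proxy_buffering off and proxy_cache off: if Nginx buffers the response, the streamed chunks are held back until the request finishes and the chat appears frozen. Alternatively, an application can opt a single response out of buffering by sending the X-Accel-Buffering: no header, which Nginx honors. A hedged sketch of a Django (4.2+) SSE view doing this; the actual view in the repository may differ:

import asyncio

from django.http import StreamingHttpResponse


async def event_stream():
    # Hypothetical token source; the real view streams LLM output.
    for token in ["Hello", " ", "world"]:
        yield f"data: {token}\n\n"
        await asyncio.sleep(0)


async def chat_events(request):
    response = StreamingHttpResponse(
        event_stream(),
        content_type="text/event-stream",
    )
    # Ask Nginx not to buffer this particular response.
    response["X-Accel-Buffering"] = "no"
    return response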