Artificial Intelligence is shaping the future of web applications. One of the most powerful tools in this space is ChatGPT, developed by OpenAI. In this tutorial, you’ll learn how to integrate ChatGPT with Django to build your own AI-powered web app.
We will use OpenAI’s API and Django to create a web app where users can enter a prompt and get a response from ChatGPT.
What Is the ChatGPT API?
The ChatGPT API is a cloud-based REST API provided by OpenAI. It allows your app or website to send messages to the model and receive smart, human-like responses.
It’s part of OpenAI’s Chat Completions API, designed specifically for multi-turn conversation, like a chatbot.
Prerequisites
- Python 3.10+
- Django 5.x
- OpenAI API key
- Basic knowledge of HTML/CSS and Django views
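You will also need the official openai Python package (and Django itself) installed in your environment:

```bash
pip install django openai
```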
Step 1: Get Your OpenAI API Key
- Go to https://platform.openai.com/signup and create an account.
- Navigate to API keys and create a new key.
- Copy the key securely. We’ll use it in Django.
- Verify your account and add a payment method so the API key can be used.
Step 2: Create a New Django Project
```bash
django-admin startproject djangoGpt
cd djangoGpt
python manage.py startapp chatbot
```
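Register the new app in djangoGpt/settings.py so Django can discover its models (the list below is Django's default apps plus ours):

```python
# djangoGpt/settings.py
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "chatbot",  # our chatbot app
]
```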
Now we will create a model in chatbot/models.py to store the chats:
```python
from django.db import models


class Chat(models.Model):
    user_message = models.TextField()
    bot_response = models.TextField()
    timestamp = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return f"{self.timestamp}: {self.user_message[:30]}"
```
Step 3: Create the View
1. You Send a Request
Your Django backend sends a POST request to this endpoint:
https://api.openai.com/v1/chat/completions
This request includes:
- Your API key (for authentication)
- The model name (e.g., gpt-3.5-turbo or gpt-4)
- A list of messages in chat format (with roles like “user”, “assistant”, and “system”)
Here’s a basic example using Python:
```python
from django.conf import settings
from openai import OpenAI

client = OpenAI(api_key=settings.OPENAI_API_KEY)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": user_input}
    ],
)
```
2. OpenAI Processes It
OpenAI’s servers receive your input and pass it through a transformer-based language model trained on billions of tokens.
The model understands:
- Your question
- The context of the conversation
- Any system instructions
It generates a predicted response, which is context-aware and intelligent.
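For example, to give the model conversational context you can include a system instruction and earlier turns in the messages list (the exact wording below is just illustrative):

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant for a Django chat app."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "And what is its population?"},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
```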
3. You Get a Smart Response
The API returns a JSON response that looks like this:
```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      }
    }
  ]
}
```
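With the Python client, the assistant’s reply can be read straight from that structure:

```python
bot_response = response.choices[0].message.content
print(bot_response)  # "The capital of France is Paris."
```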
What Does chat_view Do?
The chat_view function serves two main purposes:
- Receives user input and sends it to OpenAI (when the request is POST).
- Fetches recent chat history for display (when the request is GET).
Let’s examine it in parts and understand how they work together.
We use OpenAI’s chat.completions endpoint to get a response from a model like gpt-3.5-turbo.
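The full view isn’t reproduced here, so below is a minimal sketch of what chat_view could look like in chatbot/views.py. The model name, ordering, and template name follow the rest of this tutorial; anything else (such as the exact error handling you want) is up to you:

```python
# chatbot/views.py -- a minimal sketch, assuming the Chat model and settings.OPENAI_API_KEY from this tutorial
from django.conf import settings
from django.http import JsonResponse
from django.shortcuts import render
from openai import OpenAI

from .models import Chat

client = OpenAI(api_key=settings.OPENAI_API_KEY)


def chat_view(request):
    if request.method == "POST":
        # 1. Receive the user's message from the fetch/form request
        user_message = request.POST.get("message", "").strip()

        # 2. Send it to OpenAI's Chat Completions endpoint
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": user_message}],
        )
        bot_response = completion.choices[0].message.content

        # 3. Store the exchange so it appears in the chat history
        Chat.objects.create(user_message=user_message, bot_response=bot_response)

        return JsonResponse({"response": bot_response})

    # GET: fetch the chat history for display, oldest first
    chats = Chat.objects.order_by("timestamp")
    return render(request, "chat.html", {"chats": chats})
```

You also need to route the view to a URL. One way (the empty path matches the fetch("") call used in the template below) is:

```python
# djangoGpt/urls.py -- example routing
from django.contrib import admin
from django.urls import path

from chatbot.views import chat_view

urlpatterns = [
    path("admin/", admin.site.urls),
    path("", chat_view, name="chat"),
]
```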
Show Chat History in the Frontend
In your chat.html template, you can fetch and loop through the previous chats passed in by the view:
```html
{% extends 'base.html' %}
{% block content %}
<div class="container mt-5">
    <h3 class="text-center mb-4">Django Chatbot</h3>

    <div class="chat-box mb-4" id="chat-container">
        {% for chat in chats %}
        <div class="user-msg">
            <div class="message"><strong>You:</strong> {{ chat.user_message }}</div>
        </div>
        <div class="bot-msg">
            <div class="message"><strong>Bot:</strong> {{ chat.bot_response }}</div>
        </div>
        {% endfor %}
    </div>

    <form id="chat-form" method="post">
        {% csrf_token %}
        <div class="input-group">
            <input type="text" class="form-control" name="message" id="message-input" placeholder="Type your message..." required>
            <button class="btn btn-primary" type="submit">Send</button>
        </div>
    </form>
</div>

<script>
    const form = document.getElementById("chat-form");
    const messageInput = document.getElementById("message-input");
    const chatContainer = document.getElementById("chat-container");

    form.addEventListener("submit", async function (e) {
        e.preventDefault();

        const userMessage = messageInput.value.trim();
        if (!userMessage) return;

        const csrfToken = document.querySelector('[name=csrfmiddlewaretoken]').value;

        // Show user message
        chatContainer.innerHTML += `
            <div class="user-msg">
                <div class="message"><strong>You:</strong> ${userMessage}</div>
            </div>
        `;
        chatContainer.scrollTop = chatContainer.scrollHeight;
        messageInput.value = "";

        // Send request
        const response = await fetch("", {
            method: "POST",
            headers: {
                "Content-Type": "application/x-www-form-urlencoded",
                "X-CSRFToken": csrfToken,
            },
            body: new URLSearchParams({ message: userMessage }),
        });
        const data = await response.json();

        // Show bot response
        chatContainer.innerHTML += `
            <div class="bot-msg">
                <div class="message"><strong>Bot:</strong> ${data.response}</div>
            </div>
        `;
        chatContainer.scrollTop = chatContainer.scrollHeight;
    });
</script>
{% endblock %}
```
Create a .env file in the project directory and add your API key. Do not expose your API key (or commit it to version control) in production:
```
OPENAI_API_KEY="your api key"
```
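Django does not read .env files on its own. One common approach (an assumption here, not something this project strictly requires) is to load the file with the python-dotenv package and expose the key as a setting, which is what settings.OPENAI_API_KEY in the view refers to:

```python
# djangoGpt/settings.py -- requires: pip install python-dotenv
import os
from dotenv import load_dotenv

load_dotenv()  # reads variables from the .env file in the project directory

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```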
If you have any doubts, feel free to comment below this post or contact me.