I’m working on a custom chatbot built on ChatGPT. The typing indicator shown while the chatbot is processing turned out to be a little complicated, so I decided to write this article for my future self.
First of all
I’m not an ML or AI engineer. I’ll talk about a chatbot powered by ChatGPT, but I’m not familiar with ChatGPT itself; I only implemented the app interface and the backend for the typing indicator. This article mainly covers the AWS infrastructure.
Background
I’m working on an interactive chatbot powered by ChatGPT. ChatGPT takes time to generate an answer from the conversation, and in the worst case it fails in the middle of generating one, so I had to handle both situations. The common practice seemed to be a typing indicator: ChatGPT, Bing, and similar services show an indicator while they are processing and hide it when the process ends.
The backend implementation itself, such as sending messages or showing a typing indicator, is not very complicated, but the infrastructure is not straightforward. There are several things to be careful about, such as sending multiple messages from the chatbot and stopping the typing indicator when the chatbot fails.
What we did
We built our infrastructure based on the reference below and changed parts of it to meet our requirements.
This is our green (success) case flow. Steps 1 to 3 receive a message and store it in a queue; I think this part is obvious and the same as the reference. Let me explain the rest of the flow in a bit more detail (a code sketch follows the list below).
- Steps 5 to 7 send an event to start the typing indicator
- Steps 8 to 10 send the messages generated by the chatbot
- Steps 11 to 13 send an event to stop the typing indicator
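Here is a minimal sketch of what the chatbot Lambda could look like for the green case, assuming the queues are SQS, indicator events and chatbot messages go to two queues (INDICATOR_QUEUE_URL and RESPONSE_QUEUE_URL here), and generate_reply is a placeholder for the actual ChatGPT call. All of these names are illustrative, not our production code.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
INDICATOR_QUEUE_URL = os.environ["INDICATOR_QUEUE_URL"]  # typing indicator events
RESPONSE_QUEUE_URL = os.environ["RESPONSE_QUEUE_URL"]    # generated chatbot messages


def send_indicator(conversation_id, event_type):
    """Queue a typing_start / typing_stop event for the backend."""
    sqs.send_message(
        QueueUrl=INDICATOR_QUEUE_URL,
        MessageBody=json.dumps({"type": event_type,
                                "conversation_id": conversation_id}),
    )


def send_chat_message(conversation_id, text):
    """Queue one generated message for the backend."""
    sqs.send_message(
        QueueUrl=RESPONSE_QUEUE_URL,
        MessageBody=json.dumps({"type": "message",
                                "conversation_id": conversation_id,
                                "text": text}),
    )


def generate_reply(messages):
    """Placeholder for the actual ChatGPT call; it may yield several messages."""
    yield "..."


def handler(event, context):
    for record in event["Records"]:                      # messages queued in steps 1 to 3
        body = json.loads(record["body"])
        conversation_id = body["conversation_id"]

        send_indicator(conversation_id, "typing_start")  # steps 5 to 7
        for text in generate_reply(body["messages"]):    # steps 8 to 10
            send_chat_message(conversation_id, text)
        send_indicator(conversation_id, "typing_stop")   # steps 11 to 13
```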
There is room to discuss whether we should use separate queues for starting and stopping the indicator.
These are our two red (failure) case flows. The first one is the same as the green one up to step 7 and assumes something goes wrong with ChatGPT itself. In that case, the chatbot Lambda sends an event to stop the typing indicator, which is steps 8 to 10 of this flow.
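Extending the sketch above and reusing its assumed helpers, the failure handling could look like this: if the ChatGPT call raises, the Lambda still sends the stop event before re-raising the error.

```python
def process_with_failure_handling(body):
    """One message from steps 1 to 3, with the first red case handled."""
    conversation_id = body["conversation_id"]
    send_indicator(conversation_id, "typing_start")       # steps 5 to 7
    try:
        for text in generate_reply(body["messages"]):     # the ChatGPT call may fail here
            send_chat_message(conversation_id, text)
    except Exception:
        # Steps 8 to 10 of this red flow: hide the indicator on failure.
        send_indicator(conversation_id, "typing_stop")
        raise                                             # let SQS retry or route to the DLQ
    send_indicator(conversation_id, "typing_stop")        # green steps 11 to 13
```

Re-raising keeps the message eligible for SQS retries; if the failure is permanent, the message eventually lands in the DLQ, which is the second red case below.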
The other one is the same as the green one up to step 3 and assumes something goes wrong with the chatbot Lambda itself. In that case, we send a message to the backend from the DLQ of the responder.
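For this case, a small Lambda subscribed to the responder’s DLQ could notify the backend. The sketch below reuses the assumed helpers from the first sketch; the redrive policy that moves failed messages to the DLQ is configured on the queue itself and is not shown here.

```python
def dlq_handler(event, context):
    """Consume messages the chatbot Lambda gave up on and notify the backend."""
    for record in event["Records"]:
        body = json.loads(record["body"])
        conversation_id = body["conversation_id"]

        # Stop the indicator the chatbot Lambda never got to stop.
        send_indicator(conversation_id, "typing_stop")
        # Let the user know the answer could not be generated (wording is illustrative).
        send_chat_message(conversation_id,
                          "Sorry, something went wrong. Please try again.")
```

Because the DLQ only receives a message after the queue’s maxReceiveCount retries are exhausted, the user is notified once the chatbot Lambda has truly given up rather than on the first transient error.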
That’s it!