
13th of June 2024

Understanding engagement rate: factors and solutions



When deploying a conversational AI, we aim to have as many conversations as possible: higher volume improves ROI directly, and it also deepens our insights and speeds up improvement. The first step in increasing volume is understanding engagement rate, also known as widget open-to-conversion rate: a metric that tells us how many customers choose to have a conversation with our product, as opposed to just opening the chat window. In this article we examine the factors driving it, and whether we can improve it easily.


Introducing Engagement Rate

Engagement rate is the proportion of customers who choose to chat with the AI out of all the customers who open the widget. It is calculated by dividing the number of chats by the number of times customers have opened the chat window.


Engagement rate = Number of chats / Number of widget opens
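
As a minimal illustration, the same calculation in Python (the counts and variable names below are hypothetical, not taken from our dataset):

```python
# Minimal sketch: engagement rate from raw counts.
# Both counts are illustrative placeholders.
num_chats = 4_212         # conversations actually started
num_widget_opens = 9_876  # times the chat window was opened

engagement_rate = num_chats / num_widget_opens
print(f"Engagement rate: {engagement_rate:.2%}")  # -> Engagement rate: 42.65%
```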

Methodology

We wanted to investigate three potential factors influencing engagement rate:

  1. The industry sector (e.g. retail vs public sector)

  2. The webpage the widget is on (e.g. homepage vs rest)

  3. How we disclose that our AI is not a human agent


For this study, we selected 10-20 clients from our data history, grouping each of them into one of the pre-defined categories determined by the factors under investigation. We included all the available historic data, giving a sample of 1-2 million conversations.


To determine whether an attribute is a significant factor in driving engagement rate, we conducted a t-test; a sketch of one possible setup is shown below.
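
For reference, here is a minimal sketch of how such a test could be run in Python with SciPy, assuming we compare one average engagement rate per client in each group; the exact configuration of our own analysis may differ:

```python
# Sketch of the significance test, assuming one average engagement
# rate per client in each group. The values are illustrative, not our data.
from scipy import stats

public_sector_er = [0.61, 0.48, 0.55, 0.57, 0.49]   # hypothetical per-client ERs
private_sector_er = [0.35, 0.44, 0.41, 0.38, 0.42]

# Welch's two-sample t-test; equal_var=False avoids assuming equal variances.
t_stat, p_value = stats.ttest_ind(public_sector_er, private_sector_er, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant at the 5% level" if p_value < 0.05 else "not significant at the 5% level")
```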


Research

Factor 1 - Industry Sector

The most natural split is between the public and private sectors. Although this distinction makes sense in economics, it should hardly affect customer behaviour, given that commercial enterprises and public services often cover the same range of questions.

For instance, customers of a commercial business might enquire about delivery issues or request a refund, but the same questions occur just as readily in public services, for example a bin not being delivered by a council, or a refund being owed on a utilities bill.

Public sector (Average ER):   53.95%
Private sector (Average ER):  39.99%
P-value:                      0.2089 (not significant)

Although our analysis showed a difference between the two sectors' averages, it is not statistically significant.


On the other hand, if we move away from categorising clients from a business standpoint and instead investigate the type of conversations they receive, we find more promising results. Although the two concepts can be hard to separate at times, we can fairly comfortably distinguish clients based on whether their customers mostly contact them to (1) ask questions and seek clarification, or (2) report an issue.

Q&A (Average ER):              37.31%
Problem-solving (Average ER):  61.68%
P-value:                       0.0263 (significant)

Our analysis shows that services aiming to solve an issue, rather than answer questions, are more successful at converting customers. This is somewhat expected: if customers consider their issue non-urgent, they may prefer to browse the website, but for issues such as reporting domestic violence, payment problems, or a loss of electricity or water, they are more likely to recognise the need to involve a third party.


Factor 2 - Webpage

Our suspicion that the webpage the customer is on might influence engagement rate is a rational one. A chat widget on a specialised page carries a different message than one on the contact page. First, some customers might not recognise what the chat icon is outside the context of a contact-us page. Second, they might prefer other contact options; on the contact page, where all the alternatives are laid out side by side, clicks on the chat icon are likely to be more intentional.


When comparing how the average ER changes across webpages, we find that contact pages almost always have a higher value than the rest of the website. This confirms our assumption above, with a highly significant (p = 0.0003) difference between the two groups.

Contact Us (Average ER):   75.81%
Other pages (Average ER):  43.62%
P-value:                   0.0003 (highly significant)
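
For illustration, the per-page comparison boils down to a simple aggregation. Below is a sketch of one way to compute it; the DataFrame layout, column names, and numbers are our own assumptions rather than the actual schema:

```python
# Sketch: average engagement rate by page type.
# The layout, column names, and numbers are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "page_type":    ["contact", "contact", "other", "other", "other"],
    "widget_opens": [120, 95, 300, 410, 250],
    "chats":        [91, 72, 131, 179, 109],
})

events["engagement_rate"] = events["chats"] / events["widget_opens"]
print(events.groupby("page_type")["engagement_rate"].mean())
```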


Factor 3 - Disclosure of AI identity

The impact of AI chatbot identity disclosure on customer behaviour is a topic of ongoing interest, and previous studies have yielded varying results. For instance, Luo et al. (2019) found that revealing the chatbot's identity before an interaction reduced purchase rates and call duration. Ishowo-Oloko et al. (2019) observed that disclosing a chatbot's non-human identity reduced the efficiency of human-machine cooperation. Mozafari et al. (2020), however, suggested that the consequences of disclosure depend on the criticality of the service.


Our research, consistent with van der Goot et al. (2024), found no statistically significant link between engagement rate (ER) and AI identity disclosure. Neither the chatbot's name nor its introduction message had a significant effect on ER (p = 0.4923 and p = 0.9206, respectively).

Category                  Chatbot name               Chatbot introduction message
Mentions AI/VA            55.30%                     48.92%
Does not mention AI/VA    47.39%                     42.36%
P-value                   0.4923 (not significant)   0.9206 (not significant)

Conclusion

Our research sheds light on factors influencing engagement rates in AI-driven customer interactions by examining industry sector, webpage context, and the disclosure of AI identity. While industry sector alone did not significantly impact engagement rates, the type of conversation, whether focused on resolving issues or answering questions, played a crucial role. Widget placement also mattered: chat widgets on contact pages consistently achieved higher engagement rates than those elsewhere on a site. The disclosure of AI identity, however, showed no statistically significant link with engagement rates.

As we continue to explore the dynamics of human-AI interactions, understanding these nuances will be essential for optimising engagement and driving volume.


Author

Robert Flick

 

References

1. Luo, X., Tong, S., Fang, Z. and Qu, Z. (2019). Frontiers: Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases. Marketing Science, 38(6), pp. 937-947. https://doi.org/10.1287/mksc.2019.1192

2. Ishowo-Oloko, F., Bonnefon, J.-F., Soroye, Z., Crandall, J., Rahwan, I. and Rahwan, T. (2019). Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation. Nature Machine Intelligence, 1(11), pp. 517-521. https://doi.org/10.1038/s42256-019-0113-5

3. Mozafari, N., Weiger, W. and Hammerschmidt, M. (2020). The chatbot disclosure dilemma: desirable and undesirable effects of disclosing the non-human identity of chatbots. In: Proceedings of the 41st International Conference on Information Systems.

4. van der Goot, M.J., Koubayová, N. and van Reijmersdal, E.A. (2024). Understanding users' responses to disclosed vs. undisclosed customer service chatbots: a mixed methods study. AI & Society. https://doi.org/10.1007/s00146-023-01818-7

 
