Can Customer Support Depend on Context-aware AI to be More Human-like?
No bot is perfect, says Guillaume Laporte, Chief AI Officer of Foundever. Businesses integrating genAI tools to handle high volumes of customer support queries must set realistic expectations and not pretend the bot can do everything.
The first thing business leaders should add in their SOPs for the human-AI agent tag team is to prepare for seamless handover to a human when the AI gets stuck.
While leadership inertia towards enterprise AI adoption has declined significantly, implementation still lags. Customer service has emerged as a meaningful use case for reducing operational costs and driving higher customer satisfaction.
So where is the gap? Unrealistic expectations could be to blame – treating AI as a magic bullet will leave leaders disappointed. Adoption may also be slowed by an inadequate understanding of the training staff need to work alongside AI agents. Leaders should be open to pivoting and flexible enough to make quick moves.
Real-World, Real Challenges
When a leading hospitality player was met with a common industry challenge – high volumes of support calls during demanding seasons – they needed a way to work with a lean team without overburdening resources or taking on more operational costs.
After partnering with Foundever, a global customer experience (CX) technology solutions provider, the hospitality player achieved a Net Promoter Score (NPS) of 61 – above industry standards. Further, it saw a commendable 12% increase in solves per day, complemented by a 23% reduction in work handle time in the French and Portuguese markets.
In this particular case, the technology partner leveraged continuous improvement methodologies and variation-based management: critical areas impacting the customer experience were identified and fine-tuned, enhancing customer loyalty.
Foundever provided recruitment and training expertise to ensure an optimal number of trained agents equipped to handle seasonal fluctuations. Its AI trainer solution — a real-life conversational simulation specifically designed to improve proficiency — helped improve solves per day (SPD) and work handle time (WHT).
Generative artificial intelligence (AI) promises a host of transformative benefits including delivering responses that feel natural and intuitive, ensuring smoother and more engaging interactions. “Brands can enhance customer interactions by training Generative AI models on their historical conversation data, using techniques like fine-tuning or RAG (retrieval augmented generation). This allows the AI model to understand common queries and generate more accurate and contextually relevant responses. Continuous learning from ongoing interactions ensures the AI model adapts to evolving customer needs,” said Guillaume Laporte, Chief AI Officer of Foundever.
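The RAG approach Laporte describes can be sketched in miniature. The snippet below is an illustrative toy, not Foundever's implementation: the tiny "historical conversation" knowledge base is invented, and `difflib` string similarity stands in for the vector-embedding retrieval a real pipeline would use. The assembled prompt would then be sent to an LLM.

```python
import difflib

# Toy stand-in for a brand's archive of historical conversations.
# A real RAG pipeline would index thousands of transcripts as embeddings.
KNOWLEDGE_BASE = [
    ("How do I reset my password?",
     "Use the 'Forgot password' link on the login page."),
    ("What is your refund policy?",
     "Refunds are available within 30 days of purchase."),
    ("How do I change my booking dates?",
     "Open your reservation and choose 'Modify dates'."),
]

def retrieve(query: str, k: int = 1):
    """Return the k most similar historical Q&A pairs.

    difflib's ratio is a crude stand-in for the cosine similarity
    over embeddings that a production RAG system would compute.
    """
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda qa: difflib.SequenceMatcher(
            None, query.lower(), qa[0].lower()).ratio(),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble an LLM prompt grounded in the retrieved context."""
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in retrieve(query))
    return (
        "Answer using only the context below.\n\n"
        f"{context}\n\nCustomer: {query}\nAgent:"
    )

prompt = build_prompt("I forgot my password, what do I do?")
```

Because the retrieved context is injected into the prompt, the model's answer stays anchored to approved knowledge rather than free-form generation — the "contextually relevant responses" Laporte refers to.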
Foundever has developed conversational AI solutions, including human-like voicebots and chatbots, powered by Generative AI and advanced conversational intelligence. By leveraging advanced Large Language Models (LLMs), context-aware AI, and cutting-edge techniques like reinforcement learning, its tools are able to understand customer intent and deliver responses that feel natural.
Laporte suggests business leaders put certain guardrails in place to ensure the system stays effective, ethical, and compliant.
- First, you’ve got accuracy and relevance – the bot has to give the right information. This means keeping the AI well-trained on updated and approved knowledge and limiting its scope to what it’s supposed to handle.
- Second, there’s tone and personality. The bot should reflect the brand’s voice consistently, whether that’s formal and professional or friendly and approachable. Clear guidelines on language, phrasing, and tone help keep it on-brand.
- Then, you’ve got compliance and ethics. The chatbot needs to follow regulations such as GDPR for privacy, and avoid generating content that could be perceived as biased, offensive, or inappropriate. Regular testing and bias audits are important here.
- Finally, there are fallbacks and escalations. No bot is perfect, so you need to put in place a seamless handover to a human when the AI gets stuck. It’s about setting realistic expectations and not pretending the bot can do everything.
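The fallback-and-escalation guardrail above can be expressed as a simple routing rule. This is a minimal sketch under assumed names — the confidence threshold and topic list are hypothetical tuning values, not Foundever's actual configuration:

```python
CONFIDENCE_THRESHOLD = 0.7                           # hypothetical tuning value
ALLOWED_TOPICS = {"booking", "billing", "account"}   # the bot's approved scope

def route(reply: str, confidence: float, topic: str) -> str:
    """Decide whether the bot answers or hands over to a human.

    Escalate when the model is unsure of its answer, or when the
    query falls outside the scope the bot was approved to handle.
    """
    if topic not in ALLOWED_TOPICS or confidence < CONFIDENCE_THRESHOLD:
        return "HANDOVER: connecting you with a human agent."
    return reply

print(route("Your refund is on its way.", 0.92, "billing"))   # bot answers
print(route("I think the answer is...", 0.41, "billing"))     # low confidence -> human
```

The design choice here is that escalation is the default whenever either check fails — erring towards a human handover is what keeps expectations realistic.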
Human customer service agents may not be as positive about the shift to leveraging AI agent support as the leadership. One of the softer but critical aspects of successful implementation will be training staff to get comfortable with using AI to their advantage.
“Customer service leaders should focus on training their teams to work alongside AI, encouraging a culture of adaptability, and encouraging collaboration between AI and human agents. Of course, AI powered Agents will bring automation to the marketplace, but it will also enable new services that will require the work of human experts,” said Laporte.
Other Big Players and Plans
Salesforce enters the AI agent world with a new service that builds on the company’s Einstein platform for AI. The new Einstein Service Agent is a generative AI-powered self-service experience designed for end customers, providing a conversational AI interface to answer questions and resolve any number of issues. It uses a similar foundation to Einstein Copilot, which is employee-facing and intended for internal use within an organisation.
Graphwise announced the immediate availability of GraphDB 10.8. This release includes the next-generation Talk-to-Your-Graph capability that integrates large language models (LLMs) with vector-based retrieval of relevant enterprise information and precise querying of knowledge graphs.
Amazon Bedrock simplifies the process of developing and scaling generative AI applications powered by large language models (LLMs) and other foundation models (FMs). It offers access to a diverse range of FMs from leading providers such as Anthropic, AI21 Labs, Cohere, and Stability AI, as well as Amazon’s proprietary Amazon Titan models. Additionally, Amazon Bedrock Knowledge Bases empowers businesses to develop applications that harness the power of Retrieval Augmented Generation (RAG), an approach where retrieving relevant information from data sources enhances the model’s ability to generate contextually appropriate and informed responses.
Amazon Lex provides advanced conversational interfaces using voice and text channels. It features natural language understanding capabilities that identify user intent more accurately and fulfil it faster. The generative AI capability of QnAIntent in Amazon Lex lets brands securely connect FMs to company data for RAG. QnAIntent provides an interface to use enterprise data and FMs on Amazon Bedrock to generate relevant, accurate, and contextual responses. One can use QnAIntent with new or existing Amazon Lex bots to automate FAQs through text and voice channels, such as Amazon Connect.
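For a sense of how a text channel talks to a Lex bot, the sketch below builds the parameters for Lex V2's RecognizeText call. The bot and alias IDs are placeholders — real values come from your bot's configuration — and the actual API call (shown in a comment) requires AWS credentials, so it is not executed here:

```python
def build_recognize_text_request(user_utterance: str) -> dict:
    """Build the keyword arguments for a Lex V2 RecognizeText call.

    The IDs below are placeholders for illustration only; substitute
    the values from your own Amazon Lex bot configuration.
    """
    return {
        "botId": "EXAMPLEBOTID",        # placeholder
        "botAliasId": "TSTALIASID",     # placeholder
        "localeId": "en_US",
        "sessionId": "customer-session-001",
        "text": user_utterance,
    }

# With AWS credentials configured, the request would be sent as:
#   import boto3
#   client = boto3.client("lexv2-runtime")
#   response = client.recognize_text(
#       **build_recognize_text_request("What is your refund policy?"))
params = build_recognize_text_request("What is your refund policy?")
```

When the utterance matches a QnAIntent-backed intent, the response carries the RAG-generated answer drawn from the knowledge source connected through Amazon Bedrock.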