The rise of AI has opened new avenues for enhancing customer experiences across multiple channels. Technologies like natural language understanding (NLU) are employed to discern customer intents, facilitating efficient self-service actions. Automatic speech recognition (ASR) translates spoken words into text, enabling seamless voice interactions. With Amazon Lex bots, businesses can use conversational AI to integrate these capabilities into their call centers. Amazon Lex uses ASR and NLU to comprehend customer needs, guiding them through their journey. These AI technologies have significantly reduced agent handle times, increased Net Promoter Scores (NPS), and streamlined self-service tasks, such as appointment scheduling.
The advent of generative AI further expands the potential to enhance omnichannel customer experiences. However, concerns about security, compliance, and AI hallucinations often deter businesses from directly exposing customers to large language models (LLMs) through their omnichannel solutions. This is where the integration of Amazon Lex and Amazon Bedrock becomes invaluable. In this setup, Amazon Lex serves as the initial touchpoint, managing intent classification, slot collection, and fulfillment. Meanwhile, Amazon Bedrock acts as a secondary validation layer, intervening when Amazon Lex encounters uncertainties in understanding customer inputs.
In this post, we demonstrate how to integrate LLMs into your omnichannel experience using Amazon Lex and Amazon Bedrock.
Enhancing customer interactions with LLMs
The following are three scenarios illustrating how LLMs can enhance customer interactions:
- Intent classification – These scenarios occur when a customer clearly articulates their intent, but the lack of utterance training data results in poor performance by traditional models. For example, a customer might call in and say, “My basement is flooded, there is at least a foot of water, and I have no idea what to do.” Traditional NLU models might lack the training data to handle this out-of-band response, because they’re typically trained on sample utterances like “I need to make a claim,” “I have a flood claim,” or “Open claim,” which are mapped to a hypothetical `StartClaim` intent. However, an LLM, when provided with the context of each intent, including a description and sample utterances, can accurately determine that the customer is dealing with a flooded basement and is seeking to start a claim (see the prompt sketch following this list).
- Assisted slot resolution (built-in) and custom slot assistance (custom) – These scenarios occur when a customer gives an out-of-band response during slot collection. Amazon Lex currently has a built-in capability to handle slot resolution for select built-in slot types such as `AMAZON.Date`, `AMAZON.Country`, and `AMAZON.Confirmation`. For custom slot types, you would need to implement custom logic using AWS Lambda for slot resolution and additional validation. This solution handles custom slot resolution by using LLMs to clarify and map these inputs to the correct slots, for example interpreting “Toyota Tundra” as “truck” or “the whole dang top of my house is gone” as “roof.” This allows you to integrate generative AI to validate both your pre-built slots and your custom slots.
- Background noise mitigation – Many customers can’t control the background noise when calling into a call center. This noise might include a loud TV, a sidebar conversation, or non-human sounds that get transcribed as speech (for example, a car passing by that is transcribed as “uhhh”). In such cases, the NLU model, depending on its training data, might misclassify the caller’s intent or require the caller to repeat themselves. However, with an LLM, you can provide the transcript with appropriate context to distinguish the noise from the customer’s actual statement. For example, if a TV show is playing in the background and the customer says “my car” when asked about their policy, the transcription might read “Tune in this evening for my car.” The LLM can ignore the irrelevant portion of the transcription and focus on the relevant part, “my car,” to accurately understand the customer’s intent.
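The following is a minimal sketch of how the intent classification scenario could work, assuming a Lambda function that prompts a Claude model on Amazon Bedrock through the Converse API. The intent metadata here is a hypothetical stand-in for your bot definition, and the prompt is illustrative; the actual prompts used by this solution are in the GitHub repo.

```python
import boto3

# Hypothetical intent metadata mirroring the bot definition; the real
# prompts used by this solution live in the GitHub repo.
INTENTS = {
    "StartClaim": {
        "description": "Customer wants to open a new insurance claim.",
        "samples": ["I need to make a claim", "I have a flood claim", "Open claim"],
    },
    "CheckClaimStatus": {
        "description": "Customer wants the status of an existing claim.",
        "samples": ["Where is my claim", "Check claim status"],
    },
}

bedrock = boto3.client("bedrock-runtime")

def classify_intent(utterance: str) -> str:
    """Ask the LLM to map an out-of-band utterance to one of the bot's intents."""
    intent_context = "\n".join(
        f"- {name}: {meta['description']} (examples: {'; '.join(meta['samples'])})"
        for name, meta in INTENTS.items()
    )
    prompt = (
        "You classify insurance call-center utterances.\n"
        f"Known intents:\n{intent_context}\n\n"
        f'Utterance: "{utterance}"\n'
        "Respond with exactly one intent name, or NONE if no intent applies."
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 20, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"].strip()

# classify_intent("My basement is flooded, there is at least a foot of water")
# should come back as "StartClaim".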
As demonstrated in these scenarios, the LLM is not controlling the conversation. Instead, it operates within the boundaries defined by the intents, intent descriptions, slots, sample slot values, and sample utterances from Amazon Lex. This approach helps guide the customer along the correct path, reducing the risks of hallucination and manipulation of the customer-facing application. Furthermore, this approach reduces cost, because NLU is used when possible, and the LLM acts as a secondary check before re-prompting the customer.
You can further enhance this AI-driven experience by integrating it with your contact center solution, such as Amazon Connect. By combining the capabilities of Amazon Lex, Amazon Bedrock, and Amazon Connect, you can deliver a seamless and intelligent customer experience across your channels.
When customers reach out, whether through voice or chat, this integrated solution provides a powerful, AI-driven interaction:
- Amazon Connect manages the initial customer contact, handling call routing and channel selection.
- Amazon Lex processes the customer’s input, using NLU to identify intent and extract relevant information.
- In cases where Amazon Lex might not fully understand the customer’s intent or when a more nuanced interpretation is needed, advanced language models in Amazon Bedrock can be invoked to provide deeper analysis and understanding.
- The combined insights from Amazon Lex and Amazon Bedrock guide the conversation flow in Amazon Connect, determining whether to provide automated responses, request more information, or route the customer to a human agent.
Solution overview
In this solution, Amazon Lex connects to Amazon Bedrock through Lambda, invoking an LLM of your choice on Amazon Bedrock when assistance with intent classification or slot resolution is needed during the conversation. For instance, if an `ElicitIntent` call defaults to the `FallbackIntent`, the Lambda function runs to have Amazon Bedrock determine whether the user used out-of-band phrases that should be properly mapped. Additionally, we can augment the prompts sent to the model for intent classification and slot resolution with business context to yield more accurate results. Example prompts for intent classification and slot resolution are available in the GitHub repo.
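The following is a simplified sketch of that Lambda hook, using the Amazon Lex V2 event and response format. `classify_intent` is the hypothetical helper from the earlier sketch, not the exact code from the repo.

```python
def lambda_handler(event, context):
    """Amazon Lex V2 code hook: consult Bedrock only when NLU falls back."""
    intent = event["sessionState"]["intent"]
    transcript = event.get("inputTranscript", "")

    if intent["name"] != "FallbackIntent":
        # NLU understood the caller; let Amazon Lex continue its normal flow.
        return {"sessionState": {"dialogAction": {"type": "Delegate"}, "intent": intent}}

    # NLU could not classify the utterance; ask the LLM (see the earlier sketch).
    recovered = classify_intent(transcript)  # hypothetical helper
    if recovered != "NONE":
        # Hand the conversation back to Amazon Lex with the recovered intent.
        return {
            "sessionState": {
                "dialogAction": {"type": "Delegate"},
                "intent": {"name": recovered, "state": "InProgress"},
            }
        }

    # The LLM could not help either; re-prompt the caller.
    return {
        "sessionState": {"dialogAction": {"type": "ElicitIntent"}},
        "messages": [{"contentType": "PlainText", "content": "Sorry, how can I help you today?"}],
    }
```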
The following diagram illustrates the solution architecture:
The workflow consists of the following steps:
- Messages are sent to the Amazon Lex bot through omnichannel entry points such as Amazon Connect (text and voice), messaging apps (text), and third-party contact centers (text and voice). Amazon Lex NLU maps user utterances to specific intents.
- The Lambda function is invoked at certain phases of the conversation where Amazon Lex NLU didn’t identify the user utterance, such as during the fallback intent or during slot fulfillment.
- The Lambda function calls the foundation model (FM) you selected in the AWS CloudFormation template through Amazon Bedrock to identify the intent, resolve the slot, or determine whether the transcribed message contains background noise.
- Amazon Bedrock returns the identified intent or slot, or responds that it is unable to classify the utterance as a related intent or slot.
- The Lambda function sets the Amazon Lex session state to either move forward in the selected intent or re-prompt the user for input (a sketch of this hand-off follows these steps).
- Amazon Lex continues the conversation by either re-prompting the user or continuing to fulfill the intent.
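To illustrate steps 4–6 for slot assistance, the following is a hedged sketch of how the Lambda function might map an out-of-band slot answer to a valid slot value and hand the result back to Amazon Lex. The slot values and `resolve_slot_with_bedrock` helper are hypothetical; the real prompt logic is in the GitHub repo.

```python
VALID_DAMAGE_VALUES = ["roof", "basement", "garage", "siding"]  # hypothetical custom slot values

def handle_slot_elicitation(event):
    """Map a free-form answer ('the whole dang top of my house is gone') to a slot value."""
    transcript = event["inputTranscript"]
    intent = event["sessionState"]["intent"]

    # Hypothetical helper: prompts the model with the transcript and the allowed values.
    value = resolve_slot_with_bedrock(transcript, VALID_DAMAGE_VALUES)
    if value in VALID_DAMAGE_VALUES:
        intent["slots"]["Damage"] = {
            "value": {"originalValue": transcript, "interpretedValue": value}
        }
        # Continue the conversation with the resolved slot.
        return {"sessionState": {"dialogAction": {"type": "Delegate"}, "intent": intent}}

    # The LLM could not resolve the answer either; re-prompt for this slot.
    return {
        "sessionState": {
            "dialogAction": {"type": "ElicitSlot", "slotToElicit": "Damage"},
            "intent": intent,
        },
        "messages": [{"contentType": "PlainText", "content": "What portion of the home was damaged?"}],
    }
```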
Prerequisites
You should have the following prerequisites:
- An AWS account with privileges to create AWS Identity and Access Management (IAM) roles and policies. For more information, see Overview of access management: Permissions and policies.
- Familiarity with AWS services such as Amazon Lex, AWS Lambda, and Amazon Bedrock.
- Access enabled for FMs on Amazon Bedrock. For instructions, see Access Amazon Bedrock foundation models. In this solution, you have a choice to use Anthropic’s Claude 3 Haiku, Anthropic’s Claude 3.5 Haiku, or Anthropic’s Claude 3.5 Sonnet. A quick way to verify access is shown after this list.
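As a quick smoke test, you can make a one-off call to your chosen model before deploying. The model ID below is for Anthropic’s Claude 3 Haiku; if access is not enabled, this call raises an AccessDeniedException.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# One-off smoke test: fails with AccessDeniedException if model access is not enabled.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Reply with OK."}]}],
    inferenceConfig={"maxTokens": 10},
)
print(response["output"]["message"]["content"][0]["text"])
```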
Deploy the omnichannel Amazon Lex bot
To deploy this solution, complete the following steps:
- Choose Launch Stack to launch a CloudFormation stack in `us-east-1`.
- For Stack name, enter a name for your stack. This post uses the name `FNOLBot`.
- In the Parameters section, select the model you want to use.
- Review the IAM resource creation and choose Create stack.
After a few minutes, your stack should be complete. The core resources are as follows:
- Amazon Lex bot – `FNOLBot`
- Lambda function – `ai-assist-lambda-{Stack-Name}`
- IAM roles – `{Stack-Name}-AIAssistLambdaRole` and `{Stack-Name}-BotRuntimeRole`
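If you prefer to deploy with the AWS SDK instead of the Launch Stack button, the equivalent call might look like the following sketch. The template URL and parameter key are placeholders, not values from this solution; take the real ones from the GitHub repo or the Launch Stack link.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# The template URL and parameter key below are placeholders.
cfn.create_stack(
    StackName="FNOLBot",
    TemplateURL="https://example-bucket.s3.amazonaws.com/fnol-bot.yaml",  # placeholder
    Parameters=[{
        "ParameterKey": "BedrockModelId",  # assumed parameter name
        "ParameterValue": "anthropic.claude-3-haiku-20240307-v1:0",
    }],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the stack creates named IAM roles
)
cfn.get_waiter("stack_create_complete").wait(StackName="FNOLBot")
```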
Test the omnichannel bot
To test the bot, navigate to `FNOLBot` on the Amazon Lex console and open a test window. For more details, see Testing a bot using the console.
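Alternatively, you can exercise the bot programmatically with the Lex V2 runtime. The bot ID below is a placeholder you can copy from the `FNOLBot` page on the console; `TSTALIASID` is the default test alias in Lex V2.

```python
import boto3

lex = boto3.client("lexv2-runtime", region_name="us-east-1")

# Placeholder ID: copy the bot ID from the FNOLBot page on the console.
response = lex.recognize_text(
    botId="XXXXXXXXXX",
    botAliasId="TSTALIASID",  # default test alias
    localeId="en_US",
    sessionId="test-session-1",
    text="My neighbor's tree fell on my garage. What steps should I take with my insurance company?",
)
print(response["sessionState"]["intent"]["name"])
```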
Intent classification
Let’s test how, instead of saying “I would like to make a claim,” the customer can ask more complex questions:
- In the test window, enter “My neighbor’s tree fell on my garage. What steps should I take with my insurance company?”
- Choose Inspect.
In the response, the intent has been identified as `GatherFNOLInfo`.
Background noise mitigation with intent classification
Let’s simulate making a request with background noise:
- Refresh the bot by choosing the refresh icon.
- In the test window, enter “Hi yes I’m calling about yeah yeah one minute um um I need to make a claim.”
- Choose Inspect.
In the response, the intent has been identified as `GatherFNOLInfo`.
Slot assistance
Let’s test how, instead of saying explicit slot values, we can use generative AI to help fill the slots:
- Refresh the bot by choosing the refresh icon.
- Enter “I need to make a claim.”
The Amazon Lex bot will then ask “What portion of the home was damaged?”
- Enter “the whole dang top of my house was gone.”
The bot will then ask “Please describe any injuries that occurred during the incident.”
- Enter “I got a pretty bad cut from the shingles.”
- Choose Inspect.
You will notice that the `Damage` slot has been filled with “roof” and the `PersonalInjury` slot has been filled with “laceration.”
Background noise mitigation with slot assistance
We now simulate how Amazon Lex handles ASR transcriptions that include background noise. In the first scenario, the user is having a side conversation with others while talking to the Amazon Lex bot. In the second scenario, a TV in the background is so loud that it gets transcribed by ASR.
- Refresh the bot by choosing the refresh icon.
- Enter “I need to make a claim.”
The Amazon Lex bot will then ask “What portion of the home was damaged?”
- Enter “yeah i really need that soon um the roof was damaged.”
The bot will then ask “Please describe any injuries that occurred during the incident.”
- Enter “tonight on the nightly news reporters are on the scene um i got a pretty bad cut.”
- Choose Inspect.
You will notice that the `Damage` slot has been filled with “roof” and the `PersonalInjury` slot has been filled with “laceration.”
Clean up
To avoid incurring additional charges, delete the CloudFormation stacks you deployed.
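If you deployed with the SDK rather than the console, a matching cleanup call looks like the following.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Delete the stack and wait until its resources are removed.
cfn.delete_stack(StackName="FNOLBot")
cfn.get_waiter("stack_delete_complete").wait(StackName="FNOLBot")
```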
Conclusion
In this post, we showed you how to set up Amazon Lex for an omnichannel chatbot experience and Amazon Bedrock as your secondary validation layer. This allows your customers to potentially provide out-of-band responses at both the intent and slot collection levels without having to be re-prompted, allowing for a seamless customer experience. As we demonstrated, whether the user provides a robust description of their intent and slot or uses phrases that are outside of the Amazon Lex NLU training data, the LLM is able to identify the correct intent and slot.
If you have an existing Amazon Lex bot deployed, you can edit the Lambda code to further enhance the bot. Try out the solution by deploying the CloudFormation stack or the code in the GitHub repo, and let us know if you have any questions in the comments.
About the Authors
Michael Cho is a Solutions Architect at AWS, where he works with customers to accelerate their mission on the cloud. He is passionate about architecting and building innovative solutions that empower customers. Lately, he has been dedicating his time to experimenting with Generative AI for solving complex business problems.
Joe Morotti is a Solutions Architect at Amazon Web Services (AWS), working with Financial Services customers across the US. He has held a wide range of technical roles and enjoys showing customers the art of the possible. His passion areas include conversational AI, contact centers, and generative AI. In his free time, he enjoys spending quality time with his family, exploring new places, and overanalyzing his sports team’s performance.
Vikas Shah is an Enterprise Solutions Architect at Amazon Web Services. He is a technology enthusiast who enjoys helping customers find innovative solutions to complex business challenges. His areas of interest are ML, IoT, robotics and storage. In his spare time, Vikas enjoys building robots, hiking, and traveling.