Today, customers can raise support tickets through multiple channels: web, mobile, chatbots, email, or phone. When a customer raises a support ticket, it is processed and assigned to a category based on the information provided in the ticket, and then routed to the corresponding support group for resolution. A significant share of support tickets is estimated to be routed to the wrong group because of incorrect categorization. Incorrectly assigned tickets delay overall resolution, often resulting in severe customer dissatisfaction, and can have wider financial, operational, and other business repercussions. Accurate ticket classification is therefore an essential task for every organization. Although you can classify tickets manually, doing so is error-prone, not cost-effective, and doesn't scale.

AWS Managed Services (AMS) uses Amazon Comprehend custom classification to categorize inbound requests by resource and operation type based on how the customer described their issue. Amazon Comprehend is a natural language processing (NLP) service that uses machine learning (ML) to uncover valuable insights and connections in text. AMS uses custom classifiers to label customer requests with the appropriate issue type, resource type, and resource action, and routes customer tickets to the right subject matter experts (SMEs). AMS also uses the classification results to find opportunities for new internal automation tools that AMS engineers can use to fulfill customer requirements, reducing manual effort and the chance of manual errors. The classification data is stored in an Amazon Redshift cluster and used to analyze customer requests and identify new automation tool candidates. This automation results in increased operational efficiency and reduced cost.

In this post, we show how managed service providers can use Amazon Comprehend to classify and route the tickets, provide suggestions based on the classification, and utilize the classification data.

Solution overview

The following diagram shows the solution architecture.


The workflow is as follows:

  1. A customer submits the ticket.
  2. The ticket system receives the ticket from the customer, and invokes the ticket classifier AWS Lambda function with the ticket details. Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Lambda is chosen for the solution to reduce cost and maintenance effort.
  3. The ticket classifier Lambda function classifies the ticket with Amazon Comprehend using the ticket title and description. With Amazon Comprehend, you can train the NLP model and provide both batch and real-time classifiers without provisioning or maintaining infrastructure. (A simplified sketch of the classifier function appears after this list.)
  4. The ticket classifier Lambda function pushes the ticket classification data to the Amazon Redshift cluster via Amazon Kinesis Data Firehose. Kinesis Data Firehose is an extract, transform, and load (ETL) service that captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services. Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and ML to deliver the best price performance at any scale. Kinesis Data Firehose delivers data to an Amazon Simple Storage Service (Amazon S3) bucket first and then issues an Amazon Redshift COPY command to load the data into an Amazon Redshift cluster.
  5. The ticket classifier Lambda function invokes the ticket handler Lambda function.
  6. The ticket handler Lambda function runs code to help handle the ticket. In this example, it returns the recommended materials for handling the ticket based on the classification.
  7. Ticket analysis can be done with Amazon QuickSight. From the analysis, you can identify the most requested ticket types, discover ticket trends, and find opportunities to automate the top ticket types. QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights to the people you work with, wherever they are.
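To make steps 3 through 5 concrete, the following is a minimal sketch of such a classifier function, assuming Python, boto3, and environment variables for the endpoint ARNs, delivery stream, and handler function name. The actual implementation is in the lambda_code.zip package you upload later in this post.

import json
import os

import boto3

comprehend = boto3.client("comprehend")
firehose = boto3.client("firehose")
lambda_client = boto3.client("lambda")

# Assumed configuration for this sketch; the shipped function defines its own.
OPERATION_ENDPOINT_ARN = os.environ["OPERATION_ENDPOINT_ARN"]
RESOURCE_ENDPOINT_ARN = os.environ["RESOURCE_ENDPOINT_ARN"]
DELIVERY_STREAM = os.environ["DELIVERY_STREAM_NAME"]
HANDLER_FUNCTION = os.environ["TICKET_HANDLER_FUNCTION_NAME"]

def top_class(text, endpoint_arn):
    """Return the highest-confidence class from a custom classifier endpoint."""
    response = comprehend.classify_document(Text=text, EndpointArn=endpoint_arn)
    return max(response["Classes"], key=lambda c: c["Score"])["Name"]

def handler(event, context):
    text = f"{event['TicketTitle']} {event['TicketDescription']}"
    record = {
        **event,
        "Operation": top_class(text, OPERATION_ENDPOINT_ARN),
        "Resource": top_class(text, RESOURCE_ENDPOINT_ARN),
    }
    # Step 4: stream the classification to Amazon Redshift via Kinesis Data Firehose.
    firehose.put_record(
        DeliveryStreamName=DELIVERY_STREAM,
        Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
    )
    # Step 5: invoke the ticket handler asynchronously.
    lambda_client.invoke(
        FunctionName=HANDLER_FUNCTION,
        InvocationType="Event",
        Payload=json.dumps(record),
    )
    return record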

In the following sections, we walk you through the steps to implement the solution, integrate the ticket classification infrastructure with your ticketing system, and use the classification data with QuickSight.

Implement the solution

In this section, we walk through the steps to provision your solution resources and create the necessary infrastructure.

Configure Amazon Comprehend

In this step, we train two new Amazon Comprehend custom classification models: Operation and Resource, and create a real-time analysis endpoint for each model.

Upload the training data

To upload the training data, complete the following steps:

  1. Download ticket_training_data.zip and unzip the file.
    This folder contains the following two files:
    • training_data_operations.csv – This file is a two-column CSV file that we use to train the Operation classification model. The first column contains class, and the second column contains document.
    • training_data_resources.csv – This file is a two-column CSV file that we use to train the Resource classification model. Like the training_data_operations.csv file, the first column contains class, and the second column contains document.
  2. On the Amazon S3 console, create a new bucket for Amazon Comprehend. Because S3 bucket names are globally unique, you need to create a unique name for the bucket. For this post, we call it comprehend-ticket-training-data. Enable server-side encryption and block public access when creating the bucket.
  3. Upload training_data_operations.csv and training_data_resources.csv to the new S3 bucket.
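If you prefer to script the upload, here is a minimal boto3 sketch; the bucket name is the example from step 2 and must be replaced with your own unique name.

import boto3

s3 = boto3.client("s3")

# Example bucket name from step 2; S3 bucket names are globally unique.
BUCKET = "comprehend-ticket-training-data"

for filename in ("training_data_operations.csv", "training_data_resources.csv"):
    # Request SSE-S3 server-side encryption to match the bucket settings.
    s3.upload_file(
        filename, BUCKET, filename, ExtraArgs={"ServerSideEncryption": "AES256"}
    )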

Create two new models

To create your models, complete the following steps:

  1. On the Amazon Comprehend console, choose Custom classification in the navigation pane.
  2. Choose Create new model.
  3. Provide the following information:
    1. For Model name, enter ticket-classification-operation.
    2. For Language, choose English.
    3. For Classifier mode, select Using Single-label mode.
    4. For Data format, select CSV file.
    5. For Training dataset, enter the S3 path for training_data_operations.csv.
    6. For Test data source, select Autosplit.
      Autosplit automatically selects 10% of your provided training data to use as testing data.
    7. For IAM Role, select Create an IAM role.
    8. For Permissions to access, select Training, test, and output data (if specified) in your S3 buckets.
    9. For Name suffix, enter ticket-classification.
  4. Choose Create.
  5. Choose Create new model again to create your resource classification model.
  6. Provide the following information:
    1. For Model name, enter ticket-classification-resource.
    2. For Language, choose English.
    3. For Classifier mode, select Using Single-label mode.
    4. For Data format, select CSV file.
    5. For Training dataset, enter the S3 path for training_data_resources.csv.
    6. For Test data source, select Autosplit.
    7. For IAM Role, select Use an existing IAM role.
    8. For Role name, choose AmazonComprehendServiceRole-ticket-classification.
  7. Choose Create.

Amazon Comprehend is now processing the CSV files and using them to train custom classifiers. We then use these to help classify customer tickets. The larger and more accurate our training data is, the more accurate the classifier will be.

Wait for the version status to show as Trained. Training may take up to 1 hour to complete, depending on the size of the training data.
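If you'd rather poll for completion programmatically, the following is a minimal boto3 sketch; it matches classifiers by the names used above and assumes your Region has few enough classifiers to fit in a single list page.

import time

import boto3

comprehend = boto3.client("comprehend")

def wait_until_trained(classifier_name):
    """Poll every minute until the named custom classifier finishes training."""
    while True:
        classifiers = comprehend.list_document_classifiers()[
            "DocumentClassifierPropertiesList"
        ]
        matches = [
            c for c in classifiers if classifier_name in c["DocumentClassifierArn"]
        ]
        if matches and all(c["Status"] == "TRAINED" for c in matches):
            return
        if any(c["Status"] == "IN_ERROR" for c in matches):
            raise RuntimeError(f"Training failed for {classifier_name}")
        time.sleep(60)

for name in ("ticket-classification-operation", "ticket-classification-resource"):
    wait_until_trained(name)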


Create Amazon Comprehend endpoints

Amazon Comprehend endpoints are billed in 1-second increments, with a minimum of 60 seconds. Charges continue to accrue from the time you start the endpoint until it's deleted, even if no documents are analyzed. For more information, see Amazon Comprehend Pricing. To create your endpoints, complete the following steps:

  1. On the Amazon Comprehend console, choose Endpoints in the navigation pane.
  2. Choose Create endpoint to create your operation classification endpoint.
  3. Provide the following information:
    1. For Endpoint name, enter ticket-classification-operation.
    2. For Custom model type, select Custom classification.
    3. For Classifier model, choose ticket-classification-operation.
    4. For Version, choose No Version Name.
    5. For Number of inference units (IUs), enter 1.
  4. Choose Create endpoint.
  5. Choose Create endpoint again to create the resource classification endpoint.
  6. Provide the following information:
    1. For Endpoint name, enter ticket-classification-resource.
    2. For Custom model type, select Custom classification.
    3. For Classifier model, choose ticket-classification-resource.
    4. For Version, choose No Version Name.
    5. For Number of inference units (IUs), enter 1.
  7. Choose Create endpoint.

After you create both endpoints, wait until the status for both shows as Active.
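You can also check the endpoint status programmatically; a short boto3 sketch:

import boto3

comprehend = boto3.client("comprehend")

# Print the status of every Comprehend endpoint in the Region;
# both classification endpoints should eventually report ACTIVE.
for endpoint in comprehend.list_endpoints()["EndpointPropertiesList"]:
    print(endpoint["EndpointArn"], endpoint["Status"])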


Test the Amazon Comprehend endpoints with real-time analysis

To test your endpoints, complete the following steps:

  1. On the Amazon Comprehend console, choose Real-time analysis in the navigation pane.
  2. For Analysis type, select Custom.
  3. For Endpoint, choose ticket-classification-operation.
  4. For Input text, enter the following:
    Hi support,
    Please update the timezone to UTC on i-abcd1234.
    Thanks.

  5. Choose Analyze.
    The results show that the Update class has the highest confidence score.
  6. Change Endpoint to ticket-classification-resource and choose Analyze again.

The results show that the EC2 class has the highest confidence score.
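You can reproduce this real-time analysis with the API as well; in the following sketch, the endpoint ARNs are placeholders for the ones shown on your Endpoints page.

import boto3

comprehend = boto3.client("comprehend")

TEXT = "Hi support, Please update the timezone to UTC on i-abcd1234. Thanks."

# Placeholder ARNs; copy the real ones from the Amazon Comprehend console.
ENDPOINT_ARNS = [
    "arn:aws:comprehend:us-east-1:111122223333:document-classifier-endpoint/ticket-classification-operation",
    "arn:aws:comprehend:us-east-1:111122223333:document-classifier-endpoint/ticket-classification-resource",
]

for endpoint_arn in ENDPOINT_ARNS:
    result = comprehend.classify_document(Text=TEXT, EndpointArn=endpoint_arn)
    # The highest-scoring class is the classification for the ticket.
    best = max(result["Classes"], key=lambda c: c["Score"])
    print(endpoint_arn.rsplit("/", 1)[-1], "->", best["Name"], round(best["Score"], 3))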


Create a secret for the Amazon Redshift cluster password

In this step, we create an AWS Secrets Manager secret for your Amazon Redshift cluster password. Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. In this post, we store the Amazon Redshift cluster password in a Secrets Manager secret.

  1. On the Secrets Manager console, choose Secrets in the navigation pane.
  2. Choose Store a new secret.
  3. For Secret type, select Other type of secret.
  4. Under Key/value pairs, set your key as password and value as your Amazon Redshift cluster password.
    The password must be between 8 and 64 characters in length and contain at least one uppercase letter, one lowercase letter, and one number. It can contain any printable ASCII character except ‘ (single quote), “ (double quote), \, /, @, or space.
  5. Choose Next.
  6. For Secret name, enter ClassificationRedshiftClusterPassword.
  7. Choose Next.
  8. In the Secret rotation section, choose Next.
  9. Review your secret configuration and choose Store.
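At runtime, code that needs the cluster password can read it back from Secrets Manager; a minimal sketch of that lookup:

import json

import boto3

secrets = boto3.client("secretsmanager")

response = secrets.get_secret_value(SecretId="ClassificationRedshiftClusterPassword")
# The secret was stored as a key/value pair, so SecretString is a JSON object.
password = json.loads(response["SecretString"])["password"]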

Provision your infrastructure with AWS CloudFormation

In this step, we provision the infrastructure for the solution using an AWS CloudFormation stack.

Upload the Lambda function code

Before launching the CloudFormation stack, upload your Lambda function code:

  1. Download lambda_code.zip.
  2. On the Amazon S3 console, open the bucket that you created.
  3. Upload lambda_code.zip.

Create your CloudFormation stack

To provision resources with AWS CloudFormation, complete the following steps:

  1. Download cloudformation_template.json.
  2. On the AWS CloudFormation console, choose Create stack.
  3. Select With new resources (standard).
  4. For Template source, choose Upload a template file.
  5. Choose the downloaded CloudFormation template.
  6. Choose Next.
  7. For Stack name, enter Ticket-Classification-Infrastructure.
  8. In the Parameters section, enter the following values:
    1. For ClassificationRedshiftClusterNodeType, enter the Amazon Redshift cluster node type. dc2.large is the default.
    2. For ClassificationRedshiftClusterPasswordSecretName, enter the Secrets Manager secret name that stores the Amazon Redshift cluster password.
    3. For ClassificationRedshiftClusterSubnetId, enter the subnet ID where the Amazon Redshift cluster is hosted. The subnet must be within the VPC that you specify in the ClassificationRedshiftClusterVpcId parameter.
    4. For ClassificationRedshiftClusterUsername, enter the Amazon Redshift cluster user name.
    5. For ClassificationRedshiftClusterVpcId, enter the VPC ID where the Amazon Redshift cluster is hosted.
    6. For LambdaCodeS3Bucket, enter the S3 bucket name where you uploaded the Lambda code.
    7. For LambdaCodeS3Key, enter the Amazon S3 key of the deployment package.
    8. For QuickSightRegion, enter the Region for QuickSight. The Region for QuickSight should be consistent with the Region you’re using for Amazon Comprehend and the S3 bucket.
  9. Choose Next.
  10. In the Configure stack options section, choose Next.
  11. In the Review section, select I acknowledge that AWS CloudFormation might create IAM resources.
  12. Choose Create stack.
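If you prefer to create the stack programmatically, the following boto3 sketch is equivalent; all parameter values are illustrative and must be replaced with your own.

import boto3

cloudformation = boto3.client("cloudformation")

with open("cloudformation_template.json") as f:
    template_body = f.read()

# Example parameter values only; substitute your own subnet, VPC, and bucket.
cloudformation.create_stack(
    StackName="Ticket-Classification-Infrastructure",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM resources
    Parameters=[
        {"ParameterKey": "ClassificationRedshiftClusterNodeType", "ParameterValue": "dc2.large"},
        {"ParameterKey": "ClassificationRedshiftClusterPasswordSecretName", "ParameterValue": "ClassificationRedshiftClusterPassword"},
        {"ParameterKey": "ClassificationRedshiftClusterSubnetId", "ParameterValue": "subnet-0123456789abcdef0"},
        {"ParameterKey": "ClassificationRedshiftClusterUsername", "ParameterValue": "admin"},
        {"ParameterKey": "ClassificationRedshiftClusterVpcId", "ParameterValue": "vpc-0123456789abcdef0"},
        {"ParameterKey": "LambdaCodeS3Bucket", "ParameterValue": "comprehend-ticket-training-data"},
        {"ParameterKey": "LambdaCodeS3Key", "ParameterValue": "lambda_code.zip"},
        {"ParameterKey": "QuickSightRegion", "ParameterValue": "us-east-1"},
    ],
)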

Configure your Amazon Redshift cluster

In this step, you enable audit logging and add the new table to the Amazon Redshift cluster created through the CloudFormation template.

Audit logging is not turned on by default in Amazon Redshift. When you turn on logging on your cluster, Amazon Redshift exports logs to Amazon CloudWatch; the logs capture data from the time audit logging is enabled onward. Each logging update is a continuation of the previous logs.

Enable audit logging

You can skip this step if you don’t need audit logging for your Amazon Redshift cluster.

  1. On the Amazon Redshift console, choose Clusters in the navigation pane.
  2. Choose the Amazon Redshift cluster starting with classificationredshiftcluster-.
  3. On the Properties tab, choose Edit.
  4. Choose Edit audit logging.
  5. For Configure audit logging, choose Turn on.
  6. For Log export type, choose CloudWatch.
  7. Select all log types.
  8. Choose Save changes.

Create new table

To create a new table, complete the following steps:

  1. On the Amazon Redshift console, choose Query data.
  2. Choose Query in query editor v2.
  3. On the Database page, choose your cluster.
  4. For Database, enter ticketclassification.
  5. Enter the user name and password you configured in the CloudFormation stack parameters.
  6. Choose Create connection.
  7. When the connection is made, choose the plus sign and open a new query window.
  8. Enter the following query:
    CREATE TABLE tickets(
        id VARCHAR(50) NOT NULL,
        title VARCHAR(1000) NOT NULL,
        description VARCHAR(5000) NOT NULL,
        creation_time TIMESTAMPTZ NOT NULL,
        operation VARCHAR(5000) NULL,
        resource VARCHAR(5000) NULL
    );

  9. Choose Run.

Test the classification infrastructure

Now the infrastructure for ticket classification is ready. Before integrating with your ticket system, let’s test the classification infrastructure.

Run the test

To run the test, complete the following steps:

  1. On the Lambda console, choose Functions in the navigation pane.
  2. Choose the function that starts with Ticket-Classification-Inf-TicketClassifier.
  3. On the Test tab, choose Test event.
  4. For Name, enter TestTicket.
  5. Enter the following test data:
    { "TicketId": "00000001", "TicketTitle": "Update the timezone", "TicketDescription": "Hi support, Please update the timezone to UTC on i-abcd1234. Thanks.", "TicketCreationTime": "2020-12-04 03:09:00-08"
    }

  6. Choose Test.

The ticket is classified, and the classification data is stored in the Amazon Redshift cluster. After the classification, the ticket handler Lambda function runs, which handles the ticket based on the classification, including recommending materials to support engineers.
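You can run the same test without the console; the following sketch invokes the classifier synchronously with boto3. The function name suffix is a placeholder for the generated one in your account.

import json

import boto3

lambda_client = boto3.client("lambda")

event = {
    "TicketId": "00000001",
    "TicketTitle": "Update the timezone",
    "TicketDescription": "Hi support, Please update the timezone to UTC on i-abcd1234. Thanks.",
    "TicketCreationTime": "2020-12-04 03:09:00-08",
}

# Placeholder function name; copy the full name from the Lambda console.
response = lambda_client.invoke(
    FunctionName="Ticket-Classification-Inf-TicketClassifier-EXAMPLE",
    InvocationType="RequestResponse",  # wait for the classification result
    Payload=json.dumps(event),
)
print(json.loads(response["Payload"].read()))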

Check the ticket classifier test log

To check the test log, complete the following steps:

  1. In the result section of the test, choose Logs, or choose View logs in CloudWatch on the Monitor tab.
  2. Choose the log stream.

The logs show the output from Amazon Comprehend and the final top classification of the ticket. In this example, the test ticket is classified as Resource=EC2, Operation=Update.


Check the ticket classification output in the Amazon Redshift cluster

To validate the output in your cluster, complete the following steps:

  1. On the Amazon Redshift query editor v2 console, choose the plus sign to open a new query window.
  2. Enter the following query:
    SELECT * FROM "tickets";

  3. Choose Run.

The query returns the ticket classification. If it's not available yet, wait a few minutes and retry (Kinesis Data Firehose needs some time to push the data). We can now use this data in QuickSight.


Check the ticket handler test log

After the ticket classifier pushes the classification data to the Amazon Redshift cluster, it invokes the ticket handler Lambda function, which handles the ticket based on the classification. In this example, the ticket handler returns recommended materials, including a runbook, AWS documentation, and SSM documents, so support engineers can refer to them when handling the ticket. You can integrate the output with your ticket handling system, and you can customize the handling processes in the Lambda function code. In this step, we check what recommendations were made.

  1. On the Lambda console, choose Functions in the navigation pane.
  2. Choose the Lambda function that starts with Ticket-Classification-Inf-TicketHandlerLambdaFunct.
  3. On the Monitor tab, choose View logs in CloudWatch.
  4. Choose the log stream.

The logs show the output from Amazon Comprehend and the list of recommended AWS documents and SSM documents for the ticket classified as Update EC2. You can add your own runbooks, documents, SSM documents, or any other materials in the Lambda function code.


Integrate the ticket classification infrastructure with your ticketing system

In this section, we walk through the steps to integrate your ticket classification infrastructure with your ticketing system and customize your configuration.

Most ticketing systems have a trigger feature, which allows you to run code when the ticket is submitted. Set up your ticketing system to invoke the ticket classifier Lambda function with the following formatted input:

{ "TicketId": "{{ Ticket ID }}", "TicketTitle": "{{ Ticket Title }}", "TicketDescription": "{{ Ticket Description }}", "TicketCreationTime": "{{ Ticket Creation Time. e.g. 2020-12-04 03:09:00-08 }}"
}
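For example, a Python-based trigger hook might map the ticket fields and invoke the function asynchronously, as in this sketch; the function name and ticket field names are placeholders.

import json

import boto3

lambda_client = boto3.client("lambda")

def on_ticket_created(ticket):
    """Hypothetical ticketing-system hook: classify without blocking ticket creation."""
    lambda_client.invoke(
        FunctionName="Ticket-Classification-Inf-TicketClassifier-EXAMPLE",  # placeholder
        InvocationType="Event",  # asynchronous fire-and-forget
        Payload=json.dumps({
            "TicketId": ticket["id"],
            "TicketTitle": ticket["title"],
            "TicketDescription": ticket["description"],
            "TicketCreationTime": ticket["created_at"],  # e.g. 2020-12-04 03:09:00-08
        }),
    )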

If you want to customize the input, modify the ticket classifier Lambda function code. You need to add or remove parameters (lines 90–105) and customize the input for Amazon Comprehend (lines 15–17).

You can customize the ticket handler Lambda function to run automation or edit the recommendations. For example, you can add an internal comment with the recommendations to the ticket. To customize, open the ticket handler Lambda code and edit lines 68–70 and 75–81.
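As an illustration of the kind of change you might make there, the following sketch keys recommendations by the classification pair; the mapping and materials are invented for this example and are not the shipped code.

# Illustrative recommendation table keyed by (operation, resource);
# the shipped handler defines its own materials around the lines noted above.
RECOMMENDATIONS = {
    ("Update", "EC2"): [
        "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html",
        "SSM document: AWS-RunShellScript",
        "Internal runbook: ec2-timezone-change",  # hypothetical runbook name
    ],
}

def recommend(operation, resource):
    """Return the recommended materials for a classified ticket, if any."""
    return RECOMMENDATIONS.get((operation, resource), [])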

Use classification data with QuickSight

After you integrate the ticket classification infrastructure with your ticket system, the ticket classification data is stored in the Amazon Redshift cluster. You can use QuickSight to check this data and generate reports. In this example, we generate a QuickSight analysis with the classification data.

Sign up for QuickSight

If you don’t already have QuickSight, sign up with the following steps:

  1. On the QuickSight console, choose Sign up for QuickSight.
  2. Choose Standard.
  3. Under QuickSight region, choose the Region you configured in the CloudFormation parameter QuickSightRegion.
  4. Under Account info, enter your QuickSight account name and notification email address.
  5. Under QuickSight access to AWS services, select Amazon Redshift.
  6. If you want to allow access and autodiscovery for other resources, select them as well.
  7. Choose Finish.
  8. Choose Go to Amazon QuickSight after you’re signed up.

Connect your Amazon Redshift cluster to QuickSight

To connect your cluster to QuickSight as a data source, complete the following steps:

  1. On the QuickSight console, choose Datasets in the navigation pane.
  2. Choose New dataset.
  3. Choose Redshift Auto-discovered.
  4. Provide the following information:
    1. For Data source name, enter ticketclassification.
    2. For Instance ID, choose the Amazon Redshift cluster starting with classificationredshiftcluster-.
    3. For Connection type, choose Public network.
    4. For Database name, enter ticketclassification.
    5. Enter the Amazon Redshift cluster user name and password you configured in the CloudFormation stack parameters.
  5. Choose Validate connection to see if the connection works.
    If it doesn't work, the likely causes are an incorrect user name or password, or a QuickSight Region that differs from the one you specified in the CloudFormation stack.
  6. Choose Create data source.
  7. In the Choose your table section, select the tickets table.
  8. Choose Select.
  9. Select Import to SPICE for quicker analytics.
    SPICE is the QuickSight Super-fast, Parallel, In-memory Calculation Engine. It’s engineered to rapidly perform advanced calculations and serve data. Importing (also called ingesting) your data into SPICE can save time and money. For more information on SPICE, refer to Importing Data into SPICE. If you get the error “Not enough SPICE capacity,” purchase more SPICE capacity. For more information, refer to Purchasing SPICE capacity in an AWS Region.
  10. Choose Visualize.

Create a ticket classification analysis report

Once you finish creating the dataset, you can see the new QuickSight analysis. In this section, we walk through the steps to create a ticket classification analysis report, including a pivot table, pie charts, and line charts.

  1. Choose AutoGraph.
  2. Under Visual types, choose the pivot table.
  3. Drag operation from Fields list to Rows.
  4. Drag resource from Fields list to Columns.
  5. On the Add menu, choose Add visual.
  6. Under Visual types, choose the pie chart.
  7. Drag operation from Fields list to Group/Color.
  8. On the Add menu, choose Add visual again.
  9. Under Visual types, choose the pie chart again.
  10. Drag resource from Fields list to Group/Color.
  11. On the Add menu, choose Add visual again.
  12. Under Visual types, choose the line chart.
  13. Drag creation_time from Fields list to X axis.
  14. Drag operation from Fields list to Color.
  15. On the Add menu, choose Add visual again.
  16. Under Visual types, choose the line chart again.
  17. Drag creation_time from Fields list to X axis.
  18. Drag resource from Fields list to Color.
  19. Resize and reorder the charts as needed.
  20. Choose Save as.
  21. Enter a name for your analysis and choose Save.

Congratulations! Your first ticket analysis is ready. Once you have more data, the analysis will show richer trends across your ticket types.


Clean up

In this step, we clean up the resources we created with various services.

Amazon Comprehend

To delete your endpoints, complete the following steps:

  1. On the Amazon Comprehend console, choose Endpoints in the navigation pane.
  2. Select the endpoint ticket-classification-operation.
  3. Choose Delete and follow the prompts.
  4. Repeat these steps to delete the ticket-classification-resource endpoint.
    Next, delete the custom classifications you created.
  5. Choose Custom classification in the navigation pane.
  6. Select the classification ticket-classification-operation.
  7. Select No Version Name.
  8. Choose Delete and follow the prompts.
  9. Repeat these steps to delete the ticket-classification-resource classification.

Amazon S3

Next, clean up the S3 bucket you created.

  1. On the Amazon S3 console, select the bucket you created.
  2. Delete all the objects in the bucket.
  3. Delete the bucket.

Amazon QuickSight

Delete the QuickSight analyses and dataset you created.

  1. On the QuickSight console, choose Analyses in the navigation pane.
  2. Choose the options icon (three dots) on the analysis you created.
  3. Choose Delete and follow the prompts.
  4. Choose Datasets in the navigation pane.
  5. Choose the tickets dataset.
  6. Choose Delete dataset and follow the prompts.

AWS CloudFormation

Clean up the resources you created as part of the CloudFormation stack.

  1. On the AWS CloudFormation console, choose Stacks in the navigation pane.
  2. Choose the Ticket-Classification-Infrastructure stack.
  3. On the Resources tab, choose the physical ID of ClassificationDeliveryStreamS3Bucket.
    The Amazon S3 console opens.
  4. Delete any objects in this bucket.
  5. Return to the AWS CloudFormation console, choose Delete, and follow the prompts.

AWS Secrets Manager

Lastly, delete the Secrets Manager secret.

  1. On the Secrets Manager console, select the secret ClassificationRedshiftClusterPassword.
  2. On the Actions menu, choose Delete secret.
  3. Set the waiting period as 7 days and choose Schedule Delete.

Your secret will be automatically deleted after 7 days.

Conclusion

In this post, you learned how to use AWS services to create an automatic classification and recommendation system. This solution helps your organization build the following workflow:

  1. Classify customer requests.
  2. Recommend automated solutions.
  3. Analyze customer request classifications and discover top customer requests.
  4. Release a new automated solution and increase the automation rate.

For more information about Amazon Comprehend, see Amazon Comprehend Documentation. You can also discover other Amazon Comprehend features and get inspiration from other AWS blog posts about using Amazon Comprehend beyond classification.


About the Authors

Seongyeol Jerry Cho is a Senior Systems Development Engineer at AWS Managed Services based in Sydney, Australia. He focuses on building highly scalable and automated cloud operations software using a variety of technologies, including machine learning. Outside of work, he enjoys travel, camping, reading, cooking, and running.

Manu Sasikumar is a Sr. Systems Engineer Manager with AWS Managed Services. Manu and his team focus on building powerful and easy-to-use automations to reduce manual effort, and build AI and ML-based solutions for managing customer requests. Outside of work, he loves spending his spare time with his family, as well as being part of various humanitarian and volunteer activities.
