The volume of user-generated content (UGC) and third-party content has been increasing substantially in sectors like social media, ecommerce, online advertising, and photo sharing. However, such content needs to be reviewed to ensure that end-users aren’t exposed to inappropriate or offensive material, such as nudity, violence, adult products, or disturbing images. Today, some companies simply react to user complaints to take down offensive images, ads, or videos, whereas many employ teams of human moderators to review small samples of content. However, human moderators alone can’t scale to meet these needs, leading to a poor user experience or even a loss of brand reputation.

With Amazon Rekognition, you can automate or streamline your image and video analysis workflows using machine learning (ML). Amazon Rekognition provides an image moderation API that can detect unsafe or inappropriate content containing nudity, suggestiveness, violence, and more. You get a hierarchical taxonomy of labels that you can use to define your business rules, without needing any ML experience. Each detection by Amazon Rekognition comes with a confidence score between 0–100, which provides a measure of how confident the ML model is in its prediction.
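To show the shape of what the moderation API returns, here is a minimal sketch that walks a sample response and prints each label with its parent category and confidence. The field names follow the DetectModerationLabels response structure; the sample values are illustrative, not real API output.

```python
# Hand-written illustration of the DetectModerationLabels response shape.
sample_response = {
    "ModerationLabels": [
        {"Confidence": 80.41, "Name": "Weapon Violence", "ParentName": "Violence"},
        {"Confidence": 80.41, "Name": "Violence", "ParentName": ""},
    ],
    "ModerationModelVersion": "3.0",
}

def summarize_labels(response):
    """Return (name, parent, confidence) tuples for each detected label."""
    return [
        (label["Name"], label["ParentName"], label["Confidence"])
        for label in response["ModerationLabels"]
    ]

# Print each label under its parent in the hierarchical taxonomy.
for name, parent, confidence in summarize_labels(sample_response):
    print(f"{parent or '(top level)'} > {name}: {confidence:.1f}")
```

The empty ParentName marks a top-level category; child labels such as Weapon Violence point back to their parent, which is what lets you write business rules at either level of the taxonomy.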

Content moderation still requires human reviewers to audit results and judge nuanced situations where AI may not be certain in its prediction. Combining machine predictions with human judgment and managing the infrastructure needed to set up such workflows is hard, expensive, and time-consuming to do at scale. This is why we built Amazon Augmented AI (Amazon A2I), which lets you implement a human review of ML predictions and is directly integrated with Amazon Rekognition. Amazon A2I allows you to use in-house, private, or third-party vendor workforces with a web interface that has instructions and tools they need to complete their review tasks.

You can easily set up the criteria that triggers a human review of a machine prediction; for example, you can send an image for further human review if Amazon Rekognition’s confidence score is between 50–90. Amazon Rekognition handles the bulk of the work and makes sure that every image gets scanned, and Amazon A2I helps send the remaining content for further review to best utilize human judgment. Together, this helps ensure that you get full moderation coverage while maintaining very high accuracy, at a fraction of the cost to review each image manually.
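The routing rule described above can be sketched as a small function. The 50–90 band is the example threshold from this post; in practice you would tune the band to your own workload.

```python
def route_image(max_confidence, low=50.0, high=90.0):
    """Decide how to handle an image based on Amazon Rekognition's
    highest moderation-label confidence score.

    Returns 'auto' when the model is confident enough to act on its own,
    and 'human' when the score falls inside the uncertainty band that
    should trigger an Amazon A2I human review.
    """
    if low <= max_confidence <= high:
        return "human"
    return "auto"
```

For example, `route_image(72.5)` returns `"human"`, while `route_image(99.0)` returns `"auto"`: only the uncertain middle band is escalated, so human effort concentrates where machine judgment is weakest.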

In this post, we show you how to use Amazon Rekognition image moderation APIs to automatically detect explicit adult, suggestive, violent, and disturbing content in an image and use Amazon A2I to onboard human workforces, set up human review thresholds of the images, and define human review tasks. When these conditions are met, images are sent to human reviewers for further review, which is performed according to the instructions in the human review task definition.

Prerequisites

This post requires you to complete the following prerequisites:

  • Create an AWS Identity and Access Management (IAM) role. To create a human review workflow, you need to provide an IAM role that grants Amazon A2I permission to access Amazon Simple Storage Service (Amazon S3) for reading objects to render in a human task UI and writing the results of the human review. This role also needs an attached trust policy to give Amazon SageMaker permission to assume the role. This allows Amazon A2I to perform actions in accordance with permissions that you attach to the role. For example policies that you can modify and attach to the role you use to create a flow definition, see Add Permissions to the IAM Role Used to Create a Flow Definition.
  • Configure permission to invoke the Amazon Rekognition DetectModerationLabels API. Attach the AmazonRekognitionFullAccess policy to the AWS Lambda function that calls the detect_moderation_labels API.
  • Give Lambda Amazon S3 Get and Put permissions if you want the function to read images for analysis from Amazon S3.
  • Attach the AmazonSageMakerFullAccess policy to the Lambda function so it can access the Amazon A2I services for the human review.
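The trust policy mentioned in the first prerequisite is a standard SageMaker trust relationship; a sketch follows (adjust it to your own account's requirements):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "sagemaker.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```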

Creating a private work team

A work team is a group of people that you select to review your documents. You can create a work team from a workforce, which is made up of Amazon Mechanical Turk workers, vendor-managed workers, or your own private workers that you invite to work on your tasks. Whichever workforce type you choose, Amazon A2I takes care of sending tasks to workers. For this post, you create a work team using a private workforce and add yourself to the team to preview the Amazon A2I workflow.

To create your private work team, complete the following steps:

  1. Navigate to the Labeling workforces page on the Amazon SageMaker console.
  2. On the Private tab, choose Create private team.
  3. For Team name, enter an appropriate team name.
  4. For Add workers, you can add workers to your workforce by importing them from an existing Amazon Cognito user group or by inviting new workers by email.

For this post, we suggest adding workers by email. If you create a workforce from an existing Amazon Cognito user group, be sure that you can access an email address in that workforce to complete this use case.


  5. Choose Create private team.
  6. On the Private tab, choose the work team you just created to view your work team ARN.
  7. Record the ARN to use when you create a flow definition in the next section.
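If you prefer to look the ARN up programmatically rather than copy it from the console, the SageMaker ListWorkteams API returns the work teams in your account. A sketch of extracting the ARN from its response follows; the team name and ARN below are illustrative.

```python
def find_workteam_arn(list_workteams_response, team_name):
    """Return the ARN of the named work team from a ListWorkteams-style
    response dict (as returned by boto3's sagemaker.list_workteams()),
    or None if no team matches."""
    for team in list_workteams_response.get("Workteams", []):
        if team.get("WorkteamName") == team_name:
            return team.get("WorkteamArn")
    return None

# Illustrative response shape; with AWS credentials configured you would
# get this from boto3.client("sagemaker").list_workteams().
sample = {
    "Workteams": [
        {
            "WorkteamName": "my-moderation-team",
            "WorkteamArn": "arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/my-moderation-team",
        }
    ]
}
```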


After you create the private team, you get an email invitation. The following screenshot shows an example email.


  8. Choose the link to log in and change your password.

You’re now registered as a verified worker for this team. The following screenshot shows the updated information on the Private tab.


Your one-person team is now ready, and you can create a human review workflow.

Creating a human review workflow

In this step, you create a human review workflow, where you specify your work team, identify where you want output data to be stored in Amazon S3, and create instructions to help workers complete your document review task.

To create a human review workflow, complete the following:

  1. In the Augmented AI section on the Amazon SageMaker console, navigate to the Human review workflows page.
  2. Choose Create human review workflow.


On this page, you configure your workflow.

  3. Enter a name for your workflow.
  4. Choose an S3 bucket where you want Amazon A2I to store the output of the human review.
  5. Choose an IAM role for the workflow.

You can create a new role automatically with Amazon S3 access and an Amazon SageMaker execution policy attached, or you can choose a role that already has these permissions attached.

  6. In the Task type section, select Rekognition – Image moderation.
  7. In the Amazon Rekognition-Image Moderation – Conditions for invoking human review section, you can specify conditions that trigger a human review.

For example, if the confidence of the output label produced by Amazon Rekognition falls within the range you provide (70–100 for this use case), the image is sent to the portal for human review. You can also set a different confidence threshold for each image moderation output label through the Amazon A2I APIs.
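When you build workflows through the API instead of the console, these conditions are expressed as a HumanLoopActivationConditions JSON document on the flow definition. The sketch below encodes the 70–100 band used in this post; the structure mirrors the humanTaskActivationConditionResults block that appears in the output file later in this walkthrough, and the "*" wildcard is how a check applies to every label.

```python
import json

# Trigger a human review when any moderation label's confidence falls
# between 70 and 100. Replace "*" with a specific label name (for
# example "Violence") to set a per-label threshold.
activation_conditions = {
    "Conditions": [
        {
            "And": [
                {
                    "ConditionType": "ModerationLabelConfidenceCheck",
                    "ConditionParameters": {
                        "ModerationLabelName": "*",
                        "ConfidenceLessThan": 100,
                    },
                },
                {
                    "ConditionType": "ModerationLabelConfidenceCheck",
                    "ConditionParameters": {
                        "ModerationLabelName": "*",
                        "ConfidenceGreaterThan": 70,
                    },
                },
            ]
        }
    ]
}

# The flow-definition API expects the conditions as a JSON string.
activation_conditions_json = json.dumps(activation_conditions)
```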


  8. In the Worker task template creation section, if you already have an A2I worker task template, you can choose Use your own template. Otherwise, select Create from a default template and enter a name and task description. For this use case, you can use the default worker instructions provided.
  9. In the Workers section, select Private.
  10. For Private teams, choose the private work team you created earlier.
  11. Choose Create.

You’re redirected to the Human review workflows page, where you can see the name and ARN of the human review workflow you just created.

  12. Record the ARN to use in the next section.

Configuring Lambda to run Amazon Rekognition

In this step, you create a Lambda function to call the Amazon Rekognition API detect_moderation_labels. You use the HumanLoopConfig parameter of detect_moderation_labels to integrate an Amazon A2I human review workflow into your Amazon Rekognition image moderation job.

  1. On the Lambda console, create a new function called A2IRegok.
  2. For Runtime, choose Python 3.7.
  3. Under Permissions, choose Use an existing role.
  4. Choose the role you created.
  5. In the Function code section, remove the default code and replace it with the following code:
    1. Inside the Lambda function, import two libraries: uuid and boto3.
    2. Modify the function code as follows:
      1. Replace the FlowDefinitionArn with the one you saved in the previous section.
      2. Provide a unique name for the HumanLoopName, or use uuid to generate a unique ID.
      3. Use the detect_moderation_labels API operation to analyze the picture (JPG or PNG). To use a picture from an Amazon S3 bucket, specify the bucket name and the key of the file inside the API call, as shown by bucket_name and src_filename.

```python
import boto3
import uuid

def lambda_handler(event, context):
    if event:
        bucket_name = "a2idemorekog"  # Add your source bucket name
        src_filename = "1.png"        # Add the source filename
        rekognition = boto3.client('rekognition')
        human_loop_unique_id = str(uuid.uuid4()) + '1'
        humanLoopConfig = {
            'FlowDefinitionArn': "arn:aws:sagemaker:us-east-1:123456789123:flow-definition/a2i-rekognition-wf",
            'HumanLoopName': human_loop_unique_id
        }

        response = rekognition.detect_moderation_labels(
            Image={
                "S3Object": {
                    "Bucket": bucket_name,
                    "Name": src_filename,
                }
            },
            HumanLoopConfig=humanLoopConfig
        )
```

Calling Amazon Rekognition using Lambda

To configure and run a serverless function, complete the following steps:

  1. On the Lambda console, choose your function.
  2. Choose Configure test events from the drop-down menu.


An editor appears where you can enter an event to test your function.

  3. On the Configure test event page, select Create new test event.
  4. For Event template, choose hello-world.
  5. For Event name, enter a name; for example, DemoEvent.
  6. You can change the values in the sample JSON. For this use case, no change is needed.

For more information, see Run a Serverless “Hello, World!” and Create a Lambda function with the console.


  7. Choose Create.
  8. To run the function, choose Test.

When the test is complete, you can view the results on the console:

  • Execution result – Verifies that the test succeeded
  • Summary – Shows the key information reported in the log output
  • Log output – Shows the logs the Lambda function generated


The response to this call contains the inference from Amazon Rekognition and the evaluated activation conditions that may or may not have led to a human loop creation. If a human loop is created, the output contains HumanLoopArn. You can track its status using the Amazon A2I API operation DescribeHumanLoop.
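A sketch of pulling the human loop ARN out of that response follows. The sample dict mirrors the HumanLoopActivationOutput shape described above with illustrative values; with a live response you would pass the response object directly.

```python
def human_loop_arn(response):
    """Return the ARN of the human loop created for this call, or None
    when the activation conditions did not trigger a review."""
    return response.get("HumanLoopActivationOutput", {}).get("HumanLoopArn")

# Illustrative fragment in the shape DetectModerationLabels returns
# when a HumanLoopConfig is supplied and a human loop is created.
sample_response = {
    "ModerationLabels": [
        {"Confidence": 80.41, "Name": "Violence", "ParentName": ""}
    ],
    "HumanLoopActivationOutput": {
        "HumanLoopArn": "arn:aws:sagemaker:us-east-1:111122223333:human-loop/389fd1a7",
        "HumanLoopActivationReasons": ["ConditionsEvaluation"],
    },
}

arn = human_loop_arn(sample_response)
# With the ARN or loop name in hand, you can poll the loop's status via
# the Amazon A2I runtime API operation DescribeHumanLoop.
```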

Completing a human review of your image

To complete a human review of your image, complete the following steps:

  1. Open the URL in the email you received.

You see a list of reviews you are assigned to.

  2. Choose the image you want to review.
  3. Choose Start working.


After you start working, you must complete the task within 60 minutes.

  4. Choose an appropriate category for the image.


Before choosing Submit, if you go to the Human review workflow page on the Amazon SageMaker console and choose the human review workflow you created, you can see a Human loops summary section for that workflow.


  5. In your worker portal, when you’re done working, choose Submit.

After you complete your job, the status of the human loop workflow is updated.

If you navigate back to the Human review workflow page, you can see the human loop you just completed has the status Completed.


Processing the output

The output data from your review is located in the S3 bucket you specified when you configured your human review workflow on the Amazon A2I console. The path to the data uses the following pattern: YYYY/MM/DD/hh/mm/ss.


The output file (output.json) is structured as follows:

```json
{
  "awsManagedHumanLoopRequestSource": "AWS/Rekognition/DetectModerationLabels/Image/V3",
  "flowDefinitionArn": "arn:aws:sagemaker:us-east-1:111122223333:flow-definition/a2i-rekog-blog",
  "humanAnswers": [
    {
      "answerContent": {
        "AWS/Rekognition/DetectModerationLabels/Image/V3": {
          "moderationLabels": [
            { "name": "Weapon Violence", "parentName": "Violence" },
            { "name": "Violence", "parentName": "" }
          ]
        }
      },
      "submissionTime": "2020-05-27T15:44:39.726Z",
      "workerId": "000cd1c234b5fcc7",
      "workerMetadata": {
        "identityData": {
          "identityProviderType": "Cognito",
          "issuer": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_00aa00a",
          "sub": "b000a000-0b00-0ae0-bf00-0000f0bfd00d"
        }
      }
    }
  ],
  "humanLoopName": "389fd1a7-c658-4020-8f73-e9afcbfa8fd31",
  "inputContent": {
    "aiServiceRequest": {
      "humanLoopConfig": {
        "flowDefinitionArn": "arn:aws:sagemaker:us-east-1:111122223333:flow-definition/a2i-rekog-blog",
        "humanLoopName": "389fd1a7-c658-4020-8f73-e9afcbfa8fd31"
      },
      "image": {
        "s3Object": {
          "bucket": "AWSDOC-EXAMPLE-BUCKET",
          "name": "1.png"
        }
      }
    },
    "aiServiceResponse": {
      "moderationLabels": [
        { "confidence": 80.41172, "name": "Weapon Violence", "parentName": "Violence" },
        { "confidence": 80.41172, "name": "Violence", "parentName": "" }
      ],
      "moderationModelVersion": "3.0"
    },
    "humanTaskActivationConditionResults": {
      "Conditions": [
        {
          "And": [
            {
              "ConditionParameters": { "ConfidenceLessThan": 100, "ModerationLabelName": "*" },
              "ConditionType": "ModerationLabelConfidenceCheck",
              "EvaluationResult": true
            },
            {
              "ConditionParameters": { "ConfidenceGreaterThan": 60, "ModerationLabelName": "*" },
              "ConditionType": "ModerationLabelConfidenceCheck",
              "EvaluationResult": true
            }
          ],
          "EvaluationResult": true
        }
      ]
    },
    "selectedAiServiceResponse": {
      "moderationLabels": [
        { "confidence": 80.4117202758789, "name": "Weapon Violence", "parentName": "Violence" },
        { "confidence": 80.4117202758789, "name": "Violence", "parentName": "" }
      ],
      "moderationModelVersion": "3.0"
    }
  }
}
```

In this JSON object, you have all the input and output content in one place so that you can parse one file to get the following:

  • humanAnswers – Contains answerContent, which lists the labels chosen by the human reviewer, and workerMetadata, which contains information that you can use to track private workers
  • inputContent – Contains information about the input data object that was reviewed, along with the Amazon Rekognition response and the evaluated activation conditions that triggered the human loop
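For downstream processing, the reviewer-chosen labels can be pulled out of the output file with a few lines. The field names below follow the output.json structure shown earlier; the sample is abbreviated.

```python
def reviewed_labels(output):
    """Collect the moderation label names chosen by human reviewers
    from an Amazon A2I output.json document."""
    labels = []
    for answer in output.get("humanAnswers", []):
        # answerContent is keyed by task type (e.g. the Rekognition
        # image-moderation identifier seen in the sample output).
        for content in answer.get("answerContent", {}).values():
            labels.extend(
                label["name"] for label in content.get("moderationLabels", [])
            )
    return labels

# Abbreviated version of the output.json structure shown above.
sample_output = {
    "humanAnswers": [
        {
            "answerContent": {
                "AWS/Rekognition/DetectModerationLabels/Image/V3": {
                    "moderationLabels": [
                        {"name": "Weapon Violence", "parentName": "Violence"},
                        {"name": "Violence", "parentName": ""},
                    ]
                }
            }
        }
    ]
}
```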

For more information about the location and format of your output data, see Monitor and Manage Your Human Loop.

Conclusion

This post has merely scratched the surface of what Amazon A2I can do. Amazon A2I is available in 12 Regions. For more information, see Region Table. To learn more about the Amazon Rekognition DetectModerationLabels API integration with Amazon A2I, see Use Amazon Augmented AI with Amazon Rekognition.

For video presentations, sample Jupyter notebooks, or more information about use cases like document processing, object detection, sentiment analysis, text translation, and others, see Amazon Augmented AI Resources.


About the Author

Suresh Patnam is a Solutions Architect at AWS. He helps customers innovate on the AWS platform by building highly available, scalable, and secure architectures on Big Data and AI/ML. In his spare time, Suresh enjoys playing tennis and spending time with his family.

Source: AWS