
AWS: Machine Learning and AI

Automating Image Analysis with Amazon Rekognition

Explore Amazon Rekognition Label Detection

  • Go to Amazon Rekognition and click Try demo
  • Upload an image by clicking Upload or drag and drop
  • Analyse the detected labels and confidence scores
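
The demo is a front end for the same DetectLabels API that the Lambda function later in this section calls. As a rough sketch, the equivalent boto3 call against a local file looks like this (the file name is a placeholder):

import boto3

rekognition_client = boto3.client('rekognition')

# Send the raw image bytes instead of an S3 reference
with open('sample.jpg', 'rb') as image_file:
    response = rekognition_client.detect_labels(
        Image={'Bytes': image_file.read()},
        MaxLabels=10
    )

for label in response['Labels']:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")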

Create an S3 Bucket

  • Create a bucket with a unique name and leave all other options as default
  • Create two folders within the bucket named image-uploaded and rekognition-output
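
The same setup can be scripted with boto3 if preferred; this is a minimal sketch and the bucket name is a placeholder (folders in S3 are just zero-byte objects whose keys end in a slash):

import boto3

s3_client = boto3.client('s3')
bucket = 'my-rekognition-demo-bucket'  # placeholder, must be globally unique

s3_client.create_bucket(Bucket=bucket)  # outside us-east-1, also pass CreateBucketConfiguration
s3_client.put_object(Bucket=bucket, Key='image-uploaded/')
s3_client.put_object(Bucket=bucket, Key='rekognition-output/')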

Create a Lambda Function

  • Function name: <lambda-function-name>
  • Runtime: Python 3.8
  • Execution role: use an existing role or create one (a sketch of the permissions it needs follows the code below)
  • Click Create function
  • Add an S3 trigger
    • Navigate to the bucket's Properties tab and create an event notification
    • Event name: <event-name>
    • Prefix: image-uploaded/
    • Suffix: .jpg
    • Event types: under Object creation, check Put
    • Destination
      • Lambda function
  • Configure Lambda function and replace python script with:
import json
import boto3

s3_client = boto3.client('s3')
rekognition_client = boto3.client('rekognition')

def lambda_handler(event, context):
    # Get the bucket name and object key from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # Call Rekognition to detect labels
    response = rekognition_client.detect_labels(
        Image={
            'S3Object': {
                'Bucket': bucket,
                'Name': key
            }
        },
        MaxLabels=10
    )

    # Save the response to the output folder
    output_key = 'rekognition-output/' + key.split('/')[-1] + '.json'
    s3_client.put_object(
        Bucket=bucket,
        Key=output_key,
        Body=json.dumps(response, indent=4)
    )

    return {
        'statusCode': 200,
        'body': json.dumps('Image processed and labels detected successfully')
    }
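
Whichever role is used, it needs permission for the calls the function makes: rekognition:DetectLabels plus read/write access to the bucket, alongside the usual CloudWatch Logs permissions (for example the AWSLambdaBasicExecutionRole managed policy). A hedged sketch of attaching such an inline policy with boto3, with placeholder role and bucket names:

import json
import boto3

iam_client = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "rekognition:DetectLabels", "Resource": "*"},
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutObject"],
         "Resource": "arn:aws:s3:::my-rekognition-demo-bucket/*"}  # placeholder bucket
    ]
}

iam_client.put_role_policy(
    RoleName='my-lambda-execution-role',  # placeholder role name
    PolicyName='rekognition-demo-access',
    PolicyDocument=json.dumps(policy)
)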

Testing

  • Upload an image to image-uploaded folder
  • Verify the output
  • Download and check the JSON file to confirm it contains the Rekognition label response
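
The same test can be scripted; the bucket and file names below are placeholders, and the output key follows the pattern the function builds (original file name plus .json):

import json
import boto3

s3_client = boto3.client('s3')
bucket = 'my-rekognition-demo-bucket'  # placeholder

# Trigger the pipeline by uploading a .jpg under the watched prefix
s3_client.upload_file('sample.jpg', bucket, 'image-uploaded/sample.jpg')

# Once the function has run, fetch and inspect the saved response
result = s3_client.get_object(Bucket=bucket, Key='rekognition-output/sample.jpg.json')
labels = json.loads(result['Body'].read())
print([label['Name'] for label in labels['Labels']])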

Sentiment Analysis of Text Files with Amazon Comprehend

Set Up an S3 Bucket

  • Create a bucket with a unique name and add two folders named comprehend-text-input and comprehend-analysis-output

Create a Lambda Function

  • Function name: <lambda-function-name>
  • Execution role: choose an existing role or create a new one
  • Click Create function
  • Replace default Python script:
import json
import boto3

# Initialize S3 and Comprehend clients
s3_client = boto3.client('s3')
comprehend_client = boto3.client('comprehend')

def lambda_handler(event, context):
    # Extract bucket and key (file name) from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # Get the text content from the uploaded file in S3
    response = s3_client.get_object(Bucket=bucket, Key=key)
    text = response['Body'].read().decode('utf-8')

    # Perform sentiment analysis using Comprehend
    sentiment_response = comprehend_client.detect_sentiment(
        Text=text,
        LanguageCode='en'  # Set the language of the text
    )

    # Save the sentiment analysis result to the output bucket
    output_key = 'comprehend-analysis-output/' + key.split('/')[-1].replace('.txt', '') + '-sentiment.json'

    s3_client.put_object(
        Bucket=bucket,
        Key=output_key,
        Body=json.dumps(sentiment_response, indent=4)
    )

    return {
        'statusCode': 200,
        'body': json.dumps('Sentiment analysis completed successfully.')
    }
  • Click Deploy
  • Adjust timeout to 1 min: configuration tab > general configuration > timeout
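
The function can also be exercised from the Lambda console's Test tab before wiring up the trigger. A minimal event with just the fields the handler reads (a real S3 notification carries more) looks like this; the bucket and key are placeholders:

test_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "my-comprehend-demo-bucket"},
                "object": {"key": "comprehend-text-input/review.txt"}
            }
        }
    ]
}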

Add S3 Trigger to Lambda

  • Go back to the S3 bucket and open the Properties tab
  • Create an event notification with the following config:
    • Event name: text-upload-event
    • Prefix: comprehend-text-input/
    • Event type: put
    • Destination: <lambda-function-name>
    • Click Save changes
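
If the console isn't an option, the same notification can be configured with boto3; note that when it's set up this way (rather than through the console) the Lambda function also needs a resource-based permission allowing s3.amazonaws.com to invoke it. The bucket name and function ARN below are placeholders:

import boto3

s3_client = boto3.client('s3')

s3_client.put_bucket_notification_configuration(
    Bucket='my-comprehend-demo-bucket',  # placeholder
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                'Id': 'text-upload-event',
                'LambdaFunctionArn': 'arn:aws:lambda:eu-west-2:123456789012:function:my-comprehend-function',  # placeholder
                'Events': ['s3:ObjectCreated:Put'],
                'Filter': {
                    'Key': {
                        'FilterRules': [
                            {'Name': 'prefix', 'Value': 'comprehend-text-input/'}
                        ]
                    }
                }
            }
        ]
    }
)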

Test the Lambda Function

  • Upload a text file to the comprehend-text-input folder
  • Verify the sentiment analysis result has created a new JSON file
  • Download and review the file with the scores
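
The saved file has the shape of the detect_sentiment response: an overall Sentiment label plus a SentimentScore breakdown. A sketch of pulling it back down with boto3 (bucket and key are placeholders that assume the uploaded file was review.txt):

import json
import boto3

s3_client = boto3.client('s3')

result = s3_client.get_object(
    Bucket='my-comprehend-demo-bucket',
    Key='comprehend-analysis-output/review-sentiment.json'
)
analysis = json.loads(result['Body'].read())

print(analysis['Sentiment'])        # e.g. POSITIVE, NEGATIVE, NEUTRAL or MIXED
print(analysis['SentimentScore'])   # per-class confidence scores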

Convert Text to Speech with Amazon Polly

Create an S3 Bucket

  • Create a bucket with a unique name
  • Create two folders named polly-text-input and polly-audio-output

Create a Lambda Function

  • Name: <lambda-function-name>
  • Runtime: Python 3.12
  • Execution role: choose an existing role or create a new one
  • Click Create function
  • Replace the default Python script with:
import json
import boto3

# Initialize S3 and Polly clients
s3_client = boto3.client('s3')
polly_client = boto3.client('polly')

def lambda_handler(event, context):
    # Extract bucket and key (file name) from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # Get the text content from the uploaded file in S3
    response = s3_client.get_object(Bucket=bucket, Key=key)
    text = response['Body'].read().decode('utf-8')

    # Convert the text to speech using Polly
    polly_response = polly_client.synthesize_speech(
        Engine='standard',
        Text=text,
        OutputFormat='mp3',
        VoiceId='Joanna'  # You can choose a different voice from Polly
    )

    # Save the audio file to the output folder in the same bucket
    output_key = 'polly-audio-output/' + key.split('/')[-1].replace('.txt', '') + '.mp3'

    s3_client.put_object(
        Bucket=bucket,
        Key=output_key,
        Body=polly_response['AudioStream'].read()
    )

    return {
        'statusCode': 200,
        'body': json.dumps('Text to speech conversion completed successfully.')
    }
  • Click Deploy
  • Adjust timeout to 1 min: configuration tab > general configuration > timeout
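
The VoiceId in the script is just one option; Polly's DescribeVoices call lists the alternatives for a given engine and language. A small sketch (the language code is an assumption):

import boto3

polly_client = boto3.client('polly')

voices = polly_client.describe_voices(Engine='standard', LanguageCode='en-GB')
for voice in voices['Voices']:
    print(voice['Id'], voice['Gender'])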

Add an S3 Trigger to Lambda

  • Go to the properties tab within S3 and create an event notification with the following config:
    • Name: text-upload-event
    • Prefix: polly-text-input/
    • Event type: put
    • Destination: Lambda function
  • Save changes

Test the Lambda Function

  • Upload a text file into the polly-text-input/ folder
  • Verify an MP3 file has been created within the polly-audio-output/ folder
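
A quick way to listen to the result without downloading it through the console is a presigned URL; the bucket and key below are placeholders and the link expires after an hour:

import boto3

s3_client = boto3.client('s3')

url = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-polly-demo-bucket', 'Key': 'polly-audio-output/speech.mp3'},
    ExpiresIn=3600
)
print(url)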