Masterclass Webinar: Application Services and Dynamic Dashboard


DESCRIPTION

These slides from our Application Services and Dynamic Dashboard webinar introduce an example configuration that brings together topics from previous Masterclasses, such as Auto Scaling and Amazon S3, adding event notification via the Simple Notification Service (SNS), persistence of events in the Simple Queue Service (SQS), and API access from Python. YouTube Demo: http://youtu.be/lb9qPhxIVNI


Masterclass

Application Services and Dynamic Dashboard

Ryan Shuttleworth – Technical Evangelist @ryanAWS

A technical deep dive beyond the basics

Help educate you on how to get the best from AWS technologies

Show you how things work and how to get things done

Broaden your knowledge in ~45 mins

Masterclass

A grand title for a demonstration system that ties together services

Show you how to use services like SNS and SQS to carry AWS events

Use S3 as a web server to host dynamically generated content

Why? Show you some tips and tricks you can use in your projects

Application Services & Dynamic Dashboard

So what are we going to run through?

Services & topics

EC2: instances to run our application code

SNS: to publish events from our instances

Auto Scaling: to generate events as our application scales up & down

SQS: to persist the event messages for processing

S3: to store and serve content we create

CloudFormation: to build our system as a managed stack

DynamoDB: to store all events from the application

IAM: to control the creation and management of resources

To do what?

Mimic an application that implements auto scaling

Trap, transport and store the scaling events produced

Use a simple technique to produce pseudo-dynamic content from S3

An exercise beyond compute and storage!

There's a movie you can view: http://youtu.be/lb9qPhxIVNI (I'll show this link again at the end)

This demo is just an illustration of what you can do with these services

Built in this way…

Auto Scaling group: an arbitrary application that we can scale up and down

SNS notification from the Auto Scaling group, with the event body as JSON

SQS queue to persist the events

Messages produced when instances are started or terminated

Monitoring instance

DynamoDB table holding instance details

S3 bucket holding the dashboard web content

EC2 instance contents (the monitoring instance): a Python script that reads the SQS queue and generates data for S3, plus a static site (HTML, JavaScript, CSS) reading the data file

SNS & SQS: essential glue between applications

Simple Notification Service

Reliable: redundant storage

Scalable: unlimited number of messages

Simple: CreateTopic, Subscribe, Publish

Flexible: HTTP, Email, SQS

Secure: topic policies

Integrated: EC2, CloudWatch, Auto Scaling

A subscription by an SQS queue for messages published on this topic

A CloudWatch alarm that will publish to an SNS topic
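A minimal boto sketch of that CreateTopic / Subscribe / Publish flow (my illustration, not from the deck); the topic and queue names are placeholders and credentials are assumed to come from the environment:

import boto

# Connect to SNS and SQS using credentials from the environment
sns = boto.connect_sns()
sqs = boto.connect_sqs()

# Create (or look up) a topic and a queue -- names are placeholders
resp = sns.create_topic('ASNotifications')
topic_arn = resp['CreateTopicResponse']['CreateTopicResult']['TopicArn']
queue = sqs.create_queue('ASNotificationsQueue')

# Subscribe the queue to the topic (boto also sets the queue policy so the topic can deliver)
sns.subscribe_sqs_queue(topic_arn, queue)

# Publish a message to every subscriber of the topic
sns.publish(topic_arn, 'Hello from SNS', subject='Test event')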

Simple Queue Service

Reliable: queues store messages across availability zones

Scalable: designed for an unlimited number of services reading an unlimited number of messages

Simple: CreateQueue, SendMessage, ReceiveMessage, DeleteMessage

Inexpensive: low per-request fees

Secure: authentication

Performance: excellent throughput

Python Application A: create a queue and write a message

>>> import boto
>>> conn = boto.connect_sqs()
>>> q = conn.create_queue('myqueue')
>>> from boto.sqs.message import Message
>>> m = Message()
>>> m.set_body('This is my first message.')
>>> status = q.write(m)

Python Application B: read the message, then delete it

>>> import boto
>>> conn = boto.connect_sqs()
>>> q = conn.create_queue('myqueue')
>>> m = q.read(60)
>>> m.get_body()
>>> q.delete_message(m)

Message not visible to other applications for 60 seconds

The core trick…

Auto Scaling → SNS → SQS

Event notifications

Persistence of event data

You can do this from anything to anything
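To illustrate that point, here is a hedged sketch (not from the deck) of a producer publishing its own JSON event to a topic and a consumer unwrapping the SNS envelope from the subscribed queue; the ARN, queue name and event fields are all illustrative:

import json
import boto
from boto.sqs.message import RawMessage

sns = boto.connect_sns()
sqs = boto.connect_sqs()

# Producer: publish an arbitrary application event as JSON
event = {'Event': 'myapp:JOB_COMPLETED', 'JobId': '42'}     # illustrative event
sns.publish('arn:aws:sns:us-east-1:111122223333:MyTopic',   # illustrative topic ARN
            json.dumps(event), subject='Application event')

# Consumer: read from the queue subscribed to that topic
queue = sqs.get_queue('MyQueue')                             # illustrative queue name
queue.set_message_class(RawMessage)                          # SNS deliveries are plain JSON, not base64
m = queue.read(60)
if m is not None:
    envelope = json.loads(m.get_body())      # the SNS envelope
    body = json.loads(envelope['Message'])   # the original application event
    print(body['Event'])
    queue.delete_message(m)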

Create an Auto Scaling launch configuration:

as-create-launch-config
  --image-id <ami id>
  --instance-type t1.micro
  --group <security group>
  --launch-config my-launch-cfg

Create an Auto Scaling group:

as-create-auto-scaling-group my-as-group
  --availability-zones <az list>
  --launch-configuration my-launch-cfg
  --max-size 20
  --min-size 1
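The same two steps can be done from Python with boto's autoscale module; a hedged sketch, where the AMI id, security group and availability zones are placeholders:

import boto
from boto.ec2.autoscale import LaunchConfiguration, AutoScalingGroup

conn = boto.connect_autoscale()

# Launch configuration, equivalent to as-create-launch-config
lc = LaunchConfiguration(name='my-launch-cfg',
                         image_id='ami-xxxxxxxx',                # placeholder AMI id
                         instance_type='t1.micro',
                         security_groups=['my-security-group'])  # placeholder group
conn.create_launch_configuration(lc)

# Auto Scaling group, equivalent to as-create-auto-scaling-group
ag = AutoScalingGroup(group_name='my-as-group',
                      availability_zones=['us-east-1a', 'us-east-1b'],  # placeholder AZ list
                      launch_config=lc,
                      min_size=1, max_size=20)
conn.create_auto_scaling_group(ag)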

Add a notification configuration (the topic ARN is the SNS topic's Amazon Resource Name):

as-put-notification-configuration my-as-group
  --topic-arn <arn-from-SNS-topic>
  --notification-types
    autoscaling:EC2_INSTANCE_LAUNCH,
    autoscaling:EC2_INSTANCE_TERMINATE

Available notification types:

autoscaling:EC2_INSTANCE_LAUNCH
autoscaling:EC2_INSTANCE_LAUNCH_ERROR
autoscaling:EC2_INSTANCE_TERMINATE
autoscaling:EC2_INSTANCE_TERMINATE_ERROR
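If you prefer boto to the command-line tools, the notification configuration can be applied in much the same way; a hedged sketch, with the group name and topic ARN as placeholders:

import boto

conn = boto.connect_autoscale()

# Look up the group and ask it to publish launch/terminate events to the SNS topic
group = conn.get_all_groups(names=['my-as-group'])[0]         # placeholder group name
conn.put_notification_configuration(
    group,
    'arn:aws:sns:us-east-1:111122223333:ASNotifications',     # placeholder topic ARN
    ['autoscaling:EC2_INSTANCE_LAUNCH',
     'autoscaling:EC2_INSTANCE_TERMINATE'])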

A subscription by an SQS queue for messages published on this topic:

{
  "Type" : "Notification",
  "MessageId" : <message id>,
  "TopicArn" : <arn>,
  "Subject" : "Auto Scaling: termination for group \"SNS-Dashboard-ASG\"",
  "Message" : "…",
  "Timestamp" : "2013-05-21T09:13:09.555Z",
  "SignatureVersion" : "1",
  "Signature" : "…",
  "SigningCertURL" : "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-f3ecfb7224c7233fe7bb5f59f96de52f.pem",
  "UnsubscribeURL" : "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:241861486983:SNS-Dashboard-ASNotifications-7GU41DCQW8HC:ed30bf6e-582c-4fd2-8e07-28f7d1ac6278"
}

The "Message" field carries the Auto Scaling event itself as an escaped JSON string; decoded, it looks like this:

{
  "StatusCode": "InProgress",
  "Service": "AWS Auto Scaling",
  "AutoScalingGroupName": "SNS-Dashboard-ApplicationServerGroup-K61R5797WCMA",
  "Description": "Terminating EC2 instance: i-8bb679eb",
  "ActivityId": "dfc1181b-0df8-47dc-aa8d-79e13b8a33d1",
  "Event": "autoscaling:EC2_INSTANCE_TERMINATE",
  "Details": {},
  "AutoScalingGroupARN": "arn:aws:autoscaling:us-east-1:241861486983:autoScalingGroup:77ef2778-ded1-451a-a630-6a35c8e67916:autoScalingGroupName/SNS-Dashboard-ApplicationServerGroup-K61R5797WCMA",
  "Progress": 50,
  "Time": "2013-05-21T09:13:09.442Z",
  "AccountId": "241861486983",
  "RequestId": "dfc1181b-0df8-47dc-aa8d-79e13b8a33d1",
  "StatusMessage": "",
  "EndTime": "2013-05-21T09:13:09.442Z",
  "EC2InstanceId": "i-8bb679eb",
  "StartTime": "2013-05-21T09:12:20.323Z",
  "Cause": "At 2013-05-21T09:12:02Z a user request explicitly set group desired capacity changing the desired capacity from 5 to 1. At 2013-05-21T09:12:19Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 5 to 1. At 2013-05-21T09:12:19Z instance i-8fdbafed was selected for termination. At 2013-05-21T09:12:19Z instance i-8ddbafef was selected for termination. At 2013-05-21T09:12:20Z instance i-8bb679eb was selected for termination. At 2013-05-21T09:12:20Z instance i-85b778e5 was selected for termination."
}

Template available: http://bootstrapping-assets.s3.amazonaws.com/as-register-instances.template

Auto Scaling group → notification → SNS topic → subscription → SQS queue

Adding a notification to a topic for Auto Scaling group events:

"NotificationConfiguration" : {
  "TopicARN" : { … "ASNotifications" … },
  "NotificationTypes" : [
    "autoscaling:EC2_INSTANCE_LAUNCH",
    "autoscaling:EC2_INSTANCE_TERMINATE"
  ]
},

Subscription to the topic from an SQS queue:

"ASNotifications" : {
  "Type" : "AWS::SNS::Topic",
  "Properties" : {
    "Subscription" : [ {
      "Endpoint" : { … ASNotificationsQueue … },
      "Protocol" : "sqs"
    } ]
  }
},

We now have events in SQS. Let's do something with them…


Read messages from the SQS queue

Write data to the DynamoDB table

Form a JSON file from the updated results

Write the file to S3 for the JavaScript to interpret

Script available: http://bootstrapping-assets.s3.amazonaws.com/as-node-manager.py

1. Read SQS Queue

import json
import boto
from boto.sqs.message import RawMessage

# Connect to SQS and open queue
sqs = boto.connect_sqs()
queue = sqs.get_queue(sqs_queue_name)
queue.set_message_class(RawMessage)

while True:
    rs = queue.get_messages(num_messages=10)
    for raw_message in rs:
        # Parse JSON message
        envelope = json.loads(raw_message.get_body())
        message = json.loads(envelope['Message'])

        # Trap the EC2_INSTANCE_LAUNCH event
        if message['Event'] == 'autoscaling:EC2_INSTANCE_LAUNCH':
            save_instance(message['EC2InstanceId'], ddb_table_name)

        # Trap the EC2_INSTANCE_TERMINATE event
        elif message['Event'] == 'autoscaling:EC2_INSTANCE_TERMINATE':
            delete_instance(message['EC2InstanceId'], ddb_table_name)

        # Delete the message from the queue and continue polling
        queue.delete_message(raw_message)


2. Write to DynamoDB

def save_instance(instance_id, ddb_table_name):
    instance = get_instance(instance_id)

    # Connect to DynamoDB (using key from env) and get table
    ddb = boto.connect_dynamodb()
    table = ddb.get_table(ddb_table_name)

    # Create a new record for this instance
    item = table.new_item(
        hash_key=instance.id,          # item key
        attrs={                        # item 'fields'
            'pub_hostname': instance.public_dns_name,
            'pub_ip': instance.ip_address,
            'priv_hostname': instance.private_dns_name,
            'priv_ip': instance.private_ip_address,
            'ami_id': instance.image_id,
            'region': instance.region.name,
            'availability_zone': instance.placement,
            'terminated': 'false'
        }
    )

    # Save the item to DynamoDB
    item.put()


def delete_instance(instance_id, ddb_table_name):
    # Connect to DynamoDB and get table
    ddb = boto.connect_dynamodb()
    table = ddb.get_table(ddb_table_name)

    # Get the item to soft delete
    item = table.get_item(instance_id)

    # Update the terminated flag
    item['terminated'] = 'true'

    # Save the item to DynamoDB
    item.put()

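The slides don't show the step that forms the JSON file from the updated results; a hedged sketch of how it might look using a DynamoDB scan (the helper name is mine, and it assumes each boto Layer2 item copies cleanly into a dict):

import json
import boto

def build_instances_json(ddb_table_name):
    # Scan the table and build the structure the dashboard's JavaScript expects
    ddb = boto.connect_dynamodb()
    table = ddb.get_table(ddb_table_name)
    instances = [dict(item) for item in table.scan()]   # Layer2 items behave like dicts
    return json.dumps({'instances': instances})

The result would then be handed to write_instances_to_s3(), shown next.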

3. Write to S3

from boto.s3.key import Key

def write_instances_to_s3(instances_json, s3_output_bucket, s3_output_key):
    # Connect to S3 and get the output bucket
    s3 = boto.connect_s3()
    output_bucket = s3.get_bucket(s3_output_bucket)

    # Create a key to store the instances_json text
    k = Key(output_bucket)
    k.key = s3_output_key
    k.set_metadata("Content-Type", "text/plain")
    k.set_contents_from_string(instances_json)


{
  "instances": [
    {
      "id": "i-5525c932",
      "terminated": "true",
      "ami_id": "ami-7341831a",
      "availability_zone": "us-east-1d",
      "region": "RegionInfo:us-east-1",
      "pub_ip": "107.21.167.7",
      "pub_hostname": "ec2-107-21-167-7.compute-1.amazonaws.com",
      "priv_ip": "10.201.2.233",
      "priv_hostname": "domU-12-31-39-13-01-1B.compute-1.internal"
    },
    {
      "id": "i-bb6a86dc",
      "terminated": "false",
      "ami_id": "ami-7341831a",
      "availability_zone": "us-east-1a",
      "region": "RegionInfo:us-east-1",
      "pub_ip": "174.129.82.128",
      "pub_hostname": "ec2-174-129-82-128.compute-1.amazonaws.com",
      "priv_ip": "10.242.211.185",
      "priv_hostname": "ip-10-242-211-185.ec2.internal"
    }
  ]
}

…and now that we have data in S3, we can build a web view…

Instances.txt in the S3 bucket, written by the monitoring app

HTML/CSS/JS web page loaded from S3

jQuery gets the instance data from S3, with a periodic refresh

S3 bucket contents: css, img, js, index.html, instances.txt (JSON data)

jQuery.getJSON()

JavaScript functions in the index page hosted in S3

Built around an AJAX 'get' of the instances JSON in S3, plus some jQuery table selectors/modifiers

Want to try this yourself? View the video tutorial here: http://youtu.be/lb9qPhxIVNI

And grab the CloudFormation template here: http://bootstrapping-assets.s3.amazonaws.com/as-register-instances.template

1. Create security groups
2. Create a notification of type SQS
3. Create SQS queue
4. Create Auto Scaling launch configs & groups
5. Add Auto Scaling notifications to SNS
6. Create S3 bucket
7. Create DynamoDB table
8. Start instances
9. Bootstrap monitoring application
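If you'd rather launch the stack from Python than from the console, here is a hedged sketch using boto's CloudFormation support; the stack name is illustrative, and CAPABILITY_IAM is included on the assumption that the template creates IAM resources:

import boto

# Create the demo stack from the public template URL
cfn = boto.connect_cloudformation()
cfn.create_stack(
    'sns-dashboard-demo',              # illustrative stack name
    template_url='http://bootstrapping-assets.s3.amazonaws.com/as-register-instances.template',
    capabilities=['CAPABILITY_IAM'])   # assumption: the template creates IAM resources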

Summary

Pub/Sub models with SNS

Reliable delivery with SQS

S3 & pseudo-dynamic content

DynamoDB for high performance

More than compute & storage

Given you some ideas? Introduced you to some handy services? Helped you with some CloudFormation?
