Let's Build a Serverless Architecture in Scala!
Yoshitaka Fujii (@yoshiyoshifujii)
ScalaMatsuri 2017, Saturday, February 25th
16:30 - 17:10, Conference Room 1
Who am I?
• Yoshitaka Fujii (@yoshiyoshifujii)
• Joined MOTEX in April, 2014
• Software Engineer
• Scala (19 months)
• Scala Kansai Summit 2016 staff
Self-introduction. MOTEX Inc. 19 months of Scala experience. I served as a staff member at Scala Kansai Summit 2016.
Agenda
• Serverless Architecture
• Development & Deployment
• DDD
Serverless Architecture
• using FaaS
• A significant reduction in costs
• Scalability
• No infrastructure to operate or maintain
A serverless architecture is a system built on FaaS: significantly lower costs, scalability, and no infrastructure to operate and maintain.
Function as a Service
• AWS Lambda
• Google Cloud Functions
• Microsoft Azure Functions
AWS Lambda
• AWS Lambda is a compute service that lets you run code without provisioning or managing servers.
• Executes your code only when needed.
• Scales automatically, from a few requests per day to thousands per second.
• Node.js, Java, C# and Python
AWS Lambda is a compute service that lets you run code without provisioning or managing servers. It runs your code only when needed and scales automatically.
AWS Lambda Events
• API Gateway (HTTP Requests)
• Kinesis Streams
• DynamoDB
• S3
• Schedule (CloudWatch)
• SNS
• IoT
AWS Lambda can be triggered by events from these AWS services. This talk focuses mainly on API Gateway and Kinesis Streams.
Amazon API Gateway
• Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
• Low-cost and efficient (you pay only for calls made to your APIs and data transfer out).
• Performance at Any Scale
• Run Your APIs Without Servers
A fully managed service that lets developers easily create, publish, maintain, monitor, and secure APIs at any scale.
https://aws.amazon.com/jp/api-gateway/details/
Amazon Kinesis Streams
• Use Amazon Kinesis Streams to collect and process large streams of data records in real time.
• Rapid and continuous data intake and aggregation.
• Accelerated log and data feed intake and processing.
• Real-time metrics and reporting.
• Real-time data analytics.
• Complex stream processing.
Use Amazon Kinesis Streams to collect and process large streams of data records in real time, with rapid and continuous data intake and aggregation.
https://docs.aws.amazon.com/ja_jp/streams/latest/dev/key-concepts.html
System diagrams
[Diagram: a client calls Amazon API Gateway. An Auth function (AWS Lambda) receives Context + Token and returns Principal + Policy; the policy is cached. Denied requests return 403; allowed requests invoke an AWS Lambda that uses Amazon S3 and Amazon DynamoDB and writes to Amazon Kinesis. Consumer functions (AWS Lambda) read the stream and load Amazon Elasticsearch Service.]
Large scale system
[Diagram: the same client / API Gateway / Lambda / S3 / DynamoDB / Kinesis / Elasticsearch Service architecture as above, scaled out to many clients and many Lambda functions. Assumes a large-scale system.]
Large scale system
• More functional requirements.
• Each AWS Lambda should stay a simple function.
• 1 Lambda per method request:
  GET:/hello => getHello
  POST:/hello => postHello
• Develop and deploy efficiently.
A large-scale system has many functional requirements. We want each AWS Lambda to stay a simple function, with one Lambda per method request, and we want to develop and deploy efficiently.
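The one-Lambda-per-method mapping above might look like the sketch below. Context and RequestStreamHandler are stubbed locally (mirroring the aws-lambda-java-core interface printed later in this deck) so the snippet is self-contained; the class names are illustrative, not the speaker's actual code.

```scala
import java.io.{InputStream, OutputStream}

// Stand-ins for aws-lambda-java-core's Context and RequestStreamHandler,
// stubbed here so the sketch compiles without the AWS dependency.
trait Context
trait RequestStreamHandler {
  def handleRequest(input: InputStream, output: OutputStream, context: Context): Unit
}

// GET:/hello => getHello
class GetHello extends RequestStreamHandler {
  override def handleRequest(input: InputStream, output: OutputStream, context: Context): Unit =
    output.write("""{"message":"hello"}""".getBytes("UTF-8"))
}

// POST:/hello => postHello (placeholder logic: echo the request body back)
class PostHello extends RequestStreamHandler {
  override def handleRequest(input: InputStream, output: OutputStream, context: Context): Unit = {
    val body = Stream.continually(input.read).takeWhile(_ != -1).map(_.toByte).toArray
    output.write(body)
  }
}
```

Each class deploys as its own function, so every Lambda stays a single, simple handler.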
Development & Deployment
Here is how we actually develop and deploy.
How to make a simple AWS Lambda.
How to build a simple AWS Lambda.
RequestStreamHandler.java
public interface RequestStreamHandler {
    /**
     * Handles a Lambda Function request
     * @param input The Lambda Function input stream
     * @param output The Lambda function output stream
     * @param context The Lambda execution environment context object.
     * @throws IOException
     */
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException;
}
This is AWS Lambda's public interface. Implement it to run arbitrary code.
RequestHandler.java
public interface RequestHandler<I, O> {
    /**
     * Handles a Lambda Function request
     * @param input The Lambda Function input
     * @param context The Lambda execution environment context object.
     * @return The Lambda Function output
     */
    public O handleRequest(I input, Context context);
}
You can use arbitrary types for I/O, but they must be Java-bean classes or primitive types, and you run into Java's constraints, so I do not really recommend this.
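To make the bean constraint concrete, here is a sketch. RequestHandler and Context are stubbed locally (mirroring the interface printed above) so it runs standalone; in a real project they come from aws-lambda-java-core, and the runtime populates the input bean reflectively via its no-arg constructor and setters.

```scala
// Local stand-ins for the aws-lambda-java-core types (sketch only).
trait Context
trait RequestHandler[I, O] {
  def handleRequest(input: I, context: Context): O
}

// The input type must be Java-bean style (no-arg constructor plus
// getters/setters), because the Lambda runtime deserializes JSON into it
// reflectively; an ordinary case class would not be populated.
class HelloRequest {
  private var name: String = ""
  def getName: String = name
  def setName(n: String): Unit = { name = n }
}

class HelloBeanHandler extends RequestHandler[HelloRequest, String] {
  override def handleRequest(input: HelloRequest, context: Context): String =
    s"Hello, ${input.getName}!"
}
```

This mutable-bean style is exactly the Java constraint the slide warns about, which is why the deck prefers the stream handler.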
build.sbt
lazy val root = (project in file(".")).
  settings(
    name := "aws-lambda-scala",
    organization := "com.example",
    scalaVersion := "2.12.1",
    libraryDependencies ++= Seq(
      "com.amazonaws" % "aws-lambda-java-core" % "1.1.0"
    )
  )
project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")
src/main/scala/com/example/Hello.scala
import java.io.{InputStream, OutputStream}
import com.amazonaws.services.lambda.runtime.{Context, RequestStreamHandler}

class HelloHandler extends RequestStreamHandler {
  @throws(classOf[java.io.IOException])
  override def handleRequest(input: InputStream, output: OutputStream, context: Context): Unit = {
    val bytes = toByteArray(input)
    output.write(bytes) // echo the input back
  }

  def toByteArray(input: InputStream) =
    Stream.continually(input.read).takeWhile(_ != -1).map(_.toByte).toArray
}
assembly
$ sbt
> assembly
[info] Packaging .../target/scala-2.12/aws-lambda-scala-assembly-0.1.0-SNAPSHOT.jar
AWS CLI - lambda create function
$ aws lambda create-function \
    --region us-east-1 \
    --function-name aws-lambda-scala \
    --zip-file fileb://aws-lambda-scala-assembly-0.1.0-SNAPSHOT.jar \
    --role arn:aws:iam::${AWS Account ID}:role/lambda_basic_execution \
    --handler com.example.HelloHandler::handleRequest \
    --runtime java8 \
    --timeout 15 \
    --memory-size 512
AWS CLI - lambda create function
{
  "LastModified": "2017-01-02T12:34:56.789+0000",
  "FunctionName": "aws-lambda-scala",
  "Runtime": "java8",
  "Version": "$LATEST",
  "Role": "arn:aws:iam::${AWS Account ID}:role/lambda_basic_execution",
  "CodeSha256": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "Handler": "com.example.HelloHandler::handleRequest",
  "Timeout": 15,
  "Description": "",
  "MemorySize": 512,
  "FunctionArn": "arn:aws:lambda:us-east-1:${AWS Account ID}:function:aws-lambda-scala",
  "CodeSize": 5264477
}
AWS CLI - lambda invoke
$ aws lambda invoke \
    --region us-east-1 \
    --function-name aws-lambda-scala \
    --payload 123 \
    --invocation-type RequestResponse \
    /tmp/response
{ "StatusCode": 200 }
123
System diagrams
[Diagram repeated from earlier: client, API Gateway with the cached auth policy (403 on deny), Lambda backed by S3 and DynamoDB, Kinesis, consumer Lambdas, and Elasticsearch Service.]
Labor…
• It is laborious to get code running on Lambda.
• You have to run `assembly` and upload, over and over.
• Configuring API Gateway on the console is tedious.
Getting things running on Lambda is hard: you cannot tell whether it works without repeatedly running `assembly` and uploading. Configuring API Gateway on the console is tedious.
sbt-aws-serverless plugin
• sbt plugin to deploy code to Amazon API Gateway and AWS Lambda.
• https://github.com/yoshiyoshifujii/sbt-aws-serverless
We use an sbt plugin I developed myself; it deploys to API Gateway and Lambda.
Serverless Framework
• To be honest, this is probably the better way.
• However, blue-green deployment is difficult with it.
Honestly, I think using the Serverless Framework is a good choice, but my impression is that it cannot do blue-green deployment.
https://martinfowler.com/bliki/BlueGreenDeployment.html
Blue-Green deployment - 1st release

Amazon API Gateway
  Stage | Stage Variables
  test  | env : test
  prod  | env : prod

  Deployment | Resources | Lambda ARN
  1          | /hello    | hello:${stageVariables.env}_1

AWS Lambda
  Alias  | Published Version
  dev    | $LATEST
  test_1 | 1
  prod_1 | 1

Amazon DynamoDB
  Table: account-test, account-prod
Blue-Green deployment - 2nd release (test)

Amazon API Gateway
  Stage | Stage Variables
  test  | env : test
  prod  | env : prod

  Deployment | Resources | Lambda ARN
  1          | /hello    | hello:${stageVariables.env}_1
  2          | /hello    | hello:${stageVariables.env}_2

AWS Lambda
  Alias  | Published Version
  dev    | $LATEST
  test_1 | 1
  prod_1 | 1
  test_2 | 2

Amazon DynamoDB
  Table: account-test, account-prod
Blue-Green deployment - 2nd release

Amazon API Gateway
  Stage | Stage Variables
  test  | env : test
  prod  | env : prod

  Deployment | Resources | Lambda ARN
  1          | /hello    | hello:${stageVariables.env}_1
  2          | /hello    | hello:${stageVariables.env}_2

AWS Lambda
  Alias  | Published Version
  dev    | $LATEST
  test_1 | 1
  prod_1 | 1
  test_2 | 2
  prod_2 | 2

Amazon DynamoDB
  Table: account-test, account-production
In this case
• Release the test environment as it is.
• Quickly return to the previous version.
• A/B testing.
• Canary release. https://martinfowler.com/bliki/CanaryRelease.html
With this scheme, you can release the exact environment you tested, roll back to the previous version quickly, and use it for A/B testing and canary releases.
Giter8
• https://github.com/yoshiyoshifujii/sbt-aws-serverless-ddd.g8
$ sbt new yoshiyoshifujii/sbt-aws-serverless-ddd.g8
name [My Something Project]:
version [0.1.0-SNAPSHOT]:
organization [com.example]:
package [com.example]:
I published a sample project as a Giter8 template.
Domain Driven Design
DDD - Hexagonal architecture
• Implementing Domain-Driven Design, Vaughn Vernon (2013)
• Mitigates vendor lock-in.
We adopt DDD's hexagonal architecture, which mitigates vendor lock-in.
Dependency model
Application
Infrastructure
Domain
API
The dependency model.
Dependency model
Application
Infrastructure
Domain
API
The API and Application layers are vendor-locked in; the Domain is reusable; the Infrastructure is partly reusable.
vendor lock-in
System diagrams
[Diagram repeated from earlier; the annotation "Make this part" marks the API Gateway and Lambda portion we build next.]
build.sbt - dependsOn
lazy val domain = (project in file("./modules/domain")).

lazy val infraDynamo = (project in file("./modules/infrastructure/dynamodb")).
  dependsOn(domain).

lazy val infraKinesis = (project in file("./modules/infrastructure/kinesis")).
  dependsOn(domain).

lazy val appHello = (project in file("./modules/application/hello")).
  dependsOn(infraLambda, infraDynamo, infraKinesis).
The multi-project dependencies are configured to match the dependency model.
Project tree
• Sources under modules.
• application is a module of Lambda.
• Split the infrastructure as finely as possible.
• Reduce module size.
Project layout: sources live under the modules directory; application becomes the Lambda module; the infrastructure is split as small as possible, to keep the Lambda module size down.
domain
• POSO (Plain old Scala object)
• Do not specify anything for libraryDependencies.
The domain layer: plain old Scala objects, with nothing specified in libraryDependencies.
Account
case class AccountId(value: String) extends ValueObject[String]

case class Account(
  id: AccountId,
  version: Option[Version],
  email: String,
  password: String,
  name: String) extends Entity[AccountId]
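The Account slide assumes ValueObject and Entity marker traits (and a Version type) that the deck never shows. One plausible minimal shape, offered as an assumption rather than the speaker's actual code:

```scala
// Hypothetical marker traits matching how the Account slide uses them.
trait ValueObject[A] { def value: A }
trait Entity[ID] { def id: ID }

// Version is referenced but not defined in the deck; this is a guess at its shape.
case class Version(value: Long) extends ValueObject[Long]
```

With these in place, the AccountId and Account definitions above compile as plain Scala with no library dependencies, which is the POSO point of the domain layer.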
AccountRepository
trait AccountRepository {
  def save(account: Account): Either[DomainError, Account]
  def get(id: AccountId): Either[DomainError, Option[Account]]
  def findBy(email: String): Either[DomainError, Option[Account]]
  def findAll: Either[DomainError, Seq[Account]]
}
AccountEventPublisher
case class AccountModified(
  accountId: AccountId,
  email: String,
  password: String,
  name: String)

trait AccountEventPublisher {
  def publish(modified: AccountModified): Either[DomainError, AccountModified]
}
infrastructure
• Implementations of the repositories and domain events.
• DI via the cake pattern.
• No direct inheritance relation to the domain traits.
The infrastructure layer holds the Repository and DomainEvent implementations. They are injected via the cake pattern and have no direct inheritance relation.
AccountRepositoryOnDynamoDB
trait AccountRepositoryOnDynamoDB {
  def save(account: Account): Either[DomainError, Account] = ???
  def get(id: AccountId): Either[DomainError, Option[Account]] = ???
  def findBy(email: String): Either[DomainError, Option[Account]] = ???
  def findAll: Either[DomainError, Seq[Account]] = ???
}
AccountEventPublisherOnKinesis
trait AccountEventPublisherOnKinesis {
  def publish(modified: AccountModified): Either[DomainError, AccountModified] = ???
}
application
• Builds the Lambda module.
• Dependencies are injected here.
The application layer: it produces the Lambda module, and dependencies are injected here.
Base
trait Base extends BaseStreamHandler {
  val accountRepository: AccountRepository
  val accountEventPublisher: AccountEventPublisher

  override def handle(input: Input): Try[String] = Try {
    JsObject(
      "message" -> JsString("hello world!!")
    ).compactPrint
  }
}
App
class App extends Base {
  override val accountRepository =
    new AccountRepository with AccountRepositoryOnDynamoDB
  override val accountEventPublisher =
    new AccountEventPublisher with AccountEventPublisherOnKinesis
}
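The payoff of this cake-style wiring is that the DynamoDB trait can be swapped for a test double without touching the handler. A self-contained sketch under stated assumptions: the domain types are abbreviated stand-ins for the deck's, and the in-memory trait is my illustration, not from the talk.

```scala
// Abbreviated stand-ins for the deck's domain types.
sealed trait DomainError
case class AccountId(value: String)
case class Account(id: AccountId, email: String, name: String)

trait AccountRepository {
  def save(account: Account): Either[DomainError, Account]
  def get(id: AccountId): Either[DomainError, Option[Account]]
}

// In-memory implementation that mixes in exactly like AccountRepositoryOnDynamoDB:
// no inheritance relation, just matching method signatures that satisfy the
// repository's abstract members at mix-in time.
trait AccountRepositoryInMemory {
  private val store = scala.collection.mutable.Map.empty[AccountId, Account]
  def save(account: Account): Either[DomainError, Account] = {
    store(account.id) = account
    Right(account)
  }
  def get(id: AccountId): Either[DomainError, Option[Account]] =
    Right(store.get(id))
}

// Same construction shape as in App, but with no AWS dependency at all.
val testRepository = new AccountRepository with AccountRepositoryInMemory
```

This is what "mitigate vendor lock-in" buys you: the domain and application logic run unchanged against any mix-in.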
System diagrams
[Diagram repeated from earlier; this time the annotation "Make this part" marks the Kinesis consumer portion.]
build.sbt - dependsOn
lazy val domain = (project in file("./modules/domain")).

lazy val infraDomain = (project in file("./modules/infrastructure/domain")).
  dependsOn(domain).

lazy val appAccountModified = (project in file("./modules/application/accountmodified")).
  dependsOn(infraLambdaConsumer, infraDomain).
The dependencies are almost the same as in the API Gateway case.
Demo
sbt deploy prod
$ sbt \
    -DAWS_ACCOUNT_ID=<AWS Account ID> \
    -DAWS_ROLE_ARN=arn:aws:iam::<AWS Account ID>:role/<Role NAME> \
    -DAWS_BUCKET_NAME=<BUCKET NAME>
> deploy v1
API Gateway created: xxxxxxxxxx
API Gateway put: xxxxxxxxxx
Lambda deployed: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth
Lambda published: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth:1
Lambda Alias: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth:prod
Authorizer: auauau
Lambda deployed: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello
Lambda published: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello:1
Lambda Alias: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello:prod_1
Create Deployment: {Id: dddddd, Description: 0.1.0-SNAPSHOT, CreatedDate: Sat Feb 25 12:34:56 JST 2017}
Blue-Green deployment - 1st release

Amazon API Gateway
  Stage | Stage Variables
  test  | env : test
  prod  | env : prod

  Deployment | Resources | Lambda ARN
  1          | /hello    | hello:${stageVariables.env}_1

AWS Lambda
  Alias  | Published Version
  dev    | $LATEST
  test_1 | 1
  prod_1 | 1

Amazon DynamoDB
  Table: account-test, account-prod
sbt deploy test
$ sbt \
    -DAWS_ACCOUNT_ID=<AWS Account ID> \
    -DAWS_ROLE_ARN=arn:aws:iam::<AWS Account ID>:role/<Role NAME> \
    -DAWS_BUCKET_NAME=<BUCKET NAME>
> deploy test
API Gateway created: xxxxxxxxxx
API Gateway put: xxxxxxxxxx
Lambda deployed: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth
Lambda published: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth:2
Lambda Alias: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth:test
Authorizer: auauau
Lambda deployed: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello
Lambda published: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello:2
Lambda Alias: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello:test_2
Create Deployment: {Id: dddddd, Description: 0.1.0-SNAPSHOT, CreatedDate: Sat Feb 25 12:34:56 JST 2017}
Blue-Green deployment - 2nd release (test)

Amazon API Gateway
  Stage | Stage Variables
  test  | env : test
  prod  | env : prod

  Deployment | Resources | Lambda ARN
  1          | /hello    | hello:${stageVariables.env}_1
  2          | /hello    | hello:${stageVariables.env}_2

AWS Lambda
  Alias  | Published Version
  dev    | $LATEST
  test_1 | 1
  prod_1 | 1
  test_2 | 2

Amazon DynamoDB
  Table: account-test, account-prod
sbt deployCopy (copy test to prod)
$ sbt \
    -DAWS_ACCOUNT_ID=<AWS Account ID> \
    -DAWS_ROLE_ARN=arn:aws:iam::<AWS Account ID>:role/<Role NAME> \
    -DAWS_BUCKET_NAME=<BUCKET NAME>
> deployCopy test prod
Lambda Alias: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth:prod
Lambda Alias: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello:prod_2
Stage: prod
Blue-Green deployment - 2nd release

Amazon API Gateway
  Stage | Stage Variables
  test  | env : test
  prod  | env : prod

  Deployment | Resources | Lambda ARN
  1          | /hello    | hello:${stageVariables.env}_1
  2          | /hello    | hello:${stageVariables.env}_2

AWS Lambda
  Alias  | Published Version
  dev    | $LATEST
  test_1 | 1
  prod_1 | 1
  test_2 | 2
  prod_2 | 2

Amazon DynamoDB
  Table: account-test, account-production
sbt deployDev
$ sbt \
    -DAWS_ACCOUNT_ID=<AWS Account ID> \
    -DAWS_ROLE_ARN=arn:aws:iam::<AWS Account ID>:role/<Role NAME> \
    -DAWS_BUCKET_NAME=<BUCKET NAME>
> deployDev dev
API Gateway created: xxxxxxxxxx
API Gateway put: xxxxxxxxxx
Lambda deployed: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth
Lambda Alias: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-auth:dev
Authorizer: auauau
Lambda deployed: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello
Lambda Alias: arn:aws:lambda:us-east-1:aaaaaaaaaaaa:function:$name-app-hello:dev
Create Deployment: {Id: ddddd, Description: 0.1.0-SNAPSHOT, CreatedDate: Sat Feb 25 12:34:56 JST 2017}
sbt deployList
$ sbt \
    -DAWS_ACCOUNT_ID=<AWS Account ID> \
    -DAWS_ROLE_ARN=arn:aws:iam::<AWS Account ID>:role/<Role NAME> \
    -DAWS_BUCKET_NAME=<BUCKET NAME>
> deployList dev
xxxxxxxxxx
| Stage Name | Last Updated Date            | Deployment Id | Description |
|------------|------------------------------|---------------|-------------|
| dev        | Mon Feb 20 21:30:07 JST 2017 | qirn6v        | null        |
| v1         | Mon Feb 20 21:23:41 JST 2017 | 8tb1zr        | null        |

xxxxxxxxxx
| Created Date                 | Deployment Id | Description    |
|------------------------------|---------------|----------------|
| Mon Feb 20 21:30:07 JST 2017 | ddddd2        | 0.1.0-SNAPSHOT |
| Mon Feb 20 21:23:41 JST 2017 | ddddd1        | 0.1.0-SNAPSHOT |
sbt invoke
$ sbt \
    -DAWS_ACCOUNT_ID=<AWS Account ID> \
    -DAWS_ROLE_ARN=arn:aws:iam::<AWS Account ID>:role/<Role NAME> \
    -DAWS_BUCKET_NAME=<BUCKET NAME>
> invoke prod
GET:https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/hellos
200
{"message":"hello world!!"}
Summary
• Scala makes it easy to combine DDD and serverless.
• Multi-project builds make it easy to share code across functions.
• Cheap, scalable, and easy to operate.
Scala makes it easy to combine DDD and serverless; multi-project builds make it easy to share code; and the result is cheap, scalable, and easy to operate.