Workshop Consul - Service Discovery & Failure Detection


www.eleven-labs.com

WORKSHOP

Vincent Composieux (@vcomposieux)


CONSUL: SERVICE DISCOVERY & FAILURE DETECTION

FIRST COMMIT: 2013

WHAT ABOUT CONSUL?

Open-source & built by HashiCorp.

“Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure.”

WRITTEN IN GO


[Diagram: three frontend instances (FRONT 01: UP, FRONT 02: UP, FRONT 03: DOWN) discovered through Consul's DNS and API]

Terminal

$ curl http://frontend.eleven-labs.com


SERVICE DISCOVERY

➔ Register new services via configuration or API
➔ Access all available services or a specific one
➔ Updates automatically when services become available or unavailable
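The first bullet can be sketched against the HTTP API. The `/v1/agent/service/register` endpoint is real Consul; the service name, port, and agent address below are illustrative placeholders:

```shell
# Hypothetical service definition; the name and port are placeholders.
cat > /tmp/frontend-service.json <<'EOF'
{
  "Name": "frontend",
  "Port": 8080
}
EOF

# Register it against a local agent's HTTP API; the "|| true" lets this
# run even when no agent is listening on localhost:8500.
curl -s -X PUT --data @/tmp/frontend-service.json \
  http://localhost:8500/v1/agent/service/register || true
```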

FAILURE DETECTION

➔ Automatically updates Consul services when a service is down

➔ Manages service states (we can put a service in maintenance, for instance)

CONSENSUS PROTOCOL

Used for consistency: nodes take one of three states: follower, candidate, or leader.

http://thesecretlivesofdata.com/raft/

GOSSIP PROTOCOL

Used to propagate information across the cluster (like an epidemic).

8300 / 8400: RPC

Uses port 8300 (TCP only) but also:

➔ 8301 (TCP/UDP, Gossip over LAN)
➔ 8302 (TCP/UDP, Gossip over WAN)

8500: HTTP

This port exposes:

➔ A web UI
➔ An HTTP API

8600: DNS

This port is used for the DNS server.

Possible to override with --dns-port
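The same override can also live in the agent configuration file; a minimal sketch, where the `ports` stanza is real Consul agent configuration but the file path is illustrative:

```shell
# Remap Consul's default ports via agent configuration instead of --dns-port.
# /tmp/consul-ports.json is an illustrative location; in practice this would
# live in the agent's -config-dir.
cat > /tmp/consul-ports.json <<'EOF'
{
  "ports": {
    "dns": 53,
    "http": 8500
  }
}
EOF
```

The agent would then pick it up when started with a `-config-dir` pointing at that directory.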


https://demo.consul.io

WEB UI LOOKS LIKE THIS

SERVICE DISCOVERY: HANDS ON

swarm → registrator → ekofr/http-ip

ARCHITECTURE

[Diagram: CONSUL (Machine / Swarm Discovery) exposing HTTP & DNS; NODE #01 (Machine / Master); NODE #02 (Machine)]

3 DOCKER MACHINES / 1 SWARM CLUSTER / 7 DOCKER CONTAINERS

CONSUL > Machine

Terminal

$ docker-machine create -d virtualbox consul

Create the “consul” machine under Docker, using the VirtualBox driver.


CONSUL > Container

Terminal

$ eval $(docker-machine env consul)
$ docker run -d \
    -p 8301:8301 \
    -p 8302:8302 \
    -p 8400:8400 \
    -p 8500:8500 \
    -p 53:8600/udp \
    consul

Enter your “consul” machine environment and run the “consul” Docker image (as server).


NODE #01 > Machine (New tab)

Terminal

$ docker-machine create -d virtualbox \
    --swarm \
    --swarm-master \
    --swarm-discovery="consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    swarm-node-01

Create the “swarm-node-01” machine under Docker and map Swarm discovery with Consul.


NODE #01 > Registrator

Terminal

$ eval $(docker-machine env swarm-node-01)

$ docker run -d \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator \
    -ip $(docker-machine ip swarm-node-01) \
    consul://$(docker-machine ip consul):8500

Enter the “swarm-node-01” machine and run the Registrator Docker image as a daemon.


NODE #01 > HTTP Container

Terminal

$ docker network create \
    --subnet=172.18.0.0/16 \
    network-node-01

$ docker run -d \
    --net network-node-01 \
    -p 80:8080 \
    ekofr/http-ip

Create a “network-node-01” Docker network and run “ekofr/http-ip” using this network.


NODE #02 > Machine (New tab)

Terminal

$ docker-machine create -d virtualbox \
    --swarm \
    --swarm-discovery="consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    swarm-node-02

Create the “swarm-node-02” machine under Docker and map Swarm discovery with Consul.


NODE #02 > Registrator

Terminal

$ eval $(docker-machine env swarm-node-02)

$ docker run -d \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator \
    -ip $(docker-machine ip swarm-node-02) \
    consul://$(docker-machine ip consul):8500

Enter the “swarm-node-02” machine and run the Registrator Docker image as a daemon.


NODE #02 > HTTP Container

Terminal

$ docker network create \
    --subnet=172.19.0.0/16 \
    network-node-02

$ docker run -d \
    --net network-node-02 \
    -p 80:8080 \
    ekofr/http-ip

Create a “network-node-02” Docker network and run “ekofr/http-ip” using this network.


What’s happening on DNS?

Terminal

$ dig @$(docker-machine ip consul) http-ip.service.consul

;; QUESTION SECTION:
;http-ip.service.consul. IN A

;; ANSWER SECTION:
http-ip.service.consul. 0 IN A 192.168.99.100
http-ip.service.consul. 0 IN A 192.168.99.102

Let’s make a DNS call to ensure that our “http-ip” service is available on both machines. Then try shutting them down!


What’s happening on DNS?

Terminal

$ dig @$(docker-machine ip consul) http-ip.service.consul SRV

;; ANSWER SECTION:
http-ip.service.consul. 0 IN SRV 1 1 80 c0a86366.addr.dc1.consul.
http-ip.service.consul. 0 IN SRV 1 1 80 c0a86364.addr.dc1.consul.

SRV records allow defining a priority and a weight for DNS entries, but this is not supported by Consul at this time.

You can find more information on SRV records on Wikipedia.
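The four values after the record type in the answers above are priority, weight, port, and target; a quick sketch pulling them apart with shell word splitting:

```shell
# One SRV answer line from the dig output above.
line='http-ip.service.consul. 0 IN SRV 1 1 80 c0a86366.addr.dc1.consul.'

# Split on whitespace: fields 5-8 are priority, weight, port, target.
set -- $line
echo "priority=$5 weight=$6 port=$7 target=$8"
# -> priority=1 weight=1 port=80 target=c0a86366.addr.dc1.consul.
```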

Add DNS to your system

Let’s make an HTTP call to ensure that both nodes answer. Add the Consul DNS server as a resolver on your system.
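On Linux, this can be sketched as a resolv.conf entry. The address below is illustrative (it would normally come from `docker-machine ip consul`), and the sketch writes to a scratch file to stay non-destructive:

```shell
# The Consul machine's IP; normally: CONSUL_IP=$(docker-machine ip consul)
CONSUL_IP=192.168.99.101   # illustrative address

# On a real system this line belongs in /etc/resolv.conf (or your OS's
# resolver configuration); we write a scratch copy here.
echo "nameserver ${CONSUL_IP}" > /tmp/resolv.conf.example
```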


Call HTTP service

Terminal

$ curl http://http-ip.service.consul
hello from 172.18.0.2

$ curl http://http-ip.service.consul
hello from 172.19.0.2

Now, perform your HTTP request and confirm that you are balanced between your two machines.


FAILURE DETECTION: HANDS ON

NODE #01 > Add an HTTP check

Terminal

$ eval $(docker-machine env swarm-node-01)

$ docker kill \
    $(docker ps -q --filter='ancestor=ekofr/http-ip')

First, kill the Docker container that runs ekofr/http-ip. We will relaunch it right after with a health check.

NODE #01 > Add an HTTP check

Terminal

$ docker run -d \
    --net network-node-01 \
    -p 80:8080 \
    -e SERVICE_CHECK_SCRIPT="curl -s -f http://$(docker-machine ip swarm-node-01)" \
    -e SERVICE_CHECK_INTERVAL=5s \
    -e SERVICE_CHECK_TIMEOUT=1s \
    ekofr/http-ip

Add a check to the ekofr/http-ip container. We add an HTTP check here, but it could be whatever you want.

More information about Registrator’s available environment variables here.
More information on Consul check definitions here.
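Those SERVICE_CHECK_* variables correspond roughly to a Consul script check definition; a sketch of the shape it takes (field names follow Consul’s check definition format, the target address is an illustrative node IP):

```shell
# Approximate check definition resulting from the env vars above; the
# address 192.168.99.100 is an illustrative node IP, and the file path
# is just a scratch location.
cat > /tmp/http-ip-check.json <<'EOF'
{
  "check": {
    "name": "http-ip",
    "script": "curl -s -f http://192.168.99.100",
    "interval": "5s",
    "timeout": "1s"
  }
}
EOF
```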

NODE #02 > Add an HTTP check

Terminal

$ eval $(docker-machine env swarm-node-02)

$ docker kill \
    $(docker ps -q --filter='ancestor=ekofr/http-ip')

First, kill the Docker container that runs ekofr/http-ip. We will relaunch it right after with a health check.

NODE #02 > Add an HTTP check

Terminal

$ docker run -d \
    --net network-node-02 \
    -p 80:8080 \
    -e SERVICE_CHECK_SCRIPT="curl -s -f http://$(docker-machine ip swarm-node-02)" \
    -e SERVICE_CHECK_INTERVAL=5s \
    -e SERVICE_CHECK_TIMEOUT=1s \
    ekofr/http-ip

Add a check to the ekofr/http-ip container. We add an HTTP check here, but it could be whatever you want.

More information about Registrator’s available environment variables here.
More information on Consul check definitions here.

Check services health via web UI

If you launch the UI, you should see your health checks:

Check services health via API

Terminal

$ curl http://$(docker-machine ip consul):8500/v1/health/checks/http-ip

[
  {
    "Status": "passing",
    "Output": "hello from 172.18.0.2",
    "ServiceName": "http-ip"
  },
  …
]

Note that you can also check your services’ health via the Consul API “/health” endpoint.
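That JSON can be filtered by script; a sketch that flags non-passing checks from a saved response (the sample payload mirrors the output above, with a hypothetical failing entry added to exercise the filter):

```shell
# Sample payload shaped like /v1/health/checks/http-ip; the "critical"
# entry is hypothetical, added to have something for the filter to catch.
cat > /tmp/health.json <<'EOF'
[
  {"Status": "passing",  "Output": "hello from 172.18.0.2", "ServiceName": "http-ip"},
  {"Status": "critical", "Output": "connection refused",    "ServiceName": "http-ip"}
]
EOF

# In practice you would pipe curl straight in; python3 serves as a
# portable JSON filter here.
python3 -c '
import json
for c in json.load(open("/tmp/health.json")):
    if c["Status"] != "passing":
        print(c["ServiceName"], c["Status"])
'
# -> http-ip critical
```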

THANK YOU
