
How OpenStack is implemented at GMO Public Cloud service

GMO Internet, Inc.
Technical Evangelist: Hironobu Saitoh

GMO Internet, Inc.
Architect: Naoto Gohko

Agenda

• About us
• Hosting/Cloud services in our business segments
• OpenStack
• Why we use OpenStack
• Technical background
• About the difference between the two services (ConoHa and GMO Apps Cloud)

About GMO Internet

http://gmo.jp/en

Japan’s Leading All-in Provider of Internet Services

Business Segments

Infrastructure Business

Using OpenStack at GMO Internet

Public Clouds

We are offering four public cloud services.

As a private cloud platform

GMO Pepabo, Inc., a GMO Group company, uses it as a private cloud.

Why we use OpenStack

• Feature lineup: most of the features needed for cloud development were already implemented.

• Loosely coupled components: different engineering teams can develop features simultaneously.

• Open source software: lets our engineering teams add specific features whenever we want.

Using OpenStack Components (ConoHa)

Component   | Function in ConoHa                                               | Region
------------|------------------------------------------------------------------|------------
Keystone    | Account management, authentication                               | All regions
Nova        | Virtual machines                                                 | All regions
Neutron     | Private networking, IP address assignment for VMs                | All regions
Cinder      | Block storage                                                    | All regions
Swift       | Object store                                                     | Tokyo
Glance      | VM image creation, auto backup                                   | All regions
Ceilometer  | Collects customer usage data; integrates with our payment system | Tokyo
Heat        | VM initialization via cloud-init                                 | All regions
Horizon     | Dashboard (staff only)                                           | All regions
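As a rough illustration of how a client reaches these components, here is a minimal sketch of authenticating against the Keystone v2.0 API and reading the service catalog that points at Nova, Neutron, Cinder, Swift and the rest. The endpoint URL and credentials are placeholders, not ConoHa's actual values.

```python
# Minimal sketch: obtain a Keystone v2.0 token and list the service catalog.
# Endpoint and credentials below are hypothetical placeholders.
import json
import urllib.request

KEYSTONE = "https://identity.example.com/v2.0/tokens"  # hypothetical endpoint

payload = {
    "auth": {
        "tenantName": "my-tenant",
        "passwordCredentials": {"username": "api-user", "password": "secret"},
    }
}

req = urllib.request.Request(
    KEYSTONE,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    access = json.load(resp)["access"]

token = access["token"]["id"]
# The catalog lists public endpoints for Nova, Neutron, Cinder, Swift, etc.
for svc in access["serviceCatalog"]:
    print(svc["type"], svc["endpoints"][0]["publicURL"])
```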

Develop OpenStack related tools

• Docker Machine: a tool that creates Docker hosts (Golang). We fixed a problem and sent a pull request.

• vagrant-conoha: a Vagrant provider for ConoHa that we developed.
  https://github.com/hironobu-s/vagrant-conoha

• conoha-iso: a CLI tool that handles ConoHa-specific APIs (Golang).
  https://github.com/hironobu-s/conoha-iso

• conoha-object-sync: a WordPress plugin we developed that saves media files to Swift (Object Store).
  https://wordpress.org/plugins/conoha-object-sync/
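For context, saving a media file to Swift boils down to an authenticated PUT. Below is a minimal Python sketch of that call; the plugin itself is PHP, and the endpoint, token, container, and file names here are placeholders.

```python
# Sketch of what a media-sync plugin does under the hood: upload one file
# to Swift with a PUT request. Endpoint/token/names are placeholders.
import urllib.request

SWIFT = "https://object-store.example.com/v1/AUTH_tenant"  # hypothetical
TOKEN = "..."  # Keystone token, as in the earlier example

def upload(container: str, name: str, path: str) -> None:
    with open(path, "rb") as f:
        req = urllib.request.Request(
            f"{SWIFT}/{container}/{name}",
            data=f.read(),
            headers={"X-Auth-Token": TOKEN},
            method="PUT",
        )
        urllib.request.urlopen(req)  # 201 Created on success

upload("wordpress-media", "2015/10/photo.jpg", "photo.jpg")
```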

Finally

• About us (Hironobu Saitoh)
• Hosting/Cloud services in our business segments
• OpenStack (Hironobu Saitoh)
• Why we use OpenStack
• Technical background (Naoto Gohko)
• About the difference between the two services (ConoHa and GMO Apps Cloud)

OpenStack service: Onamae.com VPS (Diablo)

• Service XaaS model: VPS (KVM, libvirt)
• Network: 1Gbps
• Network model:
  – Flat-VLAN (Nova Network)
  – IPv4 only
• Public API: none (only web panel)
• Glance: none
• Cinder: none
• Object Storage: none

OpenStack service: Onamae.com VPS (Diablo)

• Nova Network: very simple (LinuxBridge); flat networking is scalable.

→ But there is no added value, such as letting customers freely configure the network.

OpenStack service: ConoHa (Grizzly)

• Service XaaS model: VPS + private networks (KVM + libvirt)
• Network: 10Gbps wired (10GBase-T)
• Network model:
  – Flat-VLAN + Quantum ovs-GRE overlay
  – IPv6/IPv4 dual stack
• Public API: none (only web panel)
• Glance: none
• Cinder: none
• Object Storage: Swift (after Havana)

OpenStack service: ConoHa (Grizzly)

• Quantum network: it used an early version of the Open vSwitch full-mesh GRE-VLAN overlay network.

→ But when the scale becomes large, traffic concentrates on specific nodes of the GRE mesh tunnels (together with undercloud L2 network problems; broadcast storms?).
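The scaling pressure is easy to see: a full mesh needs one tunnel per hypervisor pair, so the tunnel count grows quadratically with the node count. A quick sketch:

```python
# Why a full-mesh GRE overlay gets painful at scale: every pair of
# hypervisors needs its own tunnel, so tunnels grow quadratically.
def mesh_tunnels(nodes: int) -> int:
    return nodes * (nodes - 1) // 2

for n in (10, 50, 200):
    print(n, "nodes ->", mesh_tunnels(n), "GRE tunnels")
# 10 nodes -> 45, 50 nodes -> 1225, 200 nodes -> 19900
```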

OpenStack service: GMO AppsCloud (Havana)

• Service XaaS model: KVM compute + private VLAN networks + Cinder + Swift
• Network: 10Gbps wired (10GBase SFP+)
• Network model: IPv4 Flat-VLAN + Neutron LinuxBridge (not ML2) + Brocade ADX L4 LBaaS original driver
• Public API: provided
• Ceilometer: provided
• Glance: provided (GlusterFS)
• Cinder: HP 3PAR (Active-Active multipath, original) + NetApp
• Object Storage: Swift cluster
• Bare-metal compute: modified cobbler bare-metal deploy driver

GMO AppsCloud (Havana) public API

[Architecture diagram] A web panel (httpd, PHP) and an API wrapper proxy (httpd, PHP; framework: FuelPHP) sit behind an L7 reverse-proxy endpoint. The wrapper proxy performs OpenStack API input validation and consults the customer DB and customer system API, then forwards requests to the Havana Keystone, Nova, Neutron, Glance, Cinder, and Ceilometer APIs and to the Havana Swift proxy.
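As a hedged sketch of this wrapper pattern (not GMO's actual code, which is PHP on FuelPHP; the function, rules, and upstream URL below are invented for illustration), validating a request before forwarding it to the Nova API could look like this:

```python
# Hypothetical sketch of the "API wrapper proxy" pattern: validate input
# against simple rules, then forward to the upstream Nova API.
import json
import re
import urllib.request

NOVA = "https://nova.example.com/v2/{tenant_id}"  # hypothetical upstream

def create_server(token: str, tenant_id: str, body: dict) -> dict:
    # Input validation in front of the real OpenStack API.
    name = body.get("server", {}).get("name", "")
    if not re.fullmatch(r"[A-Za-z0-9-]{1,64}", name):
        raise ValueError("invalid server name")
    req = urllib.request.Request(
        NOVA.format(tenant_id=tenant_id) + "/servers",
        data=json.dumps(body).encode("utf-8"),
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```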

Havana: baremetal compute cobbler driver

Baremetal network:
• Bonding NIC
• Tagged VLAN
• Allowed VLAN + DHCP native VLAN

Swift cluster (Havana to Juno upgrade)

SSD storage: container/account servers at every zone

Havana: baremetal compute, Cisco IOS in southbound
https://code.google.com/p/cisco-ios-cli-automation/

OpenStack Juno: two service clusters released

Mikumo ConoHa / Mikumo Anzu

Mikumo = 美雲 = "beautiful cloud"

New Juno region released: 10/26/2015

ConoHa (Juno)
• Service model: public cloud by KVM
• Network: 10Gbps wired (10GBase SFP+)
• Network model:
  – Flat-VLAN + Neutron ML2 ovs-VXLAN overlay + ML2 LinuxBridge (SaaS only)
  – IPv6/IPv4 dual stack
• LBaaS: LVS-DSR (original)
• Public API: provided (v2 domain)
• Compute node: all SSD for booting OS, without Cinder boot
• Glance: provided
• Cinder: SSD NexentaStor zfs (SDS)
• Swift: shared Juno cluster
• Cobbler deploy on the undercloud, with Ansible configuration
• SaaS original services with Keystone auth: email, web, CPanel and WordPress

OpenStack Juno: two service clusters released

GMO AppsCloud (Juno)
• Service model: public cloud by KVM
• Network: 10Gbps wired (10GBase SFP+)
• Network model:
  – L4-LB-NAT + Neutron ML2 LinuxBridge VLAN
  – IPv4 only
• LBaaS: Brocade ADX L4-NAT-LB (original)
• Public API: provided
• Compute node: flash-cached or SSD
• Glance: provided (NetApp offload)
• Cinder: NetApp storage
• Swift: shared Juno cluster
• Ironic on the undercloud: compute servers deployed with Ansible config
• Ironic baremetal compute: Nexus Cisco for tagged VLAN module; ioMemory configuration

Compute and Cinder (zfs): SSD

Toshiba enterprise SSD:
• A good balance of cost and performance
• Excellent IOPS performance, low latency

Compute local SSD: the benefits of local SSD storage on compute nodes:
• Provides faster storage than Cinder boot
• Online live snapshots of VM instances are easy to take
• VM deployment is fast

ConoHa: the Compute service was modified to take online live snapshots of VM instances.

http://toshiba.semicon-storage.com/jp/product/storage-products/publicity/storage-20150914.html
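Taking such an online snapshot goes through the standard Nova "createImage" server action; here is a minimal sketch with a placeholder endpoint, token, and server ID:

```python
# Sketch: take an online snapshot of a running VM via the Nova
# "createImage" server action. URL, token, and IDs are placeholders.
import json
import urllib.request

NOVA = "https://nova.example.com/v2/TENANT_ID"  # hypothetical endpoint
TOKEN = "..."  # Keystone token
SERVER_ID = "..."  # instance UUID

body = {"createImage": {"name": "my-live-snapshot", "metadata": {}}}
req = urllib.request.Request(
    f"{NOVA}/servers/{SERVER_ID}/action",
    data=json.dumps(body).encode("utf-8"),
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # 202 Accepted; Glance stores the snapshot image
```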

NexentaStor zfs Cinder: ConoHa cloud (Juno)

[Diagram: compute nodes attached to the NexentaStor zfs Cinder backend]

Designate DNS: ConoHa cloud (Juno)

[Diagram: Designate components — client API, DNS identity endpoint, storage DB, OpenStack Keystone, backend DB, RabbitMQ, Central]

Components of the DNS and GSLB (original) back-end services.
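For a flavor of the client side, creating a zone through the Designate v2 REST API looks roughly like this (endpoint and token are placeholders, and the Juno-era v1 API used different paths):

```python
# Sketch: create a DNS zone via the Designate v2 API.
# Endpoint and token below are hypothetical placeholders.
import json
import urllib.request

DESIGNATE = "https://dns.example.com"  # hypothetical endpoint
TOKEN = "..."

body = {"name": "example.org.", "email": "hostmaster@example.org"}
req = urllib.request.Request(
    f"{DESIGNATE}/v2/zones",
    data=json.dumps(body).encode("utf-8"),
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["id"])  # new zone's UUID
```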

NetApp storage: GMO AppsCloud (Juno)

If the same clustered Data ONTAP NetApp system backs both Glance and Cinder, copies between the OpenStack services can be offloaded to the NetApp side.

• Create volume from Glance image (this requires conditions under which the Glance image does not need conversion, e.g. qcow2 to raw).
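The offloaded operation itself is an ordinary Cinder call; here is a minimal sketch of creating a volume from a Glance image, with a placeholder endpoint, token, and image ID:

```python
# Sketch: create a Cinder volume from a Glance image -- the operation the
# NetApp copy offload accelerates when no format conversion is needed.
import json
import urllib.request

CINDER = "https://cinder.example.com/v2/TENANT_ID"  # hypothetical endpoint
TOKEN = "..."
IMAGE_ID = "..."  # a raw-format Glance image

body = {"volume": {"size": 50, "name": "vol-from-image", "imageRef": IMAGE_ID}}
req = urllib.request.Request(
    f"{CINDER}/volumes",
    data=json.dumps(body).encode("utf-8"),
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["volume"]["id"])
```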

Ironic with undercloud: GMO AppsCloud (Juno)

For compute server deployment: Kilo Ironic, all-in-one.
• Compute server: 10G boot
• Cloud-init: network
• Compute setup: Ansible

Undercloud Ironic (Kilo): it uses a different network and DHCP than the service baremetal compute Ironic (Kilo).

Ironic (Kilo) baremetal: GMO AppsCloud (Juno)

Boot baremetal instance:
• Baremetal server (with SanDisk Fusion ioMemory)
• 1G x4 bonding + tagged VLAN
• Cloud-init: network + LLDP
• Network: Nexus Cisco, allowed-VLAN security

Ironic Kilo + Juno works fine:
• Ironic Python driver
• Whole image write
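For illustration, checking what the undercloud Ironic sees is a single API call; a minimal sketch with a placeholder endpoint and token:

```python
# Sketch: list Ironic nodes and their provision/power states on the
# undercloud via the Ironic REST API. Endpoint/token are placeholders.
import json
import urllib.request

IRONIC = "https://ironic.example.com"  # hypothetical undercloud endpoint
TOKEN = "..."

req = urllib.request.Request(
    f"{IRONIC}/v1/nodes",
    headers={"X-Auth-Token": TOKEN},
)
for node in json.load(urllib.request.urlopen(req))["nodes"]:
    print(node["uuid"], node["provision_state"], node["power_state"])
```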


Finally: GMO AppsCloud on Juno OpenStack was released on 10/27/2015.

• SanDisk Fusion ioMemory can also be deployed by Kilo Ironic on Juno OpenStack.

• Compute servers were deployed by Kilo Ironic with an undercloud all-in-one OpenStack; compute server configuration was applied with Ansible.

• Cinder and Glance are provided with the NetApp copy-offload storage mechanism.
• LBaaS is an original driver for Brocade ADX NAT mode.

On the other hand, Juno OpenStack ConoHa was released on 05/18/2015.

• Designate DNS and a GSLB service were started on ConoHa.
• Cinder storage is SDS, provided by NexentaStor zfs for a single volume type.
• LBaaS is an original LVS-DSR driver.