Deploying OpenStack Using Docker in Production

Published on 15-Feb-2017

Transcript

Deploying OpenStack Using Docker in Production

ERIC: Introductions

This talk is about deploying *OpenStack* with Docker, not deploying Docker containers *with* OpenStack.

Overview
- The Pain of Operating OpenStack
- Possible Solutions
- Why Docker Works
- Why Docker Doesn't Work
- Docker @ TWC
- Lessons Learned


Docker & OpenStack @ TWC
- Docker in production since July 2015
- First service was Designate
- Added Heat, Nova, and Keystone
- Nova using Ceph and SolidFire backends
- Neutron in progress
- Glance and Cinder later this year
- Using Docker 1.10 and Docker Registry V2

Just a bit of background. We first started using Docker in production in July of last year. The first service we deployed with Docker was Designate, followed by Heat, Nova, then Keystone. With Nova we did a two-stage deploy: control node services first, followed by compute a while later. With Nova we're running Ceph and SolidFire as storage backends, and it *is* possible to get nova-compute and iSCSI working inside a Docker container.

We'll be moving Neutron into Docker next, then coming back to Glance and Cinder. The primary short-term driver for Neutron is the OVS agent restart fixes, since agent restarts currently cause small outages. Those changes have largely been merged in the last couple of months, but we try to run stable release branches in production, and we're seeing the changes land in the stable branches now. We're using Docker 1.10 with Docker Registry V2.
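
The exact flags depend heavily on the environment, but to give a feel for what "nova-compute inside a container" implies, here is a rough, hypothetical sketch of the kind of invocation involved; the registry name, tag, and mount points are illustrative, not our actual tooling:

    # Hypothetical example: nova-compute needs host networking, device
    # access, and the host's iSCSI and libvirt state shared into the
    # container, so it runs privileged with a number of bind mounts.
    docker run -d --name nova-compute \
        --net=host \
        --privileged \
        -v /dev:/dev \
        -v /run:/run \
        -v /etc/iscsi:/etc/iscsi \
        -v /etc/nova:/etc/nova:ro \
        -v /var/lib/nova:/var/lib/nova \
        -v /var/lib/libvirt:/var/lib/libvirt \
        registry.example.com/nova:12.0.4-3-gabc1234.16.dd35404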

How Did We End Up Here?
- Started with packages for deployments
- Don't like big-bang upgrades
- Want to be able to carry local patches
- Want to run mixed versions of services
- Smaller upgrades, more often

So how did we end up deploying OpenStack services with Docker? We've traditionally used packages for deployments, but over time we realized packages weren't meeting our requirements very well.

Packages tend to lead to a big-bang type of upgrade. We run multiple services on the same set of control servers, and when doing upgrades our API outages are longer and riskier than we want them to be.

We want to be able to carry local patches and cherry-pick fixes from master branches. Many times we run into a bug, find it on Launchpad, and see that a fix is committed on master but not backported, or that a fix is backported but the package is not ready yet. We can do some of those backports ourselves.

We don't want to have to run the same version of OpenStack for all services. For example, we're much more aggressive about upgrading services like Horizon and Heat than Nova and Neutron, and we want to upgrade services independently of each other.

We also want to follow stable updates more aggressively than distros do. Only a few stable releases are made over the six-month lifetime of an OpenStack release, and distros usually lag behind those by weeks, if not longer. We want to do smaller upgrades, more often, one or two services at a time.

So you may be thinking: why can't you do this with packages, virtualenvs, and so on? We looked at and tried some different options.

Why Not Packages?
- Built packages for Keystone
- Worked for local patches
- Worked for updating stable branches
- Doesn't work for mixed releases
- Limited by distro Python packaging
- Packaging workflow is a pain
- Packages slow down your workflow
- Package may not exist yet

We tried packages for Keystone. We took the packages from Canonical, replaced the source in them, and left them mostly the same otherwise. This worked reasonably well for carrying patches and for stable updates.

It didn't work well for mixed OpenStack releases. With normal distro packaging, you can't have two versions of the same Python library installed at the same time, and there are significant conflicts in library requirements across OpenStack releases. Because of this we were still dependent on Canonical for packaging the Python libraries that the services depended on.

The packaging workflow on Debian/Ubuntu isn't rocket science, but it clearly hasn't changed much in the last ten years. I hate it.

There are also times when we want the latest and greatest of some Python library, which may not even have a package built for it yet. And if you use pip to install Python libraries into system space, there is no telling what you might end up with, especially when installing from git URLs.

Why Not Python Virtual Envs?
- Deployed Designate with virtual envs
- Mirrored Python packages internally
- Built virtual envs on servers
- Was slow to deploy
- Still have to install/manage non-Python deps

Another option you sometimes hear people using is Python virtual environments. We use a virtualenv for Horizon, which probably has the most dependencies of any service. We originally deployed Designate using virtual environments because there were no packages available: we mirrored the Python packages internally, built them into wheels, and created the virtualenvs on the servers at deploy time.

This met most of our requirements, but it was slow, and we had issues with Python modules that required external commands, shared libraries, and the like. There is also still an issue with shared dependencies, such as an oslo library that reads from a shared location on the filesystem like /etc/nova.
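
As an illustration of that workflow (the index URL, paths, and the choice of Designate here are placeholders rather than our exact tooling), the build and deploy steps looked roughly like this:

    # Build wheels once, pulling only from the internal PyPI mirror.
    pip wheel --wheel-dir ./wheelhouse \
        --index-url https://pypi.internal.example.com/simple \
        -r requirements.txt

    # At deploy time, create the virtualenv on each server and install
    # from the prebuilt wheels only (no network access to PyPI needed).
    virtualenv /opt/openstack/designate
    /opt/openstack/designate/bin/pip install --no-index \
        --find-links ./wheelhouse -r requirements.txt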

Why Docker?

Everyone Else Is Doing It?

Everyone else is doing it? I'm only kind of kidding here. Yes, you may have weird problems with Docker in some cases, but nearly every problem we've had, other people have had also. It's getting better: it's being actively developed and it's maturing at an impressive pace. Packaging tools aren't improving, and openst... There aren't a lot of mature toolchains for deploying Python-based virtual environments across dev, staging, and prod. Don't discount the value of following the crowd in this case. Besides, you're running OpenStack already, right? You're used to deploying software to production that has what we might call a quirky personality?

Why Docker?
- Reproducible builds
- Easy to distribute artifacts
- Contains all dependencies
- Easy to install multiple versions of an image

But aside from that, why Docker? Being able to reproduce builds and deployments is really important for us. When we do a build, we're able to encapsulate everything that is needed to run that service, and when we do a deploy, we're only dependent on our internal Docker registry. It's easy to automate building and distributing Docker images, and once you build your images, the problem of managing shared libraries and other dependencies is solved: it's all inside the image.

It's also easy to install multiple versions of a Docker image on a given server. When we've done upgrades in the past, the majority of the upgrade time was package download, install, and configuration. With Docker we can prestage the new image, so an upgrade ends up being just running database migrations, making any needed config changes, and starting the service with the new image.
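
In practice that upgrade flow can be as small as the following sketch; the image name, tag, and the use of Keystone are illustrative, and the exact migration command varies by service:

    # Prestage the new image ahead of the maintenance window.
    docker pull registry.example.com/keystone:9.0.2-4-gdeadbee.17.aa11bb2

    # During the window: run the DB migration from the new image...
    docker run --rm \
        -v /etc/keystone:/etc/keystone:ro \
        registry.example.com/keystone:9.0.2-4-gdeadbee.17.aa11bb2 \
        keystone-manage db_sync

    # ...then swap the running container over to the new tag.
    docker stop keystone-api && docker rm keystone-api
    docker run -d --name keystone-api --net=host \
        -v /etc/keystone:/etc/keystone:ro \
        registry.example.com/keystone:9.0.2-4-gdeadbee.17.aa11bb2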

Why Not Docker?
- Restarting Docker restarts containers
- Intermittent bugginess
- Complex services are hard to fit into Docker
- Requires new tooling for build/deployment/etc.

So why wouldn't you want to use Docker for deploying OpenStack?

Restarting the Docker daemon restarts all containers (this is fixed in a later version). That can be a major issue for things like the Neutron OVS agent.

Docker does have bugs: we've seen intermittent issues with the aufs backend, and we've also seen intermittent issues on new installs with the Docker bridge not being configured correctly. However, we've been able to work around these relatively minor issues.

Some services like Keystone or Heat are pretty easy to get into a container. More complex services like Neutron require a lot of specific configuration in order to talk to OVS, create network namespaces, and so on, and Nova requires special configuration for talking to storage and libvirt.

Also, unless you're already deploying services with Docker, you're going to need some new tooling for building images, installing them, and making sure they run. This is yet another thing to manage and version. For example, the existing Puppet modules for OpenStack don't have any direct Docker support; that's something we're maintaining ourselves, which we'll talk about more in a bit.

Let's talk a little about how we deploy OpenStack using Docker at Time Warner Cable.
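
For reference, the daemon-restart issue was addressed upstream after the 1.10 release we run: Docker 1.12 added a live-restore option that lets containers keep running while the daemon restarts. On a newer release it can be enabled in the daemon configuration, roughly like this:

    # /etc/docker/daemon.json (Docker 1.12 and later)
    {
      "live-restore": true
    }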

Docker @ TWC: Images
- Building base images using debootstrap
- Build openstack-dev image based on that
  - Contains all common deps
- Image per OpenStack service
- Per-service base requirements.txt and a frozen one
  - Frozen requirements.txt is used for image builds
- Uses upper-constraints.txt [1] for frozen requirements

[1] https://github.com/openstack/requirements/blob/master/upper-constraints.txt

CLAYTON: So we've covered some background and the reasons why and why not to use Docker; now let's talk about how we're deploying services with Docker, starting with how we build our Docker images.

We build our base images from an internal Ubuntu mirror using debootstrap. On top of that we build an image we call openstack-dev, a relatively fat image that all OpenStack service images are built from, containing all the shared libraries and command-line tools needed by any service. From there we build per-service images (a Nova image, a Keystone image, and so on).

One key thing is that we want to be very explicit about which versions of dependencies an image is built with, so that we get reproducible results. To achieve that, we keep two requirements.txt files per service: one is very high level, and the other has all dependencies pinned to specific versions. For example, the high-level requirements.txt for Nova pulls in Nova itself, the MySQL driver, the memcache client, and some internal plugins we've developed. From that high-level requirements file, a tool builds a Python virtual environment locally, constrained by the upper-constraints.txt file from the upstream infra project, which ensures we're using tested and supported versions of the libraries. From that virtual environment we generate a frozen requirements.txt with every library pinned to a specific version. Both the high-level and frozen requirements.txt files are checked in along with the Dockerfile.

Make sure you have a plan for updating your base images. Docker images are yet another thing to update when new bugs or security issues are announced, and you want to be sure you're only changing the things you intend to.
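
A rough sketch of that build flow, with the mirror URL, release name, and paths as placeholders rather than our actual scripts:

    # Base image: bootstrap Ubuntu from the internal mirror and import it.
    debootstrap --variant=minbase xenial ./rootfs http://mirror.example.com/ubuntu
    tar -C ./rootfs -c . | docker import - openstack-base:latest

    # Frozen requirements for a service (Nova as an example): build a
    # throwaway virtualenv constrained by upstream upper-constraints.txt,
    # then freeze it and check the result in next to the Dockerfile.
    virtualenv /tmp/nova-freeze
    /tmp/nova-freeze/bin/pip install \
        -c upper-constraints.txt \
        -r requirements.txt
    /tmp/nova-freeze/bin/pip freeze > requirements-frozen.txt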

Docker @ TWC: Image Tags

Tag should:
- Identify OpenStack service version
- Identify tooling version
- Be automatically generated
- Be unique

Another thing you need to think about for images is how you are going to tag, or version, them. When we started thinking about how we wanted to tag our images, there were a few things we wanted the tag to do. It should be obvious which version of the service we're running, ideally down to a specific commit. It should also clearly identify the version of the tooling used to generate the image: if the Dockerfile changes, the image tag should change too. It should be automatically generated, because we didn't want to rely on people to update tags. And lastly, every image generated should have a unique tag; we didn't want any ambiguity about which version of a tag was the right one.

Docker @ TWC: Image Tags

5.0.1-9-g0441ca8.16.dd35404

  5.0.1-9-g0441ca8   git-describe for Heat
  16                 tooling # commits
  dd35404            tooling commit hash

This is an example of what we came up with; it's the tag we're using for our Heat image currently. The first part is the output of the git-describe command for the Heat commit that we've put in this image: the closest tag (5.0.1), the number of commits since that tag (9), and the short hash of that git commit. The second part versions the Dockerfile and the scripting that goes along with it: the number of commits we've made to the tooling (16) and the short hash of the commit containing the Dockerfile and tooling.

When we deploy new images, we always pin to a specific tag. We don't use the "latest" tag convention that is common on Docker Hub.
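
A minimal sketch of how a tag in that format can be generated automatically; the repository paths and registry name here are illustrative, not the actual TWC build scripts:

    # Service version, taken from the Heat checkout being built.
    service_version=$(git -C ./heat describe)                   # e.g. 5.0.1-9-g0441ca8

    # Tooling version, taken from the repo holding the Dockerfile and scripts.
    tooling_commits=$(git -C ./tooling rev-list --count HEAD)   # e.g. 16
    tooling_hash=$(git -C ./tooling rev-parse --short HEAD)     # e.g. dd35404

    tag="${service_version}.${tooling_commits}.${tooling_hash}"
    docker build -t "registry.example.com/heat:${tag}" ./tooling/heat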

Docker @ TWC: Image Distribution
- Using Docker Registry V2
- Registry using file backend for local storage
- Publish to master registry via Jenkins
- Replicate to registry mirrors via rsync
- Mirrors provide read-only access to images
- No dependency on production environment

So as I mentioned before, we're using the open source Docker Registry V2, set up with basic auth and TLS, and using the file backend to store image data on local storage. When a change is merged to git, a new image is automatically built by Jenkins and pushed into the Docker Registry master; Jenkins is the only way to push images into the master registry. After the image is pushed, another job kicks off in our two development sites and mirrors the data from the master to local mirrors via rsync. These mirrors provide read-only access to the images and give us some measure of geographic redundancy.

One key thing here is that our Docker registries live in our development environments and don't use Swift or anything like that as a backend. We thought about using the Swift backend for our registry and intentionally decided not to have our production deploys depend on production being available. The scary scenario is this: we use Keystone auth for Swift, and we deploy Keystone using Docker. What if we have a bad Keystone deploy? How do we fix it?
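
Because the registry's file backend is just a directory tree on disk, the mirroring job is little more than an rsync; the hostnames and paths here are placeholders (/var/lib/registry is the file backend's conventional root):

    # Run on each mirror after a successful push to the master registry.
    rsync -a --delete \
        registry-master.example.com:/var/lib/registry/ \
        /var/lib/registry/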

Docker @ TWC: Deployments
- Images installed with puppet-docker
- Managed with twc-openstack/os_docker
- Worked with Puppet OpenStack project to add hooks for software and service management
- The os_docker module uses these to extend the OpenStack Puppet modules

Deployments: how do we actually get things running using these awesome Docker images we've worked so hard on?

We've always been a Puppet shop, and we're big fans of the Puppet OpenStack modules. However, those modules only support installing services from packages. When we first started down this path, we forked the Designate module and added support for non-package installs. We tried to contribute this upstream and got complaints that it was very specific to our use case and wasn't likely to be useful to other people; those complaints were 100% valid.

What we came up with after that was the idea of adding hooks to the upstream Puppet modules to make package and service management extensible. We brought a proof-of-concept implementation to the Puppet OpenStack team, they were receptive, so we pursued it. We added hooks support to the puppet-designate module and created a new module that became os_docker.

The os_docker module is our special sauce for deploying OpenStack services with Docker. It contains the glue needed to pull Docker images and set up the init scripts, example config files, and CLI wrappers around docker, the things that packages normally provide. The os_docker module is publicly available in our GitHub org, and we've tried to keep it relatively unopinionated if you're interested in taking a look. It supports Keystone, Nova, Designate, and Heat today, with more being added, and it integrates with the stock Puppet OpenStack modules.
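
As an example of the kind of glue involved, a host-side CLI wrapper for a containerized service might look something like the sketch below; this illustrates the idea rather than the actual script shipped by os_docker, and the image name and mounts are assumptions:

    #!/bin/bash
    # Hypothetical wrapper so operators can keep typing "nova-manage ..."
    # on the host even though Nova now runs in a container.
    exec docker run --rm \
        --net=host \
        -v /etc/nova:/etc/nova:ro \
        -v /var/log/nova:/var/log/nova \
        registry.example.com/nova:12.0.4-3-gabc1234.16.dd35404 \
        nova-manage "$@"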

So everything hasn't been smooth sailing with our Docker...
