Marc Curry, Steve Speicher
OpenShift Product Management Team
What’s New in Red Hat OpenShift Container Platform 3.9
OpenShift Commons Briefing, 21 March 2018
SERVICE CATALOG (LANGUAGE RUNTIMES, MIDDLEWARE, DATABASES, …)
SELF-SERVICE
APPLICATION LIFECYCLE MANAGEMENT (CI/CD)
BUILD AUTOMATION | DEPLOYMENT AUTOMATION
CONTAINER | CONTAINER | CONTAINER | CONTAINER | CONTAINER
NETWORKING | STORAGE | REGISTRY | SECURITY | LOGS & METRICS
CONTAINER ORCHESTRATION & CLUSTER MANAGEMENT (KUBERNETES)
ATOMIC HOST / RED HAT ENTERPRISE LINUX
OCI CONTAINER RUNTIME & PACKAGING
INFRASTRUCTURE AUTOMATION & COCKPIT
OpenShift = Enterprise Kubernetes + Build, Deploy, and Manage Containerized Apps
OpenShift Roadmap

Q3 CY2017: OpenShift Container Platform 3.6 (August)
● Kubernetes 1.6 & Docker 1.12
● New application services: 3scale API Management On-Prem, SCL 2.4
● Web UX project overview enhancements
● Service Catalog/Broker & UX (Tech Preview)
● Ansible Service Broker (Tech Preview)
● Secrets encryption (3.6.1)
● Signing/scanning + OpenShift integration
● Storage: CNS Gluster block, AWS EFS, CephFS
● OverlayFS with SELinux support (RHEL 7.4)
● User namespaces (RHEL 7.4)
● System containers for docker

Q4 CY2017: OpenShift Container Platform 3.7 (December)
● Kubernetes 1.7 & Docker 1.12
● Red Hat OpenShift Application Runtimes (GA)
● Service Catalog/Broker & UX (GA)
● OpenShift Ansible Broker (GA)
● AWS Service Broker
● Network Policy (GA)
● CRI-O (Tech Preview)
● CNS for logging & metrics (iSCSI block) and the registry
● CNS 3x density of PVs (1000+ per 3-node cluster), integrated install
● Prometheus metrics and alerts (Tech Preview)

Q1 CY2018: OpenShift Container Platform 3.9 (March)
● Kubernetes 1.8 and 1.9, docker 1.13
● CloudForms CM-Ops (CloudForms 4.6)
● CRI-O (full support in z-stream)
● Device Manager (Tech Preview)
● Central auditing
● Jenkins improvements
● HAProxy 1.8
● Web console pod
● CNS (resize, volume custom naming, volume metrics)

Q2 CY2018: OpenShift Container Platform 3.10 (June)
● Kubernetes 1.10, CRI-O and Buildah (Tech Preview)
● Custom metrics HPA
● Smart pruning
● Istio (Dev Preview)
● IPv6 (Tech Preview)
● OVN (Tech Preview), multi-network, Kuryr, IP per project
● oc client for developers
● AWS auto scaling
● Golden image tooling and TLS bootstrapping
● Windows Server Containers (Dev Preview)
● Prometheus metrics and alerts (GA)
● OCP + CNS integrated monitoring/management, S3 service broker
OCP 3.9 - Extensible Application Platform
● Service expansion: database APBs, SCL 3.0, catalog view enhancements
● Security: auditing, Jenkins secret integration, private repo ease of use
● Manageability: CFME 4.6, HAProxy 1.8, egress port control, soft prune, PV resize
● Workload diversity: Device Manager, local storage
● Container runtime: CRI-O
EXCITING MIDDLEWARE SERVICES UPDATES
● A high-performance rule processing service based on the Drools 7 community project, with extensions for complex event processing (CEP)
● A guided rules editor, decision tables, and web-based rule authoring, testing, and deployment tools
● A business resource optimization tool based on the OptaPlanner community project
● A managed repository for rule definitions, with built-in governance workflows to ensure that changes and updates are properly controlled
● Node.js core distribution to be delivered only through RHOAR; no standalone SKU
○ Evaluating NPM modules for future support, with a focus on microservice development and deployment concerns
● Non-distro efforts
○ Tooling & boosters for RHOAR integration
● Booster coverage
○ Showcases Node.js features specific to RHOAR/microservices
○ Work continues on infrastructure/workflow
● Consumption
○ S2I images (supported for v8; unsupported but available for v9/v10)
○ OpenShift Streams integration
EXCITING MIDDLEWARE SERVICES UPDATES
March 12th!
OPENSHIFT SERVICE CATALOG

Service brokers and what they expose:
● OpenShift Ansible Broker → Ansible Playbook Bundles (ANSIBLE)
● OpenShift Template Broker → OpenShift Templates (OPENSHIFT)
● AWS Service Broker → Public Cloud Services (AMAZON WEB SERVICES)
● Other Service Brokers → Other Services (OTHER COMPATIBLE SERVICES)
Self-Service / UX
Expose and Provision Services
What’s New for 3.9:
● New upstream community website: Automation Broker
● Downstream will still be called “OpenShift Ansible Broker”, with the main focus on APB “Service Bundles” (application definition)
● Community-contributed application repo: https://github.com/ansibleplaybookbundle
● Support for running the broker behind an HTTP proxy in a restricted network environment
○ Documentation: https://github.com/openshift/ansible-service-broker/blob/master/docs/proxy.md
○ Video: https://www.youtube.com/watch?v=-Fdfz1RqI94
● Plan or parameter updates of PostgreSQL, MariaDB, and MySQL APB-based services will preserve data
○ Update logic in the APB handles preserving data; useful when you want to move from a service plan with ephemeral storage to a different plan backed by a PV
○ Video: https://www.youtube.com/watch?v=kslVbbQCZ8s&t=220s
● Now an official add-on for Minishift
○ Documentation: https://github.com/minishift/minishift-addons/tree/master/add-ons/ansible-service-broker
○ Video: https://www.youtube.com/watch?v=6QSJOyt1Ix8
● Network isolation support for multi-tenant environments
○ Joins isolated networks so that APBs can talk over the network to the pods they create
● [Experimental] Async bind support in the broker
○ Allows binds that need more time to execute than the 60-second response time defined in the OSB API spec
○ An async bind spawns a binding job and returns the job token immediately; the catalog uses last_operation to monitor the state of the running job until it either completes successfully or fails
Feature(s): OpenShift Ansible Broker
Self-Service / UX
Feature(s): Catalog from within project view
Description: Quickly get to the catalog from within a project
How it Works:
● “Catalog” item in left navigation
Self-Service / UX
Feature(s): Quick search catalog from within project view
Description: Need to quickly find services
How it Works:
● Type in your search criteria
● Get a minimal service icon
Self-Service / UX
Feature(s): Select preferred home page
Description: Power users may want to jump straight to certain pages after login
How it Works:
● Access the menu from account dropdown
● Pick any of: Catalog Home, All Projects, or a specific project
● Log out and back in
● Enjoy!
Self-Service / UX
Feature(s): Configurable inactivity timeout
Description: Configure web console to log user out after a set timeout
How it Works:
● Default is 0 (never log out)
● Set the Ansible variable to the number of minutes
Self-Service / UX
openshift_web_console_inactivity_timeout_minutes=n
Feature(s): Console as separate pod
Description: Separate web console out of API server
How it Works:
● Web console packaged as a container image
● Deployed as a pod
● Configuration is via a ConfigMap, and changes are auto-detected
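As a sketch, the console's ConfigMap can be inspected and edited with the standard client; the namespace and ConfigMap names below match the 3.9 defaults, but verify them on your cluster:

```shell
# View the web console configuration (3.9 default names shown)
oc get configmap webconsole-config -n openshift-web-console -o yaml

# Edit it; the console pod picks up the change automatically
oc edit configmap webconsole-config -n openshift-web-console
```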
Self-Service / UX
Feature(s): StatefulSets out of tech preview
Description: Removed tech preview label
How it Works:
● Same capability as tech preview feature in 3.7
Self-Service / UX
DevExp / Builds
Feature(s): Jenkins memory usage improvements
Description: Jenkins worker pods often consume too much or too little memory
How it Works:
● The startup script intelligently looks at pod limits
● JVM environment variables are set appropriately to ensure limits are respected by spawned JVMs
● ‘oc cluster up’ allows specifying the number of PVs to create
● Ability to specify default tolerations
● Toleration of CRI-O in build scenarios
● Secrets available in Jenkins as credentials
DevExp / Builds Miscellaneous
Dev Tools - Local Dev
Minishift 1.14 / CDK 3.3:
● Many improvements around add-ons: dependencies, management, …
● Caching of container images
● Static IP for Hyper-V
● Host folder mounts using SSHFS
Dev Tools - SCL 3.0!
[Slide graphic: NEW and UPDATED software collection version badges: 3.4, 10.2, 1.12, 7.1, 9.6, 8, 3.6]
Feature(s): Semi-automatic namespace-wide egress IP
Description: All outgoing external connections from a project share a single fixed source IP address, so that external firewalls can recognize the application associated with a packet.
Networking
How it Works:
● Supported by the multitenant and networkpolicy plugins
● Egress IPs do not accept connections on any port
● NetNamespace has an EgressIPs array (currently limited to one IP) for the egress IP
● The egress IP must be on the local subnet of the node's primary network interface (it is added as an additional address on that interface)
● Once EgressIPs is set on a NetNamespace, and until the egress IP is claimed, pod-to-pod traffic is allowed but pod-to-external traffic is dropped
● Once claimed, a pod in that NetNamespace on that node can send traffic to external IPs, with the egress IP as the source of the traffic
● For a pod in that NetNamespace on a different node, traffic first travels via VXLAN to the node hosting the egress IP, and from there to external IPs
● Egress traffic from pods in other NetNamespaces is still NAT'd to the primary IP address of the node, just as in the no-egress-IP case
3.7
Stability enhancements that will enable in 3.10:
● HA
● “Semi-Automatic” → “Automatic” (no longer a manual admin process)
3.9
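A minimal sketch of the two objects involved; the project name, node name, subnet, and IP addresses below are illustrative:

```yaml
# NetNamespace: request the egress IP for the project
apiVersion: v1
kind: NetNamespace
metadata:
  name: myproject
netname: myproject
netid: 1001
egressIPs:
- 10.0.0.100
---
# HostSubnet: claim the IP on the node that will host it
apiVersion: v1
kind: HostSubnet
metadata:
  name: node1.example.com
host: node1.example.com
hostIP: 10.0.0.10
subnet: 10.128.0.0/23
egressIPs:
- 10.0.0.100
```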
Feature(s): Support our own HAProxy RPM for consumption by the router
Description: Route configuration changes and process upgrades performed under heavy load have typically required a stop/start sequence of certain services, causing temporary outages. There existed iptables “trickery” to work around the issue.
In OpenShift 3.9, HAProxy 1.8 sees no difference between updates and upgrades; a new process is used with a new configuration, and the listening socket’s file descriptor is transferred from the old to the new process so the connection is never closed. The change is seamless, and enables our ability to do things, like HTTP/2, in the future.
Networking

How the HAProxy “soft reload” used to work:
1. The new process, with its new configuration, tries to bind to all listening ports.
2. If binding fails, it signals the old process(es) to temporarily release the ports, then tries again.
3. If binding fails again, it gives up and signals the old process to continue taking care of the incoming connections.
4. If binding succeeds, it signals the old process that it can quit once it has finished serving existing connections, and the new process starts listening for incoming connections.
Between the release and the rebind, the ports may not be bound by any process, so incoming connections can be dropped.
Feature(s): StatefulSets / DaemonSets / Deployments no longer Tech Preview
Description: The core workloads API, which includes the DaemonSet, Deployment, ReplicaSet and StatefulSet kinds, has been promoted to GA stability in upstream Kubernetes.
For OpenShift, this means that StatefulSets, DaemonSets, and Deployments are now stable/supported, and the Tech Preview label is removed in OpenShift 3.9.
Additional Information:● StatefulSets● DaemonSets● Deployments
Master
Feature(s): Central Audit Capability
Description: Provides auditing of the items that admins would like to…

View (examples):
● Event timestamp
● The activity that generated the entry
● The API endpoint that was called
● The HTTP output
● The item changed by an activity, with details of the change
● The username of the user that initiated the activity
● The name of the namespace the event occurred in, where possible
● The status of the event, either success or failure
Master
Trace (examples):
● User login and logout from the web interface (including session timeout), including unauthorized access attempts
● Account creation, modification, or removal
● Account role/policy assignment/de-assignment
● Scaling of pods
● Creation of a new project or application
● Creation of routes and services
● Triggers of builds and/or pipelines
● Addition/removal or claim of persistent volumes
How It Works:
Set up auditing in the master-config file, then restart the master services:

auditConfig:
  enabled: true
  auditFilePath: "/var/log/audit-ocp.log"
  maximumFileRetentionDays: 10
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 10
  logFormat: json
  policyConfiguration: null
  policyFile: /etc/origin/master/audit-policy.yaml
  webHookKubeConfig: ""
  webHookMode: ""
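For reference, an audit-policy.yaml for the advanced (policy-driven) audit might look like the following sketch; the rules shown are illustrative, not a recommended production policy:

```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# Skip read-only requests to health and metrics endpoints
- level: None
  nonResourceURLs:
  - /healthz*
  - /metrics
# Record secret access at Metadata level to avoid logging secret data
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log everything else at Request level
- level: Request
```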
Feature(s): Add support for Deployments to oc status
Description: Provides output for upstream Deployments similar to what is shown for downstream DeploymentConfigs, with the nested deployment sets included.
How it Works:
Master
$ oc status
In project My Project (myproject) on server https://127.0.0.1:8443

svc/ruby-deploy - 172.30.174.234:8080
  deployment/ruby-deploy deploys istag/ruby-deploy:latest <-
    bc/ruby-deploy source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest
      build #1 failed 5 hours ago - bbb6701: Merge pull request #18 from durandom/master (Joe User <[email protected]>)
  deployment #2 running for 4 hours - 0/1 pods (warning: 53 restarts)
  deployment #1 deployed 5 hours ago
The old (pre-3.9) output:
$ oc-3.7 status
In project dc-test on server https://127.0.0.1:8443

svc/ruby-deploy - 172.30.231.16:8080
  pod/ruby-deploy-5c7cc559cc-pvq9l runs test
Feature(s): Dynamic Admission Controller follow-up
Description: An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized.
To assist admission controller developers, the upstream documentation has been enhanced and a blog post that explains how it works was created.
How it Works (example use cases):
● Mutation of pod resources
● Security response
Master TechPreview
Feature(s): Feature Gates
Description: Platform admins now have the ability to turn off specific features for the entire platform. This assists in controlling access to alpha, beta, or Tech Preview features in production clusters.
How it Works: Feature gates use a key=value pair in the master and kubelet config files that describes the feature you wish to block.
Master
Control plane: master-config.yaml

kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - CPUManager=true

Kubelet: node-config.yaml

kubeletArguments:
  feature-gates:
  - DevicePlugin=true
Full list
Updated Reference Architecture Implementation Guides
Release: ocpsupplemental-3.9 (4-6 weeks after 3.9 GA)
E2E Provider Integration
Deployment and management of the following supported combinations:
● OpenShift 3.9 on Red Hat OpenStack Platform 10 (RH-OSP)
● OpenShift 3.9 on Amazon Web Services (AWS)
● OpenShift 3.9 on Microsoft Azure
● OpenShift 3.9 on VMware vSphere
● OpenShift 3.9 on Red Hat Virtualization 4.2 (RHV)¹
● OpenShift 3.9 on Google Cloud Platform (GCP)²
Deprecation of unsupported “glue code” (ancillary scripts, Ansible playbooks, related GitHub repos, …)
● No longer required, as we’re using the provisioner code provided by the installer itself
● Applies to all cloud providers
¹ The release dates for the Ref Arch update and RHV 4.2 are very close, so this may fall back to 4.1.
² At risk.
Questions
Clustered Container Infrastructure
Applications Run Across Multiple Containers & Hosts
Feature(s): Kubernetes Upstream Red Hat Blog and Commons Webinar
Description: OCP 3.9 is a double-rebase release; we literally had to go through the same release motions twice. Red Hat continues to influence the product in the areas of storage, networking, resource management, authentication & authorization, multi-tenancy, security, service deployments and templating, and controller functionality.
Container Orchestration

Red Hat Contributing Projects:
● Job failure policy
● kubectl plugins
● Pod-level QoS
● PV resizing
● Mount namespace
● CRD
● CronJob
● HPA metrics
● StorageClass reclaimPolicy
● Rules view API
● RBAC
● Mount options
● LIST queries
● ClusterRole
● Containerized mounts
● PV-to-pod track and delete
● Raw block storage
OpenShift 3.9 Status of Kube 1.8 and 1.9 Upstream Features:https://docs.google.com/spreadsheets/d/1xdjfFVyoUaDgZXak4OHA90wq_bNIKrrc7U2xr8fKXEU/edit?usp=sharing
Feature(s): Feature tracking documentation
Description: Customers have a difficult time determining the support status of a specific feature in a specific release of OpenShift.
Container Orchestration
How it Works: A table has been added to the user guide to depict this information more clearly.
Feature(s): Device Plugins for Specialized Hardware
Description: People would like to set resource limits for hardware devices within their pod definition and have the scheduler find a node in the cluster with those resources. At the same time, Kubernetes needed a way for hardware vendors to advertise their resources to the kubelet without forcing them to change core Kubernetes code.
Device Manager
How it Works: The kubelet now houses a device manager that is extensible through plugins. You load the driver support at the node level. Then you or the vendor write a plugin that listens for requests to stop, start, attach, or assign the requested hardware resources seen by the drivers. This plugin is deployed to all the nodes via a DaemonSet.
TechPreview
[Diagram: the scheduler places the pod on a node whose kubelet device manager talks to a hardware-vendor-provided plugin (e.g. an NVIDIA DaemonSet), which in turn sits on top of the vendor-provided device drivers.]

Deep learning pod resource request:
resources:
  limits:
    nvidia.com/gpu: 3
Feature(s): “Soft” Image pruning
Description: Don’t remove the actual image data; just free up etcd storage by removing stale image references
How it works:
● Safer to run with --keep-tag-revisions and --keep-younger-than
● Afterwards, admins can choose to run a hard prune (which is safe to run as long as the registry is put in read-only mode)
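A sketch of the CLI invocation, run against a live cluster by a user with the image-pruner role; the retention values shown are illustrative:

```shell
# Soft prune: keep the 3 most recent revisions of each tag and anything
# younger than 60 minutes; omit --confirm for a dry run
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm
```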
Registry
Additional registry work:
● Mirror manifests with images, to allow pulling an image when the source image is unavailable
● Move the registry to a separate registry, for further agility
● Investigate use of fsck for corrupt-image reporting
Feature(s): Automated 3.7 to 3.9 control plane upgrade
Description: The installer automatically handles stepping the control plane from 3.7 to 3.8 to 3.9, and the node upgrade from 3.7 to 3.9.
How it Works:
1. Control plane components [API, controllers, node (on control plane hosts)] are upgraded seamlessly from 3.7 to 3.8 to 3.9
   a. Data migration happens pre- and post- the 3.8 and 3.9 control plane upgrades
2. Other control plane components [router, registry, service catalog, brokers] are upgraded from 3.7 to 3.9
3. Nodes [node, docker, OVS] are upgraded directly from 3.7 to 3.9, with only one drain of nodes
   a. 3.7 nodes operate indefinitely against 3.8 masters should the upgrade process need to pause in this state
4. Logging and metrics are updated from 3.7 to 3.9
Notes:
● Recommended/preferable to upgrade the control plane and nodes independently
● You can still perform the upgrade all in one playbook (but rollback is more difficult)
● Playbooks do not allow for a clean install of 3.8
Installation

Preparation:

1. Validate 3.7 storage migration the day before the upgrade:
# oc adm migrate storage --include=* --loglevel=2
* If any errors occur, search Bugzilla or open a support case to remediate storage problems

2. Enable the OCP 3.8 and 3.9 repos on all hosts:
# subscription-manager repos --disable="rhel-7-server-ose-3.7-rpms" \
    --enable="rhel-7-server-ose-3.8-rpms" \
    --enable="rhel-7-server-ose-3.9-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-fast-datapath-rpms"

3. Install the 3.9 playbooks:
# yum upgrade openshift-ansible

Upgrade:

1. When the control plane is upgraded independently of nodes:
# ansible-playbook playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_control_plane.yml
# ansible-playbook playbooks/openshift-logging/config.yml
# ansible-playbook playbooks/openshift-metrics/config.yml

2. Then upgrade the nodes (assumes the repo-enablement preparation has already happened and the all-in-one upgrade.yml was not used):
# ansible-playbook playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_nodes.yml
Feature(s): Improved playbook performance
Description: Significant refactoring and restructuring of playbooks in 3.9 to improve performance.
How it Works:
● Restructured playbooks to push all fact gathering and common dependencies up into the initialization plays so they’re only called once rather than each time a role needs access to their computed values.
● Refactored playbooks to limit the hosts they touch to only those that are truly relevant to the playbook.
● As an example, prior to these changes, a control plane upgrade in our large online environments spent >40 minutes gathering useless facts from 290 compute nodes that aren't relevant to the control plane upgrade.
● Initial results showed a large reduction in overall installation times; up to 30% faster in some cases
Installation
Feature(s): Quick installation [deprecated]
Description: Quick installation is being deprecated in 3.9 and will be removed in 3.10
How it Works:
● The quick installer will only be capable of installing 3.9
● It will not be able to upgrade from 3.7 or 3.8 to 3.9
● The `atomic-openshift-installer upgrade` function will exit with a message indicating that upgrades are not supported by this version of the quick installer
● If an attempt to upgrade is made, reference the documentation explaining how to migrate from the existing quick-installer-generated inventory to using openshift-ansible directly
● openshift-ansible (advanced installation) will be the replacement for the quick installation
○ Refer to the Installation and Configuration section of the OpenShift documentation
● As part of the deprecation effort in 3.9:
○ Using an existing quick-installer-generated inventory to perform an upgrade from 3.7 to 3.9 will be documented
○ A localhost inventory will be provided that requires *zero* modification
○ An updated hosts.example will be provided so that everything an admin needs to modify appears on the first screen (masters, nodes, etcd group definitions), making it clear that all other variables are optional
Installation
Feature(s): End-to-end online expansion (resize) for CNS GlusterFS PVs
Description: Users can expand their persistent volume claims online from OCP for CNS glusterFS volumes
• Can be done online from OCP
• Previously only available from Heketi CLI
• User edits PVC for the new size, triggering PV resize
• Fully qualified for GlusterFS-backed PVs
• Gluster-block PV resize will be added with RHEL 7.5
• Demo Video
Storage

How it Works / Example:
• Add AllowVolumeExpansion=true to the storage class
• oc edit pvc <claim-name>
• Edit the field spec → resources → requests → storage to the new value
Feature(s): PV Resize
Description: Users can expand their persistent volume claims online from OCP for following storage backends:
● CNS glusterFS● gcePD ● cinder
Storage
How it Works:
- Create a StorageClass with AllowVolumeExpansion=true
- A PVC uses the StorageClass and submits a claim
- Resize: the PVC specifies a new, larger size
- The underlying PV is resized
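A sketch of the two objects; the StorageClass name, provisioner, and sizes are illustrative (the Kubernetes field is spelled allowVolumeExpansion):

```yaml
# StorageClass that permits expansion
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
---
# To resize, raise the claim's requested size, e.g. from 1Gi to 2Gi
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: expandable
  resources:
    requests:
      storage: 2Gi
```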
Feature(s): CNS GlusterFS PV Consumption metrics available from OCP
Description: CNS GlusterFS is extended to provide PV volume metrics (including consumption) through Prometheus or direct query.

How it Works:
● Metrics are available from the PVC endpoint
● Users can now see the PV size allocated as well as consumed, and use PV resize (expand) from OCP if needed
● Example metrics added:
○ kubelet_volume_stats_capacity_bytes
○ kubelet_volume_stats_inodes
○ kubelet_volume_stats_inodes_free
○ kubelet_volume_stats_inodes_used
○ kubelet_volume_stats_used_bytes
…etc.
Storage

Prometheus ‘curl’ output:
# TYPE kubelet_volume_stats_available_bytes gauge
kubelet_volume_stats_available_bytes{namespace="default",persistentvolumeclaim="claim1"} 8.543010816e+09
# TYPE kubelet_volume_stats_capacity_bytes gauge
kubelet_volume_stats_capacity_bytes{namespace="default",persistentvolumeclaim="claim1"} 8.57735168e+09
Feature(s): CNS now supports Custom Volume Naming at backend
Description: OCP users can specify custom volume name prefixes for PVs from a CNS-backed storage class.
How it Works:
● Previously, PV names looked like vol_<UUID> (e.g., vol_1213456)
● Specify a new attribute in the CNS storage class called 'volumenameprefix'
● CNS backend volumes will be named myPrefix_Namespace_PVCClaimName_UUID
● Easy to recognize; users can follow a naming convention
● Easy to search and apply policy based on prefix, namespace, project name, or claim name
● Demo Video
Storage
PV name: dept-dev_storageproject_claim1_12312321
(VolumeNamePrefix_Namespace_ClaimName_UUID: user-supplied prefix, namespace/project name, claim name, UUID)

Example storage class:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumenameprefix: "dept-dev"   # user-supplied prefix
Feature(s): Automated Container Native Storage (CNS) deployment with OCP Advanced Installation
Description: In the OCP advanced installer:
● Fixed CNS block provisioner deployment
● Added a CNS uninstall playbook
How it Works:
● CNS storage device details are added to the installer's inventory file
● The advanced installer manages configuration and deployment of CNS, file & block provisioners, the registry, and ready-to-use PVs
Storage
○ OCP + CNS deployed as one cluster
○ CNS with block & file provisioners deployed
○ OCP registry deployed on CNS
○ Ready to deploy logging and metrics on CNS
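A sketch of the relevant inventory additions for openshift-ansible; the host names, namespace, and device paths are illustrative:

```
[OSEv3:children]
masters
nodes
glusterfs

[OSEv3:vars]
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_block_deploy=true

[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb" ]'
```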
[Diagram: a master plus four OpenShift nodes; app containers run alongside RHGS (Red Hat Gluster Storage) containers on three of the nodes.]
Feature(s): syslog output plugin for fluentd
Note: a blocker-bug fix will be delivered in 3.9.z, so GA will happen in conjunction with that release
Description:
Users would like to send logs (system and container) from OCP nodes to external endpoints using the syslog protocol. The fluentd syslog output plugin supports that.
Limitations: logs sent via syslog are not encrypted and therefore insecure
Logging
How it Works:
OpenShift Ansible installer variables for logging:
openshift_logging_fluentd_remote_syslog=true
openshift_logging_fluentd_remote_syslog_host=<hostname or IP>
openshift_logging_fluentd_remote_syslog_port=<port number, defaults to 514>
openshift_logging_fluentd_remote_syslog_severity=<severity level, defaults to debug>
TechPreview
Feature(s):
● Prometheus stays in Tech Preview
● Prometheus, Alertmanager, and AlertBuffer versions are updated
● node_exporter is included
● Note: Hawkular is still the supported metrics stack
Description:
OpenShift operators deploy Prometheus on an OCP cluster, collect Kubernetes and infrastructure metrics, and get alerts. Operators can see and query metrics and alerts on the Prometheus web dashboard, or they can bring their own Grafana and hook it up to Prometheus.
Metrics
How it Works:
● New OpenShift installer playbook for installing the Prometheus server, Alertmanager, and oauth-proxy
● Deploys a StatefulSet comprising the server, Alertmanager, and buffer, with an oauth-proxy in front, plus one PVC for the server and one for Alertmanager
● Alerts can be created in a rule file and selected via the inventory file
TechPreview
● OpenShift Template Provisioning
● Offline OpenSCAP scans
● Alert Management (Prometheus) - Tech Preview
● Reporting Updates
● Provider Updates
● Chargeback Enhancements
● UX Enhancements
CFME 4.6 Container Mgmt
Trusted Container OS
Containers Depend on Linux
Storage
● Virtual data optimizer (VDO) for dm-level dedupe and compression.
● OverlayFS by default for new installs (overlay2)
○ Ensure ftype=1 for 7.3 and earlier
● Devicemapper continues to be supported and available for edge cases around POSIX
● LVM snapshots integrated with boot loader (boom)
RHEL 7.5 Highlights
OpenShift Container Platform 3.9 is supported on RHEL 7.3, 7.4, 7.5 and Atomic Host 7.4.5+.
Containers / Atomic
● Docker 1.13
● docker-latest deprecation
● RPM-OSTree package overrides
Security
● Unprivileged mount namespaces
● KASLR fully supported and enabled by default
● Ansible remediation for OpenSCAP
● Improved SELinux labeling for cgroups (cgroup_seclabel)
CRI-O v1.9
Feature(s): CRI-O v1.9 (will GA in an OpenShift 3.9.z release)
Description: CRI-O is an OCI compliant implementation of the Kubernetes Container Runtime Interface. By design it provides only the runtime capabilities needed by the kubelet. CRI-O is designed to be part of Kubernetes and evolve in lock-step with the platform.
CRI-O brings:
● A minimal and secure architecture
● Excellent scale and performance
● Ability to run any OCI / Docker image
● Familiar operational tooling and commands
Improvements include:
● New CLI (podman) shipping in 7.5.z
● Image volume handling
● Registry listings
● PID cgroup controls
● SELinux support
[Diagram: the kubelet drives CRI-O, which manages storage, images, runc, and CNI networking.]
TechPreview
Feature: Buildah moving to full support with RHEL 7.5
Description: Buildah is a daemon-less tool for building and modifying OCI / Docker images.
● Preserves the existing Dockerfile workflow and instructions
● Allows fine-grained control over image layers, content, and commits
● Utilities on the container host can optionally be called for the build
● Shares the underlying image and storage components with CRI-O
Buildah
Buildah workflow: start from an existing image or from scratch → generate new layers and/or run commands on existing layers → commit storage and generate the image manifest → deliver the image to a local store or a remote OCI / Docker registry.