Red Hat® OpenShift Container Platform 3 is built around a core of application containers, with orchestration and management provided by Kubernetes, on a foundation of Atomic Host and Red Hat Enterprise Linux. OpenShift Origin is the upstream community project that brings it all together along with extensions, to accelerate application development and deployment.
This reference environment provides a comprehensive example demonstrating how OpenShift Container Platform 3 can be set up to take advantage of the native high availability capabilities of Kubernetes and Amazon Web Services in order to create a highly available OpenShift Container Platform 3 environment. The configuration consists of three OpenShift Container Platform masters, three OpenShift Container Platform infrastructure nodes, two OpenShift Container Platform application nodes, and native Amazon Web Services integration. In addition to the configuration, operational management tasks are shown to demonstrate functionality.
This reference architecture breaks down the deployment into separate phases.
- Phase 1: Provision the infrastructure on AWS
- Phase 2: Provision OpenShift Container Platform on AWS
- Phase 3: Post deployment activities
For Phase 1, the provisioning of the environment is done using a series of Ansible playbooks provided in the openshift-ansible-contrib GitHub repository. Once the infrastructure is deployed, the playbooks flow automatically into Phase 2. Phase 2 is the installation of OpenShift Container Platform, which is also done via Ansible playbooks; these playbooks are installed by the openshift-ansible-playbooks RPM package. The playbooks in openshift-ansible-contrib utilize the playbooks provided by the openshift-ansible-playbooks RPM package to perform the installation of OpenShift and to configure AWS-specific parameters. During Phase 2 the router and registry are deployed. The last phase, Phase 3, concludes the deployment by confirming the environment was deployed properly. This is done by running tools such as oadm diagnostics and the systems engineering team's validation Ansible playbook.
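As a rough sketch, the three phases can be driven from a small wrapper script. The repository URL below is the real upstream project; the playbook path and the driver function are placeholders, so consult the repository itself for the actual entry points.

```shell
# Hypothetical driver for the three deployment phases described above.
# Clone the real upstream repo (shown as a comment; requires network access):
#   git clone https://github.com/openshift/openshift-ansible-contrib.git
# PLAYBOOK is a placeholder path, not the repo's actual entry point.
PLAYBOOK="reference-architecture/provision.yaml"

run_phase() {
  # A real run would execute: ansible-playbook "$1"
  echo "would run: ansible-playbook $1"
}

# Phases 1 and 2: provision AWS infrastructure, then install OpenShift.
run_phase "$PLAYBOOK"
# Phase 3: validate the deployment, e.g. on a master: oadm diagnostics
```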
The scripts provided in the GitHub repository are not supported by Red Hat. They merely provide a mechanism that can be used to build out your own infrastructure.
Elastic Compute Cloud Instance Details
Within this reference environment, the instances are deployed in multiple availability zones in the us-east-1 region by default. Although the default region can be changed, the reference architecture deployment can only be used in regions with three or more availability zones. The master instances for the OpenShift environment are m4.xlarge and contain three extra disks used for Docker storage, OpenShift ephemeral volumes, and etcd. The node instances are t2.large and contain two extra disks used for Docker storage and OpenShift ephemeral volumes. The bastion host is a t2.micro. Instance sizing can be changed in the variable files for each installer, which is covered in later chapters.
Elastic Load Balancers Details
Three load balancers are used in the reference environment. The table below describes each load balancer's DNS name, the instances to which the ELB is attached, and the port monitored by the load balancer to determine whether an instance is in or out of service.
Elastic Load Balancers
| ELB DNS Name | Attached Instances | Monitored Ports |
| --- | --- | --- |
| *.apps.sysdeseng.com | infra-nodes01-3 | 80 and 443 |
Both the internal-openshift-master and the openshift-master ELB utilize the OpenShift Master API port for communication. The internal-openshift-master ELB uses the private subnets for internal cluster communication with the API in order to be more secure. The openshift-master ELB is used for externally accessing the OpenShift environment through the API or the web interface. The openshift-master ELB uses the public subnets to allow communication from anywhere over port 443. The *.apps ELB uses the public subnets and maps to the infrastructure nodes. The infrastructure nodes run the router pod, which directs traffic from the outside world into OpenShift pods with external routes defined.
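The in-service/out-of-service decision mentioned above is, at its core, a TCP probe of the monitored port. A minimal sketch, assuming bash with /dev/tcp support; the timeout value is illustrative and the example hostname follows the reference environment's naming:

```shell
# Approximate an ELB health check: probe a TCP port and report the
# state the load balancer would assign the instance.
instance_state() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "InService"
  else
    echo "OutOfService"
  fi
}

# Example (hostname illustrative):
#   instance_state infra-node01.sysdeseng.com 443
```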
Software Version Details
The following tables provide the installed software versions for the different servers that make up the Red Hat OpenShift highly available reference environment.
RHEL OSEv3 Details
| Operating System | Kernel |
| --- | --- |
| Red Hat Enterprise Linux 7.3 x86_64 | kernel-3.10.0.x |
A subscription to the following channels is required in order to deploy this reference environment’s configuration.
Required Channels – OSEv3 Master and Node Instances
| Channel | Repository Name |
| --- | --- |
| Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rpms |
| Red Hat OpenShift Container Platform 3.5 (RPMs) | rhel-7-server-ose-3.5-rpms |
| Red Hat Enterprise Linux 7 Server – Extras (RPMs) | rhel-7-server-extras-rpms |
| Red Hat Enterprise Linux Fast Datapath (RHEL 7 Server) (RPMs) | rhel-7-fast-datapath-rpms |
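On a system already registered with subscription-manager, the required repositories are enabled with `subscription-manager repos --enable`. The loop below echoes the commands so the sketch is runnable anywhere; drop the echo and run as root to apply them for real:

```shell
# The four repositories required on OSEv3 master and node instances,
# taken from the channel table above.
repos=(
  rhel-7-server-rpms
  rhel-7-server-extras-rpms
  rhel-7-server-ose-3.5-rpms
  rhel-7-fast-datapath-rpms
)

for repo in "${repos[@]}"; do
  # Remove the leading echo to execute on a registered RHEL 7 system.
  echo subscription-manager repos --enable="$repo"
done
```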
AWS Region Requirements
The reference architecture environment must be deployed in a region containing at least three availability zones and with at least two free Elastic IPs. The environment requires three public and three private subnets. The use of three public and three private subnets allows the OpenShift deployment to be highly available while exposing only the required components externally. The subnets can be created during the installation of the reference architecture environment.
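These region requirements can be checked up front. The helper below encodes the minimums stated above (three availability zones, two free Elastic IPs); the commented AWS CLI queries are real commands but assume configured credentials:

```shell
# Return success when a region meets the reference architecture minimums:
# at least 3 availability zones and 2 free Elastic IPs.
region_ok() {
  local az_count=$1 free_eips=$2
  [ "$az_count" -ge 3 ] && [ "$free_eips" -ge 2 ]
}

# With AWS credentials configured, the inputs can be fetched, e.g.:
#   az_count=$(aws ec2 describe-availability-zones --region us-east-1 \
#       --query 'length(AvailabilityZones)' --output text)
#   used_eips=$(aws ec2 describe-addresses --region us-east-1 \
#       --query 'length(Addresses)' --output text)
# Free EIPs are the account's per-region EIP limit minus used_eips.
region_ok 3 2 && echo "region meets requirements"
```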
Permissions for Amazon Web Services
The deployment of OpenShift requires an IAM user who has been granted the proper permissions by the AWS IAM administrator. The user must be able to create accounts, S3 buckets, roles, policies, and Route53 entries, and to deploy ELBs and EC2 instances. It is also helpful to have delete permissions in order to redeploy the environment while testing.
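As a hedged sketch of the permission surface: the service namespaces below are real AWS prefixes corresponding to the resources listed above, and the commented command is a real AWS CLI call for spot-checking a user's policy coverage (the ARN and action names are placeholders):

```shell
# AWS service namespaces the deploying user needs rights in, derived
# from the requirements above (EC2 instances, S3 buckets, IAM roles and
# policies, Route53 entries, ELBs).
required_services=(ec2 s3 iam route53 elasticloadbalancing)

# With configured credentials, coverage can be spot-checked, e.g.:
#   aws iam simulate-principal-policy \
#       --policy-source-arn arn:aws:iam::<account-id>:user/<user> \
#       --action-names ec2:RunInstances s3:CreateBucket
for svc in "${required_services[@]}"; do
  echo "requires permissions in: $svc"
done
```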
Virtual Private Cloud (VPC)
An AWS VPC provides the ability to set up custom virtual networking which includes subnets, IP address ranges, route tables and gateways. In this reference implementation guide, a dedicated VPC is created with all its accompanying services to provide a stable network for the OpenShift v3 deployment.
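A minimal sketch of the VPC pieces this guide provisions follows; the CIDR blocks and availability zone are illustrative, and the commands are echoed so the sketch runs without AWS credentials (remove the echo to execute them with the real AWS CLI):

```shell
# Dedicated VPC with per-AZ subnets (CIDRs illustrative).
vpc_cidr="10.20.0.0/16"
echo aws ec2 create-vpc --cidr-block "$vpc_cidr"

# One public and one private subnet are created per availability zone;
# a single AZ is shown here. vpc-PLACEHOLDER stands in for the VpcId
# returned by create-vpc.
echo aws ec2 create-subnet --vpc-id "vpc-PLACEHOLDER" \
  --cidr-block "10.20.1.0/24" --availability-zone "us-east-1a"
```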