Getting started

HyperShift is middleware for hosting OpenShift control planes at scale that solves for cost and time to provision, as well as portability across clouds, with strong separation of concerns between management and workloads. Clusters are fully compliant OpenShift Container Platform (OCP) clusters and are compatible with standard OCP and Kubernetes toolchains.

This guide will lead you through the process of creating a new hosted cluster. Throughout the instructions, shell variables are used to indicate values that you should adjust to your own environment.

Prerequisites

  • The HyperShift CLI (hypershift).

    Install it using Go 1.17+:

    go install github.com/openshift/hypershift@latest
    

  • Admin access to an OpenShift cluster (version 4.8+) specified by the KUBECONFIG environment variable (a quick verification sketch follows this list).

  • The OpenShift CLI (oc) or Kubernetes CLI (kubectl).
  • A valid pull secret file for the quay.io/openshift-release-dev repository.
  • An AWS credentials file with permissions to create infrastructure for the cluster.
  • A Route53 public zone for cluster DNS records.

    To create a public zone:

    DOMAIN=www.example.com
    aws route53 create-hosted-zone --name $DOMAIN --caller-reference $(whoami)-$(date --rfc-3339=date)
    

    Important

    In order to access applications in your guest clusters, the public zone must be routable.

  • An S3 bucket with public access to host OIDC discovery documents for your clusters.

    To create the bucket (in us-east-1):

    BUCKET_NAME=your-bucket-name
    aws s3api create-bucket --acl public-read --bucket $BUCKET_NAME
    

    To create the bucket in a region other than us-east-1:

    BUCKET_NAME=your-bucket-name
    REGION=us-east-2
    aws s3api create-bucket --acl public-read --bucket $BUCKET_NAME \
      --create-bucket-configuration LocationConstraint=$REGION \
      --region $REGION
    
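To sanity-check these prerequisites before proceeding, you can run a few quick commands. This is an illustrative sketch, not part of the HyperShift tooling; it assumes the pull secret and AWS credentials live at the paths shown and that DOMAIN is the public zone created above. Adjust to your environment:

# Confirm admin access to the management cluster named by KUBECONFIG
oc whoami
oc auth can-i '*' '*' --all-namespaces

# Confirm the public zone is delegated (its NS records resolve publicly)
dig +short NS $DOMAIN

# Confirm the pull secret is valid JSON and lists the expected registries
jq '.auths | keys' "$HOME/pull-secret"

# Confirm the AWS credentials are usable
AWS_SHARED_CREDENTIALS_FILE="$HOME/.aws/credentials" aws sts get-caller-identity
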

Before you begin

Install HyperShift into the management cluster, specifying the OIDC bucket, its region, and the credentials used to access it (see Prerequisites):

REGION=us-east-1
BUCKET_NAME=your-bucket-name
AWS_CREDS="$HOME/.aws/credentials"

hypershift install \
--oidc-storage-provider-s3-bucket-name $BUCKET_NAME \
--oidc-storage-provider-s3-credentials $AWS_CREDS \
--oidc-storage-provider-s3-region $REGION
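
Once the install completes, you can confirm that the HyperShift operator is running on the management cluster. As a quick check (the operator is deployed into the hypershift namespace):

oc get pods --namespace hypershift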

Create a HostedCluster

Create a new cluster, specifying the domain of the public zone provided in the Prerequisites:

REGION=us-east-1
CLUSTER_NAME=example
BASE_DOMAIN=example.com
AWS_CREDS="$HOME/.aws/credentials"
PULL_SECRET="$HOME/pull-secret"

hypershift create cluster aws \
--name $CLUSTER_NAME \
--node-pool-replicas=3 \
--base-domain $BASE_DOMAIN \
--pull-secret $PULL_SECRET \
--aws-creds $AWS_CREDS \
--region $REGION

Important

The cluster name (--name) must be unique within the base domain to avoid unexpected and conflicting cluster management behavior.

Note

A default NodePool will be created for the cluster with 3 replicas per the --node-pool-replicas flag.

After a few minutes, check the hostedclusters resources in the clusters namespace; once the cluster is ready, the output will look similar to the following:

oc get --namespace clusters hostedclusters
NAME      VERSION   KUBECONFIG                 AVAILABLE
example   4.8.0     example-admin-kubeconfig   True

Eventually the cluster's kubeconfig will become available; it can be printed to standard output using the hypershift CLI:

hypershift create kubeconfig
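
The kubeconfig can also be saved to a file and used to interact with the guest cluster directly. A minimal sketch (worker nodes appear once the NodePool machines have joined):

hypershift create kubeconfig > $CLUSTER_NAME-kubeconfig
oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes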

Add NodePools

Create additional NodePools for a cluster by specifying a name, a number of replicas, and additional information such as the instance type.

Create a NodePool:

NODEPOOL_NAME=${CLUSTER_NAME}-work
INSTANCE_TYPE=m5.2xlarge
NODEPOOL_REPLICAS=2

hypershift create nodepool aws \
  --cluster-name $CLUSTER_NAME \
  --name $NODEPOOL_NAME \
  --node-count $NODEPOOL_REPLICAS \
  --instance-type $INSTANCE_TYPE

Important

The default infrastructure created for the cluster during Create a HostedCluster lives in a single availability zone. Any additional NodePool created for that cluster must be in the same availability zone and subnet.
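
If you need to confirm which subnet and zone the cluster's existing NodePools use, you can inspect their specs on the management cluster. A sketch (exact field names can vary between HyperShift versions; look under spec.platform.aws):

oc get nodepools --namespace clusters -o yaml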

Check the status of the NodePool by listing nodepool resources in the clusters namespace:

oc get nodepools --namespace clusters
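
Once the NodePool reports its nodes as ready, the new workers should also appear in the guest cluster. A quick check, reusing the guest kubeconfig saved earlier:

oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes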

Scale a NodePool

Manually scale a NodePool using the oc scale command:

NODEPOOL_NAME=${CLUSTER_NAME}-work
NODEPOOL_REPLICAS=5

oc scale nodepool/$NODEPOOL_NAME --namespace clusters --replicas=$NODEPOOL_REPLICAS
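
You can then watch the NodePool converge on the new replica count, for example:

oc get nodepool/$NODEPOOL_NAME --namespace clusters -w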

Delete a HostedCluster

To delete a HostedCluster:

hypershift destroy cluster aws --name $CLUSTER_NAME --aws-creds $AWS_CREDS
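
Destroying a cluster tears down its AWS infrastructure as well as the HostedCluster resource on the management cluster. To confirm the resource is gone:

oc get hostedclusters --namespace clusters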