Using an Ingress Controller, we’re going to secure traffic to our Kubernetes cluster with TLS and a Let’s Encrypt certificate.

To follow these instructions you will need:

  • A Kubernetes cluster you can manage; three nodes are recommended.
  • A domain name for which you can create DNS A records.
  • This tutorial was tested with GKE 1.13.6-gke.13, but the same approach can be taken on other types of Kubernetes clusters.

Most Kubernetes deployment examples show you how to deploy a container and then expose the container to public traffic via the service, although sometimes you might like to have more control over ingress to your cluster services.

Kubernetes Ingress was added in Kubernetes v1.1 and exposes HTTP and HTTPS routes from outside the cluster to services running within the cluster. Traffic routing is controlled by rules defined on the ingress resource which we’ll see later.

To run kubectl you’ll need access to your cluster. For GKE, the command gcloud container clusters get-credentials will set this up. To ensure you’re actually using the correct cluster, run kubectl config current-context.
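For example, assuming a hypothetical cluster named my-cluster in the europe-west2-a zone (substitute your own cluster name and zone):

gcloud container clusters get-credentials my-cluster --zone europe-west2-a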

If you have a few clusters you can list all of the contexts using kubectl config get-contexts and then switch to a different context using kubectl config use-context $NAME, where $NAME is one of the entries in the NAME column from get-contexts. It’s always good to double-check which cluster you’re about to make changes to.
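For example (the context name shown is a hypothetical GKE-style name; use one from your own NAME column):

kubectl config get-contexts
kubectl config use-context gke_my-project_europe-west2-a_my-cluster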

Setup

We’re going to use Helm (v2, which uses the in-cluster Tiller component) to install tooling such as Cert Manager. The Helm documentation covers installation. We need to set up a Service Account and Role Binding for Kubernetes’ Role-Based Access Control so that Tiller can manage resources in the cluster.

Create a file called tiller-rbac-config.yaml containing:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Create these resources by running:

kubectl create -f tiller-rbac-config.yaml

Then initialise the tiller service account by running:

helm init --service-account tiller

Service

Next we need to schedule some work on the cluster, so we’re going to use a Hello, World app.

Create a file called deployment.yaml containing:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello
  name: hello
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: "betandr/hello-world:latest"
          ports:
            - containerPort: 8888
          livenessProbe:
            httpGet:
              path: /
              port: 8888
          readinessProbe:
            httpGet:
              path: /
              port: 8888

Then schedule this by running:

kubectl create -f deployment.yaml

You can check on the deployment’s progress using:

kubectl get deployment hello

This should show that your deployment is available, for example:

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello     1         1         1            1           37s

Create a service for this workload by creating a file called service.yaml containing:

---
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8888

Note that while the service does listen on port 80, it does not have public ingress into the cluster; the hello service is only accessible from within the cluster. Create this service using:

kubectl create -f service.yaml

You can check on the service’s progress using:

kubectl get service hello

Your service should be assigned an internal IP address, but not a public IP address as we don’t wish this service to have public ingress. Instead we’re going to route all of our traffic to this service via an ingress controller.
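For example, the output should look something like this (the cluster IP, node port, and age shown here are placeholders):

NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello     NodePort   10.11.240.103   <none>        80:30250/TCP   37s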

Ingress

We should now have a service running on port 80 serving traffic within the cluster to our container running in a pod on port 8888. Now we can create our Ingress Controller.

Because we wish to use HTTPS for our service we need a certificate from Let’s Encrypt. Cert Manager will obtain this certificate for us, but initially we need to create the Ingress without any reference to the certificate. This is because the ingress must exist first so we can obtain the IP address used to create the DNS A record (we’ll see this later), all before we can get the certificate.

Create a file called ingress.yaml containing:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
    - host: YOUR_DOMAIN_NAME_HERE
      http:
        paths:
          - backend:
              serviceName: hello
              servicePort: 80

Replace YOUR_DOMAIN_NAME_HERE with your own domain name, in a format such as example.com. This manifest sets up a rule which matches requests for your host (via the HTTP Host header field) to a particular service: in this instance the hello service running on port 80.

At this stage we are deliberately not using TLS. We want to create this ingress without TLS until we have cert-manager running and a demo-tls secret available; otherwise we’d have no working ingress at all. It’s a little cart-before-horse, but we’ll add TLS later.

To create the ingress resource, run:

kubectl create -f ingress.yaml

Obtain the external IP address of the ingress:

kubectl get ing demo-ingress
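Once an address has been assigned the output will look something like this (the address shown here is a placeholder):

NAME           HOSTS                   ADDRESS         PORTS     AGE
demo-ingress   YOUR_DOMAIN_NAME_HERE   35.190.35.118   80        5m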

It may take some time to assign an external IP—as GCP is provisioning you a load-balancer—but when one is assigned, visit your domain management console and create an A record with the new IP. This will mean that a DNS lookup of your domain name will yield the IP address of your ingress controller so requests to your domain name will be forwarded to your cluster’s Ingress Controller.

This all takes some time: getting an external IP, propagating the DNS A record, and then waiting for everything to start responding. Visiting your domain will likely return a Google 404 for a while; this is normal while the load balancers are being provisioned.

While this is happening you can watch cluster events using:

kubectl get events --sort-by=.metadata.creationTimestamp

…and to check the progress of your DNS record you can use dig.
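For example, the following should eventually return your ingress’s IP address (substituting your own domain):

dig +short YOUR_DOMAIN_NAME_HERE A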

When, eventually, everything is working, you should see a response of hello, world! when requesting http://YOUR_DOMAIN_NAME_HERE/. Note that we use http for the moment; next we’ll set up https.
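You can also check from the command line with curl, which should return hello, world! once everything is up (substituting your own domain):

curl http://YOUR_DOMAIN_NAME_HERE/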

Install and Configure Certificate Manager

We will use Jetstack’s cert-manager to manage certificates, providing us with TLS.

Install cert-manager using the command:

helm install stable/cert-manager \
  --name cert-manager \
  --namespace kube-system \
  --version v0.5.2

Next we’re going to create a ClusterIssuer, which will provide the ability to obtain certificates from a certificate authority; Let’s Encrypt in our case. A cluster-wide issuer like this may not be the correct option for a multi-tenanted cluster, but for our use it’s fine. Something to keep in mind at least; YMMV.

Create a file called issuer.yaml with the contents:

---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
spec:
  acme:
    email: ADD_YOUR_EMAIL@ADDRESS.com
    server: "https://acme-v02.api.letsencrypt.org/directory"
    http01: {}
    privateKeySecretRef:
      name: letsencrypt

Create the issuer by running:

kubectl apply -f issuer.yaml

To watch the status of the issuer, run:

kubectl describe ClusterIssuer letsencrypt-issuer

…which will show you the issuer’s progress in registering an ACME account with Let’s Encrypt (the certificate itself is requested later, by a Certificate resource).
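For a healthy issuer you should see a Ready condition in the status, something like the following (abridged; the exact fields may vary between cert-manager versions):

Status:
  Conditions:
    Reason:  ACMEAccountRegistered
    Status:  True
    Type:    Ready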

Next create a Certificate resource, which tells cert-manager to obtain a certificate from Let’s Encrypt and store it in a secret.

Create a file called certificate.yaml containing:

---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: demo-certificate
  namespace: default
spec:
  secretName: demo-tls
  commonName: YOUR_DOMAIN_NAME_HERE
  dnsNames:
    - YOUR_DOMAIN_NAME_HERE
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-issuer
  acme:
    config:
      - domains:
          - YOUR_DOMAIN_NAME_HERE
        http01:
          ingress: demo-ingress

Create the certificate resource with:

kubectl create -f certificate.yaml

To check the progress of your certificate, run:

kubectl describe certificate demo-certificate

You may see some messages such as "http-01 self check failed for domain" or "ValidateError", but these are likely to be transitory and things should begin to settle down when you enable TLS through the Ingress Controller.
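When the certificate has been issued successfully you should see an event along these lines in the describe output (the wording may vary between cert-manager versions):

Events:
  Type    Reason      Message
  ----    ------      -------
  Normal  CertIssued  Certificate issued successfully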

Previously we created the Ingress resource without any TLS configuration. Now that our certificate has been issued we can enable TLS via the ingress.

Edit ingress.yaml to contain the following:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt-issuer
spec:
  rules:
    - host: YOUR_DOMAIN_NAME_HERE
      http:
        paths:
          - backend:
              serviceName: hello
              servicePort: 80
  tls:
    - secretName: demo-tls

Remember to replace YOUR_DOMAIN_NAME_HERE with your own domain name again. Then update the ingress using (note apply rather than create):

kubectl apply -f ingress.yaml

Now running:

kubectl get ing demo-ingress

…you should see 80, 443 in the PORTS column, as we now have TLS enabled over port 443.
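For example (the address shown here is a placeholder):

NAME           HOSTS                   ADDRESS         PORTS     AGE
demo-ingress   YOUR_DOMAIN_NAME_HERE   35.190.35.118   80, 443   1h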

After some time your domain should start responding using TLS at https://YOUR_DOMAIN_NAME_HERE/ (note https rather than http now). However, this could take quite a while; at first you may see nothing, then 404s from Google, then eventually your site should be responding.
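To confirm the certificate is the one issued by Let’s Encrypt, you can inspect it with openssl (substituting your own domain):

echo | openssl s_client -connect YOUR_DOMAIN_NAME_HERE:443 -servername YOUR_DOMAIN_NAME_HERE 2>/dev/null | openssl x509 -noout -issuer -dates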

Now we have our service (hello, world) accessible via TLS with a Let’s Encrypt certificate!