Definitive guide to setting up KrakenD on GKE

Vijay Savanth
8 min read · Jan 2, 2021


Overview

One of the first things I usually do after spinning up a GKE cluster is to secure HTTP traffic to backends by setting up an API Gateway called KrakenD. In addition to security, we can use KrakenD to route traffic to different namespaces since GKE’s Ingress doesn’t allow for routing across namespaces at this stage.

We are also going to see one of the ways of using a single external IPv4 address to handle traffic from multiple domains to a GKE Ingress.

The placeholder domains some.domain.dev and some.domain.io are used throughout this guide. Please replace them with real domains you own.

Routing traffic to multiple backends via a single ingress

Prerequisites

  • Access to GCP.
  • You own a few domains that you can use.
  • Your local machine has the gcloud command line tool installed.
  • You are already familiar with KrakenD and its configuration file.
  • You have some experience with Kubernetes.

Step 1 — Create an external IP address (~ 1 min)

Run the following command to create a global external IP address.

gcloud compute addresses create my-global-address --global

Retrieve the IPv4 address assigned using the following command.

gcloud compute addresses describe my-global-address --global

Step 2 — Set up domains (~ 5 mins + wait for DNS propagation)

Set up A and CNAME records with your domain provider.

Example:

# some.domain.dev
@               A           1h      <IP_ADDRESS_FROM_PREVIOUS_STEP>
some            CNAME       1h      domain.dev
--------------------------------------------------------------------
# some.domain.io
@               A           1h      <IP_ADDRESS_FROM_PREVIOUS_STEP>
some            CNAME       1h      domain.io

Now we need to wait for the changes to propagate. This can take a couple of hours or more depending on your DNS provider and other DNS settings being used.

You can verify propagation using the dig command.

dig some.domain.dev
dig some.domain.io

Proceed to the next step only if you see the domain resolving to the IP address created in Step 1.

Step 3 — Set up a GCP Project and spin up a GKE cluster (~ 5 mins)

This is a relatively straightforward step and can be achieved either via the GCP user interface or the gcloud command line tool.

After the project and cluster have been set up, make sure you run gcloud init to connect your local machine to your project.

Take note of your GCP project ID and the GKE cluster name.

Step 4 — Install a service mesh called LinkerD (~5 mins)

LinkerD is a lightweight, easy-to-use and easy-to-install service mesh. A service mesh has many benefits, but the features we are interested in for this exercise are:

  • Traffic shifting: during a rolling update of containers, we need to move traffic away from terminating containers to new containers, resulting in zero downtime between deploys.
  • Better load balancing: for HTTP/2 connections, LinkerD’s sidecar proxy helps distribute traffic more evenly across many pods. You can read more about this here.

Follow the LinkerD setup guide to install LinkerD in your GKE cluster.

Step 5 — Set up a k8s Namespace + LimitRange (~ 3 mins)

Create a file called gke-ingress/namespace.yaml with the following contents:
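As a sketch, something this minimal works; the namespace name gke-ingress matches the kubectl commands used later in this guide:

apiVersion: v1
kind: Namespace
metadata:
  name: gke-ingress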

Organising your work in Namespaces is good practice.

Create a file called gke-ingress/limitrange.yaml with the following contents:
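A sketch along these lines matches the defaults described below (the LimitRange name default-limits is illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
    - type: Container
      default:
        cpu: 100m
        memory: 128Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi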

Briefly, a LimitRange defines how much CPU and memory are assigned to a container by default. In the example above, we are allocating 1/10th of a CPU (100m) and 128 MiB of memory to containers created in this namespace. You can learn more about LimitRanges here.

Then run the following commands:

gcloud container clusters get-credentials <YOUR_GKE_CLUSTER_NAME> --project <YOUR_GCP_PROJECT_ID>
cd gke-ingress
kubectl apply -f namespace.yaml
kubectl apply -f limitrange.yaml -n gke-ingress

Step 6 — Create a KrakenD configuration for your domains (~5 mins)

Create a file called gke-ingress/krakend-some-domain-dev/krakend.json with the following contents:
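As a sketch (using the "version": 2 layout of KrakenD 1.x), the file could look roughly like this. The root endpoint mimics GKE Ingress's default health-check path, and the /__health backend path is an assumption; point it at whichever local KrakenD endpoint returns an HTTP 200 in your version:

{
  "version": 2,
  "name": "some.domain.dev gateway",
  "port": 5000,
  "timeout": "3s",
  "endpoints": [
    {
      "endpoint": "/",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/__health",
          "host": ["http://localhost:5000"]
        }
      ]
    }
  ]
}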

The above config is for some.domain.dev and will have KrakenD running on port 5000. We are also creating a route to KrakenD’s health check. This is needed for the Ingress health check to pass. More on this in Step 10.

Create another file called gke-ingress/krakend-some-domain-io/krakend.json with the following contents:
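As a sketch, this can be identical to the previous krakend.json apart from "port": 5005 (and the name, if you set one).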

The above config is for some.domain.io and will have KrakenD running on port 5005. We are also creating a route to KrakenD’s health check. This is needed for the Ingress health check to pass. More on this in Step 10.

Step 7 — Build a KrakenD container for your domains (~5 mins)

Create 2 files:

  • gke-ingress/krakend-some-domain-dev/Dockerfile
  • gke-ingress/krakend-some-domain-io/Dockerfile

with the following contents:
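A sketch like the one below is enough for both: start from the official KrakenD image and copy the matching krakend.json over its default config path. The image tag is an assumption, so pin whichever KrakenD version you are using:

FROM devopsfaith/krakend:1.2
COPY krakend.json /etc/krakend/krakend.json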

Then build the containers and push them to Google Container Registry (replace <YOUR-GCP-PROJECT-ID> below with a real project id).

gcloud auth configure-docker gcr.io
cd gke-ingress/krakend-some-domain-dev
docker build -f Dockerfile -t gcr.io/<YOUR-GCP-PROJECT-ID>/gke-ingress-krakend-some-domain-dev:v1 .
docker push gcr.io/<YOUR-GCP-PROJECT-ID>/gke-ingress-krakend-some-domain-dev:v1
cd ../krakend-some-domain-io
docker build -f Dockerfile -t gcr.io/<YOUR-GCP-PROJECT-ID>/gke-ingress-krakend-some-domain-io:v1 .
docker push gcr.io/<YOUR-GCP-PROJECT-ID>/gke-ingress-krakend-some-domain-io:v1

Step 8 — Deploy your KrakenD containers (~5 mins)

Create a file called gke-ingress/krakend-some-domain-dev/k8s.yaml with the following contents (replace <YOUR-GCP-PROJECT-ID> below with a real project id):
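A sketch covering all three objects is below. The resource names, HPA bounds and the NodePort Service type are assumptions (GKE's Ingress needs a NodePort or NEG-backed Service), the linkerd.io/inject annotation meshes the pods, and CPU-based autoscaling relies on the CPU request that the Step 5 LimitRange applies by default:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: krakend-some-domain-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: krakend-some-domain-dev
  template:
    metadata:
      labels:
        app: krakend-some-domain-dev
      annotations:
        linkerd.io/inject: enabled
    spec:
      containers:
        - name: krakend
          image: gcr.io/<YOUR-GCP-PROJECT-ID>/gke-ingress-krakend-some-domain-dev:v1
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: krakend-some-domain-dev
spec:
  type: NodePort
  selector:
    app: krakend-some-domain-dev
  ports:
    - port: 5000
      targetPort: 5000
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: krakend-some-domain-dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: krakend-some-domain-dev
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70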

The above k8s config is for the domain some.domain.dev and sets up the following:

  • A Deployment with 2 pods that are meshed by LinkerD and run KrakenD on port 5000.
  • A service to accept traffic on port 5000.
  • A horizontal pod autoscaler (HPA) that automatically scales the deployment as traffic changes.

Deploy as follows:

cd gke-ingress
kubectl apply -f krakend-some-domain-dev/k8s.yaml -n gke-ingress

Check if the pods are running:

kubectl get pods -n gke-ingress

Create a file called gke-ingress/krakend-some-domain-io/k8s.yaml with the following contents (replace <YOUR-GCP-PROJECT-ID> below with a real project id):
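A sketch mirrors the previous file: the same three objects, with the names switched to krakend-some-domain-io, the io image from Step 7, and port 5005 throughout.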

The above k8s config is for the domain some.domain.io and sets up the following:

  • A Deployment with 2 pods that are meshed by LinkerD and run KrakenD on port 5005.
  • A service to accept traffic on port 5005.
  • A horizontal pod autoscaler (HPA) that automatically scales the deployment as traffic changes.

Deploy as follows:

cd gke-ingress
kubectl apply -f krakend-some-domain-io/k8s.yaml -n gke-ingress

Check if the pods are running:

kubectl get pods -n gke-ingress

Proceed to the next step only if all pods are in the running state.

Step 9 — Create HTTPS certificates for your domains (~3 mins)

Create a file called gke-ingress/some-domain-dev-cert.yaml with the following contents:
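A sketch using GKE's ManagedCertificate resource; the metadata name is an assumption, but it must match whatever the Ingress annotation in Step 10 references (older GKE versions may need the networking.gke.io/v1beta2 apiVersion):

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: some-domain-dev-cert
spec:
  domains:
    - some.domain.dev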

Create another file called gke-ingress/some-domain-io-cert.yaml with the following contents:
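This mirrors the previous sketch, with the name changed to something like some-domain-io-cert and the domain set to some.domain.io.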

Then apply as follows:

cd gke-ingress
kubectl apply -f some-domain-dev-cert.yaml -n gke-ingress
kubectl apply -f some-domain-io-cert.yaml -n gke-ingress

Move to Step 10 as fast as possible.

Step 10 — Create Ingress (~ 5 mins + ~30+ mins of waiting)

Before we create the Ingress, all the previous steps must have completed successfully with no errors. Steps 2 and 8 are critical for the Ingress creation.

Create a file called gke-ingress/ingress.yaml with the following contents:
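A sketch of such an Ingress is below, assuming the illustrative Service and ManagedCertificate names from the earlier sketches. The static IP annotation references the address created in Step 1; on GKE versions before 1.19 you may need the networking.k8s.io/v1beta1 apiVersion and its older backend syntax:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gke-ingress
  annotations:
    kubernetes.io/ingress.class: gce
    kubernetes.io/ingress.global-static-ip-name: my-global-address
    networking.gke.io/managed-certificates: some-domain-dev-cert,some-domain-io-cert
spec:
  rules:
    - host: some.domain.dev
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: krakend-some-domain-dev
                port:
                  number: 5000
    - host: some.domain.io
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: krakend-some-domain-io
                port:
                  number: 5005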

Then apply as follows:

cd gke-ingress
kubectl apply -f ingress.yaml -n gke-ingress

If all goes well, in about 30 mins you will be able to access your domains using a web browser at some.domain.dev and some.domain.io and you should see the message from KrakenD’s health check:

{"message":"pong"}

Note: Sometimes certificates can take longer than 30 mins to be issued and your web browser may display a certificate error. Use the following command to track the status of your certificates.

kubectl get managedcertificates -n gke-ingress

There is a reason for doing things in this order. For an Ingress to begin routing traffic, each backend in the ingress.yaml file needs to respond with an HTTP 200. However, the Ingress cannot be set up before the ManagedCertificate objects defined in the *-cert.yaml files are created. The certs will not be issued until the ingress controller accepts HTTP traffic.

So how can we overcome this circular dependency? Well, managed certificates are not issued instantly after the kubectl apply command. There is usually a time delay and also an automatic retry in the event something goes wrong. Same goes for the ingress creation. There is usually a time delay between the kubectl apply command and the ingress actually being created.

From what I have noticed, the ingress is usually up in less than 10 mins and the certificate process takes 20+ mins. On the happy path, this is what happens:

  • Ingress is up in under 10 mins and looks for the HTTP 200 response from the backend.
  • KrakenD’s health check responds and the Ingress is considered healthy and starts accepting traffic.
  • Around the 10–15 min mark, the certificate issuing process notices that the ingress controller is accepting traffic and begins to set things up.

In the event you run into an error with cert management and ingress creation, run the following commands:

# Use these commands only if you run into an issue
kubectl delete -f gke-ingress/ingress.yaml -n gke-ingress
kubectl delete -f gke-ingress/some-domain-dev-cert.yaml -n gke-ingress
kubectl delete -f gke-ingress/some-domain-io-cert.yaml -n gke-ingress
# Wait about 1 min
kubectl apply -f gke-ingress/some-domain-dev-cert.yaml -n gke-ingress
kubectl apply -f gke-ingress/some-domain-io-cert.yaml -n gke-ingress
kubectl apply -f gke-ingress/ingress.yaml -n gke-ingress

Step 11 — Create a sample backend in another namespace (~ 5 mins)

First create a namespace.

kubectl create ns hello

Let’s deploy a simple, secure and lightweight Golang HTTP server that prints Hello!.

Create a file called hello/hello.yaml with the following contents:
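A sketch with a Deployment and a Service; the hello name, port 8080 and the <YOUR-HELLO-IMAGE> placeholder are illustrative, so substitute your own image and port:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: <YOUR-HELLO-IMAGE>
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 8080
      targetPort: 8080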

Apply as follows:

kubectl apply -f hello/hello.yaml -n hello

Step 12 — Update the krakend.json file to create a route to the new backend (~3 mins)

Modify the following files created in Step 6:

  • gke-ingress/krakend-some-domain-dev/krakend.json
  • gke-ingress/krakend-some-domain-io/krakend.json

Add the following JSON to the endpoints array:
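As a sketch, reusing the illustrative hello Service name and port 8080 from Step 11:

{
  "endpoint": "/hello",
  "method": "GET",
  "backend": [
    {
      "url_pattern": "/",
      "host": ["http://hello.hello:8080"]
    }
  ]
}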

To route traffic to another namespace, we simply use the <servicename>.<namespace>:<serviceport> syntax in the host value; the cluster's internal DNS resolves this to the Service in that namespace.

Step 13 — Build and deploy KrakenD with the new config (~5 mins)

Repeat Step 7 and Step 8, replacing the image tag v1 with v2.
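For example, for the dev gateway (the io gateway is the same with io in the names):

cd gke-ingress/krakend-some-domain-dev
docker build -f Dockerfile -t gcr.io/<YOUR-GCP-PROJECT-ID>/gke-ingress-krakend-some-domain-dev:v2 .
docker push gcr.io/<YOUR-GCP-PROJECT-ID>/gke-ingress-krakend-some-domain-dev:v2
# update the image tag in k8s.yaml to :v2, then
kubectl apply -f k8s.yaml -n gke-ingress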

After the deployment visit some.domain.dev/hello and some.domain.io/hello and you will be greeted with the following message:

Hello!

Tear Down (~ 5 mins)

Make sure you release resources to save costs.

kubectl delete ns hello
kubectl delete ns gke-ingress
gcloud compute addresses delete my-global-address --global

Finally, delete the GKE cluster.

Summary

  • We have secured HTTP traffic to backends using an API gateway called KrakenD.
  • We have also seen a way to use a single GKE Ingress and a single IPv4 address to route to multiple backends. Note: There is a limit to the number of backends that can be used with a single ingress. You can read more about it here.
  • We have generated HTTPS certs for our domains.
  • We have routed traffic across Kubernetes namespaces.
  • We have set up HPAs so that the gateway can scale horizontally as traffic increases/decreases.

Feedback

Thank you for taking the time to go through this guide. Let me know what you liked, what could be improved, whether there is a better way to set up KrakenD on GKE, or anything else in general.

Let me know in the comments below if you run into issues with any of the commands above and I will try my best to get back to you.
