Serverless on Kubernetes with Knative

By Dumitru Lupanciuc

Cloud infrastructure and services help and encourage developers to move away from monolithic apps and toward microservice architectures, which bring more flexibility, independent development and deployment, separation of concerns, and horizontal scalability.

Serverless products such as Google Cloud Functions, Amazon Web Services (AWS) Lambda, or Azure Functions can be used to leverage microservice architectures. Developers can focus solely on the code, relying on the cloud provider to package and run the apps. These services run function code in response to events, scale based on current demand, and are charged per invocation.

Kubernetes, on the other hand, is a container orchestration tool that manages and deploys container-based applications across multiple cloud providers; more information on Kubernetes can be found in this blog written by my colleague, Leire. However, Kubernetes still requires developers to create container images to deploy their apps. To remove this requirement, add another level of abstraction, and bring a “serverless experience” in which containers are created and deployed automatically, directly from source code, Google announced Knative. Knative is an open-source project built through the joint effort of over 50 companies. It is an extension of Kubernetes and focuses on three key aspects:

  • Building your application: A flexible, pluggable build system that goes from source to container. It already has support for several build systems which can build container images on your Kubernetes cluster without the need for a running Docker daemon.
  • Serving traffic to it: Automatically scales based on load, including scaling to zero when there is no load. Allows you to create traffic policies for multiple revisions, enabling easy routing to applications via URL.
  • Enabling applications to easily consume and produce events: It makes it easy to produce and consume events, abstracts away from event sources, and allows operators to run any messaging layer.
    Knative is installed as a set of Custom Resource Definitions (CRDs) for Kubernetes, which means that Knative can be installed on any cloud provider or environment where Kubernetes can be deployed. However, below we will use Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP) to install Knative and deploy our first application. In the Google Cloud Console, you will need to create a project and enable billing if you haven’t done so yet. At this point, we will assume that this is already done and will start by creating the cluster using Cloud Shell.

    First, we will enable the necessary cloud APIs by running the following command:
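    A sketch of that command, based on the typical Knative-on-GKE setup (the exact list of APIs to enable is an assumption):

```shell
# Enable the APIs needed to create a GKE cluster and store container images.
# (API list is an assumption based on the standard Knative-on-GKE setup.)
gcloud services enable \
  cloudapis.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com
```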


    Next, we will create a Kubernetes cluster where we will install Knative with the following parameters:

  • Zone “us-east1-c”
  • Kubernetes version 1.11 or later
  • 2 vCPU nodes (n1-standard-2)
  • Node autoscaling, up to 10 nodes
  • API scopes for cloud-platform, logging-write, monitoring-write, and pubsub.
    Creating the cluster may take several minutes.
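    A sketch of the cluster-creation command with the parameters above (the cluster name knative matches the delete command used later in this post; flag names follow standard gcloud usage):

```shell
# Create a GKE cluster named "knative" with the parameters listed above.
gcloud container clusters create knative \
  --zone=us-east1-c \
  --cluster-version=latest \
  --machine-type=n1-standard-2 \
  --enable-autoscaling --min-nodes=1 --max-nodes=10 \
  --scopes=cloud-platform,logging-write,monitoring-write,pubsub
```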


    After several minutes, the Kubernetes cluster with the parameters we specified above will be created.

    Next, we will grant cluster-admin permissions to the current user, which are required to install Istio in the next step.
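    A sketch of that step, assuming the standard clusterrolebinding approach from the Knative install docs:

```shell
# Bind the cluster-admin role to the current gcloud account.
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)
```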


    Knative depends on Istio, a service mesh that lets you run a distributed microservice architecture efficiently and provides a uniform way to secure, connect, and monitor microservices. Knative can also work with Gloo; however, Gloo is not currently compatible with the Knative Eventing component. To install Istio, run the following command:
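    A sketch of the Istio install, assuming the Istio manifests shipped with a Knative Serving release (the version tag here is an assumption; substitute the release you are installing):

```shell
# Apply the Istio CRDs first, then the Istio components themselves.
# v0.6.0 is an assumed release tag; use the Knative version you are installing.
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.6.0/istio-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.6.0/istio.yaml

# Label the default namespace so Istio sidecars are injected automatically.
kubectl label namespace default istio-injection=enabled
```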


    Monitor the Istio components until all of them show a STATUS of Running or Completed:

    kubectl get pods --namespace istio-system

    It will take a few minutes for all the components to be up and running.

    Now we are ready to install all the Knative components; the following command will do the work:
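    A sketch of the Knative install, again assuming release manifests for the Serving, Build, Eventing, and Monitoring components (the version tag is an assumption):

```shell
# Install the Knative components from their release manifests.
# v0.6.0 is an assumed release tag.
kubectl apply \
  --filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml \
  --filename https://github.com/knative/build/releases/download/v0.6.0/build.yaml \
  --filename https://github.com/knative/eventing/releases/download/v0.6.0/release.yaml \
  --filename https://github.com/knative/serving/releases/download/v0.6.0/monitoring.yaml
```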


    To verify whether all Knative services are running, we can run the following command:
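    For example, by listing the pods in each Knative namespace (the namespace names follow the default Knative install):

```shell
# All pods in these namespaces should eventually show STATUS "Running".
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
kubectl get pods --namespace knative-eventing
kubectl get pods --namespace knative-monitoring
```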


    The status for all of the services should be “Running.”

    Now that we have Knative installed on our Kubernetes cluster, we are ready to deploy our first app. A Knative app is deployed through a .yaml configuration file that defines a service. Below, the service config specifies the container image of a Hello World sample app written in Go. It is possible to specify any other container image available on any container registry.

    Create a new file named service.yaml, then copy and paste the following content into it:
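    A minimal service definition, modeled on the official Knative Hello World Go sample (the apiVersion shown is an assumption and depends on your Knative release):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go        # name of the Knative service
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go
            env:
              - name: TARGET  # environment variable read by the sample app
                value: "Go Sample v1"
```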

    Upload the service.yaml file into the cloud shell and run the following command:
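    Assuming the service.yaml file above, the deploy step is a standard kubectl apply:

```shell
# Deploy the Knative service defined in service.yaml.
kubectl apply --filename service.yaml
```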


    Once the app is deployed, the command output will confirm that the service was created.

    Cleaning up

    Google Kubernetes Engine uses Google’s Compute Engine instances for nodes in the cluster. You will be billed for each of those instances according to Compute Engine’s pricing until the nodes are deleted. Unlike Google Cloud Functions, which are billed per invocation, Compute Engine resources are billed on a per-second basis with a one-minute minimum usage cost.

    As a result, it is a good idea to delete the cluster once you are done with it. Deleting the cluster will also remove Knative, Istio, and any apps you’ve deployed.

    To delete the cluster, enter the following command:

    gcloud container clusters delete knative --zone us-east1-c


    We created a Kubernetes cluster, installed Knative components, and deployed our first application. Knative offers multiple build templates for deploying containerized applications, some of them being kaniko, Jib, and Buildpacks.

    Knative is another step in bringing containerized applications to a truly serverless experience. It builds on Kubernetes and presents a consistent, standard pattern for building and deploying serverless and event-driven applications. It also removes the overhead that often comes with this new approach to software development, abstracting away complexity around routing and eventing.


    About the author

    Dumitru is Senior Agile Software Engineer at TribalScale, a certified Google Cloud Architect, and has an extensive background in Android development. In his previous roles, he has led multiple mobile and OTT projects for top airline, banking, media, and video-on-demand clients, and has helped build and deliver digital products to millions of users.

    TribalScale is a global innovation firm that helps enterprises adapt and thrive in the digital era. We transform teams and processes, build best-in-class digital products, and create disruptive startups. Learn more about us on our website. Connect with us on Twitter, LinkedIn & Facebook!
