Deploying the Istio Service Mesh on OKE

Introduction

Many organizations today are choosing to deploy their applications using a microservice architecture. So what exactly is a microservice architecture?

Microservices – also known as the microservice architecture – is an architectural style that structures an application as a collection of loosely coupled services, which implement business capabilities. The microservice architecture enables the continuous delivery/deployment of large, complex applications. (Chris Richardson; http://microservices.io)

These microservices are normally deployed on a container engine, such as Kubernetes. There are many cloud vendors that offer managed Kubernetes services for the deployment of these microservices architectures, including Oracle. Oracle recently released the managed Kubernetes product, Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE).

As the number of deployed microservice applications increases, the need to monitor, manage, and secure these applications becomes more important. Kubernetes provides limited capabilities in these areas; a more robust option is the Istio service mesh. This discussion covers the installation and use of the Istio service mesh on Oracle Container Engine for Kubernetes.

Let’s first cover the Istio service mesh at the 10,000-foot level. If you want a much more in-depth understanding of Istio, I recommend visiting the Istio website. Following the overview, we’ll cover the installation of Istio on the OKE platform and finally deploy an application to demonstrate the configurations, dashboards, and features of the service mesh.

Istio Service Mesh

From the Istio website:

“Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. Istio supports managing traffic flows between microservices, enforcing access policies, and aggregating telemetry data, all without requiring changes to the microservice code. Istio gives you:

• Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
• Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
• A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
• Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
• Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.”

The service mesh consists of many moving parts. One key component, and the one critical for the mesh to monitor, manage, and secure the microservices, is the sidecar, implemented by Envoy. Since the sidecar is such a key component of the service mesh and the primary telemetry collector, let’s briefly digress and gain an understanding of what it does.

The sidecar pattern gets its name from the sidecar attached to a motorcycle. A sidecar application is deployed alongside each microservice that you have developed and deployed. With the Istio service mesh, the sidecar is an Envoy proxy that mediates all inbound and outbound traffic for all services in the mesh. Envoy has many built-in features, such as:

• Dynamic service discovery
• Load balancing
• TLS termination
• HTTP/2 and gRPC proxies
• Circuit Breakers
• Health checks
• Staged rollouts with %-based traffic split
• Fault injection
• Rich metrics

The Envoy deployment allows Istio to extract signals about traffic behavior as attributes. Istio in turn uses these attributes to enforce policy decisions and sends them to monitoring systems to provide information about the behavior of the entire mesh.

Starting with the 0.8 release of the service mesh, you can configure Istio to perform automatic sidecar injection. For automatic injection to take place, however, you must enable the application’s namespace. I will discuss this later.

Before I get too far ahead of myself, let’s get the service mesh installed in OKE. Then I will show you some of the features and explain why a service mesh is important.

Preparations – Create an OKE Cluster and Prepare your Terminal:

1. If you have not already done so, create an OKE cluster

2. Install kubectl on your local machine.

3. Install Helm on your local machine.

4. Do a quick upgrade of Tiller (just to be sure you are on the latest release)

a. $ helm init --upgrade

5. Install and configure OCI-CLI to access the OKE from the command line.

6. Download the kubeconfig so that you can access the OKE cluster from the command line using kubectl

a. Use the following command to download the kubeconfig and store it in a local file: oci ce cluster create-kubeconfig --cluster-id <cluster ocid> --file kubeconfig

i. cluster-id: the OCID of your Kubernetes cluster

ii. file: the name of the file in which to store the cluster configuration

b. export KUBECONFIG=<location of the kubeconfig file>

7. Create a Role-Based Access Control policy

a. $ kubectl create clusterrolebinding <admin-binding> --clusterrole=cluster-admin --user=<user-OCID>

i. admin-binding: any string you want, such as “adminrolebinding”

ii. user: your user OCID
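With the preparation steps complete, a quick sanity check confirms the tooling is wired up. This is a sketch; the version output will vary with your installation.

```shell
# Verify the client tools are installed and on the PATH.
kubectl version --client
helm version
oci --version

# Confirm kubectl can reach the OKE cluster through the downloaded kubeconfig.
kubectl get nodes
```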

There are four different methods that can be used to install Istio in OKE. I recommend using Helm charts for the installation. Helm is a tool that helps you manage Kubernetes applications; Helm charts, as they are referred to, help you define, install, and upgrade complex Kubernetes applications.

I suspect the use of Helm charts will be the preferred method going forward and the Istio documentation also makes that recommendation today.

Once the prerequisites have been completed you can install Istio.  You can download Istio by executing the below command:

curl -L https://git.io/getLatestIstio | sh -

I will cover the installation of Istio using Helm below. Prior to performing the installation, let’s make some changes to the Istio “values.yaml” file, which tells Helm which components to install on the OKE platform. The “values.yaml” file is located at: /<istio installation directory>/install/kubernetes/helm/istio

In order to have the components Grafana, Prometheus, Servicegraph, and Jaeger deployed, the “values.yaml” file needs to be modified.  For each of the components you want deployed, change the enabled property from “false” to “true”.

servicegraph:
  enabled: true
  replicaCount: 1
  image: servicegraph
  service:
    name: http
    type: ClusterIP
    externalPort: 8088
    internalPort: 8088

You’re now ready to install Istio.

If you are using a version of Helm prior to 2.10.0, you must first install Istio’s Custom Resource Definitions via kubectl apply. After the command executes, you will have to wait a few seconds for the Custom Resource Definitions (CRDs) to be committed in the kube-apiserver.

$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

Once the CRDs have been deployed you can install the Istio service mesh.

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system
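As an alternative to editing “values.yaml”, the same add-ons can be enabled inline with Helm value overrides. The key names below are assumptions based on the chart layout; verify them against your chart’s values.yaml before relying on them.

```shell
# Hypothetical one-liner: enable the add-ons via --set instead of editing
# values.yaml (key names assumed from the chart; confirm in your version).
helm install install/kubernetes/helm/istio \
  --name istio \
  --namespace istio-system \
  --set grafana.enabled=true \
  --set prometheus.enabled=true \
  --set servicegraph.enabled=true \
  --set tracing.enabled=true
```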

The helm install command will configure your cluster to do automatic sidecar injection; in fact, automatic sidecar injection is the default. To verify that your Istio installation was successful, execute the kubectl command below and ensure you have the following containers deployed to your cluster.

$ kubectl get pods -n istio-system

Since “values.yaml” was modified to enable the deployment of Grafana, Prometheus, Servicegraph, and Jaeger, you will see those components deployed as well.

While Istio offers automatic sidecar injection, there is a caveat: it must be enabled per namespace. If you do not enable your application’s namespace for automatic injection, the sidecar will not be injected into your pods. I do not recommend enabling the default namespace for automatic sidecar injection; however, for this blog we will ignore my own recommendation.

If you are wondering why I recommend against setting the default namespace for automatic injection, it is primarily a personal preference: some components may get deployed to the default namespace that you don’t want a sidecar deployed alongside. It is better to deploy your application to a dedicated namespace and set that namespace for automatic injection.

To have the sidecar injected at deployment time, you must enable the namespace for your application. To enable the namespace for automatic injection, execute the following command:

$ kubectl label namespace default istio-injection=enabled
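You can confirm the label took effect; the default namespace should now show istio-injection=enabled:

```shell
# Show the injection label on the default namespace.
kubectl get namespace default --show-labels

# Or list the label as its own column across all namespaces.
kubectl get namespace -L istio-injection
```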

You now have the Istio service mesh installed and are ready to begin monitoring, managing, and securing your services.  In order to show some of these capabilities let’s deploy an application, execute the application, and show what is available in the graphs provided out-of-the-box by the service mesh.

Running the Book Information Application

The easiest thing to do at this point is to deploy the “bookinfo” application. You can find it in the samples directory of the Istio download performed earlier. Keep in mind that we previously enabled automatic sidecar injection during the installation of Istio and also enabled the default namespace for automatic injection. Therefore, when you deploy the book application, an Envoy sidecar proxy is deployed in each pod. Each of the black boxes in the diagram below is an instance of the Envoy proxy sidecar. When the “bookinfo” application is deployed to the Kubernetes cluster, Istio deploys the sidecar in the pod alongside the microservice.

 

Let’s deploy the bookinfo application.

$ kubectl apply -f /<istio installation directory>/samples/bookinfo/platform/kube/bookinfo.yaml

After the successful deployment, let’s take a look at the pods that were deployed.

$ kubectl get po

The two pods curl-775f9567b5-w7btf and oke-efk-sz-elasticsearch-0 are not part of the bookinfo deployment; they are pods I had installed on a previous occasion.

It is important to note that the “READY” column states 2/2, which means there are two containers in the pod and both are up and running. But wait: the application only deployed one image in the pod. The additional container is the sidecar proxy. Let’s describe the pod to see which containers it holds.

$ kubectl describe po productpage-v1-f8c8fb8-gshrl

The name of your product page pod will be slightly different.

A snippet of the describe output is shown. Take a look at the information in the Containers section. The data shows two images: the product page and the istio-proxy (the Envoy proxy).
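A quicker way to list just the container names, without the full describe output, is a jsonpath query (the pod name here is from my cluster; yours will differ):

```shell
# Print the container names in the pod; expect the application container
# plus the injected istio-proxy sidecar.
kubectl get pod productpage-v1-f8c8fb8-gshrl \
  -o jsonpath='{.spec.containers[*].name}'
```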

The last thing to do is to make the application accessible from outside your Kubernetes cluster.  To do that, we need to create an Istio gateway.

$ kubectl apply -f /<istio installation directory>/samples/bookinfo/networking/bookinfo-gateway.yaml

$ kubectl get gateway

$ kubectl get svc -n istio-system

The output will look as follows:

You can render the application from a browser. The IP address is found by looking at the istio-ingressgateway’s EXTERNAL-IP. Accessing the bookinfo application is as easy as providing the istio-ingressgateway’s external IP followed by the /productpage path. As you can see from the ports, the gateway listens on port 80. Continuously refreshing the browser will send traffic to the services.

At this point we need to exercise the application so we can generate traffic and demonstrate the features of the dashboards.
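Rather than refreshing the browser by hand, a small loop can generate steady traffic. This is a sketch; the jsonpath field assumes the ingress gateway is exposed through a cloud LoadBalancer, as it is on OKE.

```shell
# Look up the ingress gateway's external IP.
INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Send 100 requests to the productpage route to populate the dashboards.
for i in $(seq 1 100); do
  curl -s -o /dev/null "http://${INGRESS_IP}/productpage"
done
```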

Available Dashboards

When you install Istio with all of the dashboards enabled, there will be four dashboards available in addition to the standard Kubernetes dashboard. Each dashboard provides its own unique features and will be key for managing and monitoring your Kubernetes cluster. Since each dashboard is a product in its own right, I will not cover each in depth. To understand the key features, I recommend reviewing each product’s documentation page. Several books have also been written on many of these products.

Grafana

The Grafana add-on is a preconfigured instance of Grafana. The base image has been modified to start with both a Prometheus data source and the Istio Dashboard installed. The base install files for Istio, and Mixer in particular, ship with a default configuration of global metrics. The Istio Dashboard is built to be used in conjunction with the default Istio metrics configuration and a Prometheus backend.

The Istio Dashboard consists of three main sections:

1. A Mesh Summary View: This section provides Global Summary view of the Mesh and shows HTTP/gRPC and TCP workloads in the Mesh.

2. Individual Services View: This section provides metrics about requests and responses for each individual service within the mesh (HTTP/gRPC and TCP). It also gives metrics about client and service workloads for this service.

3. Individual Workloads View: This section provides metrics about requests and responses for each individual workload within the mesh (HTTP/gRPC and TCP). It also gives metrics about inbound workloads and outbound services for this workload.

Setup the dashboard:

$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &

Access the dashboard: http://localhost:3000/dashboard/db/istio-mesh-dashboard

The Istio service mesh delivers six Grafana dashboards. It is not possible to cover them in depth as part of this blog; I will only show snapshots of each and leave it to the reader to dive deeper into each dashboard.

Istio Mesh Dashboard

Istio Galley Dashboard

Istio Service Dashboard

Istio Workload Dashboard

Istio Mixer Dashboard

Istio Pilot Dashboard

Prometheus

The Prometheus add-on is a Prometheus server that comes preconfigured to scrape Mixer endpoints to collect the exposed metrics. It provides a mechanism for persistent storage and querying of Istio metrics.

The configured Prometheus add-on scrapes three endpoints:

1. istio-mesh (istio-mixer.istio-system:42422): all Mixer-generated mesh metrics.

2. mixer (istio-mixer.istio-system:9093): all Mixer-specific metrics. Used to monitor Mixer itself.

3. envoy (istio-mixer.istio-system:9102): raw stats generated by Envoy (and translated from StatsD to Prometheus).

$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &

To access the Prometheus dashboard, open http://localhost:9090/graph. There are a number of predefined queries you can run to investigate your applications. The query shown in the view is “istio_requests_total”.
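With the port-forward above still running, the same data can be pulled from Prometheus’s HTTP API rather than the web UI; this is handy for scripting.

```shell
# Query the istio_requests_total metric via Prometheus's HTTP API.
curl -s 'http://localhost:9090/api/v1/query?query=istio_requests_total'
```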

Jaeger

Jaeger is used for monitoring and troubleshooting microservices-based distributed systems, including:

1. Distributed context propagation

2. Distributed transaction monitoring

3. Root cause analysis

4. Service dependency analysis

5. Performance / latency optimization

$ kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686 &

To access the Jaeger dashboard: http://localhost:16686/

For one of the invocations click on the “Span” button.  This will provide the following view:

Service Graph

Servicegraph is a small app that generates and visualizes graph representations of your Istio service mesh. Servicegraph is dependent on the Prometheus addon and the standard metrics configuration.

$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088 &

Access the dashboard: http://localhost:8088/force/forcegraph.html

The Kubernetes dashboard is started the typical, old-fashioned way:

kubectl proxy &

Usually, to access the Kubernetes dashboard from a browser, you would enter http://localhost:8001/ui. However, this URL is now deprecated. To access the Kubernetes dashboard in OKE, open the following URL from a browser: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default

As you can see from the many dashboards there is a large number of metrics being captured as the services are executed.  Since each dashboard captures specific data you should take the time to investigate each of the dashboards.  Understanding each of the dashboards will help you monitor, manage, and investigate any issues that arise during your microservice application processing.

Summary

The Istio service mesh provides several features for monitoring, managing, and securing your deployed microservices. As the number of microservices increases, it becomes more important to deploy a service mesh such as Istio.

The service mesh provides visibility at the service edge but does not provide the ability to “peek” into the application. For a better view of application behavior and for troubleshooting, tools such as Elasticsearch, Fluentd, and Kibana are products to consider. There are many others, so I recommend you look at other offerings as well.

There are many open-source software (OSS) products to manage your application.  The Istio service mesh is key to doing that and appears to be the front runner among the service mesh tools.
