Vitess is one of the latest projects to graduate from the Cloud Native Computing Foundation. Originally developed inside YouTube, it has from the beginning been the leading database clustering system for horizontal scaling of MySQL. In other words, Vitess combines the scalability of a NoSQL database with all of the features of a MySQL RDBMS. MySQL does not natively include sharding capabilities, but with Vitess you can add this capability while keeping the application fully unaware of the distributed database topology. The clustering features of Vitess also help to eliminate MySQL's high-memory connection overhead, as the workload is distributed across multiple MySQL instances. Vitess can be installed on bare-metal servers or deployed in a cloud-native environment such as a Kubernetes cluster. The official Vitess documentation only describes how to install it in a Kubernetes Minikube environment. In this blog post, I will explain how easy it is to do the same installation in Oracle Cloud Infrastructure Kubernetes Engine (OCI OKE). For the deployment I will use the kubectl and helm utilities. Access to OCI OKE will be done from a Linux client machine equipped with a properly configured Oracle Cloud Infrastructure command-line interface (OCI CLI).
Vitess is composed of a runtime part and an admin part. The admin part supports management functions and will not be explored in this blog. The main engine, the runtime part of Vitess, is responsible for dispatching queries from the application level to the distributed data backend. Application traffic is routed through the vtgate component, which works as a lightweight proxy. The routed workload is distributed among tablets, entities consisting of a vttablet server and a mysqld process, in most cases located on the same physical server. It is worth mentioning that in a Kubernetes deployment of Vitess, all components run as separate pods and are interconnected via the Kubernetes internal network. In Vitess terminology a database is called a keyspace, and a keyspace can be partitioned into multiple shards. In the Vitess architecture, tables can be distributed across horizontal shards, or different tables can be placed in different database instances (a vertical split). In both scenarios, data is stored in multiple MySQL replicas and can be moved from one place to another while the application remains blissfully ignorant of the real database topology behind the scenes. Detailed information about the database topology is stored in the metadata store, which is backed by the etcd clustered key-value database.
Installation procedure
Before any further steps we need to provision OCI OKE. For this purpose we will use the OCI Console and the Quick Create workflow, which was introduced to the OCI Console some time ago. As a platform we will use the latest version of Kubernetes, 1.14.8, which was recently made available on OCI.
STEP 1. Log in to the OCI Console and click on the hamburger menu in the top left corner. Next, in the menu, go to Developer Services and then to Container Clusters (OKE):
STEP 2. Next, click on the Create Cluster button and launch the configuration workflow using Quick Create:
STEP 3. Fill in the wizard form. For simplicity of configuration, choose the public visibility type. The Kubernetes version should be 1.14.8. For the shape, choose the smallest possible. In our case it will be VM.Standard.E2.1, but this is just a demo configuration; for production purposes bigger shapes should be chosen. The number of worker nodes should be 3 at minimum, but you can plan for more for anything beyond testing purposes.
STEP 4. In the add-ons section of the wizard, enable Tiller for Helm support and then click the Next button:
STEP 5. In the Review step of the workflow, check the resources and then click on the Create Cluster button.
STEP 6. You need to wait until the cluster is in the ready state and all worker nodes are active. This takes about 5 to 10 minutes before you can go further:
STEP 7. On the client machine (it can be Linux or macOS) you need to install the OCI CLI. It is well documented under this URL.
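As a quick sketch, a typical Linux installation uses Oracle's published installer script and then an interactive configuration step (your tenancy OCID, user OCID, region and API key are supplied during `oci setup config`):

```shell
# Download and run Oracle's OCI CLI installer script
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

# Generate ~/.oci/config interactively
oci setup config

# Sanity check
oci --version
```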
STEP 8. In the OCI Console, click the Access Kubeconfig button and follow the instructions in the popup window:
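The popup shows an `oci ce cluster create-kubeconfig` command similar to the sketch below; `<cluster-ocid>` and `<region>` are placeholders that you must replace with the values shown for your own cluster:

```shell
mkdir -p $HOME/.kube

# Values for --cluster-id and --region come from the popup in the OCI Console
oci ce cluster create-kubeconfig \
  --cluster-id <cluster-ocid> \
  --file $HOME/.kube/config \
  --region <region>

export KUBECONFIG=$HOME/.kube/config
```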
STEP 9. Verify your access to OCI OKE with the kubectl utility. An initial check should look at the kube-system namespace. Check that every pod in Kubernetes is in the Running state. The system services, on the other hand, should be exposed via ClusterIPs.
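For example, the checks can look like this (the exact output will vary with your cluster):

```shell
# All kube-system pods should report STATUS Running
kubectl get pods -n kube-system

# System services should be exposed via ClusterIPs
kubectl get svc -n kube-system
```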
STEP 10. Install helm according to the procedure available here: https://helm.sh/docs/intro/install/
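Since the cluster was provisioned with the Tiller add-on (STEP 4), a Helm 2 client is assumed here. One approach from the Helm 2 era, sketched below, used the project's installer script; check the linked documentation for the method matching your Helm version:

```shell
# Helm 2 client installer script; inspect any downloaded script before running it
curl -LO https://raw.githubusercontent.com/helm/helm/master/scripts/get
chmod 700 get
./get

# Initialize only the client side, since Tiller already runs in the cluster
helm init --client-only
```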
STEP 11. Download the etcd-operator project from GitHub by cloning the repo:
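```shell
git clone https://github.com/coreos/etcd-operator.git
```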
STEP 12. Download the vitess project from GitHub by cloning the repo:
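```shell
git clone https://github.com/vitessio/vitess.git
```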
STEP 13. Go to the etcd-operator directory and run the example/rbac/create_role.sh script:
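```shell
cd etcd-operator

# Creates the RBAC role and role binding required by etcd-operator
example/rbac/create_role.sh
```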
STEP 14. Next, deploy etcd-operator with the kubectl command:
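The deployment manifest ships with the repository cloned in STEP 11:

```shell
# Run from the etcd-operator repository root
kubectl create -f example/deployment.yaml
```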
STEP 15. Verify etcd-operator readiness by executing the kubectl get pods command (the expected status is Running):
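```shell
# The etcd-operator pod should show STATUS Running
kubectl get pods
```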
STEP 16. Now go to the cloned vitess repository and then into the examples/helm directory:
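```shell
cd vitess/examples/helm
```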
STEP 17. Next, run helm install with the initial chart as follows:
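At the time of writing, the examples/helm directory shipped a values file for an initial cluster; the exact file name may differ between Vitess releases. With a Helm 2 client, the install looks roughly like this:

```shell
# 101_initial_cluster.yaml brings up etcd, vtctld, vtgate and the first tablets
helm install ../../helm/vitess -f 101_initial_cluster.yaml
```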
STEP 18. After a few minutes, all pods should be ready (Running/Completed status):
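```shell
# Watch until every pod reaches the Running or Completed status
kubectl get pods -w
```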
In this blog I have shown how to install Vitess in an OCI OKE environment. It is the first part of a series. In the next blog post I will show how to perform a vertical split of tables between different tablets and then how to create horizontal shards.