
Best Practices from Oracle Development's A‑Team

Using OCI Bastion Service to Manage Private OKE Kubernetes Clusters

Steve White
Field CISO

In a blog post from March 2021, Oracle announced support for fully private Kubernetes clusters when using Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE). Then, in May 2021, we announced the release of the OCI Bastion service, a fully managed service that provides secure, ephemeral Secure Shell (SSH) access and port forwarding over SSH tunnels to private resources in OCI, eliminating the need to manage dedicated "jump hosts" to reach those resources. In this blog post, I review the steps needed to use the OCI Bastion service so that a developer can run kubectl commands from their local workstation against an OKE cluster that has only a private IP address.

Figure 1 below provides a conceptual overview of the connection. We're creating a port-forwarding session from the developer's local workstation to the Kubernetes API port on a private IP address inside an OCI Virtual Cloud Network (VCN). The developer configures their Kubernetes config file to point at a specified port on their local workstation, and SSH forwards that traffic through the OCI Bastion to the target cluster.

Figure 1: SSH port forwarding notional diagram.

The overall steps to set this up are:

  1. Create an OCI Bastion
  2. Create a kubeconfig file on the developer's workstation for accessing the OKE cluster, and edit it to point to the local SSH tunnel
  3. Create a port-forwarding session on the bastion that forwards traffic to the target IP address and port
  4. Connect an SSH tunnel from the developer's workstation to the bastion session
  5. Profit (run kubectl commands against the target cluster)

In this post I assume the reader already has a private OKE cluster they need to interact with; refer to the OKE documentation for information on creating and managing private OKE clusters. The screenshot below shows the properties of the OKE cluster I'll use for this example. Note the IP address and port listed under "Kubernetes API Private Endpoint", as these values will be used later when creating the OCI Bastion session for accessing this cluster.


Figure 2: OKE Cluster information

Step 1: Create the OCI Bastion

The following images show the screens used to create the OCI Bastion from the OCI Cloud Console. This is performed by an administrator who has been granted the IAM permissions required to manage bastions. When creating the bastion, the target VCN and target subnet should correspond to the VCN and subnet where the OKE cluster resides (the subnet is shown in the example screenshot above in the "Kubernetes API Endpoint Subnet" field). Additionally, make sure the subnet's security list allows ingress traffic on port 6443.

 Figure 3: Create a bastion from the Identity and Security menu of Console

Figure 4: Create a bastion with Allowlist and Time-to-Live control

Step 2: Create the kubeconfig file on the developer's workstation used to access the OKE cluster

Because this step (and all the following steps) is typically performed by the individual developer from their workstation, the remaining examples use the OCI CLI and assume the developer has a working OCI CLI environment, including an oci_cli_rc file that specifies the compartment OCID to use for OCI CLI commands.
$ oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.aaaaaaaaae... --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0 --kube-endpoint PRIVATE_ENDPOINT

Here, cluster-id is the full OCID of the OKE cluster being targeted, and region is the applicable OCI region identifier from the list maintained in the OCI documentation. This command creates (or adds to) the configuration file specified in the "--file" option.
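For reference, a minimal oci_cli_rc file (by default at ~/.oci/oci_cli_rc) that supplies a default compartment for CLI commands might look like the following; the compartment OCID shown is a placeholder:

```ini
[DEFAULT]
compartment-id = ocid1.compartment.oc1..aaaaexample
```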

Open that file in your favorite text editor, find the "server:" line that contains the private IP address and port number for this OKE cluster, and replace the IP address with "127.0.0.1", leaving the port unchanged. For the cluster used in this example, the original entry is "server: https://10.0.1.132:6443" and the modified entry is "server: https://127.0.0.1:6443".
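If you prefer not to edit the file by hand, a sed one-liner can make the same substitution. This is a sketch, assuming the same private endpoint (10.0.1.132:6443) as in this example; it is demonstrated here on a sample file, but in practice you would run the sed command against "$HOME/.kube/config":

```shell
# Demonstrate the substitution on a sample kubeconfig fragment.
cfg=$(mktemp)
printf 'server: https://10.0.1.132:6443\n' > "$cfg"

# Rewrite the server line to point at the local tunnel endpoint,
# keeping the port (6443) unchanged; a .bak backup is kept.
sed -i.bak 's|server: https://10\.0\.1\.132:6443|server: https://127.0.0.1:6443|' "$cfg"
cat "$cfg"
```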

Note: Steps 1 and 2 in this process are persistent; that is, unless the bastion or cluster is deleted or moved, those steps do not need to be repeated each time the developer needs to run commands against the cluster. The remaining steps must be repeated each time the developer needs to access the cluster, as the bastion sessions and SSH tunnels time out.

Step 3: Create a port-forwarding session on the bastion 

$ oci bastion session create-port-forwarding --bastion-id ocid1.bastion.oc1.phx.amaaaaaa... --display-name sdw-to-oke-tunnel --ssh-public-key-file /home/sdwhite/.ssh/sdwhite-oracle.pub --key-type PUB --target-private-ip 10.0.1.132 --target-port 6443

The following are the parameters for this command:

  • The bastion-id is the OCID of the bastion created in step 1.
  • The ssh-public-key-file is the public half of an SSH key pair the developer holds; the corresponding private key is used to connect to the tunnel.
  • The target-private-ip and target-port are the values from the "Kubernetes API Private Endpoint" field of the OKE cluster being targeted.

Assuming this command completes successfully, the resulting JSON output will contain an "id" field which will be used to connect the SSH tunnel. The following is a snippet from this output:

  "data": {
    "bastion-id": "ocid1.bastion.oc1.phx.amaaaaaa...",
    "bastion-name": "sdwbastiondemo1",
    "bastion-public-host-key-info": null,
    "bastion-user-name": "ocid1.bastionsession.oc1.phx.amaaaaaa...",
    "display-name": "sdw-to-oke-tunnel",
    "id": "ocid1.bastionsession.oc1.phx.amaaaaaa...",
    "key-details": {
    "public-key-content": "ssh-rsa "

Step 4: Connect an SSH tunnel from the developer's workstation to the bastion session

The easiest way to capture the SSH command for creating the tunnel is to retrieve the bastion session information with "oci bastion session get", as shown below, where session-id is the "id" field returned by the previous command.

$ oci bastion session get --session-id ocid1.bastionsession.oc1.phx.amaaaaaa...
{
  "data": {
    "bastion-id": "ocid1.bastion.oc1.phx.amaaaaaa...",
    "bastion-name": "sdwbastiondemo1",
    "bastion-public-host-key-info": null,
    "bastion-user-name": "ocid1.bastionsession.oc1.phx.amaaaaaa...",
    "display-name": "sdw-to-oke-tunnel",
    "id": "ocid1.bastionsession.oc1.phx.amaaaaaa...",
    "key-details": {
      "public-key-content": "ssh-rsa REST OF KEY"
    },
    "key-type": "PUB",
    "lifecycle-details": null,
    "lifecycle-state": "ACTIVE",
    "session-ttl-in-seconds": 1800,
    "ssh-metadata": {
      "command": "ssh -i <privatekey> -N -L <localport>:10.0.1.132:6443 -p 22 ocid1.bastionsession.oc1.phx.amaaaaaa...@host.bastion.us-phoenix-1.oci.oraclecloud.com"

The "ssh-metadata" field contains the SSH command to use.

$ ssh -i /home/sdwhite/.ssh/privatekey.priv -N -L 6443:10.0.1.132:6443 -p 22 ocid1.bastionsession.oc1.phx.amaaaaaa...@host.bastion.us-phoenix-1.oci.oraclecloud.com &

This prompts for the passphrase on the SSH key (you should always protect SSH keys with a passphrase), and the command then goes into the background. You can also run the command in the foreground by removing the trailing "&" and run the kubectl commands from another window.
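When the tunnel is started from a script, it can take a moment before the local port actually accepts connections. A small hypothetical helper (bash-specific, using bash's /dev/tcp feature) can poll the forwarded port so kubectl isn't run before the tunnel is ready:

```shell
# Hypothetical helper: poll a local TCP port until it accepts
# connections. Returns 0 once the port is open, 1 after N tries.
wait_for_port() {
  local port=$1 tries=${2:-10}
  local i
  for ((i = 0; i < tries; i++)); do
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

Usage would look like: `wait_for_port 6443 && kubectl cluster-info`.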

Step 5: Run commands against the cluster

This part is easy: at this point, kubectl commands should work properly against the cluster, as should any other utilities that get their connection and authentication information from the kubeconfig file specified earlier in the process. Here's the output for this example:

$ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
