Building scalable cryptographic applications using OCI Dedicated Key Management Service (DKMS)

March 8, 2024 | 7 minute read
Ty Stahl
Cloud Security Architect

Every journey to the cloud has moments when you need to look across your entire application portfolio and answer a simple question – "can I do that in the cloud?"  For the majority of those moments, the answer is an unequivocal yes, and you can capitalize on added benefits such as modernized cloud-native solutions, performance improvements, and built-in management tools.  However, there can be that one bespoke application or service in your architecture that does something "special" – say, it performs cryptographic operations against a dedicated FIPS 140-2 Level 3 device.  Oh, and did I mention it has to be extremely fast, as well?

Fortunately, if you have found yourself in that very situation, this blog is for you.  The past year has been very active for the OCI Key Management and Encryption services portfolio; most relevant to what I will cover today is the release of the OCI Dedicated Key Management Service (DKMS).

The reference use case I will demonstrate focuses on a secure architecture running a scalable microservice deployment for simple, high-volume cryptographic transactions with DKMS. In this case, I built two RESTful microservices – encryption and decryption of a string passed in via an HTTP header.  Nothing incredibly flashy; keep in mind, this is a blog about cryptography.

Architecture

I chose to build this application to run in an Oracle Kubernetes Engine (OKE) cluster for a few reasons.  First, I wanted to leverage a few of the features of running this application at scale. To do that, I needed to build the container images by installing the DKMS daemon, which is crucial for interfacing programmatically using PKCS#11. I also wanted to test the OKE Native Ingress Controller to direct URL path-driven traffic appropriately to the two backend microservices.

DKMS Reference Architecture

Components:

DKMS Cluster – this is the deployed HSM cluster, which can only be accessed from the Oracle Services Network through a Service Gateway.  First, you will need to provision a DKMS cluster; if you do not have one already, follow this documentation.

Oracle Kubernetes Engine (OKE) – I used a custom-created OKE cluster, using the OCI workflow, which created a private subnet where the worker nodes reside with access to the Service Gateway.  It also created a public subnet so that I can access my web services via the internet.

OKE Native Ingress Controller – Using this blog, I was able to create an ingress with a single OCI Load Balancer, which attaches certificates to the listeners from the OCI Certificates service.

DKMS Client – Within the OKE pods, I needed to install the DKMS client, which the application uses to communicate with the DKMS service via PKCS#11.  I installed the DKMS client RPMs while building the container image using the Dockerfile.

OCI Secrets in Vault – Since DKMS is essentially network-attached hardware, its authentication mechanisms rely on a combination of mTLS for channel authentication and a username/password for application users.  Application users are referred to as Crypto Users (CUs). Secure programming principles dictate that I store these sensitive credentials as protected secrets. From there, I can authorize my applications/pods to access the credentials for authentication to the DKMS cluster, controlled via IAM policy.

One last component not depicted is the management client interface, which I installed on a separate OCI compute instance.

Setup and Microservice Creation

The setup for this application was fairly simple.  The first thing I needed to do was set up the DKMS cluster using the client utilities that come with the DKMS service.  To do that, I used the following to create my CU:

Reference the DKMS documentation for User Account Management to learn more about the different types of DKMS users and how to manage them.  You will create a Crypto User – the application account that the microservice uses to connect to the HSM.

Now, you can log in as the newly created CU, 'crypto_user', and create the first key using the following steps:

loginHSM

 

Reference the DKMS documentation for the Key Management Utility, which explains how to manage the various key operations and cryptographic options.  For this demo, I simply used a 32-byte (AES-256) symmetric key.

CreateAESKey

You need to make a note of the key handle; that is how the application refers to the key used to encrypt the plaintext.  Subsequently, to successfully decrypt the ciphertext, you must use the same key handle to get the correct plaintext back.

Next, you will need to build your application, which will use the PKCS#11 interfaces to work with DKMS.  For that, I chose a combination of native languages – Go and C – to create the sample microservice.  Since I am building a RESTful HTTP service, I need to ensure I can pass around HTTP-friendly characters.  For that, I built in base64 encoding/decoding so I can safely pass parameters.
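As a minimal sketch of that encoding step (the helper names here are illustrative, not the actual service code), raw ciphertext bytes from the HSM can be wrapped for transport like this:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeForHeader wraps raw ciphertext bytes in base64 so the value
// survives transport in an HTTP header or JSON payload.
func encodeForHeader(ciphertext []byte) string {
	return base64.StdEncoding.EncodeToString(ciphertext)
}

// decodeFromHeader reverses the wrapping before the bytes are handed
// back to the PKCS#11 decrypt call.
func decodeFromHeader(value string) ([]byte, error) {
	return base64.StdEncoding.DecodeString(value)
}

func main() {
	raw := []byte{0x8f, 0x1a, 0x00, 0xff} // bytes like these would break a plain header
	encoded := encodeForHeader(raw)
	fmt.Println(encoded) // prints "jxoA/w=="
	decoded, _ := decodeFromHeader(encoded)
	fmt.Println(len(decoded)) // prints 4
}
```

The round trip is lossless, which is exactly why base64 is the right fit for binary ciphertext in text-only channels.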

Once I had the microservice working locally, I needed to package it into a container so that I could deploy it to my OKE cluster.  To build the container, I used the following example Dockerfile to create the image for upload to the repository.  The key part I wanted to show is the installation of the DKMS client and PKCS#11 daemon within the container.  This ensures that the application can find and load the shared libraries when the microservice is started.

DKMS Docker File
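A sketch of that Dockerfile might look like the following.  The RPM file names, paths, and entrypoint script are placeholders, since the actual client packages come from your DKMS cluster's client bundle:

```dockerfile
# Build stage: compile the Go/C microservice (cgo is needed for PKCS#11)
FROM oraclelinux:8 AS build
RUN dnf install -y golang gcc
WORKDIR /src
COPY . .
RUN go build -o /crypto-svc ./...

# Runtime stage: install the DKMS client and PKCS#11 daemon alongside the app
FROM oraclelinux:8
# Placeholder RPM names - substitute the packages from your DKMS client bundle
COPY dkms-client.rpm dkms-pkcs11.rpm /tmp/
RUN dnf install -y /tmp/dkms-client.rpm /tmp/dkms-pkcs11.rpm && rm -f /tmp/*.rpm
COPY --from=build /crypto-svc /usr/local/bin/crypto-svc
EXPOSE 8080
# Illustrative entrypoint: start the PKCS#11 daemon, then the microservice
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

The multi-stage build keeps the compiler out of the runtime image while still baking the DKMS client libraries into the layer the application runs from.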

 

Once the image has successfully been built, the last thing to do is simply deploy it onto the OKE cluster.  As a reminder, I am using the OCI Native Ingress Controller to achieve this deployment model.  As you can see in the following snippet, I have registered two services – one for encrypt and one for decrypt – which route based on the URL paths.

DKMS Ingress
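The ingress resource for that path-based routing could look roughly like this.  The ingress class name, host, and service names are assumptions for the sketch – the class comes from however you set up the OCI native ingress controller in your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dkms-crypto-ingress
spec:
  ingressClassName: native-ic-ingress-class   # assumed IngressClass for the OCI native ingress controller
  rules:
    - host: crypto.example.com                # placeholder host matched against the Host header
      http:
        paths:
          - path: /encrypt
            pathType: Prefix
            backend:
              service:
                name: encrypt-svc             # illustrative service names
                port:
                  number: 8080
          - path: /decrypt
            pathType: Prefix
            backend:
              service:
                name: decrypt-svc
                port:
                  number: 8080
```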

The last step is to ensure that the credentials are stored in OCI Vault.  As I mentioned before, the application needs the CU username and password in order to issue PKCS#11 operations to the HSM.  Fortunately, this is easy with the help of the OCI SDK, and several blogs describe how to achieve this.  As an example, I will refer to the following blog, because it shows the setup of both the secret and the OCI IAM instance principal policy.

Note: Since this OKE cluster is only a demo designed to showcase the value of DKMS, all services in my cluster can access the OCI Vault secret.  Therefore, the use of an instance principal is a shortcut.  However, if your OKE clusters run a variety of services, you may want to restrict secret retrieval to those that actually need it.  To solve that problem, take a look at OKE Workload Identity, which provides fine-grained access control to OCI services from pods.

Testing the crypto microservices

Once all of the prerequisite components are in place, we can test our services using any simple HTTP tool, such as Postman.  As you can see, the microservice is pretty straightforward considering what is required to validate the functionality.

I am passing in two HTTP headers.  First, the Host header is required so that the ingress routing conditions can identify this string. In addition to the Host header, the /encrypt path is the key condition for the ingress controller to route the request to the appropriate microservice/pod in OKE.  Lastly, I am using the HTTP header x-ciphertext as my input to the encryption service.

Upon sending this request, a JSON response is returned with a base64-encoded string embedded in the payload.  Again, this is because the service would otherwise return characters outside the printable ASCII range, which would make the service far less portable and usable.

Encrypt

For testing the decryption, you can duplicate the previous call and simply substitute the path with /decrypt.  This will – you guessed it – ensure that the service invocation is directed to the decryption microservice.  From the previous output, copy the base64-encoded string into the x-ciphertext header – and voila.

Decrypt

As a final observation, I want to highlight the response times for each of the microservice invocations.  Both are under 140 milliseconds, a variable figure given the many contributing factors in this particular test, which I am executing from my local desktop.  More importantly, it also does not account for the lazy and inefficient code I wrote for this demo microservice.

So, if you find yourself stuck in limbo asking yourself the question - "will my application be able to perform high-performance cryptography in Oracle Cloud?" - I hope this blog has answered that question - with a resounding yes.

To find more information on OCI Dedicated KMS and other services within our encryption portfolio:

OCI Key Management FAQ

OCI's Full Key Management Portfolio


