Developing SaaS Extensions using VBCS and Helidon Micro-services Part 2

April 8, 2020 | 5 minute read
Angelo Santagata


In Developing SaaS Extensions using VBCS and Helidon Micro-Services part 1, I focused on creating a Helidon-based microservice, which we tested locally. This blog post takes the reader through the next step: deploying the microservice on Oracle Kubernetes Engine (OKE) and then calling the service from VBCS in a way that makes the microservice aware of the calling user (aka identity propagation from VBCS to the microservice).

In the example created in part 1, when we tested from Postman, the Helidon microservice responded with the identity of the "user" who called it.

Deploying the Microservice to Oracle Kubernetes Service

Before proceeding, make sure you have set up your local environment so that you can deploy a simple Docker container to Oracle Kubernetes. For more information see this learning path and Getting Started with Kubernetes Clusters on OCI.

The primary steps are

  1. Create a Docker image, using a tag name which includes the OCIR address
  2. "Push" the Docker image to the Oracle Cloud Infrastructure Registry (aka OCIR)
  3. Deploy the Kubernetes YAML file
  4. Check that it all worked

Step 1:  Create a docker image

The Helidon guys are nice chaps: as part of the project quickstart they have created a "vanilla" Dockerfile which you can use to build the container. This Dockerfile does all the work needed for the container to be created. The important bit here is that the image must be "tagged" with a tag name which we can reference later.

The command for building and tagging the container, executed from the same directory as the pom.xml file, is

docker build --tag $TAGNAME .

TAGNAME is a concatenation of the OCIR address and your OCIR repository name, i.e. a name of the form <region-key>.ocir.io/<tenancy-namespace>/<repo-name>.

Step 2 : Push the docker image to the OCIR

Once the docker image has been built and tagged, we can push it to the OCIR image repository. This is important because Kubernetes pulls the image from OCIR at runtime.
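As a concrete sketch, the tag might be assembled like this (the region key, tenancy namespace and repository name below are hypothetical placeholders; substitute your own values from the OCI console):

```shell
# Hypothetical values -- substitute your own region key, tenancy
# namespace and repository name from the OCI console.
export TAGNAME=lhr.ocir.io/mytenancy/securitysample:1.0
echo "$TAGNAME"
```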

Once the build has completed, you can push the container image to OCIR using the following command

docker push $TAGNAME

Step 3 :  Deploy the Kubernetes YAML

Before deploying the sample you will need to edit your app.yaml:

a) Make sure the "names" are all in lower case

b) In the deployment there is an image URL; change this so that it points to your OCIR image, i.e. the tag name above

c) To make things easy, instead of using a NodePort you can change the service to use a LoadBalancer. This does bind a single OCI load balancer to this service, but for testing it's OK. For "proper" production you should be using a NodePort approach, or perhaps Istio, but that is left to the reader.
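For example, the change in (c) is just the Service's type field; a sketch of the relevant fragment only (the port numbers match the sample app):

```yaml
# Service fragment only -- switch the type from NodePort to LoadBalancer
spec:
  type: LoadBalancer   # was: NodePort
  ports:
  - port: 8080
    targetPort: 8080
```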

My app.yaml file looks like this

# Copyright (c) 2018, 2019 Oracle and/or its affiliates. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

kind: Service
apiVersion: v1
metadata:
  name: securitysample
  labels:
    app: securitysample
spec:
  type: LoadBalancer
  selector:
    app: securitysample
  ports:
  - port: 8080
    targetPort: 8080
    name: http
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: securitysample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: securitysample
        version: v1
    spec:
      containers:
      - name: securitysample
        image: <your OCIR image tag, i.e. $TAGNAME>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080

This can now be deployed using the following command
kubectl apply -f app.yaml

Step 4 :  Checking it all worked

Assuming this applied correctly, you can check that the pod deployed correctly using the kubectl get pods command. Once the pod is ready, give it a few seconds and the load balancer will come online and you'll get an external IP address you can call.

You can now call the service from postman in the same way you called it locally but this time using the EXTERNAL-IP as the IP Address.
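If you'd rather script that check, the EXTERNAL-IP can be pulled out of the kubectl get services output; a minimal sketch (the sample output line below is made up, since names and addresses will vary):

```shell
# A sample 'kubectl get services' output line (values are made up):
SAMPLE="securitysample   LoadBalancer   10.96.14.3   132.145.10.20   8080:31234/TCP   2m"

# The EXTERNAL-IP is the fourth column; against a real cluster you would
# pipe 'kubectl get services | grep securitysample' into awk instead.
EXTERNAL_IP=$(echo "$SAMPLE" | awk '{print $4}')
echo "http://$EXTERNAL_IP:8080/greet/whoami"
```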

If it helps, I've created the following short shell script which compiles, deploys and gets the external IP all in one go.

set -e
mvn package
docker build --tag $TAGNAME .
docker push $TAGNAME
# Temporarily allow failures: the delete fails if the deployment doesn't exist yet
set +e
kubectl delete deployment securitysample
set -e
kubectl apply -f app.yaml
echo "Waiting for deployment to complete + Load Balancer, usually takes 10s"
sleep 10
echo "Load Balancer : $(kubectl get services | grep securitysample | awk '{print $4}')"
kubectl get deployments | grep securitysample

You can now test this in the same way we tested the service running locally using Postman. The URL will be something like http://<ipaddress>:8080/greet/whoami. For more information see the last section in the previous blog post.

Calling The MicroService from Oracle Visual Builder

Finally, calling the service from Visual Builder simply requires you to create a connection to the REST service and configure the connection's security appropriately. For this example I will be using OAuth User Assertion, entering the OAuth Client ID, OAuth Client Secret and other details.

  1. Within your VBCS App, navigate and create a new connection 

  2. Select Define by Endpoint
  3. Enter the URL, e.g. http://<ipAddress>:8080/greet/whoami and select "Get One"

  4. Enter the OAuth Details

  5. And then finally execute the "test" to get a payload. If all goes well it should return a single record containing your username. This value comes from the Helidon server, indicating that it "knows" who you are, and we're done.

Next Steps

This concludes this two-part series showing how you can use Helidon to create a microservice back-end for a VBCS application. The next logical step for the reader is to start building out the back-end business logic in Helidon and then building a VBCS user interface on top of it.






Angelo Santagata


25+ years of Oracle experience, specialising in technical design and design authority of Oracle technology solutions, and in integrating technology with Oracle's SaaS products.

Extensive credible communication skills at all levels, from hard core developers to C-Level executives.

Specialities: Oracle Fusion Apps Integration, Oracle Cloud products, SaaS Integration architectures, Engaging with SIs & ISVs, Technical Enablement, Customer Design Reviews,  advisory and project coaching.

TOGAF 9 Architect and Oracle Cloud Infrastructure Architect Certified
