Identity and Cloud Security A-Team at Oracle Open World

I just wanted to let everyone know that Kiran and I will be presenting with our good friend John Griffith from Regions Bank at Oracle Open World next week. Our session is Oracle Identity Management Production Readiness: Handling the Last Mile in Your Deployment [CON6972]. It will take place on Wednesday, Sep 21, 1:30 p.m. […]

Configuring Oracle Public Cloud to Federate with Microsoft Azure Active Directory

Introduction Companies usually have some Identity and Access Management solution deployed on premises to manage users and roles and to secure access to their corporate applications. As businesses move to the cloud, companies will, most likely, want to leverage the investment already made in such IAM solutions and integrate them with the new SaaS or PaaS applications that […]

Avoiding LibOVD Connection Leaks When Using OPSS User and Role API

The OPSS User and Role API provides an application with access to identity data (users and roles), without the application having to know anything about the underlying identity store (such as LDAP connection details). For new development, we no longer recommend the use of the OPSS User and Role API – use the Identity […]

Improve SSL Support for Your WebLogic Domains

Introduction Every WebLogic Server installation comes with SSL support. But for some reason many installations get this interesting error message at startup: Ignoring the trusted CA certificate “CN=Entrust Root Certification Authority – G2,OU=(c) 2009 Entrust, Inc. – for authorized use only,OU=See,O=Entrust, Inc.,C=US”. The loading of the trusted certificate list raised a certificate parsing exception […]

Where has your LDAP connection pool gone?

Introduction You have deployed Oracle BPM and decided to run some load tests against it. You’re concerned, among other things, about the behavior of your backend LDAP server under peak times, whether it’s going to be able to handle the load or not. You check the security providers settings in Weblogic Server and see you […]

Converting SSL certificate generated by a 3rd party to an Oracle Wallet

Recently a customer asked me how to import his private key and certificate into an Oracle HTTP Server Wallet.
The customer generated a CSR outside the OHS Wallet Manager, using OpenSSL, and sent it to a CA to get his certificates issued.
Unfortunately, the Wallet Manager only allows you to import certificates created for a CSR generated by the Wallet itself.
Fortunately, there is a workaround to get your private key, certificate and CA trusted certificate chain into an Oracle Wallet.
This post explains the simple steps to achieve this, with a little help from OpenSSL.

1. What you will need:
   a. openssl installed on a machine
   b. The server's certificate (PEM format)
   c. The server's encrypted private key and its password
   d. The CA root and intermediate certificates (these must be concatenated into a single file, also in PEM format)

2. On a server with openssl installed, issue the following command:

openssl pkcs12 -export -in certfile -inkey keyfile -certfile cacertfile -out ewallet.p12



   certfile: the server's certificate
   keyfile: the server's private key
   cacertfile: the CA's concatenated root and intermediate certificates

   Note that the resulting file must be named ewallet.p12 in order to be recognized by Oracle Wallet Manager.
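As a sanity check, the whole conversion can be rehearsed end-to-end with a disposable key and self-signed certificate. All file names and passwords below are hypothetical placeholders matching the command above:

```shell
# Demo run with a throwaway key/cert pair (hypothetical names/passwords).
# Generate an encrypted private key and a self-signed certificate:
openssl req -x509 -newkey rsa:2048 -days 1 \
    -keyout keyfile -passout pass:KeyPass1 \
    -out certfile -subj "/CN=demo.example.com"
# For the demo, reuse the self-signed cert as the CA chain file:
cp certfile cacertfile
# Package everything into the wallet file (step 2 above):
openssl pkcs12 -export -in certfile -inkey keyfile -passin pass:KeyPass1 \
    -certfile cacertfile -out ewallet.p12 -passout pass:ExportPass1
# Verify the wallet can be read back with the export password:
openssl pkcs12 -in ewallet.p12 -passin pass:ExportPass1 -noout
```

In a real conversion you would of course skip the key/cert generation and use the files issued by your CA.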


3. Enter the private key's passphrase when prompted for it.

4. Enter an export password when prompted for it. You MUST supply a non-blank password. You will need to type it again as verification.

5. Upload the ewallet.p12 file to the Oracle Application Server. Move it to where the OHS can access it.

6. Start the Oracle Wallet Manager application.

7. Under the Wallet menu, click Open.

8. You will likely receive an error message about the default wallet directory not existing, asking if you want to continue. Click Yes.

9. You will be asked to select the directory where the wallet file is located. Find the directory where you moved the ewallet.p12 file to.

10. You will be asked for the wallet password. Enter the export password you entered when converting the certificate.

11. The wallet should open, and the certificate may be displayed as empty – don't worry about that right now. You should also see the CA certificate under "Trusted Certificates".

12. Under the Wallet menu, select "Auto Login". Verify that it was selected by viewing the Wallet menu again; the Auto Login box should now have a check mark.

13. Under the Wallet menu, select "Exit" to quit the Oracle Wallet Manager application.

14. Now you should have 2 files in the directory: ewallet.p12 and cwallet.sso. Both files must be in the same directory so the OHS can access the wallet.

15. Shut down OHS.

16. Modify your OHS ssl.conf (the default location should look something like /home/oracle/Middleware/Oracle_WT1/instances/instance1/config/OHS/ohs1/ssl.conf) so that the SSLWallet directive points to the directory where you saved both files, for example:

   SSLWallet "${ORACLE_INSTANCE}/config/${COMPONENT_TYPE}/${COMPONENT_NAME}/keystores/default"

17. Start OHS and access its HTTPS home page. Inspect the certificate presented by the browser and you should see your new certificate and the CA chain.



Attaching OWSM policies to JRF-based web services clients

I recently came across a question on one of our internal mailing lists where a person was under the impression that he would have to write code to propagate the identity when making a web service call using OWSM policies. My answer was something like: “depending on the type of your client you may have to write a very small piece of code to attach a policy, but you should not write any code at all to either retrieve the executing client identity or to do the propagation itself”. Fortunately, I had an unpublished article that applied 100% to his use case. And here it is now (a little bit revamped).

OWSM (Oracle Web Services Manager) is Oracle’s recommended method for securing SOAP web services. It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both the service and client sides. Such agents have their behavior driven by policies. OWSM ships with a number of policies that are adequate for the most common real-world scenarios.

Applying policies to services and clients is usually a straightforward task and can be accomplished in different ways. This is well described in the OWSM Administrator’s Guide. From the client perspective, the docs describe how to attach policies to SOA references, connection-based clients (typically ADF-based clients) and standard Java EE-based clients using either Enterprise Manager or wlst.

Oracle FMW components (like OWSM agents) are typically deployed on top of a thin software layer called JRF (Java Required Files), providing for the required interoperability with software stacks from different vendors.

This post is a step-by-step guide showing how to code a JRF-based client and attach OWSM policies to it at development time using Oracle JDeveloper.

This is a 3-step process:

a) Create proxy-supporting classes;
b) Use the right client programming model;
c) Attach OWSM policy to the client;

1. Creating proxy-supporting classes

Very straightforward.
Select a project and go to File -> New and select Web Services in the Business Tier category. On the right side, choose Web Service Proxy and follow the wizard.


2. Picking the right client programming model

For clients on 11g R1 PS2 and later, use oracle.webservices.WsMetaFactory. If your client is going to run on 11g R1 or 11g R1 PS1, use the deprecated ServiceDelegateImpl class shown below. All these classes are available in the jrf-client.jar file, located in the oracle_common/modules/oracle.jrf_11.1.1 folder of your middleware home installation.

2.1 Code sample using ServiceDelegateImpl (up to 11g PS1)

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Map;
import javax.xml.namespace.QName;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.ws.BindingProvider;
import oracle.webservices.ClientConstants;
import org.w3c.dom.Element;

// The endpoint URL and the target namespace arguments to the QName
// constructors are elided here; take them from your service's WSDL.
String endpoint = "";
URL wsdlURL = new URL(endpoint + "?WSDL");
ServiceDelegateImpl serviceDelegate = new ServiceDelegateImpl(wsdlURL, new QName("", "MyAppModuleService"), oracle.webservices.OracleService.class);
MyAppModuleService port = serviceDelegate.getPort(new QName("", "MyAppModuleServiceSoapHttpPort"), MyAppModuleService.class);

InputStream isClientPolicy = <Your_Client_Class_Name>.class.getResourceAsStream("client-policy.xml");
Map<String, Object> requestContext = ((BindingProvider) port).getRequestContext();
requestContext.put(ClientConstants.CLIENT_CONFIG, fileToElement(isClientPolicy));
requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpoint);

// Add other properties here. For identity switching add
// requestContext.put(SecurityConstants.ClientConstants.WSS_CSF_KEY, "<AppID_csf_key>");

// Utility method to convert an InputStream into an org.w3c.dom.Element
public static Element fileToElement(InputStream f) throws IOException, Exception {
    DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
    return builderFactory.newDocumentBuilder().parse(f).getDocumentElement();
}

In this sample, MyAppModuleService is the port interface generated by JDeveloper in the step before.

The parameters to the QName object constructor are the target namespace and the service/port name, both found in the web service wsdl.

2.2 Code sample using oracle.webservices.WsMetaFactory (11g PS2 +)

import java.io.InputStream;
import java.net.URL;
import java.util.Map;
import javax.xml.namespace.QName;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.Service;
import oracle.webservices.ClientConstants;
import oracle.webservices.ImplType;
import oracle.webservices.WsMetaFactory;
import org.w3c.dom.Element;

String endpoint = "http://localhost:7101/WebServiceSample2-WebService-context-root/GreetingPort";
URL serviceWsdl = new URL(endpoint + "?wsdl");
QName serviceQName = new QName("http://sample2.webservice/","GreetingService");
QName portQName = new QName("http://sample2.webservice/","GreetingPort");

Service proxyService = WsMetaFactory.newInstance(ImplType.JRF).createClientFactory().create(serviceWsdl, serviceQName);
Greeting port = proxyService.getPort(portQName, Greeting.class);

InputStream clientPolicyStream = Servlet1.class.getResourceAsStream("client-policy.xml");
Element clientConfigElem = this.fileToElement(clientPolicyStream);

Map<String,Object> requestContext = ((BindingProvider) port).getRequestContext();
requestContext.put(ClientConstants.CLIENT_CONFIG , clientConfigElem);



In this sample, Greeting is the port interface generated by JDeveloper in the step before.

The parameters to the QName object constructor are the target namespace and the service/port name, both found in the web service wsdl.

3. Attaching the OWSM policy

The OWSM policy is passed as an org.w3c.dom.Element to the requestContext Map. One way to come up with such Element is through an XML file that contains a reference to the actual OWSM client-side policy to be used.

InputStream clientPolicyStream = Servlet1.class.getResourceAsStream("client-policy.xml");
Element clientConfigElem = this.fileToElement(clientPolicyStream);
Map<String,Object> requestContext = ((BindingProvider) port).getRequestContext();
requestContext.put(ClientConstants.CLIENT_CONFIG , clientConfigElem);
Here are the contents of the XML file (client-policy.xml). You have to create a file like this and make it available on your application CLASSPATH. It references the OWSM policy to be given to the requestContext Map.
<?xml version="1.0" encoding="UTF-8"?>
<policy-reference uri="oracle/wss11_saml_token_client_policy" category="security"/>


For SAML-based identity propagation, use any of the SAML client policies.

In this case, the policy retrieves the user name Principal from the Java Subject who is running the client and adds it to a generated SAML token in the SOAP call header.

3.1. Switching Identities

To propagate a new identity rather than the one executing the client, use a username token-based policy. For example, the sample using oracle.webservices.WsMetaFactory can use oracle/wss_username_token_client_policy as the policy name in order to propagate the identity referred to by the servlet1-key csf key.

Map<String,Object> requestContext = ((BindingProvider) port).getRequestContext(); 
requestContext.put(SecurityConstants.ClientConstants.WSS_CSF_KEY, "servlet1-key");


“servlet1-key” must match a key entry in the domain-level credential store that holds the username/password pair required for the use case implementation. Here’s how you create a key for OWSM usage in the credential store using wlst:

wls:/offline> connect()
Please enter your username :weblogic
Please enter your password :
Please enter your server URL [t3://localhost:7001] :t3://localhost:7101
Connecting to t3://localhost:7101 with userid weblogic
Successfully connected to Admin Server 'DefaultServer' that belongs to domain 'DefaultDomain'.
Warning: An insecure protocol was used to connect to the server. To ensure
on-the-wire security, the SSL port or Admin port should be used instead.
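Continuing the wlst session above, the credential itself is created with the createCred command. The username, password and description below are placeholders; oracle.wsm.security is the credential map OWSM reads keys from:

```
wls:/DefaultDomain/serverConfig> createCred(map="oracle.wsm.security", key="servlet1-key", user="jdoe", password="welcome1", desc="key for identity switching")
```

The user/password pair stored under the key is the identity that the username token policy will propagate.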


In this case, the policy retrieves the username/password pair from the credential store and adds it to a generated username token in the outgoing SOAP header.

What if the client is a Java SE application?

So far so good, but what happens when the client runs on a Java SE environment?

a) How do you get a hold of the OWSM policy?

Add oracle_common/modules/oracle.wsm.policies_11.1.1/wsm-seed-policies.jar, available in your middleware home installation, to the client CLASSPATH. All policy files are in it.

b) How to deal with the credential store (for identity switching) ?

You need to supply the client with 2 files:

1 – cwallet.sso, which is a file-based credential store. In order to author cwallet.sso, I recommend that you use wlst’s createCred command. Yes, you’ll need a JRF-enabled Weblogic server to create it.

2 – jps-config.xml, passed through a Java option. In jps-config.xml, make sure there’s a credential store service instance available for the default context, pointing to the folder where cwallet.sso is located, as shown in this sample jps-config.xml:

<?xml version = '1.0' encoding = 'UTF-8'?>
<jpsConfig xmlns="" xmlns:xsi="" xsi:schemaLocation="">
  <property value="doasprivileged" name=""/>
  <serviceProvider class="" name="credstore.provider" type="CREDENTIAL_STORE">
    <description>Credential Store Service Provider</description>
  </serviceProvider>
  <serviceProvider class="" name="idstore.xml.provider" type="IDENTITY_STORE">
    <description>XML-based IdStore Provider</description>
  </serviceProvider>
  <serviceProvider class="" name="policystore.xml.provider" type="POLICY_STORE">
    <description>XML-based PolicyStore Provider</description>
  </serviceProvider>
  <serviceProvider class="" name="jaas.login.provider" type="LOGIN">
    <description>Login Module Service Provider</description>
  </serviceProvider>
  <serviceProvider class="" name="keystore.provider" type="KEY_STORE">
    <description>PKI Based Keystore Provider</description>
    <property value="owsm" name=""/>
  </serviceProvider>
  <serviceInstance provider="credstore.provider" name="credstore">
    <property value="./" name="location"/>
  </serviceInstance>
  <serviceInstance provider="idstore.xml.provider" name="idstore.xml">
    <property value="./jazn-data.xml" name="location"/>
    <property value="" name=""/>
  </serviceInstance>
  <serviceInstance provider="policystore.xml.provider" name="policystore.xml">
    <property value="./jazn-data.xml" name="location"/>
  </serviceInstance>
  <serviceInstance provider="jaas.login.provider" name="idstore.loginmodule">
    <property value="" name="loginModuleClassName"/>
    <property value="REQUIRED" name="jaas.login.controlFlag"/>
    <property value="true" name="debug"/>
    <property value="true" name="addAllRoles"/>
    <property value="false" name="remove.anonymous.role"/>
  </serviceInstance>
  <serviceInstance location="./default-keystore.jks" provider="keystore.provider" name="keystore">
    <description>Default JPS Keystore Service</description>
    <property value="JKS" name="keystore.type"/>
    <property value="" name=""/>
    <property value="keystore-csf-key" name="keystore.pass.csf.key"/>
    <property value="sign-csf-key" name="keystore.sig.csf.key"/>
    <property value="enc-csf-key" name="keystore.enc.csf.key"/>
  </serviceInstance>
  <jpsContexts default="default">
    <jpsContext name="default">
      <serviceInstanceRef ref="credstore"/>
      <serviceInstanceRef ref="idstore.xml"/>
      <serviceInstanceRef ref="policystore.xml"/>
      <serviceInstanceRef ref="idstore.loginmodule"/>
      <serviceInstanceRef ref="keystore"/>
    </jpsContext>
  </jpsContexts>
</jpsConfig>
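Putting the pieces together, a hypothetical Java SE launch might look like the following. The jar and class names are placeholders; oracle.security.jps.config is the standard OPSS system property that points the runtime at a jps-config.xml file:

```shell
# Hypothetical launch: cwallet.sso and jps-config.xml sit in the working
# directory, matching the relative "./" location in the sample above.
java -Doracle.security.jps.config=./jps-config.xml \
     -cp app.jar:jrf-client.jar:wsm-seed-policies.jar \
     com.example.MyWsClient
```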

Hope this helps some of you dear readers out there.

Achieve Faster WebLogic Authentications with Faster Group Membership Lookups

In my last post I wrote about the complicated and time-consuming process of determining all of a user’s group memberships when an LDAP namespace includes nested and dynamic group memberships. I wrote about how you can simplify and speed up getting a user’s group memberships through the use of a dynamic “member of” attribute, specifically the orclMemberOf attribute in OID.

Today I’d like to extend this discussion to WebLogic server authentications.

A Review of LDAP Authenticators and Groups

As I’ve written about in the past, as part of the authentication process, LDAP authenticators perform a search to determine what groups the user is a member of, which in turn is used to determine the group memberships and roles for the JAAS subject and principals.

By default, the WebLogic LDAP authenticators follow the long, time-consuming process I laid out in my last post for determining group memberships with nested groups. First, the authenticator searches all your groups to figure out which groups your user is directly a member of. Then, for each of those groups, it searches all your groups again to see which groups they in turn are members of.

It will continue to search your groups with the results of each subsequent search until you reach the configured maximum level of nested memberships that you want to pursue or all the searches come back empty.

Only it is actually quite a bit “worse” than that, because for some reason when the authenticator finds a group within a group, it doesn’t just use the DN of that group in the next search; it takes the name of that group based on the “group name attribute” setting in the authenticator and then does a search to find the group’s DN all over again. So, for every group found in a search of memberships for the user, there will be 2 new LDAP searches performed: one to get that group’s DN again and one to get the groups that group is a member of.
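As a rough illustration of the cost of the nested phase described above, here is a toy model (illustrative only, not WebLogic code; the group data is hypothetical), counting one LDAP round trip per lookup:

```python
# Toy model of nested group resolution: each call to ldap_search()
# stands for one LDAP round trip. Group data is hypothetical.
SEARCHES = {"count": 0}

# Parent groups that each group is directly a member of
MEMBER_OF = {"dev": ["eng"], "eng": ["staff"], "staff": []}

def ldap_search(description):
    SEARCHES["count"] += 1

def direct_groups_of(group):
    ldap_search("re-resolve DN of " + group)   # extra search per found group
    ldap_search("groups containing " + group)  # actual membership search
    return MEMBER_OF.get(group, [])

def resolve(user_direct_groups, max_depth):
    found = set(user_direct_groups)
    frontier = list(user_direct_groups)
    for _ in range(max_depth):
        nxt = []
        for g in frontier:
            for parent in direct_groups_of(g):
                if parent not in found:
                    found.add(parent)
                    nxt.append(parent)
        if not nxt:
            break
        frontier = nxt
    return found

groups = resolve(["dev"], max_depth=5)
print(sorted(groups), SEARCHES["count"])
# ['dev', 'eng', 'staff'] 6
```

Even this tiny three-level chain costs six searches; with hundreds of groups and deeper nesting, the round trips add up quickly.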

In my post on tuning LDAP authenticators, I wrote about the importance of tuning the settings governing group membership searches in the authenticator and specifically about limiting the depth of the searches for nested group membership.

Speeding Things Up

Today, I’d like to cover how to dramatically speed up this process by letting the directory do all the work for you. This is achieved by configuring the authenticator to take advantage of the dynamic ‘member of’ (orclMemberOf in the case of OID) attribute that I wrote about in my last post.

The setting that enables this behavior is in the Dynamic Groups section of the provider-specific configuration for LDAP authenticators and is called User Dynamic Group DN Attribute. When configured, the LDAP authenticator will skip all searches (for both direct and nested memberships) of dynamic groups. Instead, it will add roles (group principals) to the user for every group returned by the LDAP directory (OID) in the value of the specified attribute.

Here is what you need to know about this setting:

1) When configured, the authenticator will add roles (group principals) to the user for every group returned by the LDAP directory (OID) in the value of the specified attribute.

2) Despite the fact that the setting is part of the Dynamic Groups section of the authenticator configuration, the authenticator will add roles for every group returned as part of the value of the attribute, regardless of whether that group is a static or dynamic group.

3) That being said, the authenticator will still perform a search of memberships for all static groups even when the User Dynamic Group DN Attribute is defined. It will not, however, perform a membership search of dynamic groups; instead it assumes all dynamic group memberships are captured by the attribute value.

Note especially that the authenticator will still perform a full search of nested static groups even when User Dynamic Group DN Attribute is defined; even though the orclMemberOf attribute in OID includes static group memberships.
Putting It All Together
So, to dramatically improve your WebLogic authentication performance with nested groups I recommend that you configure your authenticators as follows:

1) Enter the appropriate LDAP attribute name for the value of User Dynamic Group DN Attribute based on the type of directory you are authenticating against. Appropriate values include orclMemberOf for OID, memberof for DSEE, and ismemberof for AD.

2) Set the value of GroupMembershipSearching to limited. The default value is unlimited.

3) Set the value of Max Group Membership Search Level to 0. This will stop the authenticator from performing searches for nested group memberships and limit it to a single search to find the user’s direct group memberships. Again, we will be relying on the value of the attribute specified in User Dynamic Group DN Attribute to give us the nested memberships.

4) If you want to eliminate even the direct group membership search, you can point the authenticator at an empty Group Base DN. Note that the Group Base DN must exist or you’ll get an error and a failed authentication; however, it can be empty. So, you can create cn=fakegroupbase as a sibling of cn=Groups,dc=example,dc=com.

5) If you recall, in my previous post I mentioned that using the orclMemberOf attribute can result in duplicate listings, since nested memberships are returned multiple times: once for each group the user belongs to that is a member of another given group. Because of this, you’ll probably want to check the Ignore Duplicate Membership option in the authenticator.

Below is a screen shot of an OID authenticator configured with these options:

LibOVD: when and how

LibOVD, introduced in FMW, is a Java library providing virtualization capabilities over LDAP authentication providers in Oracle Fusion Middleware. It is delivered as part of OPSS (Oracle Platform Security Services), which is available as part of the portability layer (also known as JRF – Java Required Files). In other words, if you are a JDeveloper, WebCenter, SOA or IAM customer, you already have libOVD.

LibOVD provides limited virtualization capabilities when compared to its big brother OVD (Oracle Virtual Directory), which is a full-blown server implementing the LDAP protocol with far more advanced virtualization features, including OOTB support for LDAP and database backend servers, advanced configuration for adapters, and out-of-the-box plug-ins, as well as a plug-in programming model allowing for almost limitless possibilities in transforming data and connecting to external data sources.

1. When


LibOVD is primarily designed to work as an embedded component for FMW components that need to look up users and groups across distinct identity providers. If you had a chance to look at this post, you already know the User/Role API can take only one authentication provider into account.

Take SOA’s Human Workflow component, for instance. Customers frequently have an external identity store, like OID or Active Directory, holding the application end users and related enterprise groups. But they also often want to keep WebLogic’s embedded LDAP server for administration accounts, like the weblogic user. Or they simply have an LDAP server in the US and another one in Brazil and want to bring all those users together. Using the User/Role API alone is not enough.

That does not mean libOVD can be used only by FMW components. Your custom applications can also employ libOVD, and that comes for free once you enable libOVD for a given domain. However, do not expect any of the features only available in OVD. A common mistake is expecting libOVD to work with a database authenticator: libOVD is only for LDAP authenticators.

Another use case for libOVD is known as split profile, where information about the same identity exists in more than one LDAP-based identity store and your applications need a consolidated view.

2. How


LibOVD is activated when you set the property virtualize=true for the identity store provider in jps-config.xml:

<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
  <property name="idstore.config.provider" value=""/>
  <property name="CONNECTION_POOL_CLASS" value=""/>
  <property name="virtualize" value="true"/>
  <property name="OPTIMIZE_SEARCH" value="true"/>
</serviceInstance>

It is possible to hand-edit jps-config.xml, but I recommend that you use Enterprise Manager to set it up, because the smallest mistake in jps-config.xml can leave the WLS domain in a state where it cannot start.

Note: Unless your application makes explicit usage of a different JPS Context, this is a domain-wide setup, impacting all applications deployed in that domain, from both authentication and user/group lookup perspectives.

Enabling libOVD using Enterprise Manager:

Navigate to the domain Security Provider Configuration screen and click the Configure button, as shown:


Then use the Add button to add the property.


When you enable the virtualize flag, a new folder structure, along with some files, is created under the $DOMAIN_HOME/config/fmwconfig folder.


Notice the default folder name: it refers to the default JPS context in jps-config.xml, which in turn refers to the idstore service instance for which libOVD has been configured.

These are familiar files to readers used to OVD. adapters.os_xml, for instance, lists the configured adapters for libOVD. An adapter is created for each authenticator listed in Weblogic’s config.xml.

3. Be Aware


Within a single adapter, in order to search across both users and groups search bases, libOVD sets the adapter’s root to be the common base between them. For example, let’s say users and groups search bases are defined as cn=users,dc=us,dc=oracle,dc=com and cn=groups,dc=us,dc=oracle,dc=com, respectively. In this case, the adapter’s root is set to dc=us,dc=oracle,dc=com.
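That common-root computation can be sketched as follows (illustrative only, not libOVD code):

```python
def common_dn_suffix(dn_a: str, dn_b: str) -> str:
    """Longest common suffix of two LDAP DNs, compared RDN by RDN."""
    a = [rdn.strip() for rdn in dn_a.split(",")]
    b = [rdn.strip() for rdn in dn_b.split(",")]
    common = []
    # Walk from the right, i.e. from the directory root, toward the leaf
    for x, y in zip(reversed(a), reversed(b)):
        if x.lower() != y.lower():
            break
        common.append(x)
    return ",".join(reversed(common))

users_base = "cn=users,dc=us,dc=oracle,dc=com"
groups_base = "cn=groups,dc=us,dc=oracle,dc=com"
print(common_dn_suffix(users_base, groups_base))
# dc=us,dc=oracle,dc=com
```

The wider that computed root is relative to your actual users and groups containers, the more of the tree each search has to cover.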

Such a configuration may cause performance overhead if customers happen to have more users and groups distributed across other nodes that are also children of the common root but are not necessarily required by the applications running on WebLogic: wasted LDAP queries.

Notice the undocumented property OPTIMIZE_SEARCH in the jps-config.xml snippet presented before. It is the solution to the problem just described because it forces libOVD to search only within the users and groups search bases defined in the authenticator providers. No searches are performed elsewhere.

In order to take advantage of OPTIMIZE_SEARCH, make sure to get the appropriate patch for the corresponding release of FMW:

  • FMW 14693648
  • FMW 14919780
  • FMW 14698340

4. Managing libOVD


libOVD can be managed via wlst. When connected, type

> help('OracleLibOVDConfig')

and check the available commands.

5. Debugging libOVD


To debug libOVD, create a logger for oracle.ods.virtualization package in your server’s logging.xml file. Here’s a snippet.

<logger name='oracle.ods.virtualization' level='TRACE:32'>
  <handler name='odl-handler'/>
</logger>

My “enable debug logging in OAM” WLST script

I was on the phone with someone earlier today and mentioned in passing that I only need to run a simple script to turn debug logging on and off in my little test environment. The silence on the other end of the line told me either he didn’t believe me …

SSL offloading and WebLogic server redux – client x.509 certificates

I recently had to revisit the subject of SSL offloading and WebLogic server to include the ability to do client certificate authentication. I was specifically doing this for use with Oracle Access Manager 11g, but the configuration steps are identical …

Why do I need an Authenticator when I have an Identity Asserter?

Another common question on the internal mailing list: Why do we need an OID authenticator when I have the OAM Asserter enabled? The user has already been authenticated when the request gets to WebLogic. The short answer is that all an Identity Assert…

The “reassociation” business

Since Fusion Middleware, OPSS (Oracle Platform Security Services) supports 3 types of security stores: file, OID (Oracle Internet Directory) and Oracle database. When a WebLogic server domain is first created, OPSS is “associated” with a file-based security store by default, which is ok for development purposes. But for production, that is not recommended (please check the Multiple Nodes Servers Environments section in the OPSS docs). It would be ok if your whole environment were a single WebLogic domain with only one server on a single machine, but 99.99% of the cases are not like that. Usually, a SOA or WebCenter environment is composed of multiple servers in clusters spread across different machines, and a file-based security store is not a scalable option. In these cases, you should look at OID or the database. Fusion Applications, a gigantic set of apps, adopts OID as the security store.

The OPSS security store is a composite of policies, credentials, keys and audit services. Notice that I am leaving the identity store service out. OPSS delegates the identity store service to the identity providers configured in WebLogic server.

As a side note, OPSS is not a product, but a set of security services used by Fusion Middleware. If you’re a Fusion Middleware user, trying to understand OPSS is a great idea.

This post is about the nitty-gritty details of configuring (or reassociating) a Weblogic server domain (or multiple domains) to a different type of security store. That’s where the term “reassociation” comes from.

The information presented here is a small subset, but complements and sometimes overlaps “Configuring OPSS Security Store” documentation (reading is strongly recommended).
Before going any further on reassociation, let me talk a bit about an important character: jps-config.xml.


This is the OPSS file that describes all its services. It is located through a system property, which is set in a startup script in a standard JRF (Java Required Files) domain. By default, the property points to ${DOMAIN_HOME}/config/fmwconfig/jps-config.xml and is defined in the EXTRA_JAVA_PROPERTIES variable. It is NOT a good idea to change it, since jps-config.xml holds several relative references to other files.

Whenever you create a BPM, SOA or WebCenter domain via the script, the JRF template is automatically selected as a dependency.

That said, jps-config.xml is a domain-wide artifact. There’s no such thing as a server-level or application-level jps-config.xml. However, jps-config.xml provides the concept of contexts, which can be explored in case you want to hook up different applications to different services, but that would be a topic for another post.

When a reassociation operation is performed, configuration changes are made to jps-config.xml. In many cases, a corrupted jps-config.xml can leave your domain in a state where it cannot start. Therefore, be very diligent and careful when making changes to it. Do NOT perform manual changes; instead, use either Enterprise Manager or wlst.

The Policy Store

The policy store holds all security policies used by applications deployed on a Fusion Middleware instance. These include grants given to principals (users, groups, application roles) as well as to code.

For instance, if you look at the OOTB policy store of a BPM domain, you would see policies scoped into 4 applications (OracleBPMProcessRolesApp, OracleBPMComposerRolesApp, b2bui and soainfra) as well as a bunch of code-source policies (which are applicable to code in any application deployed in the domain).

In jps-config.xml, the “default” context defines the services that are by default 🙂 picked up by Fusion Middleware applications.

<jpsContext name="default">
 <serviceInstanceRef ref="credstore"/>
 <serviceInstanceRef ref="keystore"/>
 <serviceInstanceRef ref="policystore.xml"/>
 <serviceInstanceRef ref="audit"/>
 <serviceInstanceRef ref="idstore.ldap"/>
 <serviceInstanceRef ref="trust"/>
 <serviceInstanceRef ref="pdp.service"/>
</jpsContext>

If you follow policystore.xml up in the file, you should see a serviceInstance with that name:

<serviceInstance name="policystore.xml" provider="policystore.xml.provider" location="./system-jazn-data.xml">
   <description>File Based Policy Store Service Instance</description>
</serviceInstance>

which brings us to the OOTB file-based policy store, system-jazn-data.xml. Later in this post, we’re going to change this.

The Credential Store


The credential store securely holds credentials to be used by Fusion Middleware applications when connecting to other systems. OWSM agents, for instance, use the credential store service when a WSS username token needs to be attached to an outgoing SOAP message. Another heavy user is ADF (Application Development Framework), which uses it to store credentials required to connect to external systems. OOTB, the credential store is materialized as the cwallet.sso file pointed to by the credstore serviceInstance (notice the file name itself is not specified):

<serviceInstance location="./" provider="credstoressp" name="credstore">
   <description>File Based Credential Store Service Instance</description>
</serviceInstance>

Notice this credential store is not the same as the bootstrap credential store, described in “Bootstrap cwallet.sso” section down below.

Reassociating to OID


There are two options to do reassociation: Enterprise Manager and wlst. They are very well covered in the OPSS documentation, but let me still explore them here a little bit more. I hope to add some details that are not very clear in the docs.

First and foremost, make sure to satisfy a requirement: create a root node in the LDAP server that is going to hold our security store tree. Forgetting this is a very common mistake.

Enterprise Manager

Navigate to the drop-down menu for the Weblogic Domain and choose Security –> Security Provider Configuration


Notice that both the policy store and the credential store are of the same store type: file. That tells us something: the OPSS Security store can only be persisted in one physical store type for production. Persisting credentials to a different store from keys or policies is not supported or recommended.

By clicking “Change Store Type” button, you get:


The LDAP Server Details properties are straightforward. Those under Root Node Details deserve some comments.

Root DN: the node you’ve manually created in OID before.

Create New Domain: no, this is not going to create a new Weblogic domain. It determines whether or not the new security store (OID) is going to be bootstrapped with data from the source security store (in this case, the file-based policy, credential and keystore). Unchecking the box is relevant when you want more than one Weblogic domain sharing the security store. This flag corresponds to the join parameter in the wlst command, shown below.

Domain Name: Enterprise Manager uses the Weblogic domain name as a convenience, but it can actually be any arbitrary string. This name is going to manifest itself as a container node, under which OPSS security data are migrated. Several Weblogic domains can bind to the same container, but a single Weblogic domain cannot bind to different containers. I am talking about this kind of deployment:


Within OID, cn=SecurityStore is the Root DN. cn=JPSContext is implicitly created upon the first reassociation.

Notice there’s no reference to a particular Weblogic server. That essentially means that you bind one domain as a whole to a container node. Within a container, specific applications bind to application stripes (nodes cn=AppsA, cn=AppsB, cn=AppsC, cn=AppsD). Although not shown in the picture, any number of applications can bind to a given stripe, but the same application can bind to one and only one stripe.
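To make those cardinality rules concrete, here is a toy Python model. It is purely illustrative (not an OPSS API): many domains per container and many applications per stripe are fine, but a domain or an application never binds twice.

```python
class SecurityStoreLayout:
    """Toy model of the container/stripe binding rules described above."""

    def __init__(self):
        self.domain_to_container = {}
        self.app_to_stripe = {}

    def bind_domain(self, domain, container):
        # many domains may share a container, but a domain binds only once
        existing = self.domain_to_container.get(domain)
        if existing is not None and existing != container:
            raise ValueError(
                "domain %r is already bound to container %r" % (domain, existing))
        self.domain_to_container[domain] = container

    def bind_app(self, app, stripe):
        # many applications may share a stripe, but an app binds only once
        existing = self.app_to_stripe.get(app)
        if existing is not None and existing != stripe:
            raise ValueError(
                "application %r is already bound to stripe %r" % (app, existing))
        self.app_to_stripe[app] = stripe
```

Re-binding to the same container (or stripe) is a harmless no-op; binding to a different one is the configuration error OPSS will not let you make.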


Now that we understand the behavior of those reassociation properties, the wlst command is straightforward. It is an online command, which means you must connect to the Admin Server to execute it.

> reassociateSecurityStore(domain="farm1", admin="cn=orcladmin",
password="welcome1", ldapurl="ldap://localhost:3060", servertype="OID",
jpsroot="cn=SecurityStore")



domain corresponds to the policy container that the WebLogic domain will bind to and it does NOT need to be named as the Weblogic domain. Notice the format: you do NOT prepend “cn=” to the value.

servertype is the security store type. Supported values are "OID" and "DB_ORACLE".

jpsroot corresponds to the Root DN node in OID, that, again, has to be manually created upfront.

join is optional, but of utmost importance. It corresponds to the “Create New Domain” checkbox in Enterprise Manager. The default value (if unspecified) is “false”, which means a container node specified by the domain parameter is going to be created in OID and the security artifacts (policies, credentials, keys) migrated. As a pre-requisite, OPSS first automatically seeds an LDAP schema in OID. If there’s a container with the same name already created, you’re likely going to be presented with an error. If join is set to “true”, any security artifacts in the Weblogic domain are left alone and not migrated to the security store.
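The decision table for join can be summarized with a small simulation. This is illustrative Python only, not the real wlst implementation:

```python
def reassociate(ldap_containers, domain, join=False, artifacts=None):
    """Simulate the container-creation/migration decision for join.

    ldap_containers: dict mapping container name -> list of migrated artifacts,
    standing in for the LDAP server's state.
    """
    if not join:
        if domain in ldap_containers:
            # a same-named container already exists: error out
            raise RuntimeError("container %r already exists" % domain)
        # create the container and migrate local artifacts into it
        ldap_containers[domain] = list(artifacts or [])
    else:
        # join an existing container; local artifacts are left alone
        ldap_containers.setdefault(domain, [])
    return ldap_containers[domain]
```

The second domain joining a shared container would call this with join=True, and nothing it holds locally is pushed to the store.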

Details on admin and password parameters in the “Bootstrap cwallet.sso” section below.

Bootstrap cwallet.sso

When “reassociated” to OID, Weblogic needs to know which credentials to use when connecting to the server. Such credentials are, by default, kept in the location pointed to by the following jpsContext in jps-config.xml:

<jpsContext name="bootstrap_credstore_context">
<serviceInstanceRef ref="bootstrap.credstore"/>
</jpsContext>


Looking up the file, you find:

<serviceInstance location="./bootstrap" provider="credstoressp" name="bootstrap.credstore">
<property value="./bootstrap" name="location"/>
</serviceInstance>


That implicitly means the cwallet.sso file in the ./bootstrap folder. This is an encrypted file to which only authorized code is allowed access.

Important note: make sure the credential used has permission to write to OID if you expect to allow changes to your policies through the Weblogic domain. That is true, I’d say, always, or in 99.99% of the cases.

What if you need to change these credentials later? wlst command to the rescue (in wlst, type help('opss') for a list of OPSS-related commands; you will see modifyBootStrapCredential):

> modifyBootStrapCredential(jpsConfigFile='<filepath>',username='<username>', password='<password>')



jpsConfigFile = path of the valid jps config file from which the context is read
username = distinguished name of the user.
password = the password to be reset.


> modifyBootStrapCredential(jpsConfigFile='/opt/wls/oracle/middleware/user_projects/domains/soa_domain/config/fmwconfig/jps-config.xml',username='cn=orcladmin', password='welcome1')


 Note: modifyBootStrapCredential is an offline command.

Reassociating to Oracle database

My colleague Kavitha Srinivasan already describes the process along with some benign error messages when reassociating to an Oracle database. I will add a link to her post once she makes it public. Here I simply want to mention that the same principles discussed in the OID section apply.

For the reader’s convenience, I repeat here the two prerequisites for Oracle database reassociation:

1) An OPSS schema needs to be created in the database. This is done using the RCU (Repository Creation Utility) tool. Here’s the RCU screenshot (notice the Metadata Services schema is automatically selected as a dependency once you select Oracle Platform Security Services):


2) A data source needs to be created in Weblogic. It must be non-XA with support for global transactions disabled. Thanks Kavitha for such an important detail.

Enterprise Manager

Follow the same path as in the OID reassociation up to this screen:


Notice the Root DN parameter value. Unlike the OID case, it does NOT need to be created beforehand, and it also does NOT need to follow an LDAP DN format (starting with “cn=”). This is just a convention.

Here’s the typical output of a successful reassociation:



Prefer wlst? Then check the reassociateSecurityStore command.

For your convenience, here’s an example:

> reassociateSecurityStore(domain="farm2", servertype="DB_ORACLE", datasourcename="jndi/OPSS_DS", jpsroot="cn=SecurityStore")


Inform the optional admin and password parameters only if your data source itself is protected. Their values are going to be stored in the bootstrap credential store. Notice they do NOT correspond to the database credentials used by the data source (these are actually defined in the data source itself in Weblogic). I guess the join parameter (also optional) is understood by now. If not, take a look at the “Reassociating to OID” section above.

After Reassociation

This is what jps-config.xml looks like after reassociation to Oracle DB:

<jpsContext name="default">
<serviceInstanceRef ref="credstore.db"/>
<serviceInstanceRef ref="keystore.db"/>
<serviceInstanceRef ref="policystore.db"/>
<serviceInstanceRef ref="audit"/>
<serviceInstanceRef ref="idstore.ldap"/>
<serviceInstanceRef ref="trust"/>
<serviceInstanceRef ref="pdp.service"/>
</jpsContext>


Look in jps-config.xml for the credstore.db, policystore.db and keystore.db serviceInstances. They all refer to the database via the props.db.1 property set. For example, policystore.db looks like:

<serviceInstance provider="policystore.provider" name="policystore.db">
<property value="DB_ORACLE" name="policystore.type"/>
<propertySetRef ref="props.db.1"/>
</serviceInstance>

And props.db.1:

<propertySet name="props.db.1">
<property value="cn=soa_domain" name="oracle.security.jps.farm.name"/>
<property value="DB_ORACLE" name="server.type"/>
<property value="cn=policystore" name="oracle.security.jps.ldap.root.name"/>
<property value="jndi/OPSS_DS" name="datasource.jndi.name"/>
</propertySet>

Does the value of datasource.jndi.name (jndi/OPSS_DS) look familiar?
Important Note: You may be tempted to get rid of some files after reassociation, like system-jazn-data.xml, for example. Not a good idea. It’s true that it is completely out of the runtime picture, but there are some implications to the JMX framework used by Enterprise Manager, and you may want the file later in case you need to revert your security store to file again.

Reassociating to file

As you may have noticed, “file” is not a supported value for the servertype parameter in the reassociateSecurityStore command. Indeed, the only way to switch back to a file-based security store is changing jps-config.xml manually, which is NOT a good idea. Therefore, the best thing you can do is to back up jps-config.xml before running any reassociation, so you can revert to it later if needed.
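A backup can be as simple as a timestamped copy. Here is a sketch in Python (it assumes a standard FMW domain layout; adapt the path to your install):

```python
import os
import shutil
import time

def backup_jps_config(domain_home):
    """Copy jps-config.xml aside with a timestamp suffix so a file-based
    configuration can be restored if a reassociation goes wrong."""
    src = os.path.join(domain_home, "config", "fmwconfig", "jps-config.xml")
    dst = "%s.%s.bak" % (src, time.strftime("%Y%m%d%H%M%S"))
    shutil.copy2(src, dst)   # copy2 preserves timestamps/permissions
    return dst
```

Run it (or the equivalent cp command) before every reassociateSecurityStore call.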

What if you made changes to the security store (creating new policies, for example) in OID or DB mode and want those policies in the file-based policy store? Check wlst’s migrateSecurityStore command (help(‘migrateSecurityStore’)), which is actually a good topic for a future post.

Enjoy your reassociations! 🙂

Virtual Users in OIF, Weblogic and OWSM

One of the main strengths of SAML is the ability to communicate identity information across security domains that do not necessarily share the same user base. In other words, the authenticated user in one security domain does not necessarily exist in the target security domain providing the service.

This concept is supported in all major Oracle products that consume SAML tokens: OIF, Weblogic Server and OWSM. The sole purpose of this post is to show how to configure it in these products. Setting up SAML services as a whole involves more than what’s shown here, and I recommend the official product documentation for detailed steps.

I hope this can be helpful to someone out there.

OIF (Oracle Identity Federation)

OIF enables federated single sign on for users behind a web browser.

It calls the aforementioned concept “Transient Federation” and enables it via a checkbox (one that should be unchecked) in Enterprise Manager’s OIF Console. Notice OIF also supports the concept of a “Mapped Federation”, where the incoming identity is mapped to some generic user in the local identity store. But here I am talking about the case where there’s no mapping: the user in the SAML assertion is simply trusted.

In order to enable a Transient Federation in OIF, simply make sure “Map Assertion to User Account” checkbox is unchecked in the Service Provider Common tab.


Weblogic Server

Weblogic server provides SAML services that can be leveraged by Web SSO as well web services.
Weblogic calls the concept Virtual Users and implements it in its SAML2IdentityAsserter along with the SAMLAuthenticator.

First, you need to enable your server as a SAML Service Provider. Notice this is done at the server level. Go to Environment –> servers –> <Pick server from list> to get into the screen below:


Then add a SAML2IdentityAsserter to the authentication providers list and add an Identity Provider (which does not need to be another Weblogic server) Partner to the SAML2IdentityAsserter. Notice that you can add either a Web SSO partner provider or a Web service partner provider. In the case of Web SSO, Weblogic Console will ask you for the partner metadata file.


In SAML2IdentityAsserter’s Management tab, click the Identity Provider partner link just created and check the “Virtual User” check box:


You also need to add a SAMLAuthenticator provider after the SAML2IdentityAsserter and set its control flag to SUFFICIENT. Also make sure to set the control flag of subsequent authentication providers to SUFFICIENT.


The end result is that the SAMLAuthenticator will instantiate a Java Subject populated with a user principal taken from the SAML Subject asserted by the SAML2IdentityAsserter.

OWSM (Oracle Web Services Manager)

OWSM protects SOAP web services via agents connected to web services providers as well as web services clients. The agent behavior is determined by the policies that get attached to the provider and the client. A client policy typically adds a token to the outgoing SOAP message while the server policy processes it, usually by authenticating and/or authorizing the user (in the case of a security policy).

First of all, a SAML-based security policy needs to be attached to the web service provider. The policy will at some point try to authenticate the subject in the incoming SAML assertion.

OWSM delegates authentication to OPSS (Oracle Platform Security Services). When asserting the SAML Subject to the container, OWSM leverages the SAML Login Module, defined in jps-config.xml and configured via EM (Enterprise Manager).

In order to enable virtual users in this scenario, set the virtual-user property to true for the SAML (or SAML2) Login Module. In EM, click the Weblogic Domain drop-down menu, pick Security –> Security Provider Configuration, click the login module row and then the Edit button. Scroll down to the bottom of the page and then add the property mentioned above to the list of Custom Properties.


In order to propagate the change, restart the Admin server as well as the managed server running the web service.

Once this is done, whether or not the SAML Subject exists in the identity store used by OPSS is irrelevant. It is going to be asserted, and a Java Subject containing a user principal is going to be instantiated in the container.

5 minutes or less: User/Role API and SSL

This short post follows up on Couple of things you need to know about the User/Role API. Now imagine that your LDAP identity provider is SSL enabled in 1-way mode (the server authenticates to the client, but the client does not authenticate to the server).

Now you need to tell Weblogic server how to validate the LDAP server certificate. This is accomplished by adding the LDAP server CA certificate to the configured Weblogic trust store. If we’re talking about a self-signed certificate, simply add the certificate itself to the trust store. And there are a few options for the trust key store: Command Line, Custom Trust, Java Standard Trust or the OOTB Demo Trust. So far, so good. By adding the certificate to one of these options, Weblogic is all good to talk to the identity provider in SSL mode.

However, the User/Role API is not directly tied to Weblogic, so don’t expect it to pick up whatever is configured for the server. By default, as a standard Java-based client, the User/Role API looks at the standard Java $JDK_HOME/jre/lib/security/cacerts file, unless you tell it to look elsewhere by setting the standard JSSE system properties javax.net.ssl.trustStore=<path_to_trust_store_file> and javax.net.ssl.trustStorePassword=<trust_store_password>.

Relying on the original cacerts file may be dangerous in case you upgrade your JDK. If you need to leverage the existing certificates there, make a copy of the file and use the copy. Then simply tell the User/Role API where to read it from using the properties mentioned above.
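The copy-then-point approach might look like this (a sketch in Python; the paths and the JVM flag assembly are illustrative, and "changeit" is the well-known default cacerts password, which you should change along with the copy's password):

```python
import os
import shutil

def prepare_truststore(jdk_home, dest):
    """Copy the JDK's default cacerts to an application-owned location so a
    JDK upgrade doesn't silently swap the trust anchors, then return the
    standard JSSE system property flags pointing at the copy."""
    src = os.path.join(jdk_home, "jre", "lib", "security", "cacerts")
    shutil.copy2(src, dest)
    return ["-Djavax.net.ssl.trustStore=%s" % dest,
            "-Djavax.net.ssl.trustStorePassword=changeit"]
```

Append the returned flags to the JVM arguments of whatever process hosts the User/Role API client, and import your LDAP CA certificate into the copy with keytool.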

SSL/TLS fun with client certificate authentication

Today’s adventure was a question about limiting SSL connections.

The customer’s problem statement was along the lines of

I want to make SSL connections, using client certificate auth (i.e. two-way ssl) and have the server accept only one certificate. Can I put the one cert I want in the “trusted CA list” to do that?

I thought the answer was “no, that’s not how you do that” but I wasn’t 100% sure so off I went on a research project…

I checked the relevant RFC (2246 for TLS) and found this in section 7.4.4 which talks about the server asking the client for a certificate:

A list of the distinguished names of acceptable certificate authorities. These distinguished names may specify a desired distinguished name for a root CA or for a subordinate CA; thus, this message can be used both to describe known roots and a desired authorization space.

I think that means that the cert the user uses has to be issued by one of the certificate authorities in that list, not that the cert is in that list. So I was pretty sure that the answer to their question was “no”, but I wanted to be absolutely sure, so that means testing.

First up – generating certificates. I used my dead simple CA script to generate the certificate for the host and another certificate I’ll use as the client certificate:

$ ./
$ ./ tester

I had Apache and mod_ssl already installed (use yum to install mod_ssl and you’ll get both installed and configured).

Then I set Apache to use the ca’s certificate as the trust store and to demand a certificate from the client:

# Certificate Authority (CA):
# Set the CA certificate verification path where to find CA
# certificates for client authentication or alternatively one
# huge file containing all of them (file must be PEM encoded)
#SSLCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
SSLCACertificateFile /home/ec2-user/ca/ca.crt

SSLVerifyClient require

Then I tested with openssl directly and was able to connect without a problem:

[ec2-user@ssltest ~]$ openssl s_client -CAfile ~/ca/ca.crt -cert ~/ca/tester.crt -key ~/ca/tester.key -connect
depth=1 C = US, ST = Massachusetts, L = Boston, O = Oracle, OU = A-Team, CN = My Cert Authority, emailAddress =
verify return:1
depth=0 C = US, ST = Massachusetts, L = Boston, O = Oracle, OU = A-Team, CN =, emailAddress =
verify return:1
Acceptable client certificate CA names
/C=US/ST=Massachusetts/L=Boston/O=Oracle/OU=A-Team/CN=My Cert Authority/

Then I tried swapping the SSLCACertificateFile to just be the “tester” certificate:

SSLCACertificateFile /home/ec2-user/ca/tester.crt

and ran the same command again:

[ec2-user@ssltest ~]$ openssl s_client -CAfile ~/ca/ca.crt -cert ~/ca/tester.crt -key ~/ca/tester.key -connect
depth=1 C = US, ST = Massachusetts, L = Boston, O = Oracle, OU = A-Team, CN = My Cert Authority, emailAddress =
verify return:1
depth=0 C = US, ST = Massachusetts, L = Boston, O = Oracle, OU = A-Team, CN =, emailAddress =
verify return:1
3079509740:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:s3_pkt.c:1193:SSL alert number 48
3079509740:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:184:
Acceptable client certificate CA names /C=US/ST=Massachusetts/L=Boston/O=Oracle/OU=A-Team/CN=tester/

and the openssl client dropped the connection.

So openssl either doesn’t know what to do when the “Acceptable client certificate CA names” matches the certificate or that’s simply not allowed.

I also tested with a little perl script:


#!/usr/bin/perl
use LWP::UserAgent;

# trust our CA and present the "tester" client certificate
$ENV{HTTPS_CA_FILE}     = "$ENV{HOME}/ca/ca.crt";
$ENV{HTTPS_PKCS12_FILE} = "$ENV{HOME}/ca/tester.p12";

$ua = LWP::UserAgent->new(ssl_opts => { verify_hostname => 1 });
$response = $ua->get("");    # put the server URL here

if ($response->is_success) {
    print $response->content;
} else {
    print STDERR $response->status_line, "\n";
}
So how are you supposed to do that?

If you’re using Apache then you just configure Apache to accept only the one certificate you want. This is from the sample httpd.conf (actually ssl.conf, but that’s included in httpd.conf):

# Access Control:
# With SSLRequire you can do per-directory access control based
# on arbitrary complex boolean expressions containing server
# variable checks and other lookup directives. The syntax is a
# mixture between C and Perl. See the mod_ssl documentation
# for more details.
#SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
#            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
#            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
#            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
#            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
#           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/

Updated dead simple certificate authority

Back in April I posted a shell script I wrote to implement a dead simple Certificate Authority for testing purposes. I recently revisited that script because I needed JKS files in addition to the PEM format files it created.

Without further ado my new and improved script is available right after the break.


#!/bin/sh
# a very simple cert authority
# now with JKS support!
# Copyright 2011 Oracle

# License agreement:
# ------------------
# This script is intended as a simple sample and/or for my own
# purposes. If you get any benefit from it then that's GREAT but there
# are NO warranties and NO support. If this script burns down your
# house, chases your dog away, kills your houseplants and spoils your
# milk please don't say I didn't warn you.

# You should probably use a better passphrase than this

PASSPHRASE=password

baseAnswers() {
    echo US
    echo Massachusetts
    echo Boston
    echo Oracle
    echo A-Team
    echo $1
    echo root@`hostname`
}

answers() {
    baseAnswers $1
    echo ''
    echo ''
}

# No need to edit past here

createderfiles() {
    echo Creating .der files for $1

    openssl pkcs8 -topk8 -nocrypt -in $1.key -inform PEM -out $1.key.der -outform DER
    openssl x509 -in $1.crt -inform PEM -out $1.crt.der -outform DER
}

# better safe than sorry
umask 077

# these next two lines figure out where the script is on disk
SCRIPTPATH=`readlink -f $0`
SCRIPTDIR=`dirname $SCRIPTPATH`

# find keytool in our path
KEYTOOL=`which keytool`
if [ "$KEYTOOL" == "" ] ; then
    echo "keytool command not found. Update your path if you want this"
    echo "tool to create JKS files as well as PEM format files"
fi

# if you're running this from somewhere else...
if [ "$SCRIPTDIR" != "." -a "$SCRIPTDIR" != "$PWD" ]; then
    # then CD to that directory
    cd "$SCRIPTDIR"
fi

if [ "$KEYTOOL" == "" ]; then
    echo "Output files (.crt, .key) will be placed in $PWD"
else
    echo "Output files (.crt, .key, .der and .jks) will be placed in $PWD"
fi

if [ ! -e ca.crt -o ! -e ca.key ]; then
    rm -f ca.crt ca.key ca.p12
    echo "Creating cert authority key & certificate"
    baseAnswers "My Cert Authority" | openssl req -newkey rsa:1024 -keyout ca.key -nodes -x509 -days 365 -out ca.crt 2> /dev/null
fi

if [ "$KEYTOOL" != "" ] ; then
    if [ ! -e ca.crt.der -o ! -e ca.key.der -o ! -e ca.jks ]; then
        # convert to der
        createderfiles ca

        # we actually don't need/want the ca.key.der but there's no
        # harm in leaving it around since the .key file is here anyway

        echo "Creating ca.jks"
        # import the CA certificate into the JKS file marking it as trusted
        keytool -import -noprompt -trustcacerts \
            -alias ca \
            -file ca.crt.der \
            -keystore ca.jks \
            -storetype JKS \
            -storepass $PASSPHRASE \
            2> /dev/null
    fi
fi

if [ $# -eq 0 ] ; then
    echo "This script creates one or more certificates."
    echo "Provide one or more certificate CNs on the command line."
    echo "Usage: `basename $0` <certcn> [certcn [...]]"
    exit -1
fi

for certCN in $@ ; do
    echo Certificate for CN \"$certCN\"
    echo =============================================

    # files we create (and can delete later)
    KEY=$certCN.key
    REQ=$certCN.req
    CRT=$certCN.crt
    P12=$certCN.p12
    JKS=$certCN.jks

    ABORT=0
    if [ -e $KEY ] ; then
        echo " ERROR: Key file $KEY already exists"
        ABORT=1
    fi
    if [ -e $REQ ] ; then
        echo " ERROR: Request file $REQ already exists"
        ABORT=1
    fi
    if [ -e $CRT ] ; then
        echo " ERROR: Certificate file $CRT already exists"
        ABORT=1
    fi

    if [ $ABORT -eq 1 ] ; then
        echo ''
        echo "If you wish to recreate a certificate for $certCN you must delete"
        echo "any preexisting files for that CN before running this script."
        echo ''
        continue
    fi

    echo ''
    answers $certCN | openssl req -newkey rsa:1024 -keyout $KEY -nodes -days 365 -out $REQ 2> /dev/null

    # at this point we have a key file, but the cert is not signed by the CA
    openssl x509 -req -in $REQ -out $CRT -days 365 -CA ca.crt -CAkey ca.key -CAcreateserial -CAserial ca.serial 2> /dev/null

    # We now have req, key and crt files which contain PEM format
    # x.509 certificate request
    # x.509 private key
    # x.509 certificate
    # respectively.

    echo "Certificate created."
    ls -l $KEY $REQ $CRT

    echo 'Certificate information:'
    openssl x509 -in $CRT -noout -issuer -subject -serial

    # generate a pkcs12 file
    openssl pkcs12 -export -in $CRT -inkey $KEY -certfile ca.crt -name $certCN -out $P12 -password pass:$PASSPHRASE -nodes

    echo P12 info:
    ls -l $P12
    #openssl pkcs12 -in $P12 -info -password pass:$PASSPHRASE -passin pass:$PASSPHRASE -nodes

    # if we have keytool we also need to create a jks file
    if [ "$KEYTOOL" != "" ] ; then
        echo "Will create JKS file as well..."
        createderfiles $certCN

        echo "Creating $JKS"
        # step 1: copy the CA keystore into the new one
        cp ca.jks $JKS

        # step 2: take the pkcs12 file and import it right into a JKS
        keytool -importkeystore \
            -deststorepass $PASSPHRASE \
            -destkeypass $PASSPHRASE \
            -destkeystore $JKS \
            -srckeystore $P12 \
            -srcstoretype PKCS12 \
            -srcstorepass $PASSPHRASE \
            -alias $certCN

        ls -l $JKS

        keytool -list -keystore $JKS -storepass $PASSPHRASE
    fi
done


To use it, just make a new directory and put the above into a script in that directory. I usually mkdir ~oracle/simpleca and put the script there. Then just run it, passing the certificate CNs you need on the command line.


How to reset your WLS super user password

Occasionally, we get into situations where we do not have the Weblogic super user (usually username = weblogic) password.  For myself, this sometimes happens when I’m using a VM that someone else created where they didn’t properly document all the…

Couple of things you need to know about the User/Role API

The idea of the User/Role API is to abstract developers from the identity store where users and groups are kept. A developer can basically interact with any identity provider supported by Weblogic server using the same methods. The javadoc can be found here:

In this post I want to alert you about two caveats:

1) User/Role API is able to query data from only one provider. If you want to query multiple identity stores, you need to go through an OVD Authenticator (or libOvd). And depending on how you get a handle to the identity store, the order in which providers are defined in Weblogic server Console as well as their CONTROL FLAGs do matter.

Shamelessly borrowing content from FMW Application Security Guide:

“OPSS initializes the identity store service with the LDAP authenticator chosen from the list of configured LDAP authenticators according to the following algorithm:

  1. Consider the subset of LDAP authenticators configured. Note that, since the context is assumed to contain at least one LDAP authenticator, this subset is not empty.
  2. Within that subset, consider those that have set the maximum flag. The flag ordering used to compute this subset is the following: REQUIRED > REQUISITE > SUFFICIENT > OPTIONAL.

    Again, this subset (of LDAPs realizing the maximum flag) is not empty.

  3. Within that subset, consider the first configured in the context.

    The LDAP authenticator singled out in step 3 is the one chosen to initialize the identity store service.”

Lack of such understanding is a big source of headache.
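The quoted algorithm is easy to misread, so here is a toy Python rendering of it (illustrative only; the flag precedence REQUIRED > REQUISITE > SUFFICIENT > OPTIONAL is assumed from the OPSS documentation):

```python
# Lower rank = "higher" flag in the precedence used by the selection.
FLAG_RANK = {"REQUIRED": 0, "REQUISITE": 1, "SUFFICIENT": 2, "OPTIONAL": 3}

def pick_idstore_authenticator(providers):
    """providers: ordered (name, is_ldap, control_flag) tuples, in the
    order they appear in the WebLogic console's providers list."""
    ldap = [p for p in providers if p[1]]               # step 1: LDAP subset
    if not ldap:
        raise ValueError("no LDAP authenticator configured")
    best = min(FLAG_RANK[p[2]] for p in ldap)           # step 2: maximum flag
    candidates = [p for p in ldap if FLAG_RANK[p[2]] == best]
    return candidates[0][0]                             # step 3: first configured
```

Feeding it a first-in-list enterprise authenticator marked SUFFICIENT plus a REQUIRED DefaultAuthenticator shows why the "wrong" store gets picked: REQUIRED outranks SUFFICIENT, so the order in the list never comes into play.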

Weblogic server ships with DefaultAuthenticator as the out-of-the-box authentication provider, with its CONTROL FLAG set to REQUIRED. Customers typically want to retrieve users from an enterprise-wide LDAP server, like OID or Active Directory. They go ahead, define a new authenticator and put it first in the providers list. But they leave DefaultAuthenticator untouched, because they still want to leverage the weblogic user as the administrator. And when some application relying on the User/Role API is executed (Oracle’s BPM and BIP are examples), a problem is just about to happen, because none of the users and groups defined in the enterprise-wide identity store are found.

The solution is pretty simple: switch DefaultAuthenticator’s CONTROL FLAG from REQUIRED to SUFFICIENT. What happens now at authentication time is that if the user is not found in the first authenticator, the lookup falls back to DefaultAuthenticator, so leveraging the weblogic user is not a problem. And that will also make the User/Role API query the identity provider that you want (the first in the list).

2) Depending on how you get a handle to the identity store, provider-specific metadata (user, password, address, root search base) won’t be reused and you’ll be forced to define it in code again (of course you can externalize them to some properties file, but it is still a double maintenance duty).

That said, let’s examine possible ways of getting a handle to the identity store.

IdentityStoreFactoryBuilder builder = new IdentityStoreFactoryBuilder();
IdentityStoreFactory oidFactory = null;
Hashtable factEnv = new Hashtable();
// Creating the factory instance
factEnv.put(OIDIdentityStoreFactory.ST_SECURITY_PRINCIPAL, "cn=orcladmin");
oidFactory = builder.getIdentityStoreFactory(
        "oracle.security.idm.providers.oid.OIDIdentityStoreFactory", factEnv);
Hashtable storeEnv = new Hashtable();
IdentityStore oidStore = oidFactory.getIdentityStoreInstance(storeEnv);
// Use oidStore to perform various operations against the provider

Look at how specific this snippet is to OID and how we’re passing metadata that is already available in the provider definition itself. By doing this, you do not run into the problem described in my bullet #1, because you’re going directly against a specific identity store. You’re not leveraging the definitions in Weblogic server at all.

But if you do this…

JpsContextFactory ctxFactory = JpsContextFactory.getContextFactory();
JpsContext ctx = ctxFactory.getContext();
LdapIdentityStore idstoreService = (LdapIdentityStore) ctx.getServiceInstance(IdentityStoreService.class);
IdentityStore idStore = idstoreService.getIdmStore();

// Use idStore to perform various operations against the provider

you’re delegating the provider lookup process to OPSS (Oracle Platform Security Services), and it will abide by the rules outlined in my bullet #1. Here, you don’t have to redefine your connection metadata. You are simply reusing whatever is defined in Weblogic server, and you do not run into the problem mentioned in bullet #2. For consistency and manageability, this is a much better approach.

For the curious, the following is the necessary configuration in jps-config.xml to make this happen. It is available out-of-the-box in any FMW install, so don’t worry about it.


<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
     <property name="idstore.config.provider" value=""/>
     <property name="CONNECTION_POOL_CLASS" value=""/>
</serviceInstance>

<jpsContexts default="default">
        <jpsContext name="default">
            <serviceInstanceRef ref="credstore"/>
            <serviceInstanceRef ref="keystore"/>
            <serviceInstanceRef ref="policystore.xml"/>
            <serviceInstanceRef ref="audit"/>
            <serviceInstanceRef ref="idstore.ldap"/>
            <serviceInstanceRef ref="trust"/>
            <serviceInstanceRef ref="pdp.service"/>
        </jpsContext>
</jpsContexts>


Enterprise Gateway (OEG) External Service Calls

I’ve recently had the chance to work with the Oracle Enterprise Gateway (OEG) for a DoD opportunity. For those who aren’t familiar, OEG is an OEM from Vordel. The definitive blog on Vordel is written by our old friend Josh Bregman. There were a couple of patterns that emerged in my work that I wanted to post.

One pattern is the need to make an external call to a service. In my case, I needed to call an attribute sharing service (see Chris’ blog on XASP for more details on one approach to this) and a XACML PDP. Note that OEG has an embedded PDP solution using Oracle Entitlements Server (OES) that provides a faster service, but in my case I had to stay with the standards-based solution. This is very easy to accomplish in OEG with a 3-step circuit:

[Circuit screenshot: Set Message, Set URL, Retrieve from Message]
The Set Message defines the parameters of the request. In my case, I have an attribute service that takes a user DN and returns specified attributes.

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <orafed-arxs:AttributeRequest xmlns:orafed-arxs="" TargetIDP="SpaceFenceIDP">
      <orafed-arxs:Subject>${variable}</orafed-arxs:Subject>
      <orafed-arxs:Attribute Name="mail"/>
      <orafed-arxs:Attribute Name="clearance"/>
    </orafed-arxs:AttributeRequest>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Notice the wildcards with ${variable}. These were attained earlier in the circuit with a “Retrieve from Directory Server” node after authentication to the Gateway. In the Policy Editor, create a policy and drag the Set Message node onto the easel. Enter “text/xml” for the Content-type, optionally import the request from a file, and save.
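Conceptually, the Set Message step is just template expansion: each ${variable} wildcard is replaced with a message attribute gathered earlier in the circuit. Here is a minimal sketch of that substitution in plain Java; the attribute name `user.dn` and its value are made up for illustration, and this is not OEG’s internal API:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the ${variable} wildcard expansion that a "Set Message"
// style filter performs: placeholders are replaced with attributes
// gathered earlier in the circuit (e.g. from the directory server).
public class SetMessageSketch {
    static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    static String expand(String template, Map<String, String> attributes) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Leave the raw placeholder in place if the attribute is missing
            String value = attributes.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String template = "<orafed-arxs:Subject>${user.dn}</orafed-arxs:Subject>";
        // "user.dn" is a hypothetical attribute name for this sketch
        String expanded = expand(template,
                Map.of("user.dn", "cn=Jane Wilson,ou=CDC,dc=service,dc=mil"));
        System.out.println(expanded);
    }
}
```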

Setting the URL is very straightforward: just enter the URL and add any trust certificates if necessary.
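Under the covers, this step amounts to an HTTP POST of the SOAP message to the attribute service. A minimal sketch with the JDK’s HttpClient API, assuming a hypothetical endpoint URL (OEG handles this for you; this only illustrates what the gateway sends):

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

// Sketch of the outbound call configured by the "set URL" step:
// a POST of the SOAP body with the same Content-Type entered in the filter.
public class SetUrlSketch {
    static HttpRequest buildRequest(String endpoint, String soapBody) {
        return HttpRequest.newBuilder(URI.create(endpoint))
                .timeout(Duration.ofSeconds(30))
                .header("Content-Type", "text/xml") // matches the Set Message Content-type
                .POST(HttpRequest.BodyPublishers.ofString(soapBody))
                .build();
    }

    public static void main(String[] args) {
        // The endpoint is hypothetical; substitute your attribute service URL
        HttpRequest req = buildRequest(
                "https://oif.example.mil/fed/ar/soap", "<SOAP-ENV:Envelope/>");
        System.out.println(req.method() + " " + req.uri());
        // To actually send it:
        // HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```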

The response from the attribute service (Oracle Identity Federation in this case) is:

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <orafed-arxs:AttributeResponse CacheFor="1499" xmlns:orafed-arxs="">
      <orafed-arxs:Subject>cn=Jane Wilson,ou=CDC,dc=service,dc=mil</orafed-arxs:Subject>
      <orafed-arxs:Attribute Name="mail">
      </orafed-arxs:Attribute>
      <orafed-arxs:Attribute Name="businessCategory">
      </orafed-arxs:Attribute>
    </orafed-arxs:AttributeResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Knowing this response format will help in parsing the response in OEG. When editing “Retrieve from Message”, rename the node appropriately and select “Add” under the attribute location.

Name the attribute (the name is arbitrary) and select the magic wand button. Browse to the response file saved on disk, and you should see its contents in the XPath Wizard. Select the node you wish to have returned to the gateway.

Select “Use this path” and the XPath expression should show up in the XPath field. Select OK, name the attribute you want to populate in the gateway, and save the node.
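The extraction the wizard sets up can be reproduced with the JDK’s own XPath API. A small sketch, assuming the response shape shown above; matching on local-name() sidesteps the namespace-prefix binding, which is handy when the document’s namespace URI isn’t known up front:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Sketch of pulling the Subject DN out of an attribute-service response
// with a local-name()-based XPath, similar to what the XPath Wizard produces.
public class RetrieveFromMessageSketch {
    static final String SUBJECT_XPATH =
        "//*[local-name()='AttributeResponse']/*[local-name()='Subject']/text()";

    static String extract(String xml, String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        return XPathFactory.newInstance().newXPath().evaluate(xpath, doc);
    }

    public static void main(String[] args) throws Exception {
        String response =
              "<AttributeResponse CacheFor=\"1499\">"
            + "<Subject>cn=Jane Wilson,ou=CDC,dc=service,dc=mil</Subject>"
            + "</AttributeResponse>";
        System.out.println(extract(response, SUBJECT_XPATH));
    }
}
```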

Debugging on OEG typically consists of adding a “Trace” node to your circuit and putting the listener in DEBUG or DATA mode. This gives you “System.out”-style visibility into what’s going on in the Gateway.

Thanks to Dave Roberts from Vordel for getting me over the humps and for stealing second in 2004.