Mass Reset Password – Part 2 – Using OIM APIs

Introduction Back in November, I wrote a blog about Mass Reset Password using OID. As mentioned there, and as promised for this month, Oracle now provides the same password change feature, but using the Java OIM API. Main Article In this case, for development and test environments, customers usually want something that they can control […]

Mass Reset Password – Part 1 – OID

Introduction One of the great features that customers need to be aware of, and that can be used as a post-process in many different situations such as P2T, T2P, and cloning, is the ability to reset multiple passwords simultaneously. Imagine the customer is scaling out their environment because they need an additional UAT environment. This customer […]

IDM FA Integration flows

Introduction One of the key aspects of Fusion Applications operations is the management of users and roles. Fusion Applications uses Oracle Identity Management for its identity store and policy store by default. This article explains how user and role flows work from different points of view, using ‘key’ IDM products for each flow in detail. With […]

The Resilient Enterprise – Disaster Recovery Planning

Introduction Organizations strive to operate their business in a continuous manner with the goal of protecting their revenue, brand, and data assets. Every organization, throughout its existence, is prone to failures originating from within or without. The propensity for failures exists. A resilient organization however does two things really well. First, it implements measures to […]

How Many ODI Master Repositories Should We Have?

Introduction A question that often comes up is how many Master Repositories should be created for a proper ODI architecture. The short answer is that this will depend on the network configuration. This article will elaborate on this very simplistic answer and will help define best practices for the architecture of ODI repositories in a […]

Protecting Intranet and Extranet Applications with a Single OAM 11g Deployment

I frequently get asked how to set up a single OAM deployment to protect both intranet and extranet apps. Today I’d like to explore the issues and solutions around such a setup.

This post is part of a larger series on Oracle Access Manager 11g called Oracle Access Manager Academy. An index to the entire series with links to each of the separate posts is available.

The Requirement

So the specific request is how to set up OAM so that it can protect both intranet applications, which can be accessed only from within the corporate network, and extranet applications, which can be accessed from the public internet.

The “Problem”

The issue (or apparent issue) that people see with creating such a solution is that OAM 11g has a single “front end host”, configured through a global setting, that governs which protocol (HTTP/HTTPS), hostname, and port are used when users must be redirected to the OAM managed server.

If you go back and look at Chris’s post on the OAM 11g login flow and cookies, you’ll see that there are points in the flow where the user is redirected to the OAM managed server. This redirect can be to a load balancer or HTTP server fronting a managed server or a cluster of OAM managed servers. So, through the “load balancer” settings, OAM allows you to specify the interface that is fronting the OAM managed server(s).

The issue is that there is only a single global setting for defining the front end host for OAM. So people ask: how can we protect both intranet and extranet applications with one OAM deployment?

There are in fact two solutions that meet this requirement.

An Extranet Front End Host for OAM

The first solution is to simply specify an extranet (externally accessible) host as the front end host for OAM. This way both intranet users accessing internal applications and extranet users accessing external applications from the internet can access the OAM server through the same front end host.

OAM Deployment with External Front End Host

This architecture is the recommended reference architecture for supporting this requirement. It is also the simplest solution to implement. The idea that you need to wrap your head around here is that even though your application may be accessible only from within the corporate network, that does not mean that your users cannot be sent at times (during login and initial SSO) to an externally accessible web server.

I do sometimes hear objections to this solution from customers who say that they want to keep their intranet users on the internal network. I have to admit that I don’t fully understand this objection. When I inquire as to what specifically they are concerned about I usually get a fairly vague response. In addition, it should be noted that on many if not most corporate networks, the user wouldn’t be sent all the way out to the internet to access the OAM front end host but rather just to the DMZ.

That being said, for customers that really want to keep intranet users on the internal network there is another solution.

Split DNS

I found a really good article by Thomas Shinder that explains split DNS. You can read it here.

The basic idea, as it applies to an OAM deployment, is that you set up your DNS so that the OAM front end host defined in the load balancer setting (say sso.mycompany.com) resolves to an externally accessible load balancer or web server when users query the host from the public internet, but resolves to an internal load balancer or web server when users query the host from the internal network.

OAM Deployment with Split DNS

Split DNS works really well with OAM and is a completely viable option for enabling OAM to protect both extranet and intranet solutions simultaneously.
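
To make this concrete, here is a minimal sketch of what split DNS might look like in BIND 9. The host name sso.mycompany.com matches the example above, while the address range and zone file names are placeholders for your own environment:

// named.conf fragment: split-horizon DNS for the OAM front end host (illustrative only)
view "internal" {
    match-clients { 10.0.0.0/8; };              // queries from the corporate network
    zone "mycompany.com" {
        type master;
        file "zones/db.mycompany.com.internal"; // sso resolves to the internal load balancer
    };
};

view "external" {
    match-clients { any; };                     // everyone else, i.e. the public internet
    zone "mycompany.com" {
        type master;
        file "zones/db.mycompany.com.external"; // sso resolves to the DMZ load balancer
    };
};

Each zone file then defines a different address record for sso, so the same host name resolves to the internal load balancer or the DMZ load balancer depending on where the query comes from.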

Identity Stores

There is a second, unrelated issue that sometimes faces people wanting to do a single OAM deployment for external and internal applications: how their users are stored.

There are many variations on identity store requirements for hybrid internal/external deployments. I will save that discussion for another day, but I wanted to mention it now as something to consider if you are planning such a deployment.

Domain Architecture and Middleware Homes Revisited

Over a year ago I wrote a couple of important posts about the domain architectures used in Oracle Identity Management deployments. You can find these posts here and here.
These posts have been very popular. I’ve received lots of positive feedback on them but also a fair number of questions. So, I thought that it would be worth revisiting the topic now.

Let’s Review My First Post

So, the central premise of my first post was that it is a good idea to install products from different distribution packages into different WebLogic domains. Specifically, I am recommending that customers install the IAM package (OAM, OIM, OAAM) into a separate domain from the IDM package (OID, OVD, OIF). The same goes for combining products from either package with WebCenter or SOA Suite.
The reason is that installing products from different packages into one domain introduces the risk that different components (libraries, JARs, utilities), including custom plug-ins, might be incompatible and cause problems across your entire domain.
Beyond the risk of such an incompatibility, you must also consider how installing multiple packages into one domain reduces your flexibility to apply patches and upgrades.
The exception to this warning is SOA Suite for OIM. SOA Suite is a prerequisite for OIM and, as part of the stack that supports OIM, should be installed in the same domain as OIM. However, if you use SOA Suite for other purposes, you should consider setting up a separate install for running your own services, composites, BPEL processes, etc.
Let’s Review My Second Post
In my second post I discussed whether or not to split up products from the same package into multiple domains. I argued that while the incompatibility issue isn’t as big a deal in that case, the additional flexibility of deploying in different domains could still be beneficial. I did warn, though, that OIM/OAM/OAAM integration is only supported with all the software running in the same domain.
Advice Revisited
A year and a half later, I still strongly stand by the advice in my posts. I’ll put it this way: I’ve worked with people who regret deploying multiple packages into one domain for all the reasons I mentioned, but everyone I’ve worked with who has gone with a split domain model across packages has been happy and glad they went that route.
That being said, there are a few clarifications and updates to my advice that I’d like to now make.
It’s About Middleware Homes (Install Locations) Not Just Domains
One thing that I want to make clear is that my advice about separating the products from different packages extends beyond domains to Middleware Homes, the root directories for the Middleware products that you are installing together. To avoid the conflicts and restrictions that you can run into by installing multiple packages together, you need to install them in separate Middleware Homes; separate domains alone are not enough. It is true that most people create just one domain per Middleware Home, but I’ve seen some examples of multiple domains in a Middleware Home recently, so I thought this was definitely worth mentioning.
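To illustrate, a separated layout might look something like the sketch below. The paths and domain names are made up for illustration; use whatever conventions your organization follows.

/u01/oracle/mw_idm                                    <- Middleware Home #1: IDM package (OID, OVD, OIF)
/u01/oracle/mw_idm/user_projects/domains/idm_domain   <- domain for the IDM products
/u01/oracle/mw_iam                                    <- Middleware Home #2: IAM package (OAM, OIM, OAAM, plus SOA for OIM)
/u01/oracle/mw_iam/user_projects/domains/iam_domain   <- domain for the IAM products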
Domain Architecture for OAM/OIM Integration
The limitation that you have to put OAM and OIM in the same domain if you want the integration to work has, shall we say, softened. I would still say putting them in the same domain is the easiest and most tried-and-true path (the yellow brick road, if you will). However, if you utilize a real OHS-based OAM agent rather than the domain agent, you can make the integration work with OAM and OIM in separate domains. In fact, the most recent IDM EDG for FA includes this option.
Fusion Applications
Speaking of Fusion Apps, I do want to discuss this advice in the context of Fusion Apps. As I describe here, the IDM build-out for Fusion Apps is a prescribed, very specific process with a specific architecture. It is very likely you will run into issues deploying Fusion Apps on top of an IDM environment that deviates from this architecture, as FA makes certain assumptions about how the IDM environment will be set up.
In the context of the domain discussion, the IDM EDG for FA does steer you down the path of creating one big IDM domain that contains OID, OVD, OAM, OIM, ODSM, and potentially OIF. This is of course contrary to the advice I’ve been giving. In the context of FA, though, I think this is OK.
When using the IDM products with FA you are restricted to very specific versions of the IDM products in the stack, so upgrade flexibility and incompatibility concerns aren’t really an issue. Also, as I said, the FA provisioning process may be expecting the Oracle IDM/IAM products to be deployed in a single domain. So, for FA I advise people to follow the EDG and stick with the single domain model, or, moving forward, whatever models are documented in the IDM EDG for FA.

Using a Web Proxy Server with WebCenter Family

The use of a Web Tier is always recommended in a production environment, for security, performance, and better control and load management, regardless of whether it is an intranet, internet, or extranet environment.

The most common use of the Web Tier with WebCenter is as a reverse proxy that forwards all requests for a front-end WebCenter site to the application server (Figure A), but there are many ways to do an enterprise deployment with a web tier, and many flavors of web servers and load-balancing options.

Figure A

Let’s start with the WebLogic plug-in and the supported web servers. You can find the standard plug-ins in your WebLogic installation directory, in a path that will be something like “%WEBLOGIC_HOME%/server/plugin/%OS%/”, but I recommend downloading the latest plug-in from Oracle’s OTN or eDelivery websites, where you will find the Oracle WebLogic Server Web Server Plugins 1.1 or later. With version 1.1 you will find plug-ins for Apache 2.2.x (32-bit and 64-bit) and for IIS 6+ and IIS 7+. Always confirm OS support with the WLS Plugin Support Matrix. If you need support for iPlanet 6+ or 7+, you can use version 1.0. If you have any trouble finding the files, try searching Oracle Support or opening a service request (Ref.: Doc ID 1111903.1).

Figure B
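
As an illustration, a minimal Apache 2.2 configuration using the plug-in might look like the sketch below; the module path, host names, and ports are placeholders, not values from any particular deployment:

# Load the WebLogic proxy plug-in (mod_wl_22.so from the plug-in download)
LoadModule weblogic_module modules/mod_wl_22.so

# Forward /webcenter requests to the WebCenter Spaces cluster
<Location /webcenter>
    SetHandler weblogic-handler
    WebLogicCluster spaces1.example.com:8888,spaces2.example.com:8888
</Location>

# Forward /cs requests to the WebCenter Content cluster
<Location /cs>
    SetHandler weblogic-handler
    WebLogicCluster content1.example.com:16200,content2.example.com:16200
</Location>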

Sometimes you do not want to use a separate web server; in that case you can use WebLogic itself as a web server by using a proxy servlet, but that is a subject for another post.

When you are deploying a WebCenter solution using a proxy server, you need to remember to proxy the requests for all of your WebLogic servers, for security, performance, and control, and also all non-WebLogic requests such as static files, WebCenter Content custom requests, services, portlets, and third-party calls.

It is common in intranet deployments to see calls going directly to the application servers, to several different servers, as in Figure C.

Figure C

Figure D, below, shows the same sample with all calls going through the web server: forward proxying, reverse proxying, and static file caching. Even the use of Oracle Coherence is easy when you have a WebCenter Spaces + Content deployment.

Figure D

We cannot forget to talk about clustering and load balancers. Clustering is easily done with WebLogic; you just need to follow the documentation. For load balancing, you need to choose what kind of load balancing and which load balancer you will use.

Load Balancing

You can do load balancing using the Web Proxy Server with the WLS plug-in or a servlet, as mentioned above, or with an external (hardware) load balancer or appliance.

Using the WLS plug-in, you will need to remember to create an entry for each WebCenter server that you are using. That means you need to create an entry for /webcenter/, another for /cs/, another for the “custom sitestudio” calls, entries for the portlets (if you do not create a parent folder for the portlets, you will need an entry for each portlet), and an entry for any other third-party calls.

The configuration file for each entry will look like this sample for IIS 7:

 

# Changed by Oracle A-Team (Adao.Junior)
# Date: 07/31/2011
# WebCenter Content: CUSTOMER_WEB_SERVER
# WLSPlugin1.1-IIS6-IIS7-win64-x64

# List of host:port pairs for the WebCenter managed servers; the plug-in load balances across them
WebLogicCluster=192.168.100.101:8888,192.168.100.102:8888,192.168.100.101:8891,192.168.100.102:8891
# Maximum time, in seconds, the plug-in tries to connect to a WebLogic server
ConnectTimeoutSecs=25
# Interval, in seconds, between connection retries within ConnectTimeoutSecs
ConnectRetrySecs=5
# Reuse pooled connections between the plug-in and WebLogic
KeepAliveEnabled=true
# Cache POST data to a temporary file on disk before forwarding it to WebLogic
FileCaching=ON
# Plug-in to WebLogic traffic is plain HTTP; set to ON for SSL to WebLogic
SecureProxy=OFF

# Debug logging disabled; turn on to troubleshoot proxy issues
Debug=OFF
# Directory for the plug-in's temporary and debug files
WLTempDir=C:\DEBUG\CONTENT
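
If you route by path with the iisforward.dll filter, the IIS plug-in also reads a WlForwardPath parameter from iisproxy.ini listing the request paths to proxy. A sketch for a Spaces + Content deployment (the paths here are assumptions; adjust them to your own URLs) might be:

# Proxy only requests beginning with these paths to WebLogic
WlForwardPath=/webcenter,/cs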

 

For an external (hardware) load balancer you have many options, such as the F5 BIG-IP; a guide to help you deploy it with WebCenter can be found here.

 

There is also the option of a hybrid configuration, with hardware load balancers handling the web calls between the users and the web farms, and web proxies handling the calls between the web tier and the application servers.

Bearer Confirmation Method (Huh! What is it good for…)

For starters, allow me to introduce myself. My name is Brian Eidelman and I am a new member of the Fusion Middleware Architecture Group (a.k.a. the A-Team) and a new contributor to this blog. Since the memo has gone out that in addition to security we will be discussing dogs now, I’d like to introduce my faithful companion Coco.

Now without further ado, my post:

The WSS SAML Token Profile Specification defines several “confirmation methods” by which the contents of the SAML assertion can be linked to the SOAP message content itself; in other words, methods of proving that the assertion really goes with the message in which it is being sent.

The “bearer” confirmation method is sort of peculiar in that it defines no process at all for proving the link between the contents of the assertion and the message content. Rather, the link is to be implicitly trusted.
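
To make this concrete, here is a minimal, illustrative SAML 1.1 assertion fragment showing the bearer confirmation method; the issuer, subject, and timestamp values are made up:

<saml:Assertion MajorVersion="1" MinorVersion="1"
                Issuer="www.mycompany.com"
                xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion">
    <saml:AuthenticationStatement
            AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password"
            AuthenticationInstant="2011-08-01T12:00:00Z">
        <saml:Subject>
            <saml:NameIdentifier>jdoe</saml:NameIdentifier>
            <saml:SubjectConfirmation>
                <!-- bearer: no key or signature proof; the link between the
                     assertion and the message is implicitly trusted -->
                <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
            </saml:SubjectConfirmation>
        </saml:Subject>
    </saml:AuthenticationStatement>
</saml:Assertion>

Compare this with the holder-of-key confirmation method, where the SubjectConfirmation would also carry key material that the sender must prove possession of, typically via a signature.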

At this point you may be saying to yourself: if there is no means of verifying that the SAML assertion goes with the message, then what good is it?

Well, the trust can be implicit for any number of reasons. It could just be that trusting developers created the service. More likely, however, the link between the SAML assertion and the message can be implicitly trusted by the service because the integrity of the link has been delegated to some other external factor, usually at the network level.

In some cases we could be talking about an internal network setup so that all requests to the service are guaranteed to come from a tamper proof trusted client (if you aren’t buying into this, just humor me). In other cases we could be talking about SSL with 2-way authentication. The point is that the service can trust that only a proper trusted client can successfully get a message to it in the first place.

Now at this point you might be thinking to yourself: fine, but then how is SAML with bearer confirmation different from just including a username token with no password in the message header?
Well, SAML with bearer confirmation offers a few advantages over the plain old username token. Foremost, the assertion can contain not just a username (subject) but also a bundle of attributes that can further serve to identify the user, define a user’s roles, or be otherwise consumed by the service. In addition, the assertion captures the notion of which entity is “issuing” the assertion (asserting the user identity). Lastly, some SOA stacks may not be able to handle a username token with no password.
So after all that, what is bearer confirmation good for? Given that it allows us to utilize assertions without the hassles and costs of the signing and key references that are part of the other confirmation methods, bearer is the perfect confirmation method for basic identity propagation to or between internal services. A similar use case where bearer may fit the bill is identity propagation to the service, over a securely established network connection, from a trusted intermediary that may be doing the real authentication.