Network File System (NFS) is used in the majority of Fusion Applications deployments to satisfy the requirement for shared storage. Using NFS correctly can be a challenge: there is no single right way to mount NFS shares on Oracle Linux, and there are multiple other potential sources of errors, such as the network. At the same time, incorrect or suboptimal settings will usually have a severe impact on performance. With this blog entry I will try to provide a baseline for most environments that can serve as a basis for reviewing your current or planned settings.
There is no silver bullet that will magically solve all I/O performance issues – there is, however, something that will help you solve most performance issues in your environment: testing. But let’s first have a look at the recommended parameters for the Fusion Applications (FA) and Identity Management (IDM) shares that have to be created prior to provisioning.
Please note that these are not applicable across all environments; they are intended to provide a solid starting point for further performance testing. In detail, these settings stand for:
rw – this option specifies that the share is mounted in read-write mode. The alternative is ro for read-only access.
bg – determines the mount failure handling. bg stands for background mount, as opposed to fg for foreground mount. A foreground mount errors out after one attempt, while bg forks a process in the background to retry.
hard – makes sure that if the share becomes unavailable, programs using it block and retry instead of simply exiting. The alternative, soft, simply returns an error when the share is offline.
nointr (deprecated; not needed with the UEK kernel) – controls the interrupt handling for the share, i.e. how signals that would interrupt an in-flight operation are handled. The alternative, intr, allows interruptions; nointr does not.
rsize and wsize – these options adjust the block size of the data transmitted, specified in bytes. Depending on the network and data loads, these are usually a good starting point for performance improvement.
tcp – specifies use of the TCP protocol, as opposed to udp for the UDP protocol.
vers – specifies which version of NFS is used. NFS version 4 is fully supported, but most implementations are more successful with version 3. nfsvers is also accepted as an alias.
timeo – set in tenths of a second, this determines the time before retransmission after an RPC timeout. Especially on busy networks this is an important setting to keep an eye on.
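Put together, the options above could be expressed as a single /etc/fstab entry. Note that the host name, export path, mount point, and the rsize/wsize/timeo values below are illustrative placeholders for this sketch, not mandated values – verify them against your own testing:

```
# Hypothetical entry: nfsserver, /export/fashare and /u01/fashare are placeholders
nfsserver:/export/fashare  /u01/fashare  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600  0 0
```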
The only reliable way to make sure that the system is performing properly is to test. Those tests don’t have to be very time consuming: as a starting point, the simple perl script nfsSpeedTest (https://github.com/sabujp/nfsSpeedTest) provides a basic but sufficient tool, built around dd, to establish a performance baseline.
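If the perl script is not at hand, a rough baseline can be taken with dd directly. This is only a sketch: TESTDIR is a placeholder and should point at the NFS mount under test, and conv=fsync forces the written data to actually reach the server instead of stopping in the local page cache:

```shell
#!/bin/sh
# Point TESTDIR at the NFS mount you want to measure (placeholder default)
TESTDIR="${TESTDIR:-/tmp}"
TESTFILE="$TESTDIR/nfs_dd_test.tmp"

# Sequential write: 64 MiB of zeros, fsync'd so the data crosses the wire
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync

# Sequential read back
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

dd reports elapsed time and throughput on stderr. For a meaningful read number, remount the share or drop the client caches first (echo 3 > /proc/sys/vm/drop_caches as root); otherwise you are measuring the local page cache, not NFS.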
In this example the script is run on a standard share that is mounted using the recommended options – note the read takes about 19 seconds.
Now I have added actimeo=0 just to show how a single setting can have a significant impact. The read performance has degraded significantly: the read now takes 152 seconds.
There are a number of sources that recommend setting actimeo=0, and that is correct for some environments – for example, it is required for datafiles in a RAC database environment. However, it should generally not be used for the FA shared file system, or for binaries in general. actimeo=0 actually overrides a number of defaults (acregmin, acregmax, acdirmin, acdirmax) and with that disables most of the file attribute caching.
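Where fresher metadata really is needed, the four underlying timers can be tuned individually instead of switching attribute caching off entirely. The values below (in seconds) are purely illustrative assumptions, not recommendations:

```
# actimeo=0 is shorthand for:
#   acregmin=0,acregmax=0,acdirmin=0,acdirmax=0
# A gentler alternative keeps caching but shortens the windows, e.g.:
rw,bg,hard,tcp,vers=3,acregmin=3,acregmax=30,acdirmin=3,acdirmax=30
```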
If you encounter any issues with NFS, always check /var/log/messages as well as the logs of the system exporting the NFS share, e.g. a NAS storage device or another Linux host. Most issues with NFS are actually related to the network, not to NFS itself.
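A few quick checks help separate network problems from NFS problems. nfsstat and /proc/mounts are standard on Oracle Linux; the log path is an assumption and may differ on other distributions:

```shell
#!/bin/sh
# RPC client statistics - a steadily growing "retrans" count points
# at the network rather than at NFS itself
nfsstat -rc 2>/dev/null || echo "nfsstat not available"

# Effective mount options as the kernel actually applied them
grep ' nfs' /proc/mounts || echo "no NFS mounts found"

# Recent NFS-related syslog messages (log path is distro-dependent)
grep -i nfs /var/log/messages 2>/dev/null | tail -n 5
```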
How To Measure NFS Performance ? (Doc ID 1517695.1)
Howto Optimize NFS Performance with NFS options. (Doc ID 397194.1)
Mount Options for Oracle files when used with NFS on NAS devices (Doc ID 359515.1)