Oracle File System – Storing files in the DBCS via NFS

February 28, 2018 | 6 minute read
Roland Koenn
Consulting Solution Architect

Introduction

Oracle Database Cloud Service (DBCS) can easily be configured to expose Oracle File System (OFS) file systems via Network File System (NFS). This capability is available starting with Oracle Database 12c Release 2. It allows a client to mount the NFS share without having to configure the Oracle Database File System (DBFS) client, thereby simplifying the rollout and reducing the overall troubleshooting effort on clients. In conjunction with Data Guard configured for the underlying database, this feature also provides full Disaster Recovery capability.

 

Setup in the Cloud Console

This configuration requires a deployed DBCS instance running Release 12.2 or later, as shown in the picture below.

image1

It is good practice to keep the file systems for OFS separate from the other database file systems, so the database does not have to be stopped when a file system runs full. To add storage, navigate to the Compute Console, open the Storage tab and click Create Storage Volume.

image2

For this example, I am going with 100 GB of locally mounted storage. There is no RAC or ASM involved; the procedure for RAC is described here. Additional volumes can be added later, as Logical Volumes will be used.

image3

Next attach the storage volume to the corresponding instance as shown below.

image4

Attaching the disk as number 6 maps it to /dev/xvdg; number 7 would map to /dev/xvdh, and so on. Details can be found here.

image5

Setup in the Operating System

Log in to the DBCS instance as user opc using SSH keys and switch to the root user.

Install the packages nfs-utils, fuse, fuse-libs and kernel-devel. Yum is preconfigured.

[root@ofsdemo ~]# yum -y install nfs-utils fuse fuse-libs kernel-devel
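To verify the installation succeeded, you can query the RPM database; this is just a quick sanity check:

[root@ofsdemo ~]# rpm -q nfs-utils fuse fuse-libs kernel-devel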

 

The user oracle will need access to the fuse utilities and therefore has to be added to the fuse group. Use the id command to verify that oracle now has fuse as a secondary group.

[root@ofsdemo ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
[root@ofsdemo ~]# usermod -a -G fuse oracle
[root@ofsdemo ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),498(fuse)

Execute permission on /usr/bin/fusermount is disabled by default, so it needs to be granted.

[root@ofsdemo ~]# chmod 750 /usr/bin/fusermount
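A quick listing confirms that the permission change took effect; after the chmod the mode should include execute rights for the group:

[root@ofsdemo ~]# ls -l /usr/bin/fusermount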

Additionally, non-root users like oracle are restricted from using fuse. They need to be enabled in /etc/fuse.conf.

[root@ofsdemo ~]# echo 'user_allow_other' >> /etc/fuse.conf
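Before moving on to the storage configuration, it is worth confirming that the operating system actually sees the newly attached volume. A minimal check, assuming the disk was attached as number 6 and therefore appears as /dev/xvdg:

[root@ofsdemo ~]# fdisk -l /dev/xvdg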

The newly added storage volume needs to be configured before it can be used by the database. First, a physical volume needs to be created.

[root@ofsdemo /]# pvcreate /dev/xvdg
  Physical volume "/dev/xvdg" successfully created

Then a volume group needs to be created – additional physical volumes can be added at a later stage to extend the available space.

[root@ofsdemo /]# vgcreate ofsVolGroup /dev/xvdg
Volume group "ofsVolGroup" successfully created
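When more space is needed later, the volume group can be grown without downtime. A minimal sketch, assuming a new volume has been attached as disk number 7 and shows up as /dev/xvdh (hypothetical device name), run after the logical volume and file system created below exist:

[root@ofsdemo /]# pvcreate /dev/xvdh
[root@ofsdemo /]# vgextend ofsVolGroup /dev/xvdh
[root@ofsdemo /]# lvextend -l +100%FREE /dev/ofsVolGroup/lvol0
[root@ofsdemo /]# resize2fs /dev/ofsVolGroup/lvol0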

The pvscan tool displays all current physical volumes and volume groups, helping to verify that everything went well.

[root@ofsdemo /]# pvscan
  PV /dev/xvdg    VG ofsVolGroup     lvm2 [100.00 GiB / 100.00 GiB free]
  PV /dev/xvdf1   VG dataVolGroup    lvm2 [25.00 GiB / 0    free]
  PV /dev/xvdd1   VG fraVolGroup     lvm2 [7.00 GiB / 0    free]
  PV /dev/xvdc1   VG redoVolGroup    lvm2 [26.00 GiB / 0    free]
  PV /dev/xvdb2   VG vg_main         lvm2 [29.50 GiB / 0    free]
  Total: 5 [187.48 GiB] / in use: 5 [187.48 GiB] / in no VG: 0 [0   ]

In this example I am using the complete space available on the storage volume – hence +100%FREE is specified when creating the logical volume.

[root@ofsdemo /]# lvcreate -l +100%FREE ofsVolGroup /dev/xvdg
  Logical volume "lvol0" created.

The lvscan tool helps to check that everything is in order after the logical volume has been created.

[root@ofsdemo /]# lvscan
  ACTIVE            '/dev/ofsVolGroup/lvol0' [100.00 GiB] inherit
  ACTIVE            '/dev/dataVolGroup/lvol0' [25.00 GiB] inherit
  ACTIVE            '/dev/fraVolGroup/lvol0' [7.00 GiB] inherit
  ACTIVE            '/dev/redoVolGroup/lvol0' [26.00 GiB] inherit
  ACTIVE            '/dev/vg_main/lv_swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg_main/lv_root' [25.50 GiB] inherit

As a next step, a file system needs to be initialized on the new logical volume. In this example, ext4 is used, as this is currently the default for DBCS. Note that the file system is labelled OFS with the -L option.

[root@ofsdemo /]# mkfs.ext4 -L OFS /dev/ofsVolGroup/lvol0

To mount this file system on every reboot, it needs to be added to /etc/fstab. Referencing the label OFS ensures that the correct file system is presented on /u05 even if the order of the storage volumes or logical volumes changes.

[root@ofsdemo /]# echo 'LABEL=OFS           /u05                    ext4    defaults,nodev,_netdev,nofail         0 0' >> /etc/fstab
[root@ofsdemo /]# mkdir /u05
[root@ofsdemo /]# mount -a
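To double-check that the label was applied and the mount works before rebooting, blkid and df can be used; blkid should report LABEL="OFS" and TYPE="ext4":

[root@ofsdemo /]# blkid /dev/ofsVolGroup/lvol0
[root@ofsdemo /]# df -h /u05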

Restart the instance via the Cloud Console after this step to verify that the file system comes back automatically after a reboot.

 

Setup in the Database

Set the environment to the name you gave the database during creation (the default is ORCL) and log in to the root container of the database as SYS.

[oracle@ofsdemo ~]$ . oraenv
ORACLE_SID = [ORCL] ? ORCL
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ofsdemo ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 26 04:34:37 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

Create a tablespace to hold the OFS file system. Make sure to choose a size and growth rate (AUTOEXTEND NEXT clause) that matches the requirements of the application; for example, if 1 GB files are being uploaded, choose at least 1g as NEXT. Bigfile tablespaces are an option, but most customers prefer the smaller default datafiles; the limits are described here.

SQL> create tablespace OFS_DATA datafile '/u05/app/oracle/oradata/ORCL/ofs_data01.dbf' size 100m autoextend on next 10m;

Tablespace created.
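Note that the directory for the datafile has to exist at the operating system level before running the statement. An optional check that the tablespace and its datafile were created as intended:

SQL> select file_name, bytes/1024/1024 as mb, autoextensible from dba_data_files where tablespace_name = 'OFS_DATA';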

As a next step, the Oracle file system has to be initialized in the newly created tablespace.

SQL> exec DBMS_FS.make_oracle_fs(fstype => 'ofs', fsname => 'OFS_FS01', fsoptions => 'TABLESPACE=OFS_DATA');

Finally, the file system has to be mounted. Note that the mount point directory (/u05/ofs/ofs_filesystem01 in this example) has to exist at the operating system level.

SQL> exec DBMS_FS.mount_oracle_fs(fstype => 'ofs', fsname => 'OFS_FS01', mount_point => '/u05/ofs/ofs_filesystem01', mount_options => 'default_permissions, allow_other, persist');
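The reverse operation, should the file system ever need to be taken offline again (for example before dropping it), is DBMS_FS.unmount_oracle_fs. A minimal sketch mirroring the parameters of the mount call above:

SQL> exec DBMS_FS.unmount_oracle_fs(fstype => 'ofs', fsname => 'OFS_FS01', mount_point => '/u05/ofs/ofs_filesystem01');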

The mounted file system is now visible at the operating system level. Please note that its size has to be monitored via the database; the df output is not correct (see the limitations below).

[oracle@ofsdemo ~]$ df -h | grep fuse
/dev/fuse             32G     0   32G   0% /u05/ofs/ofs_filesystem01
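A quick functional test as the oracle user; the file name is just an example:

[oracle@ofsdemo ~]$ echo 'stored in the database' > /u05/ofs/ofs_filesystem01/hello.txt
[oracle@ofsdemo ~]$ ls -l /u05/ofs/ofs_filesystem01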

 

Setup in the NFS Server

To make the OFS file system created above available to clients, it needs to be exported by the NFS server. Add the file system to the central NFS configuration file /etc/exports.

[opc@ofsdemo ~]$ sudo su -
[root@ofsdemo ~]# echo '/u05/ofs/ofs_filesystem01 *(rw,fsid=1)' >> /etc/exports

Start the NFS server and enable it to start on every reboot:

[root@ofsdemo ~]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
[root@ofsdemo ~]# chkconfig nfs on

Validate the export using the showmount command.

[root@ofsdemo ~]# showmount -e
Export list for ofsdemo:
/u05/ofs/ofs_filesystem01 *

 

Setup in the Client

Mounting the exported NFS file system on a client is straightforward. You will need to provide the public IP address of the DBCS instance.

[root@ofsclient ~]# echo '129.xxx.xxx.193:/u05/ofs/ofs_filesystem01  /u01/nfsmount  nfs defaults 0 0' >> /etc/fstab
[root@ofsclient ~]# mount -a
[root@ofsclient ~]# df -h | grep nfsmount
32G   0   32G   0% /u01/nfsmount

All files written to /u01/nfsmount are automatically stored in the database and are backed up with the database as well. The NFS file system can be mounted from multiple clients, making it a great option for a shared file system if that is required.
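A simple end-to-end test: create a file on the client and confirm it is visible on the DBCS instance (the file name is again just an example):

[root@ofsclient ~]# touch /u01/nfsmount/from_client.txt
[oracle@ofsdemo ~]$ ls /u05/ofs/ofs_filesystem01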

 

Tuning

OFS worker threads are created based on the OFS_THREADS parameter; the supported range is 2 to 128, with 4 being the default. As always, performance should be tested intensively in the specific environment and for the specific use case before changing this value.
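A sketch of adjusting the parameter, assuming an spfile is in use; writing it to the spfile and restarting the database is the safe route in case the parameter is not dynamically modifiable in your release:

SQL> alter system set ofs_threads = 8 scope=spfile;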

The view V$OFS_STATS gives a good indication of what is happening under the covers and helps with troubleshooting.
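It can be queried like any other dynamic performance view from the root container:

SQL> select * from v$ofs_stats;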

 

Limitations

OFS has a few limitations, so extensive testing is highly recommended. Most notable are the following issues, which tend to cause confusion:

- An OFS file system can only be created in a root container or a non-CDB instance.

- DBFS mounted with ASM storage shows wrong mount size.

- OFS mounted with local storage shows wrong mount size.

 


