Oracle Linux 7: file system issue with attached SAN storage

This post covers a recent issue I faced while using SAN storage to set up Oracle SOA Suite 12c in a clustered environment. Below is a summary of the setup-
  1. I have two VMs, SOAHOST1 & SOAHOST2, for setting up the SOA cluster.
  2. A SAN storage capacity of 500 GB was exposed to both VMs.
  3. Using the fdisk utility in Linux, the disk was formatted with the XFS file system, on the assumption that since Oracle Linux 7 uses XFS as its default file system, a SAN disk used with these VMs should have the same file system.
  4. I then mounted the SAN disk /dev/xvdc on both VMs and was able to access the storage from both successfully (a sketch of these steps follows the list).
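For reference, here is a minimal sketch of these steps, assuming the whole disk is formatted without a partition table (consistent with the mount of /dev/xvdc shown later in this post); note that mkfs.xfs does the actual formatting, while fdisk only manages partitions-

# Format the whole SAN disk with XFS (-f overwrites any existing signature)
mkfs.xfs -f /dev/xvdc

# Create the mount point and mount the disk (repeat on both VMs)
mkdir -p /u01/oracle
mount -t xfs /dev/xvdc /u01/oracle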
Problem Statement-
  1. Although the storage was mounted on both VMs, parallel read/write did not work, i.e. data written to the SAN by one VM was not visible on the second VM until the storage was refreshed (by remounting the SAN or restarting the VM). The symptom can be reproduced as shown after this list.
  2. The storage got detached abruptly, causing Input/Output errors on the mount point of one or both VMs. This put the AdminServer, which was configured on the SAN disk, into a FAILED state.
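A simple way to observe the stale-read symptom (the file name test.txt is just an example)-

# On SOAHOST1: write a test file to the SAN mount
echo "written by SOAHOST1" > /u01/oracle/test.txt

# On SOAHOST2: the file does not appear until the disk is remounted
ls /u01/oracle/test.txt
umount /u01/oracle && mount -t xfs /dev/xvdc /u01/oracle
ls /u01/oracle/test.txt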
Recommendation/Solution-
  1. We are setting up a clustered environment for SOA Suite 12c but formatted the SAN storage with XFS, which is not a shared file system. It is therefore not the expected behaviour of an XFS-based disk to allow read/write from more than one node (machine) when it is mounted on both systems.
  2. In this scenario, the data written by one node will not be visible to the second node until the storage is refreshed on that node, i.e. by remounting the disk or restarting the machine.
  3. Also, with the approach we have used so far, there is a high chance of SAN disk corruption, since both nodes modify the file system without any coordination.
  4. The recommended file system in this scenario is a shared/clustered file system such as NFS or OCFS2 (see the example after this list).
  5. So, it is better to understand the business requirements first, and format the disk accordingly before putting it to use.
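For example, if the storage can be exported over NFS, both nodes can safely mount the same share; the server name and export path below are placeholders-

# /etc/fstab entry on both SOAHOST1 and SOAHOST2
# (nfs-server and /export/soa are placeholder names)
nfs-server:/export/soa  /u01/oracle  nfs  defaults  0 0

# Or mount it manually
mount -t nfs nfs-server:/export/soa /u01/oracle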
Repairing the XFS file system based SAN disk-
As stated in the solution section, there is a chance of disk corruption, so let's see how we can repair the XFS-based disk and mount it again on the VM-
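Before making any changes, a read-only check can reveal whether the file system is actually damaged; the -n flag tells xfs_repair not to modify anything (the file system must be unmounted first)-

umount /u01/oracle
xfs_repair -n /dev/xvdc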


[root@DC-SOABPM-01 dev]# xfs_repair /dev/xvdc 
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

[root@DC-SOABPM-01 dev]# xfs_repair -L /dev/xvdc 
[root@DC-SOABPM-01 dev]# mount -t xfs /dev/xvdc /u01/oracle
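After the repair, it is worth confirming that the disk is mounted and writable again (.rwtest is an arbitrary file name)-

df -h /u01/oracle
touch /u01/oracle/.rwtest && rm /u01/oracle/.rwtest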

Important-
This solves the disk repair problem. However, it can be dangerous: destroying the log with the -L option may permanently corrupt the data on the disk. It is therefore better to take a backup of the SAN disk before attempting the repair, for example as sketched below.
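A simple block-level backup can be taken with dd; the backup target path is a placeholder and must have at least as much free space as the disk itself-

# Unmount first so the image is consistent
umount /u01/oracle

# Copy the raw disk to an image file (/backup/xvdc.img is a placeholder path)
dd if=/dev/xvdc of=/backup/xvdc.img bs=64M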