Thursday, January 14, 2016

Solaris/VxVM - Restore a Private Region on a VxVM disk.

PROBLEM: You get the following error when importing a disk group:
# vxdg -C import kchdg
VxVM vxdg ERROR V-5-1-587 Disk group kchdg: import failed: Disk group has no valid configuration
A call was then opened with Veritas, who requested the output of the following command to check the configuration:
# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
Disk_0       auto:SVM        -            -            SVM
Disk_1       auto:SVM        -            -            SVM
EMC3_06B7    auto:cdsdisk    -            -            online
EMC3_054D    auto:cdsdisk    -            -            online
EMC4_0609    auto:cdsdisk    -            -            online
EMC4_0926    auto:cdsdisk    -            -            online
EMC5_2DEF    auto:cdsdisk    -            (kchquorumdg)    online
EMC5_2D70    auto:cdsdisk    -            (kchdg) online
EMC6_17A0    auto:cdsdisk    -            (kchquorumdg)    online
EMC6_27C6    auto:cdsdisk    -            (kchquorumdg) online
Get the disk names of the problem disk group; in this case it is EMC5_2D70.
Then get the underlying paths that are associated with this device:
# vxdisk list EMC5_2D70 | tail -2
c3t5006048C52A863DDd144s2       state=enabled
c4t5006048C52A863D2d144s2       state=enabled

Take one of the cXtXdX names and run the following command:
# /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c3t5006048C52A863DDd144s2 |vxprint -D - -ht

This will dump the config of the private region on the disk. From this you can see if there are duplicate records where there shouldn't be; if there are, your private region is corrupt and needs to be recreated.
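If the dump is long, a rough way to spot duplicate record names is to pull out the record type and name and look for repeats. This is just a sketch based on the vxprint -ht output layout, using the same device as above; adjust to taste:
# /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c3t5006048C52A863DDd144s2 | vxprint -D - -ht | awk '/^(dm|v|pl|sd) / {print $1, $2}' | sort | uniq -d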
The first step, to recreate it without having to reinitialize the disks, is to try to use a previous backup of the disk group configuration. These are taken every time the disk group is updated, so fingers crossed you have one.
# cd /etc/vx/cbr/bk/kchdg.1333032959.69.kchnode1
# ls -l
-rw-r--r--   1 root     root      655360 Apr  3 17:36 1333032959.69.kchnode1.binconfig
-rw-r--r--   1 root     root      655360 Apr  3 13:36 1333032959.69.kchnode1.binconfig.1
-rw-r--r--   1 root     root       15145 Apr  3 17:36 1333032959.69.kchnode1.cfgrec
-rw-r--r--   1 root     root       15070 Apr  3 13:36 1333032959.69.kchnode1.cfgrec.1
-rw-r--r--   1 root     root        1804 Apr  3 17:36 1333032959.69.kchnode1.dginfo
-rw-r--r--   1 root     root        1804 Apr  3 13:36 1333032959.69.kchnode1.dginfo.1
-rw-r--r--   1 root     root        2544 Apr  3 17:36 1333032959.69.kchnode1.diskinfo
-rw-r--r--   1 root     root        2544 Apr  3 13:36 1333032959.69.kchnode1.diskinfo.1

Try and restore from a previous config

# cat 1333032959.69.kchnode1.cfgrec |vxprint -D - -ht

# /opt/VRTS/bin/vxconfigrestore -p kchdg
Now the above command will either work or it won't. If it does then YAY; if it doesn't, you will have to reinitialize the disks and the disk group. But we are only touching the private region, so all the data SHOULD be fine if we get our numbers correct.
OK, to reinitialize your private region you need a few important bits of information. Here is a sample of the output for the disk I am going to reinitialize:
# vxdisk list EMC5_2D70 | egrep -i '^public|^private'
public:    slice=2 offset=65792 len=4390528 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0
OK, here is the command to initialize the disk; the values for it come straight from the output above.
# vxdisk -f init EMC5_2D70 privoffset=256 privlen=65536 puboffset=65792 publen=4390528
Hope that makes sense?
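Before going any further it is worth re-running the earlier check to confirm the offsets and lengths came out exactly as they were before:
# vxdisk list EMC5_2D70 | egrep -i '^public|^private'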
Next you need to tell it what the subdisks are; you can get this information from your previous config:
# cat 1333032959.69.kchnode1.cfgrec |vxprint -D - -ht
Disk group: kchdg

DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
ST NAME         STATE        DM_CNT   SPARE_CNT         APPVOL_CNT
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
CO NAME         CACHEVOL     KSTATE   STATE
VT NAME         RVG          KSTATE   STATE    NVOLUME
V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO
EX NAME         ASSOC        VC                       PERMS    MODE     STATE
SR NAME         KSTATE

dg kchdg      default      default  32000    1141998863.35.acslsn

dm kchdg01    EMC5_2D70     auto     2048     35688576 -

v  kchbackup  -            ENABLED  ACTIVE   14714880 SELECT    -        fsgen
pl kchbackup-01 kchbackup ENABLED ACTIVE   14714880 CONCAT    -        RW
sd kchdg01-01 kchbackup-01 kchdg01 20972576 14714880 0      EMC5_2D70 ENA
pl kchbackup-02 kchbackup ENABLED ACTIVE   LOGONLY  CONCAT    -        RW
sd kchdg01-02 kchbackup-02 kchdg01 20972048 528   LOG       EMC5_2D70 ENA

v  kchhome    -            ENABLED  ACTIVE   20971520 SELECT    -        fsgen
pl kchhome-01  kchhome    ENABLED  ACTIVE   20971520 CONCAT    -        RW
sd kchdg01-01 kchhome-01  kchdg01 528     20971520 0         EMC5_2D70 ENA
pl kchhome-02  kchhome    ENABLED  ACTIVE   LOGONLY  CONCAT    -        RW
sd kchdg01-02 kchhome-02  kchdg01 0       528      LOG       EMC5_2D70 ENA
Create a file containing all of your volume, plex and subdisk records; this is then applied to the disk group to set it up as before:
# cat 1333032959.69.kchnode1.cfgrec |vxprint -D - -mvphsr > vxmake.out
Re-create the disk group, adding EMC5_2D70 under the same dm name it had before, which in this case was kchdg01:
# vxdg init kchdg kchdg01=EMC5_2D70
Now rebuild the disk group configuration from the vxmake.out file you created. This will restore it to its former glory:
# vxmake -g kchdg -d vxmake.out
Start the volumes
# vxvol -g kchdg -f startall
When you run vxprint -htg kchdg you will notice that not all the volumes are online; set them active manually:
# vxvol -g kchdg init active kchbackup
# vxvol -g kchdg init active kchhome
Now all should look good when you run
# vxprint -htg kchdg
Then carry on and mount your filesystems. You might have to run fsck on them first, but all should be fine…
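If the mount complains that the file system needs checking, a VxFS fsck (which by default just replays the intent log) should sort it out; for example, for the kchhome volume above:
# fsck -F vxfs /dev/vx/rdsk/kchdg/kchhome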
# mount /dev/vx/dsk/kchdg/kchhome /export/home

Wednesday, January 13, 2016

Solaris/VxVM/CFS/VCS - Implementing Cluster File System (CFS) over VCS

CFS allows the same file system to be simultaneously mounted on multiple nodes in the cluster.

The CFS is designed with master/slave architecture. Though any node can initiate an operation to create, delete, or resize data, the master node carries out the actual operation. CFS caches the metadata in memory, typically in the memory buffer cache or the vnode cache. A distributed locking mechanism, called GLM, is used for metadata and cache coherency among the multiple nodes.

The examples here are:
1.       Based on VCS 5.x but should also work on 4.x
2.       A new 4 node cluster with no resources defined.
3.       Diskgroups and volumes will be created and shared across all nodes.

Before you configure CFS

1.       Make sure you have an established Cluster and running properly.
2.       Make sure these packages are installed on all nodes:
a.       VRTScavf Veritas cfs and cvm agents by Symantec
b.      VRTSglm Veritas LOCK MGR by Symantec
3.       Make sure you have a license installed for Veritas CFS on all nodes.
4.       Make sure vxfencing driver is active on all nodes (even if it is in disabled mode).
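
A few commands that can help verify the points above (paths and exact output vary by Veritas version, so treat these as a rough guide):

# pkginfo -l VRTScavf VRTSglm        # are the packages installed?
# vxlicrep | grep -i cluster         # are the CFS/CVM features licensed?
# /sbin/gabconfig -a                 # GAB membership (port b is the fencing port)
# vxfenadm -d                        # fencing mode, which may legitimately be disabled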

Check the status of the cluster
Here are some ways to check the status of your cluster. In these examples, CVM/CFS are not configured yet.

# cfscluster status
  NODE         CLUSTER MANAGER STATE            CVM STATE
serverA        running                        not-running                   
serverB        running                        not-running                   
serverC        running                        not-running                   
serverD        running                        not-running                   

  Error: V-35-41: Cluster not configured for data sharing application

# vxdctl -c mode
mode: enabled: cluster inactive

# /etc/vx/bin/vxclustadm nidmap
Out of cluster: No mapping information available

# /etc/vx/bin/vxclustadm -v nodestate
state: out of cluster

# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen             

A  serverA             RUNNING              0                   
A  serverB             RUNNING              0                   
A  serverC             RUNNING              0                   
A  serverD             RUNNING              0

Configure the cluster for CFS

During configuration, Veritas will pick up the information already set in your cluster configuration and will activate CVM on all the nodes.

# cfscluster config
 
        The cluster configuration information as read from cluster
        configuration file is as follows.
                Cluster : MyCluster
                Nodes   : serverA serverB serverC serverD

 
        You will now be prompted to enter the information pertaining
        to the cluster and the individual nodes.
 
        Specify whether you would like to use GAB messaging or TCP/UDP
        messaging. If you choose gab messaging then you will not have
        to configure IP addresses. Otherwise you will have to provide
        IP addresses for all the nodes in the cluster.
  
        ------- Following is the summary of the information: ------
                Cluster         : MyCluster
                Nodes           : serverA serverB serverC serverD
                Transport       : gab
        -----------------------------------------------------------

 
        Waiting for the new configuration to be added.

        ========================================================

        Cluster File System Configuration is in progress...
        cfscluster: CFS Cluster Configured Successfully

Check the status of the cluster

Now let's check the status of the cluster again, and notice that there is a new service group, cvm. CVM is required to be online before we can bring up any clustered filesystem on the nodes.

# cfscluster status

  Node             :  serverA
  Cluster Manager  :  running
  CVM state        :  running
  No mount point registered with cluster configuration


  Node             :  serverB
  Cluster Manager  :  running
  CVM state        :  running
  No mount point registered with cluster configuration


  Node             :  serverC
  Cluster Manager  :  running
  CVM state        :  running
  No mount point registered with cluster configuration


  Node             :  serverD
  Cluster Manager  :  running
  CVM state        :  running
  No mount point registered with cluster configuration

# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA

# /etc/vx/bin/vxclustadm nidmap
Name                             CVM Nid    CM Nid     State
serverA                         0          0          Joined: Master
serverB                         1          1          Joined: Slave
serverC                         2          2          Joined: Slave
serverD                         3          3          Joined: Slave

# /etc/vx/bin/vxclustadm -v nodestate
state: cluster member
        nodeId=0
        masterId=1
        neighborId=1
        members=0xf
        joiners=0x0
        leavers=0x0
        reconfig_seqnum=0xf0a810
        vxfen=off

# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen             

A  serverA             RUNNING              0                   
A  serverB             RUNNING              0                   
A  serverC             RUNNING              0                   
A  serverD             RUNNING              0                   

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State         

B  cvm             serverA             Y          N               ONLINE        
B  cvm             serverB             Y          N               ONLINE        
B  cvm             serverC             Y          N               ONLINE        
B  cvm             serverD             Y          N               ONLINE


Creating a Shared Disk Group and Volumes/Filesystems

This procedure creates a shared disk group for use in a cluster environment. Disks must be placed in disk groups before they can be used by the Volume Manager.

When you place a disk under Volume Manager control, the disk is initialized. Initialization destroys any existing data on the disk.

Before you begin, make sure the disks that you will add to the shared disk group are directly attached to all the cluster nodes.
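Running the same vxdisk listing used earlier on each node is a quick way to confirm the disks are visible everywhere and not already claimed by another (possibly deported) disk group; EMC0_1 and EMC0_2 are the disks used in the example below:

serverA # vxdisk -o alldgs list | egrep 'EMC0_1|EMC0_2'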

First, make sure you are on the master node:

serverA # vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA

Initialize the disks you want to use. Make sure they are attached to all the cluster nodes. You may optionally specify the disk format.

serverA # vxdisksetup -if EMC0_1 format=cdsdisk
serverA # vxdisksetup -if EMC0_2 format=cdsdisk

Create a shared disk group with the disks you just initialized.

serverA # vxdg -s init mysharedg mysharedg01=EMC0_1 mysharedg02=EMC0_2

serverA # vxdg list
mysharedg    enabled,shared,cds   1231954112.163.serverA

Now let's add that new disk group to our cluster configuration, giving all nodes in the cluster the shared-write (sw) activation mode.

serverA # cfsdgadm add mysharedg all=sw
  Disk Group is being added to cluster configuration...

Verify that the cluster configuration has been updated.

serverA # grep mysharedg /etc/VRTSvcs/conf/config/main.cf
                ActivationMode @serverA = { mysharedg = sw }
                ActivationMode @serverB = { mysharedg = sw }
                ActivationMode @serverC = { mysharedg = sw }
                ActivationMode @serverD = { mysharedg = sw }

serverA # cfsdgadm display
  Node Name : serverA
  DISK GROUP              ACTIVATION MODE
    mysharedg                    sw

  Node Name : serverB
  DISK GROUP              ACTIVATION MODE
    mysharedg                    sw

  Node Name : serverC
  DISK GROUP              ACTIVATION MODE
    mysharedg                    sw

  Node Name : serverD
  DISK GROUP              ACTIVATION MODE
    mysharedg                    sw

We can now create volumes and filesystems within the shared diskgroup.

serverA # vxassist -g mysharedg make mysharevol1 100g
serverA # vxassist -g mysharedg make mysharevol2 100g

serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol1
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol2

Then add these volumes/filesystems to the cluster configuration so they can be mounted on any or all nodes. Mountpoints will be automatically created.

serverA # cfsmntadm add mysharedg mysharevol1 /mountpoint1
  Mount Point is being added...
  /mountpoint1 added to the cluster-configuration

serverA # cfsmntadm add mysharedg mysharevol2 /mountpoint2
  Mount Point is being added...
  /mountpoint2 added to the cluster-configuration

Display the CFS mount configurations in the cluster.

serverA # cfsmntadm display -v
  Cluster Configuration for Node: apqma519
  MOUNT POINT        TYPE      SHARED VOLUME     DISK GROUP       STATUS        MOUNT OPTIONS
  /mountpoint1    Regular      mysharevol1       mysharedg        NOT MOUNTED   crw
  /mountpoint2    Regular      mysharevol2       mysharedg        NOT MOUNTED   crw

That's it. Check your cluster configuration and try to ONLINE the filesystems on your nodes.

serverA # hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen             

A  serverA             RUNNING              0                   
A  serverB             RUNNING              0                   
A  serverC             RUNNING              0                   
A  serverD             RUNNING              0                   

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State         

B  cvm             serverA             Y          N               ONLINE        
B  cvm             serverB             Y          N               ONLINE        
B  cvm             serverC             Y          N               ONLINE        
B  cvm             serverD             Y          N               ONLINE
B  vrts_vea_cfs_int_cfsmount1 serverA             Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1 serverB             Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1 serverC             Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1 serverD             Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2 serverA             Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2 serverB             Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2 serverC             Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2 serverD             Y          N               OFFLINE
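
The cfsmount command brings the clustered mounts online on every node (the mount point names are the ones registered above); if fsclustadm is available, it can also show which node is currently the CFS primary for a given mount:

serverA # cfsmount /mountpoint1
serverA # cfsmount /mountpoint2
serverA # fsclustadm -v showprimary /mountpoint1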


Each volume will have its own service group, which looks really ugly, so you may want to modify your main.cf file and group them.

Solaris/SAN - To verify whether an HBA is connected to a fabric or not

# /usr/sbin/luxadm -e port

Found path to 4 HBA ports

/devices/pci@1e,600000/SUNW,qlc@3/fp@0,0:devctl              CONNECTED
/devices/pci@1e,600000/SUNW,qlc@3,1/fp@0,0:devctl            NOT CONNECTED
/devices/pci@1e,600000/SUNW,qlc@4/fp@0,0:devctl              CONNECTED
/devices/pci@1e,600000/SUNW,qlc@4,1/fp@0,0:devctl            NOT CONNECTED

Your SAN administrator will ask for the WWNs for Zoning. Here are some steps I use to get that information:

# prtconf -vp | grep wwn
            port-wwn:  210000e0.8b1d8d7d
            node-wwn:  200000e0.8b1d8d7d
            port-wwn:  210100e0.8b3d8d7d
            node-wwn:  200000e0.8b3d8d7d
            port-wwn:  210000e0.8b1eaeb0
            node-wwn:  200000e0.8b1eaeb0
            port-wwn:  210100e0.8b3eaeb0
            node-wwn:  200000e0.8b3eaeb0

Or you may use fcinfo, if installed.

# fcinfo hba-port
HBA Port WWN: 210000e08b8600c8
        OS Device Name: /dev/cfg/c11
        Manufacturer: QLogic Corp.
        Model: 375-3108-xx
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200000e08b8600c8
HBA Port WWN: 210100e08ba600c8
        OS Device Name: /dev/cfg/c12
        Manufacturer: QLogic Corp.
        Model: 375-3108-xx
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200100e08ba600c8
HBA Port WWN: 210000e08b86a1cc
        OS Device Name: /dev/cfg/c5
        Manufacturer: QLogic Corp.
        Model: 375-3108-xx
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200000e08b86a1cc
HBA Port WWN: 210100e08ba6a1cc
        OS Device Name: /dev/cfg/c6
        Manufacturer: QLogic Corp.
        Model: 375-3108-xx
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200100e08ba6a1cc

Here are some commands you can use for QLogic Adapters:

# modinfo | grep qlc
 76 7ba9e000  cdff8 282   1  qlc (SunFC Qlogic FCA v20060630-2.16)

# prtdiag | grep qlc
pci    66         PCI5  SUNW,qlc-pci1077,2312 (scsi-+
                  okay  /ssm@0,0/pci@18,600000/SUNW,qlc@1
pci    66         PCI5  SUNW,qlc-pci1077,2312 (scsi-+
                  okay  /ssm@0,0/pci@18,600000/SUNW,qlc@1,1
pci    33         PCI2  SUNW,qlc-pci1077,2312 (scsi-+
                  okay  /ssm@0,0/pci@19,700000/SUNW,qlc@1
pci    33         PCI2  SUNW,qlc-pci1077,2312 (scsi-+
                  okay  /ssm@0,0/pci@19,700000/SUNW,qlc@1,1

# luxadm qlgc

  Found Path to 4 FC100/P, ISP2200, ISP23xx Devices

  Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1,1/fp@0,0:devctl
  Detected FCode Version:       ISP2312 Host Adapter Driver: 1.14.09 03/08/04

  Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1/fp@0,0:devctl
  Detected FCode Version:       ISP2312 Host Adapter Driver: 1.14.09 03/08/04

  Opening Device: /devices/ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0:devctl
  Detected FCode Version:       ISP2312 Host Adapter Driver: 1.14.09 03/08/04

  Opening Device: /devices/ssm@0,0/pci@18,600000/SUNW,qlc@1/fp@0,0:devctl
  Detected FCode Version:       ISP2312 Host Adapter Driver: 1.14.09 03/08/04
  Complete


# luxadm -e dump_map /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1,1/fp@0,0:devctl
Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
0    1f0112  0         5006048accab4f8d 5006048accab4f8d 0x0  (Disk device)
1    1f011f  0         5006048accab4e0d 5006048accab4e0d 0x0  (Disk device)
2    1f012e  0         5006048acc7034cd 5006048acc7034cd 0x0  (Disk device)
3    1f0135  0         5006048accb4fc0d 5006048accb4fc0d 0x0  (Disk device)
4    1f02ef  0         50060163306043b6 50060160b06043b6 0x0  (Disk device)
5    1f06ef  0         5006016b306043b6 50060160b06043b6 0x0  (Disk device)
6    1f0bef  0         5006016330604365 50060160b0604365 0x0  (Disk device)
7    1f19ef  0         5006016b30604365 50060160b0604365 0x0  (Disk device)
8    1f0e00  0         210100e08ba6a1cc 200100e08ba6a1cc 0x1f (Unknown Type,Host Bus Adapter)


# prtpicl -v
.
.
                 SUNW,qlc (scsi-fcp, 7f0000066b)   <--- get model number, go to QLogic website
                  :_fru_parent   (7f0000dc86H)
                  :DeviceID      0x1
                  :UnitAddress   1
                  :vendor-id     0x1077
                  :device-id     0x2312
                  :revision-id   0x2
                  :subsystem-vendor-id   0x1077
                  :subsystem-id  0x10a
                  :min-grant     0x40
                  :max-latency   0
                  :cache-line-size       0x10
                  :latency-timer         0x40

.
.


#### The subsystem-id value determines the model of the HBA (a reference table maps subsystem IDs to HBA models). ####
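
To pull just those identifiers out without wading through the whole prtpicl tree, something like this should do it (the scsi-fcp class name comes from the output above):

# prtpicl -v -c scsi-fcp | egrep ':vendor-id|:device-id|:subsystem'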

Configuring NEW LUNs:

spdma501:# format < /dev/null
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b2fca,0
       1. c1t1d0
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b39cf,0
Specify disk (enter its number):


spdma501:# cfgadm -o show_FCP_dev -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             fc-private   connected    configured   unknown
c1::2100000c506b2fca,0         disk         connected    configured   unknown
c1::2100000c506b39cf,0         disk         connected    configured   unknown
c3                             fc-fabric    connected    unconfigured unknown
c3::50060482ccaae5a3,61        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,62        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,63        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,64        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,65        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,66        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,67        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,68        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,69        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,70        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,71        disk         connected    unconfigured unknown
c3::50060482ccaae5a3,72        disk         connected    unconfigured unknown
c4                             fc           connected    unconfigured unknown
c5                             fc-fabric    connected    unconfigured unknown
c5::50060482ccaae5bc,61        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,62        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,63        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,64        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,65        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,66        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,67        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,68        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,69        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,70        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,71        disk         connected    unconfigured unknown
c5::50060482ccaae5bc,72        disk         connected    unconfigured unknown
c6                             fc           connected    unconfigured unknown


spdma501:# cfgadm -c configure c3
Nov 16 17:32:25 spdma501 last message repeated 54 times
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48 (ssd2):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,47 (ssd3):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,46 (ssd4):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,45 (ssd5):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,44 (ssd6):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,43 (ssd7):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,42 (ssd8):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,41 (ssd9):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,40 (ssd10):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3f (ssd11):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3e (ssd12):
Nov 16 17:32:26 spdma501        corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3d (ssd13):

spdma501:# cfgadm -c configure c5
Nov 16 17:32:55 spdma501 last message repeated 5 times
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,48 (ssd14):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,47 (ssd15):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,46 (ssd16):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,45 (ssd17):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,44 (ssd18):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,43 (ssd19):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,42 (ssd20):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,41 (ssd21):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,40 (ssd22):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3f (ssd23):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3e (ssd24):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3d (ssd25):
Nov 16 17:32:59 spdma501        corrupt label - wrong magic number


spdma501:# format < /dev/null
Searching for disks...Nov 16 17:33:04 spdma501 last message repeated 1 time
Nov 16 17:33:07 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48 (ssd2):
Nov 16 17:33:07 spdma501        corrupt label - wrong magic numberdone

c3t50060482CCAAE5A3d61: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d62: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d63: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d64: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d65: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d66: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d67: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d68: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d69: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d70: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d71: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d72: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd67: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd68: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd69: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd70: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd71: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd72: configured with capacity of 17.04GB


AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b2fca,0
       1. c1t1d0
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b39cf,0
       2. c3t50060482CCAAE5A3d61
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3d
       3. c3t50060482CCAAE5A3d62
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3e
       4. c3t50060482CCAAE5A3d63
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3f
       5. c3t50060482CCAAE5A3d64
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,40
       6. c3t50060482CCAAE5A3d65
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,41
       7. c3t50060482CCAAE5A3d66
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,42
       8. c3t50060482CCAAE5A3d67
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,43
       9. c3t50060482CCAAE5A3d68
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,44
      10. c3t50060482CCAAE5A3d69
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,45
      11. c3t50060482CCAAE5A3d70
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,46
      12. c3t50060482CCAAE5A3d71
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,47
      13. c3t50060482CCAAE5A3d72
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48
      14. c5t50060482CCAAE5BCd67
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,43
      15. c5t50060482CCAAE5BCd68
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,44
      16. c5t50060482CCAAE5BCd69
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,45
      17. c5t50060482CCAAE5BCd70
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,46
      18. c5t50060482CCAAE5BCd71
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,47
      19. c5t50060482CCAAE5BCd72
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,48
Specify disk (enter its number):

If you don't see the new LUNs in format, run devfsadm !!!!

# /usr/sbin/devfsadm

Label the new disks !!!!

# cd /tmp


# cat format.cmd
label
quit


# for disk in `format < /dev/null 2> /dev/null | grep "^c" | cut -d: -f1`
do
  format -s -f /tmp/format.cmd $disk
  echo "labeled $disk ....."
done


Solaris/VxVM - "vxdisk list" shows status of diskgroup as 'online dgdisabled'.

Problem

What to do when "vxdisk list" shows the status of a disk group as 'online dgdisabled'.

Solution


DEVICE       TYPE       DISK         GROUP        STATUS
c0t0d0s2     sliced     disk03       rootdg       online
c1t12d0s2    sliced     disk12       raid5dg      online dgdisabled
c1t13d0s2    sliced     disk13       raid5dg      online dgdisabled
c1t14d0s2    sliced     disk14       raid5dg      online dgdisabled
c1t15d0s2    sliced     disk15       raid5dg      online dgdisabled

This situation can happen when every disk in a disk group is lost, for example due to a bad power supply, power being turned off to the disk array, a disconnected cable, etc.
This can also occur when a disk group consists of only simple and/or nopriv disks and is changed to the enclosure-based naming scheme with VERITAS Volume Manager (VxVM) 3.2.
The correction for this is explained in the VxVM 3.2 System Administrator's Guide, section 'Simple/Nopriv Disks in Non-Root Diskgroups'.

The disk group will not show in the output from vxprint -ht.

The disk group will show as disabled in vxdg list:

NAME         STATE           ID
rootdg       enabled         957541872.1025.scrollsaw
raid5dg      disabled        960304056.1215.scrollsaw

This is the output of vxdg list raid5dg:

Group:     raid5dg
dgid:      960304056.1215.scrollsaw
import-id: 0.1214
flags:     disabled
version:   0
copies:    nconfig=default nlog=default
config:    seqno=0.1052 permlen=1162 free=1154 templen=4 loglen=176
config disk c1t12d0s2 copy 1 len=1162 state=iofail failed
      config-tid=0.1052 pending-tid=0.1052
      Error: error=Disk write failure
config disk c1t13d0s2 copy 1 len=1162 state=iofail failed
      config-tid=0.1052 pending-tid=0.1052
      Error: error=Disk write failure
config disk c1t14d0s2 copy 1 len=1162 state=iofail failed
      config-tid=0.1052 pending-tid=0.1052
      Error: error=Disk write failure
config disk c1t15d0s2 copy 1 len=1162 state=iofail failed
      config-tid=0.1052 pending-tid=0.1052
      Error: error=Disk write failure
log disk c1t12d0s2 copy 1 len=176 invalid
log disk c1t13d0s2 copy 1 len=176 invalid
log disk c1t14d0s2 copy 1 len=176 invalid
log disk c1t15d0s2 copy 1 len=176 invalid

Once power to the disk has been restored, VxVM still will not see the disk group, but thinks the disk group is imported:

root@scrollsaw# vxvol start raid5vol
vxvm:vxvol: ERROR: raid5vol: Not in any imported disk group
root@scrollsaw# vxdg import raid5dg
vxvm:vxdg: ERROR: Disk group raid5dg: import failed: Disk group exists and is imported
 

This can be remedied by deporting, then importing the disk group:

vxdg deport raid5dg
vxdg import raid5dg

The disk group now shows in vxprint -ht with the volume and plexes disabled:

dg raid5dg      default      default  79000    960304056.1215.scrollsaw

dm disk12       c1t12d0s2    sliced   1599     17910400 -
dm disk13       c1t13d0s2    sliced   1599     17910400 -
dm disk14       c1t14d0s2    sliced   1599     17910400 -
dm disk15       c1t15d0s2    sliced   1599     17910400 -


v  raid5vol     raid5        DISABLED ACTIVE   409600   RAID      -
pl raid5vol-01  raid5vol     DISABLED ACTIVE   409600   RAID      2/32     RW
sd disk12-01    raid5vol-01  disk12   0        409600   0/0       c1t12d0  ENA
sd disk13-01    raid5vol-01  disk13   0        409600   1/0       c1t13d0  ENA
pl raid5vol-02  raid5vol     DISABLED LOG      1600     CONCAT    -        RW
sd disk14-01    raid5vol-02  disk14   0        1600     0         c1t14d0  ENA
pl raid5vol-03  raid5vol     DISABLED LOG      1600     CONCAT    -        RW
sd disk15-01    raid5vol-03  disk15   0        1600     0         c1t15d0  ENA

Now the volume can be started:

vxvol start raid5vol

v  raid5vol     raid5        ENABLED  ACTIVE   409600   RAID      -
pl raid5vol-01  raid5vol     ENABLED  ACTIVE   409600   RAID      2/32     RW
sd disk12-01    raid5vol-01  disk12   0        409600   0/0       c1t12d0  ENA
sd disk13-01    raid5vol-01  disk13   0        409600   1/0       c1t13d0  ENA
pl raid5vol-02  raid5vol     ENABLED  LOG      1600     CONCAT    -        RW
sd disk14-01    raid5vol-02  disk14   0        1600     0         c1t14d0  ENA
pl raid5vol-03  raid5vol     ENABLED  LOG      1600     CONCAT    -        RW
sd disk15-01    raid5vol-03  disk15   0        1600     0         c1t15d0  ENA
 


Note: You may need to verify that there are no processes (PIDs) accessing the file systems associated with the disabled disk group. If processes are still holding these volumes open, you may need to stop or kill them or, if you are running Solaris 8 with either a UFS file system or VxFS 3.4+patch02, force-unmount the file systems. Refer to the man page for umount.
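
For example, to find and clear anything holding a mount point open before deporting (the mount point name here is just for illustration):

# fuser -cu /raid5fs      # list PIDs (and users) with files open on the file system
# fuser -ck /raid5fs      # or kill them, if that is acceptable
# umount -f /raid5fs      # force the unmount (Solaris 8, UFS or VxFS 3.4+patch02)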

Solaris/VXVM: Starting VXVM Vols that are in “DISABLED RECOVER” state

When a system encounters a problem with a volume or a plex, or if Veritas Volume Manager (VxVM) has any reason to believe that the data is not synchronized, VxVM changes the kernel state (KSTATE) and state (STATE) of the volume and its plexes accordingly. The plex state can be STALE, EMPTY, NODEVICE, etc. A particular plex state does not necessarily mean that the data is good or bad; it simply reflects VxVM's perception of the data in that plex.

The vxprint utility, run with the "-h" and "-t" switches (see the vxprint man page for these and all applicable switches), displays the records in a VxVM disk group configuration, including the KSTATE and STATE of each volume and plex, shown in columns 4 and 5 respectively of the output below. When the KSTATE and STATE fields show DISABLED ACTIVE for a volume and DISABLED RECOVER for its plex, recovery steps need to be followed to bring the volume back to an ENABLED ACTIVE state so it can be mounted and the file system made accessible again.

From the output below, it can be seen that the KSTATE and STATE for the volume test are DISABLED ACTIVE and its plex test-01 is DISABLED RECOVER.


# vxprint -ht -g testdg

DG NAME NCONFIG   NLOG    MINORS   GROUP-ID  
DM NAME DEVICE    TYPE    PRIVLEN  PUBLEN   STATE
RV NAME RLINK_CNT KSTATE  STATE    PRIMARY  DATAVOLS  SRL
RL NAME RVG       KSTATE  STATE    REM_HOST REM_DG    REM_RLNK
V  NAME RVG       KSTATE  STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME VOLUME    KSTATE  STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME PLEX      DISK    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME PLEX      VOLNAME NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
            
             
dg testdg default default 84000 970356463.1203.alu    
              
dm testdg01 c1t4d0s2 sliced 2179 8920560 -  
dm testdg02 c1t6d0s2 sliced 2179 8920560 -  
              
v  test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED RECOVER 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA




Follow these steps to change KSTATE and STATE of a plex that is DISABLED RECOVER to ENABLED ACTIVE so the volume can be recovered / started and the file system mounted:

1. Change the plex test-01 to the DISABLED STALE state:

vxmend -g <diskgroup> fix stale <plex>


For example:

# vxmend -g testdg fix stale test-01


This output shows the plex test-01 as DISABLED STALE:

# vxprint -ht -g testdg
       
DG NAME NCONFIG   NLOG    MINORS   GROUP-ID  
DM NAME DEVICE    TYPE    PRIVLEN  PUBLEN   STATE
RV NAME RLINK_CNT KSTATE  STATE    PRIMARY  DATAVOLS  SRL
RL NAME RVG       KSTATE  STATE    REM_HOST REM_DG    REM_RLNK
V  NAME RVG       KSTATE  STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME VOLUME    KSTATE  STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME PLEX      DISK    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME PLEX      VOLNAME NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
              
dg testdg default default 84000 970356463.1203.alu    
              
dm testdg01 c1t4d0s2 sliced 2179 8920560 -  
dm testdg02 c1t6d0s2 sliced 2179 8920560 -  
              
v  test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED STALE 17841120 CONCAT - RW
sd testdg01-01  test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01  test-01 testdg02 0 8920560 8920560 c1t6d0 ENA


2. Change the plex test-01 to the DISABLED CLEAN state:

vxmend -g <diskgroup> fix clean <plex>

For example:

# vxmend -g testdg fix clean test-01


This output shows the plex test-01 as DISABLED CLEAN:

# vxprint -ht -g testdg
      
DG NAME NCONFIG   NLOG    MINORS   GROUP-ID  
DM NAME DEVICE    TYPE    PRIVLEN  PUBLEN   STATE
RV NAME RLINK_CNT KSTATE  STATE    PRIMARY  DATAVOLS  SRL
RL NAME RVG       KSTATE  STATE    REM_HOST REM_DG    REM_RLNK
V  NAME RVG       KSTATE  STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME VOLUME    KSTATE  STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME PLEX      DISK    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME PLEX      VOLNAME NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
              
dg testdg default default 84000 970356463.1203.alu    
              
dm testdg01 c1t4d0s2 sliced 2179 8920560 -  
dm testdg02 c1t6d0s2 sliced 2179 8920560 -  
              
v  test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED CLEAN 17841120 CONCAT - RW
sd testdg01-01  test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01  test-01 testdg02 0 8920560 8920560 c1t6d0 ENA


3. Start the volume test:

vxvol -g <diskgroup> start <volume>

For example:

# vxvol -g testdg start test

This output shows that the volume test and its plex test-01 are both ENABLED ACTIVE:

# vxprint -ht -g testdg
       
DG NAME NCONFIG   NLOG    MINORS   GROUP-ID  
DM NAME DEVICE    TYPE    PRIVLEN  PUBLEN   STATE
RV NAME RLINK_CNT KSTATE  STATE    PRIMARY  DATAVOLS  SRL
RL NAME RVG       KSTATE  STATE    REM_HOST REM_DG    REM_RLNK
V  NAME RVG       KSTATE  STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME VOLUME    KSTATE  STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME PLEX      DISK    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME PLEX      VOLNAME NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
              
dg testdg default default 84000 970356463.1203.alu    
              
dm testdg01 c1t4d0s2 sliced 2179 8920560 -  
dm testdg02 c1t6d0s2 sliced 2179 8920560 -  
              
v  test - ENABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test ENABLED ACTIVE 17841120 CONCAT - RW
sd testdg01-01  test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01  test-01 testdg02 0 8920560 8920560 c1t6d0 ENA



4. If the file system is a Veritas File System (VxFS), mount the volume on its associated mount point (refer to the /etc/vfstab file if the mount point location is not known):

mount -F vxfs /dev/vx/dsk/<diskgroup>/<volume> /<mount_point>

For example:

# mount -F vxfs /dev/vx/dsk/testdg/test /testvol


Note: An error may be generated stating that the file system needs to be checked for consistency. If this occurs, run the VxFS-specific fsck utility (/usr/lib/fs/vxfs/fsck). By default it replays the intent log rather than performing a full structural file system check, which is usually sufficient to mark the file system CLEAN and allow the volume to be mounted.
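
For the example volume above, that would be something along these lines (intent-log replay by default):

# fsck -F vxfs /dev/vx/rdsk/testdg/test
# mount -F vxfs /dev/vx/dsk/testdg/test /testvol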