September 20, 2010

clvxvm_impl: “vxencap” failed for root disk.

Posted in Sun Cluster at 11:17 am by alessiodini

Recently I did boot disk encapsulation with Sun Cluster and VxVM.

Here are the details:

# clvxvm encapsulate -v

Current major number of vxio driver is consistent across cluster nodes.
Successfully completed initialization.
Verifying encapsulation requirements.
Arranging for Volume Manager encapsulation of the root disk.
Reinitialized the volboot file…
clvxvm_impl: "vxencap" failed for root disk.
clvxvm_impl: The log file for this run is created at : /var/cluster/logs/install/clvxvm.log.5125.

-_-"

# cat /var/cluster/logs/install/clvxvm.log.5125

clvxvm_impl -e -v

Current major number of vxio driver is consistent across cluster nodes.
Successfully completed initialization.
Verifying encapsulation requirements.
Arranging for Volume Manager encapsulation of the root disk.
Reinitialized the volboot file…
clvxvm_impl: "vxencap" failed for root disk.

I checked the /etc/vx/volboot file; it was fine.
After some analysis, Gianluca reminded me to check whether the other disks (placed in rootdg before it was deleted) still had their private and public regions:

# vxdisk -e list

DEVICE   TYPE   DISK   GROUP   STATUS   OS_NATIVE_NAME
[...]
Disk_0   auto   -      -       online   c6t2000002162DD411Bd0s2
Disk_3   auto   -      -       online   c6t2000002162FB2342d0s2

I had erased both regions on Disk_0, but I forgot to do the same on Disk_3!
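In hindsight, both leftover disks could have been spotted in one pass: in the `vxdisk -e list` output, a disk that is online but has no disk name and no disk group (both columns "-") is exactly a disk that still carries VxVM regions without belonging anywhere. A minimal sketch, with the listing above inlined into a temp file for illustration, and the actual cleanup commands left as comments since they are destructive (assumption: `vxdiskunsetup` lives under /etc/vx/bin, as on a standard VxVM install):

```shell
# Re-create the `vxdisk -e list` output from above, inlined here for
# illustration (hypothetical temp file name).
cat > /tmp/vxdisk.out <<'EOF'
DEVICE TYPE DISK GROUP STATUS OS_NATIVE_NAME
Disk_0 auto - - online c6t2000002162DD411Bd0s2
Disk_3 auto - - online c6t2000002162FB2342d0s2
EOF

# Disks that are online but have "-" in both the DISK and GROUP
# columns still carry private/public regions from the deleted rootdg.
awk 'NR > 1 && $3 == "-" && $4 == "-" && $5 == "online" {print $1}' /tmp/vxdisk.out

# Once identified, the regions can be cleared on each of them, e.g.:
#   /etc/vx/bin/vxdiskunsetup -C Disk_0
#   /etc/vx/bin/vxdiskunsetup -C Disk_3
# (left commented out: vxdiskunsetup wipes the VxVM regions for real)
```

Filtering like this would have caught Disk_3 together with Disk_0 instead of leaving it behind.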
Then I ran the command again:

# clvxvm encapsulate -v

Current major number of vxio driver is consistent across cluster nodes.
Successfully completed initialization.
Verifying encapsulation requirements.
Arranging for Volume Manager encapsulation of the root disk.
Reinitialized the volboot file…
The setup to encapsulate root disk is complete…
Updating /global/.devices entry in /etc/vfstab.
This node will be re-booted in 20 seconds.
Type Ctrl-C to abort ………………..Sep 16 17:55:51 node71 reboot: rebooted by root
Sep 16 17:55:51 node71 Cluster.RGM.rgmd: fatal: received signal 15
Sep 16 17:55:51 node71 cl_eventlogd[3464]: Going down on signal 15.
Sep 16 17:55:51 node71 rpcbind: rpcbind terminating on signal.
Sep 16 17:55:51 node71 : clexecd: Going down on signal 15.
Sep 16 17:55:51 node71 Cluster.Framework: clexecd: Going down on signal 15.
Sep 16 17:55:51 node71 Cluster.RGM.zonesd: fatal: received signal 15
Terminated
Sep 16 17:55:51 node71 syslogd: going down on signal 15
Terminated
root@node71 # syncing file systems… done
WARNING: CMM: Node being shut down.
rebooting…
Resetting …
