November 27, 2017

Solaris QFS

Posted in Solaris at 3:28 pm by alessiodini


Recently the customer I'm working for asked me to support him on a storage refresh project.
On the host side he has multiple VMware farms, Linux systems and three Solaris clusters running:

  • Solaris 10 on Sparc
  • QFS shared filesystem
  • Oracle RAC

I had sincerely forgotten tons of things about Solaris, but I was happy to run “clq status” again, it was exciting 🙂

I also have the opportunity to play with QFS, which I had never seen before. I'm dealing with an old version but I can't wait to play more with the sam* commands!!
I finally understand the mcf file syntax and the hosts file under the /etc/opt/SUNWsamfs directory.
At the same time I'm dealing with SRDF tasks, and I need to learn more about storage, EMC VMAX in this case 😀
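
For reference, this is roughly what the mcf of a shared QFS filesystem looks like ( a minimal sketch: family set name, equipment ordinals and device paths below are invented, not taken from the customer's setup ):

# /etc/opt/SUNWsamfs/mcf
#
# Equipment Id        Eq  Type  Family Set  State  Params
sharefs1              10  ma    sharefs1    on     shared
/dev/dsk/c2t0d0s0     11  mm    sharefs1    on
/dev/dsk/c2t1d0s0     12  mr    sharefs1    on

And the shared hosts file, one line per node, with the “server” keyword marking the current metadata server ( again, hostnames and interfaces are placeholders ):

# /etc/opt/SUNWsamfs/hosts.sharefs1
# Host   Host-Interfaces  Server-Priority  Unused  Server
node1    node1-priv       1                -       server
node2    node2-priv       2                -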


September 12, 2017

Solaris and SPARC are dead. No more words

Posted in Solaris at 10:27 am by alessiodini


Today I’m so depressed about this…
https://www.networkworld.com/article/3222707/data-center/the-sun-sets-on-solaris-and-sparc.html

My young dream was to work for Oracle company and with SPARC/Solaris environments.
Bye bye… 😦

July 15, 2016

Oracle S7 Processor!

Posted in Solaris at 9:06 am by alessiodini


Yesterday I was at Oracle, following a workshop about the new Oracle S7 and MiniCluster systems.
I was impressed by Oracle's focus on security in every technical scenario ( Database , Analytics , OLTP , … ) and by the performance! They used multiple open source benchmark tools, and currently these systems show a clear advantage over the most powerful current x86 processors.

Can't wait to work on these new systems !! :D

March 3, 2016

How to check GRUB installation on a specific disk under Linux or Solaris

Posted in Solaris at 10:51 am by alessiodini


It's been a long time since my last post.
I worked a lot as an LPIC trainer, with Red Hat Cloud products, and I sometimes help my colleagues with Red Hat and Solaris patching tasks.

During those patching tasks I wondered: how can I check whether a specific disk has GRUB installed?
As far as I know there is no dedicated command, so I studied, looked around on the Internet, and found these commands:

Red Hat box: ( with /dev/sda as example )
# dd bs=512 count=1 if=/dev/sda | od -Ax -tx1z -v

Solaris box: ( with c0t0d0 as example )
# dd bs=512 count=1 if=/dev/rdsk/c0t0d0s2 | od -c
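
To spot GRUB more easily in that first sector, I also like this variant ( a sketch assuming /usr/bin/strings is available, which it normally is on both Red Hat and Solaris ): if GRUB stage1 is installed, the literal “GRUB” marker string shows up, otherwise the grep returns nothing:

# dd bs=512 count=1 if=/dev/sda 2>/dev/null | strings | grep GRUB              ( Linux )
# dd bs=512 count=1 if=/dev/rdsk/c0t0d0s2 2>/dev/null | strings | grep GRUB    ( Solaris )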

This is nice!!
I also played on a Solaris box with the installgrub command, checking the disk before and after the GRUB installation 🙂

Why should I verify this? Because I love to know as much as possible about any task I do 🙂
Have fun 😀

May 6, 2015

Solaris 10: svc:/system/sysidtool:net offline* transitioning to online

Posted in Solaris at 12:24 pm by alessiodini


Recently I patched a Solaris 10 system with a few local zones.
After the attach phase, I booted the zones and checked their status with the svcs -xv command.
I saw something strange… the service svc:/system/sysidtool:net was in the following state:

offline* transitioning to online

I waited a couple of minutes but nothing changed!!
After some investigation I found the file <zonepath>/root/etc/.UNCONFIGURED , so I figured that if that file exists the service waits for a user configuration ( via zlogin -C ) and stays offline forever until it gets one !!

So, the correct patching procedure to avoid this issue is the following ( see the command sketch after the list ):

– detach the zone
– attach the zone via zoneadm attach -u
– check for and remove the <zonepath>/root/etc/.UNCONFIGURED file
– boot the zone
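
In commands, for a hypothetical zone called myzone with zonepath /zones/myzone ( both names are just placeholders ), the sketch looks like this:

# zoneadm -z myzone detach
# zoneadm -z myzone attach -u
# rm /zones/myzone/root/etc/.UNCONFIGURED
# zoneadm -z myzone boot
# zlogin myzone svcs -xv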

Another solution could be to complete the configuration via the zlogin console ( it's the post-install configuration where you must insert hostname, network, etc. )

Happy Patching 🙂

April 7, 2015

Oracle T5-2 server: how to make hardware raid

Posted in Solaris at 10:37 am by alessiodini


Making a hardware RAID on a T5-2 server is very fun 🙂
The first important thing is to locate which disks must be used for the RAID.
If you have more than two disks on the server, the question is: “which disks am I selecting for the RAID?”

It's simple to answer this question. Just follow the official Oracle documentation:

0) Go to OK prompt
1) Get the physical devices list

ok show-devs

/pci@300/pci@1/pci@0/pci@4/scsi@0

/pci@340/pci@1/pci@0/pci@2/scsi@0

2) Select a controller

ok select /pci@300/pci@1/pci@0/pci@4/scsi@0

3) Go in front of the server and you will see the disks blinking!!! So it's easy to identify which disks we are using.
4) Get the targets

{0} ok show-children

FCode Version 1.00.63, MPT Version 2.00, Firmware Version 14.00.00.00

Target 9
Unit 0 Disk HITACHI H109030SESUN300G A690 585937500 Blocks, 300 GB
SASDeviceName 5000cca0546f7da4 SASAddress 5000cca0546f7da5 PhyNum 0
Target a
Unit 0 Removable Read Only device TEAC DV-W28S-A 9.2A
SATA device PhyNum 3
Target b
Unit 0 Disk HITACHI H109030SESUN300G A690 585937500 Blocks, 300 GB
SASDeviceName 5000cca0546f5d48 SASAddress 5000cca0546f5d49 PhyNum 1

5) Make the RAID ( in my case I made a RAID-1 ) between the targets

{0} ok 9 b create-raid1-volume
Target 9 size is 583983104 Blocks, 298 GB
Target b size is 583983104 Blocks, 298 GB
The volume can be any size from 1 MB to 285148 MB
What size do you want? [285148]
Volume size will be 583983104 Blocks, 298 GB
Enter a volume name: [0 to 15 characters] boot
Volume has been created

6) Check the volume info

{0} ok show-volumes
Volume 0 Target 143 Type RAID1 (Mirroring)
Name boot WWID 00de54d908dba3cb
Optimal Enabled Background Init In Progress
2 Members 583983104 Blocks, 298 GB
Disk 0
Primary Optimal
Target 9 HITACHI H109030SESUN300G A690 PhyNum 0
Disk 1
Secondary Optimal
Target b HITACHI H109030SESUN300G A690 PhyNum 1

7) Unselect the controller ( disks are still blinking! )

unselect-dev /pci@300/pci@1/pci@0/pci@4/scsi@0

At this point you can boot the server and see the new device via the format command.
For maintenance, you can manage the volumes directly from Solaris thanks to the sas2ircu utility.
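
For example, something like this from the running Solaris ( controller number 0 here is just an assumption, the LIST output tells you the real one ):

# sas2ircu LIST
# sas2ircu 0 DISPLAY
# sas2ircu 0 STATUS

LIST shows the controllers, DISPLAY shows the volume and its member disks, and STATUS shows the volume state and the background sync progress.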

April 2, 2015

Oracle T5-2 installation with Solaris 11.1 and LDOMS

Posted in Solaris at 8:46 am by alessiodini


I recently worked 3 days for a new customer.
I installed a T5-2 server with Solaris 11.1 and I made a couple of LDOMs with Solaris 10 for disaster recovery purposes.
I must thank my friend Michele Vecchiato because he helped me get an old Solaris 10 ISO and an old patch bundle. Thank you Michele :))

January 31, 2015

Solaris 10: How to upgrade from u3 to u11 and how to migrate UFS root filesystem to ZFS filesystem on x86 platform

Posted in Solaris at 5:04 pm by alessiodini


These days I'm doing a lot of upgrade tasks on Solaris hosts.
The Solaris upgrade involves a Veritas upgrade too, and it's fun 🙂

Recently I wrote a procedure where I upgrade Solaris 10u3 to Solaris 10u11 with Live Upgrade on the x86 platform.
At the same time I changed the root filesystem, migrating it from UFS to ZFS. It was a good “technology jump” 🙂
The procedure is here. I want to write one more for the SPARC platform.
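
In short, the Live Upgrade flow looks roughly like this ( just a sketch: the pool name, boot environment names, device and media path below are placeholders, not the exact ones from my procedure ):

# zpool create rpool c1t1d0s0
# lucreate -c sol10u3 -n sol10u11 -p rpool
# luupgrade -u -n sol10u11 -s /mnt/sol10u11_media
# luactivate sol10u11
# init 6

The lucreate with -p is the step that actually copies the UFS root into the new ZFS boot environment.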
For any question or procedure improvement just let me know ! 🙂

Alessio.

July 14, 2014

Solaris 10: fcinfo output bug?

Posted in Solaris at 10:57 am by alessiodini


These days I'm involved in a storage migration project.
During some analysis, I found that many Solaris 10 systems show a big difference between the fcinfo and luxadm -e port commands.

For example, on a system with 4 paths, I found that luxadm -e port gives me only 2 paths with status CONNECTED.
Running fcinfo hba-port I see 4 ports with status ONLINE, instead of 2.
Why? I saw this on systems with 1 day of uptime too.
Could it be an fcinfo output bug?
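
For the record, this is how I put the two views side by side ( just a quick sketch, adapt the grep to your exact output ):

# fcinfo hba-port | egrep "HBA Port WWN|State"
# luxadm -e port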

July 8, 2014

Solaris 11.1 and SAN zoning issue

Posted in Solaris at 2:00 pm by alessiodini


Scenario:
– Solaris 11.1
– OSC 4.1 ( two-node cluster )
– LDOM with I/O domain configuration
– Brocade Switches
– EMC VNX 5200 storage

Recently I faced an issue where, rebooting one cluster node, I lost LUN access from the other node.
For example, rebooting node “1” I saw on both nodes:

Jun 21 13:36:02 xxx fctl: [ID 517869 kern.warning] WARNING: fp(5)::GPN_ID for D_ID=a0300 failed
Jun 21 13:36:02 xxx fctl: [ID 517869 kern.warning] WARNING: fp(5)::N_x Port with D_ID=a0300, PWWN=21000024ff470d9e disappeared from fabric
Jun 21 13:36:02 xxx fctl: [ID 517869 kern.warning] WARNING: fp(8)::GPN_ID for D_ID=140300 failed
Jun 21 13:36:02 xxx fctl: [ID 517869 kern.warning] WARNING: fp(8)::N_x Port with D_ID=140300, PWWN=21000024ff470cb8 disappeared from fabric
Jun 21 13:36:44 xxx fctl: [ID 517869 kern.warning] WARNING: fp(5)::N_x Port with D_ID=a0300, PWWN=21000024ff470d9e reappeared in fabric
Jun 21 13:36:44 xxx fctl: [ID 517869 kern.warning] WARNING: fp(8)::N_x Port with D_ID=140300, PWWN=21000024ff470cb8 reappeared in fabric

As the messages say, both nodes saw a PWWN disappear and reappear over about 40 seconds.
We opened a case with EMC, and they told us that our zoning configuration was not supported because there were multiple initiators configured per zone, instead of a single initiator.
After the storage administrator applied the new zoning configuration, everything was fine on the Solaris side 🙂
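
For reference, single-initiator zoning on a Brocade switch looks roughly like this: one zone per HBA port, containing that single initiator plus the storage ports it needs. A sketch ( alias, zone and config names are invented, the HBA PWWN is the one from the log above and the VNX port WWN is a placeholder ):

switch:admin> alicreate "node1_hba0", "21:00:00:24:ff:47:0d:9e"
switch:admin> alicreate "vnx_spa0", "<VNX SP-A port WWN>"
switch:admin> zonecreate "z_node1_hba0_vnx_spa0", "node1_hba0; vnx_spa0"
switch:admin> cfgadd "prod_cfg", "z_node1_hba0_vnx_spa0"
switch:admin> cfgsave
switch:admin> cfgenable "prod_cfg"

( cfgadd assumes the prod_cfg configuration already exists; otherwise cfgcreate is used for the first zone. )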
