Linux ACL & standard permission question


After a long time, yesterday I played a bit with ACLs on Redhat 6.7 nodes.
Doing some experiments I saw a strange thing:


[root@node1 ~]# useradd pippo
[root@node1 ~]# touch alessio
[root@node1 ~]# chmod 400 alessio
[root@node1 ~]# setfacl -m u:pippo:rwx alessio

[root@node1 ~]# getfacl alessio
# file: alessio
# owner: root
# group: root
user::r--
user:pippo:rwx
group::---
mask::rwx
other::---

[root@node1 ~]# chmod 600 alessio
[root@node1 ~]# getfacl alessio
# file: alessio
# owner: root
# group: root
user::rw-
user:pippo:rwx #effective:---
group::---
mask::---
other::---

This could be a noob question… but why does the effective ACL change from rwx to ---?
I suppose there is a relationship between the configured ACL and those initial standard permissions.
I need to know more about it! 🙂
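For the record, the link between the two is the ACL mask: once a file has an extended ACL, the middle (group) digit of a chmod mode no longer sets the owning group's permissions but rewrites the mask, and a named user's effective rights are his ACL entry ANDed with that mask (setfacl itself also recalculates the mask when you add an entry, which is why the mask showed rwx right after the setfacl above). A small arithmetic sketch of the rule in plain shell (it models the calculation instead of calling setfacl; `entry=7` stands for the user:pippo:rwx entry):

```shell
entry=7                                   # rwx, i.e. the user:pippo:rwx entry
for mode in 400 600 640 670; do
  mask=$(echo "$mode" | cut -c2)          # chmod's group digit becomes the new ACL mask
  eff=$(( entry & mask ))                 # effective rights = named entry AND mask
  printf 'chmod %s -> mask=%s effective(user:pippo)=%s\n' "$mode" "$mask" "$eff"
done
```

So chmod 600 leaves the mask at 0 and user:pippo's effective rights at --- (exactly the #effective:--- shown by getfacl), while chmod 670 would restore them to rwx.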

Redhat Openstack 7 Undercloud Installation Bug


In these days I’m working with Redhat Openstack 6/7 versions.

When I installed the undercloud component I got this error:

WARNING: keystoneclient.auth.identity.generic.base Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
WARNING: keystoneclient.auth.identity.generic.base Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
WARNING: keystoneclient.auth.identity.generic.base Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
ERROR: openstack Could not determine a suitable URL for the plugin
+ openstack role create ResellerAdmin
WARNING: keystoneclient.auth.identity.generic.base Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
WARNING: keystoneclient.auth.identity.generic.base Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
ERROR: openstack Could not determine a suitable URL for the plugin
[2015-09-19 19:58:29,620] (os-refresh-config) [ERROR] during post-configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit status 1]

[2015-09-19 19:58:29,621] (os-refresh-config) [ERROR] Aborting...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 526, in install
_run_orc(instack_env)
File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 461, in _run_orc
_run_live_command(args, instack_env, 'os-refresh-config')
File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 297, in _run_live_command
raise RuntimeError('%s failed. See log for details.', name)
RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')
ERROR: openstack Command 'instack-install-undercloud' returned non-zero exit status 1

I read the official documentation several times and looked on Redhat Solutions, but I got stuck here.
With some analysis I found that if I add the netmask to the undercloud_public_vip and undercloud_admin_vip directives, the script uses the IP with the whole netmask:
(undercloud.conf)

undercloud_public_vip = 172.16.111.2/24
undercloud_admin_vip = 172.16.111.3/24

(…)
++ export OS_AUTH_URL=https://172.16.111.2/24:13000/v2.0
++ OS_AUTH_URL=https://172.16.111.2/24:13000/v2.0
++ hiera controller_public_vip
+ REGISTER_SERVICE_OPTS='-p 172.16.111.2/24'
++ hiera controller_public_vip
+ INIT_KEYSTONE_OPTS='-s 172.16.111.2/24'
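The failure mode is plain string interpolation: whatever value sits in the VIP directive gets spliced verbatim into the Keystone auth URL, so a CIDR suffix lands in the middle of it. A minimal sketch of that (`vip` is an illustrative variable, not the script's actual one):

```shell
vip="172.16.111.2/24"                 # value taken verbatim from undercloud.conf
echo "https://${vip}:13000/v2.0"      # https://172.16.111.2/24:13000/v2.0 -- not a valid URL
vip="${vip%/*}"                       # stripping the prefix length would fix it
echo "https://${vip}:13000/v2.0"      # https://172.16.111.2:13000/v2.0
```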

That's why the command gets an error!

If I don't write the netmasks, the command configures both IPs with a /32 netmask!

I updated the Redhat bug 1251271; I'm curious to see what Redhat will reply :)

Redhat Ceph Storage


For a couple of weeks now I've been playing with Redhat Ceph Storage, experimenting with the cluster and some features, but at the moment I need to reinstall all the systems because I destroyed them!! 🙂

I'm also interested in adding Ceph to Openstack and playing with both together.
Starting from next week I will have 6 physical systems available for this purpose… let's see what will happen! 🙂

Redhat on DL380P G8: Illegal OpCode error


Recently I installed RHEL 5.5 on an HP DL380p G8.
I made a standard installation (network, LVM, filesystems, packages), and on rebooting the system I got a red screen with the error "illegal_opcode".
I had never faced this issue before and I was thinking "Why??"
I was in front of the rack without my notebook or internet access, so I called some colleagues and they helped me search the web. After an hour of tests without success, a colleague and I found the solution. Before the installation we had created 2 volumes via hardware RAID. The issue was with these 2 volumes: during the installation, for some reason, Linux saved the data on one disk and GRUB on the other.
So, if you face this issue, don't panic!!
Just:

1) Boot in rescue mode and let it mount the filesystems
2) # df -h | grep boot (note the device and the mounted filesystem path)
2a) # umount -l /mnt/sysimage/boot (the path found in the previous step)
3) # chroot /mnt/sysimage
4) # mount /boot
5) # /sbin/grub-install <boot disk> (in my case grub-install /dev/cciss/c0d1)

Leave the chroot and reboot.
Good luck 🙂

Got Certified!!!


Today I got the results of the EX300 exam, and I passed!!
After one month of hard studying and testing, I'm now an RHCE! 😀
The certification number is 130-103-989

Redhat question: can I retrieve the physical systems associated with a specific subscription?


Yesterday I did an analysis of the subscriptions on Redhat Network; the customer asked me:

“Which subscriptions are expired?”
“Which Subscriptions will expire soon?”
“Which systems will be out of RHN access?”

I have studied a lot and figured out the following:

A system registered with the RHN Classic method can't be directly associated with a specific subscription. This is confirmed in an official note from Redhat.

So how is it possible to retrieve this info? The solution for new systems is to register them with Subscription Manager (the new method), and for the old systems to migrate each subscription from RHN Classic to Subscription Manager! The migration can be done with the tools installed by the subscription-manager-* packages.
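As a rough sketch of that workflow (the package and command names below are from the RHEL 6-era tooling, so double-check them against your release's documentation):

```
[root@client ~]# yum install subscription-manager subscription-manager-migration
[root@client ~]# rhn-migrate-classic-to-rhsm            # moves the system from RHN Classic to RHSM
[root@client ~]# subscription-manager list --consumed   # start/end dates per attached subscription
[root@client ~]# subscription-manager status            # overall entitlement status of the system
```

Once the systems are on Subscription Manager, the customer's three questions can be answered per system from the "Ends:" field of `list --consumed`.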

Planning for RHCE


These days I'm studying a lot to take the RHCE exam. My goal? At the moment I want to become an RHCA as soon as I can 🙂
I hope to work with Linux for at least another year. After the RHCE I would like to take the Redhat Cluster exam, because I took the course and I have all the material.
Let's see what I will do then!

Linux Redhat: How to see which physical card is eth0 or eth1?


Yesterday I had to find out which physical card was eth0 or eth1 because of a suspected fault.
I looked into how to identify the right card from Linux; it seems this is not an easy task 🙂
I ran lspci and checked the files under /sys; I got a lot of info but I still was not sure about the interface.
Looking on the web I found an interesting option of the ethtool command:

ethtool -p eth2 10 (it makes the port LED of eth2 blink for 10 seconds)

It worked for me 🙂
This is the thread: http://magazine.redhat.com/2007/09/05/tip-from-an-rhce-which-interface-is-eth0/

Heartbeat: heartbeat[5855]: 2013/08/13_20:58:15 ERROR: Cannot rexmit pkt 133 seqno too low


Recently I worked on an issue on a two-node cluster.
The software stack was: CentOS 5 + Heartbeat + DRBD + Pacemaker

Due to some issue I rebooted one node and tried to talk to the other via manual heartbeat/crm commands.
After this I saw these strange messages (never seen before):

heartbeat[5855]: 2013/08/13_20:58:15 ERROR: Cannot rexmit pkt 133 for system04: seqno too low

Looking at the documentation and on the web I did not find any solution or tip related to this message.
I verified that the network connection was up and healthy: no collisions, correct speed and duplex, everything was fine. First I tried stopping heartbeat on one node, but the messages did not stop. Then I halted that node entirely, and nothing changed!

After other tests I resolved it by rebooting both nodes.
This is a bad solution, but in my experience heartbeat/DRBD/Pacemaker setups are not very stable. I understand that with DRBD there is no need to buy a storage array, but it's better to have even a small DAS and use the CentOS native Cluster Suite!