September 18, 2018

RHEV 4.2: How to run commands via vdsm client

Posted in Redhat Enterprise Virtualization at 3:03 pm by alessiodini


On RHV 4.2 it can be useful to run commands via VDSM, especially during troubleshooting. In my case I have a cluster where one hypervisor is in “Non Operational” status due to a CPU flag missing from the cluster.
On old releases I could use the vdsClient command, but it has since been replaced. It changed both name and syntax: now I use the vdsm-client command.

For example, how can I check the CPU flags among the hypervisors? Easy to do!

# vdsm-client Host getCapabilities | grep -i flags
"cpuFlags": "fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,sdbg,fma,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,lahf_lm,abm,3dnowprefetch,epb,cat_l3,cdp_l3,intel_pt,ibrs,ibpb,stibp,tpr_shadow,vnmi,flexpriority,ept,vpid,fsgsbase,tsc_adjust,bmi1,hle,avx2,smep,bmi2,erms,invpcid,rtm,cqm,rdt_a,rdseed,adx,smap,xsaveopt,cqm_llc,cqm_occup_llc,cqm_mbm_total,cqm_mbm_local,dtherm,ida,arat,pln,pts,spec_ctrl,intel_stibp,model_n270,model_Broadwell-IBRS,model_coreduo,model_SandyBridge-IBRS,model_Nehalem,model_Haswell-noTSX,model_Westmere-IBRS,model_Broadwell-noTSX,model_Haswell-noTSX-IBRS,model_Nehalem-IBRS,model_SandyBridge,model_core2duo,model_IvyBridge,model_Penryn,model_IvyBridge-IBRS,model_Westmere,model_Haswell-IBRS,model_Broadwell-noTSX-IBRS,model_Haswell,model_Conroe,model_Broadwell",

Let’s compare this output with the one from the other hypervisors 😀
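A quick way to compare is to collect the flags from every hypervisor and diff them. A minimal sketch (the hostnames kvm01 and kvm02 are placeholders for your own hosts):

for h in kvm01 kvm02; do
    ssh "$h" 'vdsm-client Host getCapabilities' | grep -i cpuflags | tr ',' '\n' | sort > /tmp/flags.$h
done
diff /tmp/flags.kvm01 /tmp/flags.kvm02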


September 11, 2018

RHEV 4.2 metrics !!

Posted in Redhat Enterprise Virtualization at 2:14 pm by alessiodini


Starting with the 4.2 release it is possible to collect RHEV events and metrics and browse them from Kibana.
The new metrics store is not native: it must be installed manually. In my case I created a new guest inside RHEV and installed the metrics store on top of it. Basically it contains Elasticsearch and Kibana running as containers inside OpenShift. Yes, you read that right: OpenShift! So you end up with a small OpenShift installation running multiple containers. This is the installation guide.

Below is a screenshot of the Kibana examples dashboard:

August 17, 2018

RHEV 4.2 – Rhevm shell

Posted in Redhat Enterprise Virtualization at 8:22 am by alessiodini


As of the 4.2 release of Red Hat Virtualization, the rhevm shell is deprecated. The command is still available, but don’t use it: it “does not know” the new objects introduced with the new API version.
So, how can you deal with automation now? The answer is: Ansible!
This is a great opportunity to have fun with Ansible. Where can you begin?
With a first, simple playbook:

- hosts: localhost
  connection: local

  tasks:
    - name: Obtain SSO token
      ovirt_auth:
        url: "https://rhevm_url/ovirt-engine/api"
        username: "admin@internal"
        password: "insert_the_password"
        insecure: "true"

    - name: List vms
      ovirt_vms_facts:
        pattern: cluster=Default
        auth: "{{ ovirt_auth }}"

    - debug:
        var: ovirt_vms

You first have to install the ovirt-ansible-roles package, then you can run this playbook.
It does not use external variables and it connects to rhevm in insecure mode, so this is the simplest playbook you can run against rhevm to understand how it works.
This playbook returns every detail about the VMs running on rhevm. It’s extremely verbose, but as I said, this is just a starting point 🙂
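If you want to try it, the steps look roughly like this (the playbook filename list_vms.yml is my own choice; the ovirt_* modules also need the oVirt Python SDK):

# yum install ovirt-ansible-roles python-ovirt-engine-sdk4
# ansible-playbook list_vms.yml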

Have fun!

August 14, 2018

Linux Redhat 7: How to clear boot directory

Posted in Linux at 9:54 am by alessiodini


Recently I noticed that multiple VMware Linux templates had the /boot filesystem more than 90% full.
If you look on the web you will find a lot of solutions based on just removing kernel RPMs. I disagree!
I began clearing the /boot directory by removing the oldest kernels, but this was not enough.
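For the record, the standard way to trim old kernels on RHEL 7 is package-cleanup from the yum-utils package, keeping for example only the two most recent:

# yum install yum-utils
# package-cleanup --oldkernels --count=2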

At this point you must go to the /boot directory and look for rescue files:

# ls | grep rescue

initramfs-0-rescue-b20d7fe5b15140269ad2c2e51af4735e.img

vmlinuz-0-rescue-b20d7fe5b15140269ad2c2e51af4735e

initramfs-0-rescue-80405299bcbc4ebabf5827a44c193.img

vmlinuz-0-rescue-80405299bcbc4ebabf5827a44c193

initramfs-0-rescue-d58aadc169974f0ea93d637c046d764b.img

vmlinuz-0-rescue-d58aadc169974f0ea93d637c046d764b

None of these rescue files belongs to any RPM package, so you can manually delete the oldest pairs. I suggest following these steps (a command sketch follows the list):

1) Look for a single rescue pair. If you want to know which kernel it belongs to, you can run lsinitrd initramfs-0-rescue-<id>.img
2) Try to boot the system using the rescue entry from the previous point
3) If everything worked fine, you can boot with the latest kernel and delete each old pair of rescue files.
4) Lastly, update the GRUB2 configuration: grub2-mkconfig -o /boot/grub2/grub.cfg
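Putting steps 1, 3 and 4 together, the commands look roughly like this, using the first rescue pair from the listing above (rpm -qf confirms the files are not owned by any package; the grep on modules reveals which kernel version the rescue image was built for):

# rpm -qf /boot/vmlinuz-0-rescue-b20d7fe5b15140269ad2c2e51af4735e
# lsinitrd /boot/initramfs-0-rescue-b20d7fe5b15140269ad2c2e51af4735e.img | grep -m1 modules
# rm /boot/vmlinuz-0-rescue-b20d7fe5b15140269ad2c2e51af4735e /boot/initramfs-0-rescue-b20d7fe5b15140269ad2c2e51af4735e.img
# grub2-mkconfig -o /boot/grub2/grub.cfg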

Thanks to Paolo Fruci and Marco Simonetti for helping me deal with this issue; we played together 🙂

August 10, 2018

RHEV 4.2: How to check hosted-engine status via cli

Posted in Redhat Enterprise Virtualization at 2:41 pm by alessiodini


The hosted engine is a special virtual machine inside the RHEV architecture.
It has a dedicated command: hosted-engine.

How can I check via CLI on which host the engine is running?
How can I check which hosts are valid candidates for hosting the engine?

The answer is:

[root@dcs-kvm01 ~]# hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : dcs-kvm01.net
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : caa0ce0d
local_conf_timestamp : 353495
Host timestamp : 353495
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=353495 (Fri Aug 10 16:34:24 2018)
host-id=1
score=3400
vm_conf_refresh_time=353495 (Fri Aug 10 16:34:25 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False

--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : dcs-kvm02.net
Host ID : 2
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : b1f91d1e
local_conf_timestamp : 351112
Host timestamp : 351112
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=351112 (Fri Aug 10 16:34:20 2018)
host-id=2
score=3400
vm_conf_refresh_time=351112 (Fri Aug 10 16:34:20 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False

In my case the hosted engine is running on top of the dcs-kvm02 host. Another useful piece of information is the Score: each host has different metrics (ping to the default gateway, filesystems, CPU load, etc.) and each metric has a score. Summing all the scores, a host can reach the best result, 3400. This score means the host is a perfect candidate for hosting the engine.
From this output you can also see that the cluster is composed of 2 hypervisors.
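When you only need the essentials, filtering the status output works well; this one-liner uses only fields already shown above:

# hosted-engine --vm-status | grep -E "Hostname|Engine status|Score"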

August 7, 2018

First RHEV 4.2 up & running!

Posted in Redhat Enterprise Virtualization at 1:43 pm by alessiodini


Yesterday I installed and configured a new RHEV 4.2 infrastructure. I had so much fun!

I installed a couple of hypervisors on DELL M630 blades and rhevm via hosted engine. I saw the new Ansible-based deployment; it’s simpler and faster than the previous one. I also appreciated the cleanup utility for when a hosted-engine deployment fails. I remember that in 4.0, in the same situation, I had to manually hunt for the written files and clean them up. Dealing with the vdsm configuration files was painful.
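For reference, the cleanup utility boils down to a single command, assuming the standard ovirt-hosted-engine-setup packaging:

# ovirt-hosted-engine-cleanup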

I also watched Ansible add the second hypervisor, wow!!
It seems more stable and robust than the previous minor releases. The next step in the project will be to move virtual machines from VMware and run multiple stress tests.

I was surprised when I connected to the new dashboard for the first time; it reminds me of CloudForms 🙂

This is the new dashboard

July 26, 2018

PDU connection

Posted in Uncategorized at 8:01 am by alessiodini


After an unexpected power outage, the customer asked me to connect to a PDU and look for log events. I had never connected to a PDU before, and I had not used a serial cable in 5-6 years, so it was great fun!!

I connected via serial cable and found what I was looking for using CTRL-L 🙂
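For reference, on a Linux laptop a serial session can be opened with something like the following; the device name and baud rate are assumptions, so check your USB-serial adapter and the PDU manual:

# screen /dev/ttyUSB0 9600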


July 16, 2018

Got Certified!!!

Posted in OpenStack at 8:56 am by alessiodini


I passed the EX210 exam, obtaining the Red Hat OpenStack certification!! Another little step toward Red Hat Cloud Architect: 2 exams left to reach my goal!
Now a little break, then I’ll begin studying for the OpenShift certification 🙂

😀

June 21, 2018

Linux Redhat 6.10 released!!

Posted in Linux at 1:43 pm by alessiodini


Two days ago RHEL 6.10 was officially released.
Reading around the web, I see interesting new features:

“This release also includes a Red Hat Enterprise Linux 6.10 base image to help enterprises more easily migrate Red Hat Enterprise Linux 6 workloads into container-based applications. These cloud-native workloads can then be deployed and maintained on a more modern platform, including Red Hat Enterprise Linux 7, Red Hat Enterprise Linux Atomic Host, and Red Hat OpenShift Container Platform.

To make it easier for customers to plan their migration to Red Hat Enterprise Linux 7, Red Hat Enterprise Linux 6.10 provides updates to the Pre-upgrade Assistant, Red Hat Upgrade Tool, and the accompanying documentation. Learn more about the upgrade process and how to access”

I have never migrated across a major RHEL release before; I can’t wait to play with it 🙂
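When I do, the flow should look roughly like this sketch, based on the documented RHEL 6 to 7 in-place upgrade path (package names and the repository URL are assumptions to verify against the official guide):

# yum install preupgrade-assistant-el6toel7
# preupg
# redhat-upgrade-tool --network 7.5 --instrepo <repo-url>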

May 31, 2018

RHEV vProtect backup

Posted in Redhat Enterprise Virtualization at 10:43 am by alessiodini


Today I read on LinkedIn about this project, which lets customers have backup software planned and designed for RHV/oVirt platforms.

From the description I read:

#vProtect 3.4+ (code named MARS) delivers even more for the efficient and scalable RHV backup and recovery. Now vProtect also supports RHV/oVirt API v4 and introduces a new backup mode, eliminating the need to use export storage domain.

Marcin Kubacki, Storware and @Jacek Skórzyński, Red Hat will discuss in detail:

- RHV backup/restore via export storage domain
- RHV backup/restore with VM proxy/disk attachment
- Mount backups and file-level restore
- Integration with 3rd party backup providers, such as IBM #SpectrumProtect, Dell EMC #NetWorker, #DataDomain, Veritas Technologies LLC #NetBackup and Amazon Web Services S3
- Orchestration with Open #API

Can’t wait to play with vProtect!
