August 17, 2018

RHEV 4.2 – Rhevm shell

Posted in Redhat Enterprise Virtualization at 8:22 am by alessiodini


From the 4.2 release of Red Hat Virtualization, the rhevm shell is deprecated. The command is still available, but don’t use it: it “does not know” the new objects of the new API version.
So, how can you handle automation now? The answer is: Ansible!
This is a big opportunity to have fun with Ansible. Where can you begin?
With a first, simple playbook:

- hosts: localhost
  connection: local

  tasks:
    - name: Obtain SSO token
      ovirt_auth:
        url: "https://rhevm_url/ovirt-engine/api"
        username: "admin@internal"
        password: "insert_the_password"
        insecure: "true"

    - name: List vms
      ovirt_vms_facts:
        pattern: cluster=Default
        auth: "{{ ovirt_auth }}"

    - debug:
        var: ovirt_vms
You first have to install the ovirt-ansible-roles package; then you can run this playbook.
It uses no external variables and it connects to rhevm in insecure mode, so this is the simplest playbook you can start from to understand how it works.
This playbook will return every detail about the VMs running on rhevm. It’s extremely verbose, but as I said, this is just a starting point 🙂
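If you only need an overview, you can dump the facts and post-process them. Here is a minimal sketch in Python, assuming each VM entry carries at least a name and a status field (the sample data below is made up, the real facts have many more fields per VM):

```python
import json

# Made-up sample of what ovirt_vms contains (the real output is far richer).
ovirt_vms = [
    {"name": "vm01", "status": "up", "memory": 2147483648},
    {"name": "vm02", "status": "down", "memory": 1073741824},
]

# Keep only the name and status of each VM.
summary = [{"name": vm["name"], "status": vm["status"]} for vm in ovirt_vms]
print(json.dumps(summary, indent=2))
```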

Have fun!


August 10, 2018

RHEV 4.2: How to check hosted-engine status via cli

Posted in Redhat Enterprise Virtualization at 2:41 pm by alessiodini


Hosted engine is a special virtual machine inside RHEV architecture.
It has a dedicated command, hosted-engine.

How can I check via CLI on which host the engine is running?
How can I check which hosts are valid candidates for hosting the engine?

The answer is:

[root@dcs-kvm01 ~]# hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : dcs-kvm01.net
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : caa0ce0d
local_conf_timestamp : 353495
Host timestamp : 353495
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=353495 (Fri Aug 10 16:34:24 2018)
host-id=1
score=3400
vm_conf_refresh_time=353495 (Fri Aug 10 16:34:25 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False

--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : dcs-kvm02.net
Host ID : 2
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : b1f91d1e
local_conf_timestamp : 351112
Host timestamp : 351112
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=351112 (Fri Aug 10 16:34:20 2018)
host-id=2
score=3400
vm_conf_refresh_time=351112 (Fri Aug 10 16:34:20 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False

In my case the hosted engine is running on top of the dcs-kvm02 host. Another useful piece of information is the Score: each host is evaluated on several metrics (ping to the default gateway, filesystems, CPU load, etc.) and each metric contributes points. Summing all of them gives the best possible result, 3400; this score means the host is a perfect candidate to host the engine.
From this output you can also see that the cluster is composed of 2 hypervisors.
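Note that the Engine status field in this output is plain JSON, so it is easy to consume from a script. A quick sketch in Python, using the value reported for Host 2 above:

```python
import json

# "Engine status" value for Host 2, as printed by hosted-engine --vm-status.
engine_status = '{"health": "good", "vm": "up", "detail": "Up"}'

status = json.loads(engine_status)
if status["vm"] == "up" and status["health"] == "good":
    print("The engine is up and healthy on this host")
else:
    # When the engine runs elsewhere, the field also carries a "reason" key.
    print("Engine not running here:", status.get("reason", status["detail"]))
```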

August 7, 2018

First RHEV 4.2 up & running!

Posted in Redhat Enterprise Virtualization at 1:43 pm by alessiodini


Yesterday I installed and configured a new RHEV 4.2 infrastructure, and I had so much fun!

I installed a couple of hypervisors on Dell M630 blades and deployed rhevm via hosted engine. I saw the new Ansible-based deployment: it’s simpler and faster than the previous one. I also appreciated the cleanup utility that runs after a failed hosted-engine deploy. I remember that in 4.0, in the same situation, I had to hunt for the written files manually and clean them up. Dealing with the vdsm configuration files was painful.

I also watched Ansible add the second hypervisor, wow!!
It seems more stable and robust than the previous minor releases. The next step in the project will be to move virtual machines off VMware and run multiple stress tests.

I was surprised when I connected to the new dashboard for the first time; it reminds me of CloudForms 🙂

This is the new dashboard

May 31, 2018

RHEV vProtect backup

Posted in Redhat Enterprise Virtualization at 10:43 am by alessiodini


Today I read on LinkedIn about this project, which lets customers have backup software planned and designed for RHV/oVirt platforms.

From the description I read:

#vProtect 3.4+ (code named MARS) delivers even more for the efficient and scalable RHV backup and recovery. Now vProtect also supports RHV/oVirt API v4 and introduces a new backup mode, eliminating the need to use export storage domain.

Marcin Kubacki, Storware, and @Jacek Skórzyński, Red Hat, will discuss in detail:

– RHV backup/restore via export storage domain
– RHV backup/restore with VM proxy/disk attachment
– Mount backups and file-level restore
– Integration with 3rd party backup providers, such as IBM #SpectrumProtect, Dell EMC #NetWorker, #DataDomain, Veritas Technologies LLC #NetBackup and Amazon Web Services S3
– Orchestration with Open #API

Can’t wait to play with vProtect!

February 13, 2018

RHEV 4.2 BETA

Posted in Redhat Enterprise Virtualization at 11:24 am by alessiodini


Can’t wait to play with RHEV 4.2!
In the 4.x series I started working from the 4.0 release, and it was nothing but pain: bugs, bugs, and more bugs. No performance graphs. After running yum update, everything broke.

I’m reading about the 4.2 features and I hope RHEV is now more stable and can give at least basic graphical information about the guests. I’ll wait for the official “stable” status and then start playing with it.

Here you can find a description about the main new features.
Let’s have fun 🙂

May 9, 2017

RHEV: How to connect to guest’s console without administration portal

Posted in Redhat Enterprise Virtualization at 11:26 am by alessiodini


Today I’m playing a bit with RHEV, and I was curious how to connect to a guest’s console without using the web portal.
Looking on the web I found this link; it’s useful, but at the same time not that clear.

So, following the ticket method, I was able to use the console without rhevm:

1) You need a system where you can use X. In my case I have a VM called nfs.ads.local
2) On nfs.ads.local:
# scp root@hypervisor:/etc/pki/vdsm/libvirt-spice/ca-cert.pem /root/ca-cert.pem
# yum install -y virt-viewer

3) on the hypervisor where the vm is running:
# vdsClient -s 0 list table | awk '{ print $1,$3 }'
b0718db7-308c-4631-8a34-dee367c984cf centos2
11329426-2b7e-4b02-a1de-534a745d7df5 centos1

# openssl x509 -in /etc/pki/vdsm/libvirt-spice/server-cert.pem -noout -text | grep Subject:
Subject: O=ads.local, CN=rhevh3.ads.local

# vdsClient -s 0 setVmTicket UUID PASSWORD VALIDITY
# vdsClient -s 0 setVmTicket b0718db7-308c-4631-8a34-dee367c984cf pippo 3600

4) Verify in RHEV which logical network is configured as the “display” network
5) Check on the hypervisor the port to connect with:

# ps -ef | grep centos2 ( look for tls-port argument )
(...)
-spice tls-port=5901,addr=172.16.16.106
(...)

From nfs.ads.local I run:
# remote-viewer --spice-ca-file=ca-cert.pem spice://rhevh3.ads.local?tls-port=5901 --spice-host-subject="O=ads.local, CN=rhevh3.ads.local" <- the host subject comes from step 3, the port from step 5!

The password is pippo, have fun!
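If you do this often, the values collected in steps 3 and 5 can be glued together with a tiny helper. Here is a sketch in Python, using the hostnames and values from this post (adapt them to your own environment):

```python
# Values gathered in the previous steps (taken from this post's environment).
host = "rhevh3.ads.local"                     # CN from the server certificate
subject = "O=ads.local, CN=rhevh3.ads.local"  # Subject: line printed by openssl
tls_port = 5901                               # tls-port from the qemu command line

# Assemble the remote-viewer invocation shown above.
cmd = (
    f"remote-viewer --spice-ca-file=ca-cert.pem "
    f"spice://{host}?tls-port={tls_port} "
    f'--spice-host-subject="{subject}"'
)
print(cmd)
```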

April 4, 2017

RHEV and RHOSP Integration

Posted in Redhat Enterprise Virtualization at 4:09 pm by alessiodini


I’m working on RHEV 4.0.7 and Red Hat OpenStack 10 integration.
My goals are:

1) To use Neutron external networks to connect VMs running on RHEV.
2) To apply OpenStack security group rules to RHEV VMs.

At the moment it’s quite hard to reach these goals because I’m facing a lot of bugs.
Today I opened a couple of Bugzilla reports:

https://bugzilla.redhat.com/show_bug.cgi?id=1438874
https://bugzilla.redhat.com/show_bug.cgi?id=1438880

but there are more active bugs, fixed only in the 4.1 or 4.2 releases.
I was able to partially integrate RHV and RHOSP using RHEV 4.0.7 with RHOSP 7 and 8, but I faced even more bugs, so now I’m trying the RHOSP 10 version.

When I reach my goals, I will share every detail about this task!
😀

November 9, 2016

RHEV 4.0 suspicious bug

Posted in Redhat Enterprise Virtualization at 10:36 am by alessiodini


During a v2v migration I hit a supermin5 issue. Digging with libguestfs-test-tool I got this error:

supermin: failed to find a suitable kernel (host_cpu=x86_64).

I opened a Bugzilla report for RHEV 4.0, hoping to help someone else doing the same tasks!
Let’s see how this goes 🙂

November 7, 2016

Redhat Cloudforms and Redhat Enterprise Virtualization

Posted in Redhat Enterprise Virtualization at 2:44 pm by alessiodini


Today I’m having so much fun!!
Together with colleagues, I installed Red Hat Enterprise Virtualization 4.0 and Red Hat CloudForms 4.1 and got them talking to each other, wow 😀
I’m feeling like a kid 🙂

This is a basic screenshot from RHEV side.

September 6, 2016

ESX 5.5 and Ovirt 4.0.3 v2v

Posted in Redhat Enterprise Virtualization at 8:45 pm by alessiodini


These days I’m working for a customer, moving ESX guests to an oVirt infrastructure via v2v.
I installed and configured the oVirt 4.0.3 release, which is one week old!
I have already opened a couple of Bugzilla reports, and I think there will soon be a third.. I first want to repeat some tests to see how to deal with a particular issue: I can’t browse an ESX machine from the oVirt dashboard, because VDSM is unable to get some VMDK sizes, so browsing does not work. At the moment I’m only migrating guests with the virt-v2v command, and it works pretty well!!!

I’m also playing a bit with the virsh command, connecting via vpx to vCenter and via esx directly to an ESX physical host. Here is an example:

Welcome to virsh, the virtualization interactive terminal.

Type: 'help' for help with commands
'quit' to quit

virsh # list
Id Name State
—————————————————-
29205 xyz running
31760 AdsAnalyzer running
32412 DB_GRA running
33647 WebSite running
34915 Debian_8 running
34918 Ospita running
34928 OpenSuse running
34930 vm04 running
34935 vm01 running

I’m having so much fun!! 😀
