OpenShift 3.1: Autoscaling feature is working!!

Today I’m so happy!
After a lot of problems with metrics, Heapster, resource limits and pods, I was finally able to test and watch autoscaling working in my OpenShift!!
I’m so happy because I started with OpenShift from zero: I didn’t know the architecture, the commands, nothing!!

Anyway, I started with one pod running, and while stressing it with the ab tool I got:

{horizontal-pod-autoscaler } SuccessfulRescale New size: 10; reason: CPU utilization above target
(I configured an hpa with a 10% CPU target utilization, and with ab I almost reached 100% CPU)

Over time, the number of pods changed from 1 to 10.
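For reference, an hpa like the one I described can be created with oc autoscale; the deployment config name and URL below are placeholders, not the actual ones from my cluster:

```shell
# Create an hpa targeting 10% CPU, scaling between 1 and 10 replicas
# (dc/myapp is a placeholder name, not from this post)
oc autoscale dc/myapp --min=1 --max=10 --cpu-percent=10

# Generate load with ApacheBench (placeholder URL and request counts)
ab -n 100000 -c 50 http://myapp.example.com/

# Watch the autoscaler rescale the pods
oc get hpa
oc get pods -w
```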

After I stopped the ab test I got:

{horizontal-pod-autoscaler } Normal SuccessfulRescale New size: 1; reason: All metrics below target

The number of pods changed from 10 to 1.

So funnyyyyy 😀

OpenShift 3: Tag v1.2.0-rc1-13-g2e62fab bug

These days I’m building and configuring OpenShift 3 on top of OpenStack Kilo.
After the installation (my scenario has 1x master, 1x infra, 2x nodes) I’m not able to create the router and registry services.

The error message is:

failed to "StartContainer" for "POD" with ErrImagePull: "Tag v1.2.0-rc1-13-g2e62fab not found in repository

This is due to the -rc1-13-g2e62fab tag suffix that comes from oc version:

[root@my-openshift-origin-master-0 centos]# oc version
oc v1.2.0-rc1-13-g2e62fab
kubernetes v1.2.0-36-g4a3f9c5

Don’t panic!! The issue comes from the project itself, and after some analysis and asking questions around the internet I found out how to solve it.

Follow these steps:

1) Edit master-config.yaml and node-config.yaml on the master node, changing the line:

format: openshift/origin-${component}:${version}

to:

format: openshift/origin-${component}:v1.2.0

2) Edit node-config.yaml on each node, repeating step 1
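If you have several nodes, the edit in steps 1 and 2 can also be scripted; here is a minimal sketch with sed (the config path is an example, adjust it for your hosts, and on the master apply the same change to master-config.yaml too):

```shell
# Pin the image tag in the node config instead of editing it by hand.
# CFG is an example path; on the master also patch master-config.yaml.
CFG=/etc/origin/node/node-config.yaml
sed -i 's|origin-${component}:${version}|origin-${component}:v1.2.0|' "$CFG"
```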

3) On the master node restart both components; in my case I ran systemctl restart origin-master.service origin-node.service

4) Create the registry using the commands:
echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"registry"}}' | oc create -f -
oadm registry --credentials=/etc/origin/master/openshift-registry.kubeconfig --service-account=registry

5) Create the router using the commands:
echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -
oadm router --credentials=/etc/origin/master/openshift-router.kubeconfig --images=openshift/origin-haproxy-router:v1.2.0 --service-account=router

Now you have the basic objects to deploy and expose any container you need!! 😀

OpenShift 3 Origin installation via Heat templates

Today I was finally able to deploy OpenShift 3 Origin on top of OpenStack!!!
I spent a couple of days fighting with the Red Hat Heat templates.

I deployed 1x master, 2x nodes and 1x infra host.

I got stuck a few times, but the last one was very curious, because I got an error from the Ansible playbook, coming from a command in the main.yml file. The TASK is the last one, called “Clean pods in DeadlineExceeded status”:

- hosts: masters[0]
  sudo: yes
  tasks:
  - name: Clean pods in DeadlineExceeded status
    shell: oc get pod | grep DeadlineExceeded | cut -f 1 -d " " | xargs -r oc delete pod

The command and the syntax are correct, but for some reason on the virtual machine the command came out as:

oc get pod | grep DeadlineExceeded | cut -f 1 -d \” \” | xargs -r oc delete pod

The smart quotes got escaped, so the syntax is wrong and the command does not return 0; as a result the Ansible resource from Heat failed repeatedly.
I changed the delimiter back to plain quotes, cut -f 1 -d " ", and after that I was able to complete the whole deploy with success!!!
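To see why the delimiter matters, here is the same pipeline run against a fake oc get pod listing (the pod names are made up, and the oc delete step is simulated with echo, purely for illustration):

```shell
# Simulated "oc get pod" output (pod names invented for this demo)
pods='frontend-1-abcde 1/1 Running 0 2h
build-1-xyz 0/1 DeadlineExceeded 0 6h'

# With a plain-quoted space delimiter, cut extracts the pod name correctly
echo "$pods" | grep DeadlineExceeded | cut -f 1 -d " "
# prints: build-1-xyz

# xargs -r only runs the command when there is input, so an empty
# match list never invokes "oc delete pod" with no arguments
echo "$pods" | grep DeadlineExceeded | cut -f 1 -d " " | xargs -r echo oc delete pod
# prints: oc delete pod build-1-xyz
```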

Below, the architecture deployed from OpenStack and the dashboard:
openshift map

openshift dashboard