Understanding OpenStack Heat Auto Scaling


OpenStack Heat can deploy and configure multiple instances in one command, using resources we already have in OpenStack. That’s called a Heat stack.

Heat will create instances from images using existing flavors and networks. It can configure LBaaS and provide VIPs for our load-balanced instances. It can also use the metadata service to inject files, scripts or variables after instance deployment. It can even use Ceilometer to create alarms based on instance CPU usage and associate actions like spinning up or terminating instances based on CPU load.

All of the above is what Heat uses to provide autoscaling capabilities for our applications. In this post I explain how to do this with RHEL 7 instances. If you want to reproduce it on another OS, it’s as simple as changing how the example web app packages are installed.

Steps to set up Heat autoscaling

1. Create a WordPress repo on a RHEL 7 box. Make sure it’s a basic installation so that all the dependencies are downloaded along with WordPress:

# Install EPEL and Remi repos first, then create a repo
yum -y install http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm
yum -y install http://rpms.famillecollet.com/enterprise/remi-release-7.rpm
yum -y --enablerepo=remi install wordpress --downloadonly --downloaddir=/var/www/html/repos/wordpress
createrepo /var/www/html/repos/wordpress

2. Create a repo for rhel-7-server-rpms with something like:

# First register to Red Hat's CDN with subscription-manager register
# Then subscribe to the channels to be synchronised
reposync -p /var/www/html/repos/ rhel-7-server-rpms
createrepo /var/www/html/repos/rhel-7-server-rpms
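
To make both repos reachable from the instances, I simply serve /var/www/html over HTTP. A minimal sketch, assuming Apache on the repo box and port 81 to match the baseurl used later (both are assumptions; adjust to your environment):

# Serve /var/www/html over HTTP on port 81 (the port choice is an assumption)
yum -y install httpd
sed -i 's/^Listen 80$/Listen 81/' /etc/httpd/conf/httpd.conf
systemctl enable httpd
systemctl start httpd
# Open the port if firewalld is running
firewall-cmd --permanent --add-port=81/tcp && firewall-cmd --reload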

3. Download the Heat template, which consists of two files: autoscaling.yaml and lb_server.yaml

Note: The autoscaling.yaml template uses lb_server.yaml as a nested stack, and right now it can’t be deployed from Horizon due to a bug. It works fine from the command line as described below.

[Update] Note: I made it work in Horizon by:

  • Publishing the two templates on a web server.
  • Modifying the autoscaling.yaml template published on the web server so that it calls the nested template like this (a sketch of both steps follows below):
type: http://172.16.0.129:81/repos/heat-templates/lb_server.yaml
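
A minimal sketch of that workaround, assuming the templates are served from the same web server used for the repos and that the original resource reference is a relative lb_server.yaml path (both are assumptions):

# Publish both templates over HTTP
mkdir -p /var/www/html/repos/heat-templates
cp autoscaling.yaml lb_server.yaml /var/www/html/repos/heat-templates/
# Point the nested resource at the published copy of lb_server.yaml
sed -i 's|type: lb_server.yaml|type: http://172.16.0.129:81/repos/heat-templates/lb_server.yaml|' \
    /var/www/html/repos/heat-templates/autoscaling.yaml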

4. Modify the Heat template so that the first thing the script passed by Heat via user_data does when cloud-init executes it is to install the WordPress and RHEL 7 repo definitions.

a. Right before yum -y install httpd wordpress, add the repos so that the script looks like this:

 
[...]     
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/bash -v
            #Add local repos for wordpress and rhel7
            cat << EOF >> /etc/yum.repos.d/rhel.repo
            [rhel-7-server-rpms]
            name=rhel-7-server-rpms
            baseurl=http://172.16.0.129:81/repos/rhel-7-server-rpms
            gpgcheck=0
            enabled=1

            [wordpress]
            name=wordpress
            baseurl=http://172.16.0.129:81/repos/wordpress
            gpgcheck=0
            enabled=1
            EOF

            yum -y install httpd wordpress
[...]

b. And right before yum -y install mariadb mariadb-server do exactly the same.

Note: I’m assuming that your two repos are accessible via http from the instances.

Note: All of these steps are optional. If your instances pull packages directly from the Internet and/or another repository, you can skip this or adapt it to your environment.

5. Take note of:

  • The glance image you will use: nova image-list
    • Note: I’m using the RHEL 7 image available in the Red Hat Customer Portal: rhel-guest-image-7.0-20140618.1.x86_64.qcow2
  • The ssh key pair you want to use: nova keypair-list
  • The flavor you want to use with them: nova flavor-list
  • The subnet that the instances of the Heat stack will be launched on.

6. Create the Heat stack:

heat stack-create AutoscalingWordpress -f autoscaling.yaml \
-P image=rhel7 \
-P key=ramon \
-P flavor=m1.small \
-P database_flavor=m1.small \
-P subnet_id=44908b41-ce16-4f8c-ba6c-9bb4303e6d3f \
-P database_name=wordpress \
-P database_user=wordpress

Note: Here we use all the parameters from the template downloaded before. They are found in the parameters: section of the YAML file. Alternatively, we could add a default: value for each parameter within the template.

Right after that, I run a tail -f /var/log/heat/*log on the controller node, where Heat is installed, just to make sure everything is fine with the creation of the Heat stack.
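
Besides the logs, the python-heatclient CLI can report progress; a quick sketch of the commands I find useful here:

# Overall stack status (should go from CREATE_IN_PROGRESS to CREATE_COMPLETE)
heat stack-list
# Per-resource status and recent events for our stack
heat resource-list AutoscalingWordpress
heat event-list AutoscalingWordpress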

7. Verify Heat created a LBaaS pool and VIP:

[root@racedo-rhel7-1 heat(keystone_demo)]# neutron lb-pool-list  
+--------------------------------------+----------------------------------------+----------+-------------+----------+----------------+--------+  
| id                                  | name                                  | provider | lb_method  | protocol | admin_state_up | status |  
+--------------------------------------+----------------------------------------+----------+-------------+----------+----------------+--------+  
| 78f02e89-aa07-40fd-917b-1481175b43e8 | AutoscalingWordpress-pool-46zb7elgzamo | haproxy  | ROUND_ROBIN | HTTP    | True          | ACTIVE |  
+--------------------------------------+----------------------------------------+----------+-------------+----------+----------------+--------+
[root@racedo-rhel7-1 heat(keystone_demo)]# neutron lb-vip-list  
+--------------------------------------+----------+-----------+----------+----------------+--------+  
| id                                  | name    | address  | protocol | admin_state_up | status |  
+--------------------------------------+----------+-----------+----------+----------------+--------+  
| 8da663cb-43d7-49af-9343-360431e02655 | pool.vip | 10.1.1.14 | HTTP    | True          | ACTIVE |  
+--------------------------------------+----------+-----------+----------+----------------+--------+  

8. Associate a floating IP with the VIP: neutron floatingip-associate FLOATING_IP_ID VIP_NEUTRON_PORT_ID. In my case I first look up the VIP’s port ID and pick a free floating IP:

[root@racedo-rhel7-1 heat(keystone_demo)]# neutron lb-vip-show pool.vip | grep port_id
| port_id             | 13c01599-23f1-4e1e-96d9-72f2775e6183 |
[root@racedo-rhel7-1 heat(keystone_demo)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 0525f959-5213-4291-a1f0-a2ea2b40e11c |                  | 172.16.0.53         |                                      |
| 09f1bdc9-228b-4057-a5d1-3327ccc0bfc8 |                  | 172.16.0.54         |                                      |
| 5538961a-3423-46a3-9744-aba699e722c5 |                  | 172.16.0.52         |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@racedo-rhel7-1 heat(keystone_demo)]# neutron floatingip-associate 0525f959-5213-4291-a1f0-a2ea2b40e11c 13c01599-23f1-4e1e-96d9-72f2775e6183

Note: This step is only needed because the instances are on a tenant network, as in this example; if they are connected to a provider network that you can reach directly, it is optional.

9. Verify that Heat created the two Ceilometer alarms: one to scale out on high CPU usage and another to scale down on low CPU usage:

[root@racedo-rhel7-1 heat(keystone_demo)]# ceilometer alarm-list
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+
| Alarm ID                            | Name                                            | State | Enabled | Continuous | Alarm condition                | Time constraints |
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+
| 1610f404-8df7-46ed-b131-6d3797fc9e4e | AutoscalingWordpress-cpu_alarm_low-vinrbn2rdjpx  | alarm | True    | False      | cpu_util < 15.0 during 1 x 600s | None            |
| 53c124bd-db57-4909-af55-009f5a635937 | AutoscalingWordpress-cpu_alarm_high-42dc5funjeds | ok    | True    | False      | cpu_util > 50.0 during 1 x 60s  | None            |
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+

10. Verify that you can access WordPress using the VIP:

(Screenshot: WordPress served through the VIP)

11. Now ssh into the WordPress web instance (not the DB one) and generate some CPU load; a few dd commands will suffice. Add a floating IP to the instance first if necessary.

[cloud-user@au-g6hl-ye4uglqb5t7r-ylpghgnzyck3-server-nobvg6ftaoe7 ~]$ dd if=/dev/zero of=/dev/null  &  
[1] 908  
[cloud-user@au-g6hl-ye4uglqb5t7r-ylpghgnzyck3-server-nobvg6ftaoe7 ~]$ dd if=/dev/zero of=/dev/null  &  
[2] 909  
[cloud-user@au-g6hl-ye4uglqb5t7r-ylpghgnzyck3-server-nobvg6ftaoe7 ~]$ dd if=/dev/zero of=/dev/null  &  
[3] 910  
[cloud-user@au-g6hl-ye4uglqb5t7r-ylpghgnzyck3-server-nobvg6ftaoe7 ~]$ dd if=/dev/zero of=/dev/null  &  
[4] 911  
[cloud-user@au-g6hl-ye4uglqb5t7r-ylpghgnzyck3-server-nobvg6ftaoe7 ~]$ dd if=/dev/zero of=/dev/null  &  
[5] 912  
[cloud-user@au-g6hl-ye4uglqb5t7r-ylpghgnzyck3-server-nobvg6ftaoe7 ~]$ top  
top - 11:01:05 up 10 min,  1 user,  load average: 6.81, 1.12, 0.71  
Tasks:  90 total,  8 running,  82 sleeping,  0 stopped,  0 zombie  
%Cpu(s): 24.3 us, 75.7 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st  
KiB Mem:  1018312 total,  235068 used,  783244 free,      688 buffers  
KiB Swap:        0 total,        0 used,        0 free.    95480 cached Mem  
  
  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM    TIME+ COMMAND  
  908 cloud-u+  20  0  107920    620    528 R 15.8  0.1  0:09.80 dd  
  909 cloud-u+  20  0  107920    616    528 R 15.5  0.1  0:08.49 dd  
  911 cloud-u+  20  0  107920    620    528 R 15.5  0.1  0:07.88 dd  
  912 cloud-u+  20  0  107920    616    528 R 15.5  0.1  0:07.71 dd  
  910 cloud-u+  20  0  107920    620    528 R 15.2  0.1  0:08.11 dd  

12. Observe how Ceilometer triggers an alarm (State goes to alarm) and how a new instance is launched:

[root@racedo-rhel7-1 heat(keystone_demo)]# ceilometer alarm-list  
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+  
| Alarm ID                            | Name                                            | State | Enabled | Continuous | Alarm condition                | Time constraints |  
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+  
| 1610f404-8df7-46ed-b131-6d3797fc9e4e | AutoscalingWordpress-cpu_alarm_low-vinrbn2rdjpx  | ok    | True    | False      | cpu_util < 15.0 during 1 x 600s | None            |  
| 53c124bd-db57-4909-af55-009f5a635937 | AutoscalingWordpress-cpu_alarm_high-42dc5funjeds | alarm | True    | False      | cpu_util > 50.0 during 1 x 60s  | None            |  
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+

(Screenshot: the new WordPress instance launched by the scale-out action)

13. Kill the dd processes (directly from top: press k and kill each of them).

14. Wait about 10 minutes, which is the evaluation period of the scale-down alarm by default in our template. The state of the alarm in Ceilometer will go to alarm just like before, but this time due to the lack of CPU load. You’ll see how one of the two web instances is deleted:

[root@racedo-rhel7-1 heat(keystone_demo)]# ceilometer alarm-list  
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+  
| Alarm ID                             | Name                                             | State | Enabled | Continuous | Alarm condition                 | Time constraints |  
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+  
| 1610f404-8df7-46ed-b131-6d3797fc9e4e | AutoscalingWordpress-cpu_alarm_low-vinrbn2rdjpx  | alarm | True    | False      | cpu_util < 15.0 during 1 x 600s | None             |  
| 53c124bd-db57-4909-af55-009f5a635937 | AutoscalingWordpress-cpu_alarm_high-42dc5funjeds | ok    | True    | False      | cpu_util > 50.0 during 1 x 60s  | None             |  
+--------------------------------------+--------------------------------------------------+-------+---------+------------+---------------------------------+------------------+  
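
When you are done testing, the whole environment (instances, LBaaS pool, VIP and alarms) can be removed in one go by deleting the stack:

heat stack-delete AutoscalingWordpress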

That’s all.

Multiple Private Networks with Open vSwitch GRE Tunnels and Libvirt

 

(Diagram: Libvirt VMs on two hosts connected through Open vSwitch GRE tunnels)

GRE tunnels are extremely useful for many reasons. One use case is being able to design and test an infrastructure requiring multiple networks on a typical home lab with limited hardware, such as laptops and desktops with only one Ethernet card.

As an example, to design an OpenStack infrastructure for a production environment with RDO or Red Hat Enterprise Linux OpenStack Platform (RHEL OSP) three separate networks are recommended.

Because these networks are completely isolated from each other, each of them can run services such as DHCP (even multiple DHCP servers if eventually needed). Testing multiple VLANs or trunking is also possible with this setup.

The diagram above should be almost self-explanatory and describes this setup with Open vSwitch, GRE tunnels and Libvirt.

Step by Step on CentOS 6.5

1. Install CentOS 6.5 choosing the Basic Server option

2. Install the EPEL and RDO repos, which provide Open vSwitch and iproute namespace support:

# yum install http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# yum install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm

3. Install Libvirt, Open vSwitch and virt-install:

# yum install libvirt openvswitch python-virtinst

4. Create the bridge that will be associated to eth0:

# ovs-vsctl add-br br-eth0

5. Set up your network on the br-eth0 bridge with the configuration you had on eth0 and change the eth0 network settings as follows (with your own network settings):

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MTU=1546
# cat /etc/sysconfig/network-scripts/ifcfg-br-eth0
DEVICE=br-eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR0=192.168.2.1
PREFIX0=24
DNS1=192.168.2.254

Notice the MTU setting above. This is very important, as GRE adds encapsulation bytes. There are two options: increasing the MTU on the hosts, as in this example, or decreasing the MTU in the guests if your NIC doesn’t support MTUs larger than 1500 bytes.
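
If increasing the host MTU is not an option, a sketch of the alternative, lowering the guest MTU (1454 is a commonly used value that leaves room for the GRE overhead, but the exact number is an assumption for your setup):

# Inside the guest, lower the MTU on its interface
ip link set dev eth0 mtu 1454
# To make it persistent on a CentOS/RHEL guest, add MTU=1454 to
# /etc/sysconfig/network-scripts/ifcfg-eth0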

6. Add eth0 to br-eth0 and restart the network to pick up the changes made in the previous step:

# ovs-vsctl add-port br-eth0 eth0 && service network restart

7. Make sure your network still works as it did before the changes above
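
A few quick checks, using the example addresses above:

# eth0 should now show up as a port on br-eth0
ovs-vsctl show
# br-eth0 should carry the host IP
ip addr show br-eth0
# And the DNS/gateway address from the example should still be reachable through the bridge
ping -c 3 192.168.2.254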

8. Assuming this host has the IP 192.168.2.1 and you have two other hosts where you will do this same (or a compatible) setup with the IPs 192.168.2.2 and 192.168.2.3, create the internal OVS bridge br-int0 and set up the GRE tunnel endpoints gre0 and gre1 (note that the diagram above has only two hosts, but you can add more hosts with an identical setup):

# ovs-vsctl add-br br-int0
# ovs-vsctl add-port br-int0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.2.2
# ovs-vsctl add-port br-int0 gre1 -- set interface gre1 type=gre options:remote_ip=192.168.2.3

Note that there is another way to set up GRE tunnels using /etc/sysconfig/network-scripts/ in CentOS/RHEL, but the method explained here works in any Linux distro and is equally persistent. Choose whichever you find appropriate.

9. Enable STP (needed for more than 2 hosts):

# ovs-vsctl set bridge br-int0 stp_enable=true

10. Create a file called libvirt-vlans.xml with the definition of the Libvirt network that will use the Open vSwitch bridge br-int0 (and the GRE tunnels) we just created. Check the diagram above for reference:

<network>
  <name>ovs-network</name>
  <forward mode='bridge'/>
  <bridge name='br-int0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='no-vlan' default='yes'>
  </portgroup>
  <portgroup name='vlan-100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name='vlan-200'>
    <vlan>
      <tag id='200'/>
    </vlan>
  </portgroup>
</network>

11. Optionally remove the default network that Libvirt creates, and then (this part is mandatory) define and start the network from the previous step:

# virsh net-destroy default
# virsh net-autostart --disable default
# virsh net-undefine default
# virsh net-define libvirt-vlans.xml
# virsh net-autostart ovs-network
# virsh net-start ovs-network

12. Create a Libvirt storage pool where your VMs will be created (needed to use qcow2 disk format). I chose /home/VMs/pool but it can be anywhere you find appropriate:

# virsh pool-define-as --name VMs-pool --type dir --target /home/VMs/pool/
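
The pool still has to be built and started before virt-install can use it. A sketch, assuming the same VMs-pool name and target directory as above:

# virsh pool-build VMs-pool
# virsh pool-start VMs-pool
# virsh pool-autostart VMs-pool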

13. Assuming you are installing a CentOS VM and that the ISO is located at /home/VMs/ISOs/CentOS-6.5-x86_64-bin-DVD1.iso, create a VM named foreman (or any name you like) with virt-install:

# virt-install \
--name foreman \
--ram 1024 \
--vcpus=1 \
--disk size=20,format=qcow2,pool=VMs-pool \
--nonetworks \
--cdrom /home/VMs/ISOs/CentOS-6.5-x86_64-bin-DVD1.iso \
--graphics vnc,listen=0.0.0.0,keymap=en_gb --noautoconsole --hvm \
--os-variant rhel6

14. Use a VNC client to access the screen of the VM during the installation. Finish the installation and shut down the VM.

15. Edit the VM with virsh edit foreman (following the name used in the example above) to add the three networks created before. At the bottom of the VM definition, just before </devices>, add the following:


<interface type='network'>
  <source network='ovs-network' portgroup='no-vlan'/>
  <model type='virtio'/>
</interface>
<interface type='network'>
  <source network='ovs-network' portgroup='vlan-100'/>
  <model type='virtio'/>
</interface>
<interface type='network'>
  <source network='ovs-network' portgroup='vlan-200'/>
  <model type='virtio'/>
</interface>

Now you can start your VM with virsh start foreman and set up the network on any or all of the three interfaces. Repeat the same process on another host and VM and you are good to go: you can install something like Foreman and OpenStack without needing more than one network interface per host.
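
To check that the VM’s three vNICs actually landed on br-int0 with the right VLAN tags, something like this should do (the vnet port names are assigned automatically by Libvirt):

# ovs-vsctl show

The tagged ports should show tag: 100 and tag: 200 under br-int0, while the no-vlan port appears untagged.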

Resizing OpenStack Volumes


Resizing a Volume with Cinder in Havana

Cinder has the extend functionality in Havana, which allows volumes to be resized easily. It works as expected at the volume level; at the OS level I have found it less reliable when running resize2fs on the extended volume. Maybe I haven’t done enough tests yet, but in any case the method below works in both Havana and Grizzly.
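
For reference, the Havana-native path is a single command once the volume is detached and in the available state; a sketch using the volume ID from the example below:

$ cinder extend 44bcd404-8a6e-41d8-9d56-2ac4e0c1e97c 15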

Resizing a Volume in Grizzly and Havana

The following method works in both Grizzly and Havana. It can be entirely done by the tenant with the nova command (the cinder client is not needed).

1. Identify the volume to be resized

$ nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to                          |
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
| 44bcd404-8a6e-41d8-9d56-2ac4e0c1e97c | in-use    | None         | 10   | None        | 438cdb78-5573-4ab0-9f89-79cad806286c |
| 010ea497-98d5-4ace-a6aa-bdc847628cee | available | None         | 1    | None        |                                      |
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+

2. Detach the volume from its instance. It is recommended to ssh into the instance and to unmount it first.

$ nova volume-detach VM1 44bcd404-8a6e-41d8-9d56-2ac4e0c1e97c
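
For completeness, unmounting from inside the guest before the detach might look like this (the device name is an assumption taken from the example later in this post):

ubuntu@vm1:/$ sudo umount /dev/vdc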

3. Create a snapshot of the volume:

$ nova volume-snapshot-create 44bcd404-8a6e-41d8-9d56-2ac4e0c1e97c
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| status              | creating                             |
| display_name        | None                                 |
| created_at          | 2014-01-16T16:20:17.739982           |
| display_description | None                                 |
| volume_id           | 44bcd404-8a6e-41d8-9d56-2ac4e0c1e97c |
| size                | 10                                   |
| id                  | ea8a1c24-982e-4d63-809f-38f0ad974604 |
| metadata            | {}                                   |
+---------------------+--------------------------------------+

4. Create a new volume from the snapshot of the volume we are resizing, specifying the new desired size:

$ nova volume-create --snapshot-id ea8a1c24-982e-4d63-809f-38f0ad974604 15
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| status              | creating                             |
| display_name        | None                                 |
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2014-01-16T16:22:10.634404           |
| display_description | None                                 |
| volume_type         | None                                 |
| snapshot_id         | ea8a1c24-982e-4d63-809f-38f0ad974604 |
| source_volid        | None                                 |
| size                | 15                                   |
| id                  | 408a9d90-6498-4e87-a26a-43fe506b1b1d |
| metadata            | {}                                   |
+---------------------+--------------------------------------+

5. Wait until the status of the newly created volume is available, then attach it to the instance using another device name (if it originally was /dev/vdc, use /dev/vdd, for example):

$ nova volume-attach VM1 408a9d90-6498-4e87-a26a-43fe506b1b1d  /dev/vdd
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdd                             |
| serverId | 438cdb78-5573-4ab0-9f89-79cad806286c |
| id       | 408a9d90-6498-4e87-a26a-43fe506b1b1d |
| volumeId | 408a9d90-6498-4e87-a26a-43fe506b1b1d |
+----------+--------------------------------------+

Note that the snapshot created in step 3 can now be deleted as well as the original volume.
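
A sketch of that cleanup with the same client (the IDs are the ones from the steps above):

$ nova volume-snapshot-delete ea8a1c24-982e-4d63-809f-38f0ad974604
$ nova volume-delete 44bcd404-8a6e-41d8-9d56-2ac4e0c1e97c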

From within the instance OS, assuming it’s a Linux VM, we need to make the filesystem aware of the new size:

ubuntu@vm1:/$ sudo e2fsck -f /dev/vdc
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vdc: 13/65536 files (0.0% non-contiguous), 12637/262144 blocks
ubuntu@vm1:/$ sudo resize2fs /dev/vdc
resize2fs 1.42 (29-Nov-2011)
Resizing the filesystem on /dev/vdc to 1310720 (4k) blocks.
The filesystem on /dev/vdc is now 1310720 blocks long.

ubuntu@vm1:/$ sudo mount /dev/vdc /mnt2
ubuntu@vm1:/$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       9.9G  828M  8.6G   9% /
udev            494M  8.0K  494M   1% /dev
tmpfs           200M  220K  199M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            498M     0  498M   0% /run/shm
/dev/vdb         20G  173M   19G   1% /mnt
/dev/vdc       15.0G   34M 14.7G   1% /mnt2

Notes

  • If resize2fs does not work, try rebooting the instance first. The kernel will come back up fresh and may pick up the new size fine after the reboot.
  • Make sure you changed the block device name (e.g. from vdc to vdd).
  • Do not use partitions (e.g. /dev/vdc1). Check the resize2fs documentation for details; it can be done anyway, but rebuilding the partition table is needed.

Set up iSCSI Storage for ESXi Hosts From The Command Line


The esxcli command line tool can be extremely useful to set up an ESXi host, including iSCSI storage.

1. Enable iSCSI:

~ # esxcli iscsi software set -e true
Software iSCSI Enabled

2. Check the adapter name, usually vmhba32, vmhba33, vmhba34 and so on.

~ # esxcli iscsi adapter list
Adapter Driver    State  UID           Description
------- --------- ------ ------------- ----------------------
vmhba32 iscsi_vmk online iscsi.vmhba32 iSCSI Software Adapter

3. Connect your ESXi iSCSI adapter to your iSCSI target

~ # esxcli iscsi adapter discovery sendtarget add -A vmhba32 -a 10.230.5.60:3260

~ # esxcli iscsi adapter get -A vmhba32
vmhba32
Name: iqn.1998-01.com.vmware:ch02b03-65834587
Alias:
Vendor: VMware
Model: iSCSI Software Adapter
Description: iSCSI Software Adapter
Serial Number:
Hardware Version:
Asic Version:
Firmware Version:
Option Rom Version:
Driver Name: iscsi_vmk
Driver Version:
TCP Protocol Supported: false
Bidirectional Transfers Supported: false
Maximum Cdb Length: 64
Can Be NIC: false
Is NIC: false
Is Initiator: true
Is Target: false
Using TCP Offload Engine: false
Using ISCSI Offload Engine: false

4. Now, on your iSCSI server, assign a SAN volume to the IQN of your ESXi host. For example, for an HP StorageWorks:

CLIQ>assignVolume volumeName=racedo-vSphereVolume initiator=iqn.1998-01.com.vmware:ch02b01-01e26a74;iqn.1998-01.com.vmware:ch02b02-20d3e33b;iqn.1998-01.com.vmware:ch02b03-65834587

The above command assigns three IQNs to the volume: two that we already had and the new one we are setting up. This is specific to the HP StorageWorks CLI; other storage arrays work differently.

5. Back on the ESXi host, discover the targets:

~ # esxcli iscsi adapter discovery rediscover -A vmhba32

Finally, check with the df command that the datastore has been added. If not, try rediscovering again.
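
Roughly what that last check looks like (repeat the rediscover if the datastore does not show up immediately):

~ # df
~ # esxcli iscsi adapter discovery rediscover -A vmhba32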

This is the simplest configuration possible from the command line. NIC teaming or other more complex setups can also be done from the command line of the ESXi hosts.

Deploying vSphere Remotely From the Command Line


If all we have is remote access (ssh) to an ESXi host and we still need to install vCenter Server to get vSphere up and running, we can do it with ovftool. The ovftool comes with VMware Workstation and can also be downloaded separately if needed.

The idea is simple: download the vCenter Server OVA appliance to the ESXi host datastore, copy the ovftool to the ESXi host datastore and use it to install the appliance.

This process assumes that the ESXi host has the ssh service enabled.
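
If SSH is not enabled yet, it can be switched on from the DCUI or the vSphere client; from a local ESXi shell, to the best of my knowledge, something like this also works:

~ # vim-cmd hostsvc/enable_ssh
~ # vim-cmd hostsvc/start_ssh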

1. Download the vCenter Server OVA appliance, for example from a Dropbox folder we share. ESXi comes with wget installed, so we can do:

# cd /vmfs/volumes/datastore1
# wget https://dropbox-url/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova

2. Copy the ovftool (with its libraries) to the ESXi server. Just replace bash with sh as the interpreter and copy it from any Linux box with Workstation installed on it:

# vi /usr/lib/vmware-ovftool/ovftool

Replace:

#!/bin/bash

with:

#!/bin/sh

Copy its directory to the ESXi host:

# scp -r /usr/lib/vmware-ovftool/ root@esxi-ip:/vmfs/volumes/datastore1

3. Now, all that’s left is to install the vCenter Server appliance with it:

Find the actual path to the datastore where the appliance and the ovftool were copied:

# ls -l /vmfs/volumes/datastore1
lrwxr-xr-x    1 root     root            35 Nov 26 03:29 /vmfs/volumes/datastore1 -> 52900c65-73c1bf38-6469-001f29e4ff20

Using the full path, run the ovftool to install the vCenter Server appliance:

# /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/vmware-ovftool/ovftool -dm=thin \
  /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova \
  "vi://root:password@localhost"

The output should be similar to this:

Opening OVA source: /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova
The manifest validates
Source is signed and the certificate validates
Accept SSL fingerprint (B3:DC:DF:58:00:68:A3:92:A9:A4:65:41:B2:F6:FF:CF:99:2A:3E:71) for host localhost as target type.
Fingerprint will be added to the known host file
Write 'yes' or 'no'
yes
Opening VI target: vi://root@localhost:443/
Deploying to VI: vi://root@localhost:443/
Transfer Completed
Completed successfully

When this is finished, the vCenter Server VM is created in the ESXi host.

4. Power on the vCenter Server VM.

First, find out its VM ID:

# vim-cmd vmsvc/getallvms

Assuming the VM ID is 1, power it on with:

# vim-cmd vmsvc/power.on 1

And once it’s up and running, find its IP by running:

# vim-cmd vmsvc/get.summary 1|grep ipAddress

Point your browser to that IP on port 5480 to start configuring the vCenter Server you just deployed.

Add a Swap Partition to Nodes Deployed by MAAS

MAAS, or Metal as a Service, is a service that, combined with Juju, allows you to provision physical machines and even VMs extremely easily via PXE. Setting up MAAS on an Ubuntu Server (I recommend 12.04) takes literally five minutes. I recommend installing the version from the Tools pocket of the Cloud Archive, and configuring maas-dns and maas-dhcp from the Web UI.

MAAS deploys machines without a swap partition by default. This may cause problems when deploying some services, especially if we use multiple LXC containers or Docker on the same machine.

To have MAAS configure a swap partition, all we need to do is edit the main preseed file:

vi /etc/maas/preseeds/preseed_master

And right after this line that says:

d-i     partman-auto/method string regular

We add the following:

d-i     partman-auto/text/atomic_scheme :: \
        500 10000 1000000 ext3 \
                $primary{ } \
                $bootable{ } \
                method{ format } \
                format{ } \
                use_filesystem{ } \
                filesystem{ ext3 } \
                mountpoint{ / } . \
        64 512 300% linux-swap \
                method{ swap } \
                format{ } .
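
Once a node is redeployed with this preseed, a quick check inside it confirms the swap partition was created:

swapon -s
free -m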

References: Altering the Preseed File

Setting Up a Flat Network with Neutron


This setup will allow the VMs to use an existing network. In this example, eth2 is connected to this pre-existing network (192.168.2.0/24) that we want to use for the OpenStack VMs.

All the configuration is done on the node dedicated to networking.

1. Set up the Open vSwitch bridge:

# ovs-vsctl add-br br-eth2
# ovs-vsctl add-port br-eth2 eth2

2. Set up /etc/network/interfaces (node’s IP is 192.168.2.7):

auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

auto br-eth2
iface br-eth2 inet static
address 192.168.2.7
netmask 255.255.255.0

3. Tell Open vSwitch to use the bridge connected to eth2 (br-eth2) and map physnet1 to it in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth2

4. Tell Nova not to use the Neutron metadata proxy in /etc/nova/nova.conf (otherwise you get an HTTP 400 when querying the metadata service):

service_neutron_metadata_proxy=false

Note that the metadata service is managed by Neutron as usual via the neutron-metadata-agent service anyway.
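
After changing these two files, the relevant services need a restart to pick up the configuration; on an Ubuntu-based install that would be roughly:

# service neutron-plugin-openvswitch-agent restart
# service nova-api restart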

5. Create the network, telling Neutron to use physnet1, which was mapped above to br-eth2:

# neutron net-create flat-provider-network --shared  --provider:network_type flat --provider:physical_network physnet1

6. Create the subnet:

# neutron subnet-create --name flat-provider-subnet --gateway 192.168.2.5 --dns-nameserver 192.168.1.254  --allocation-pool start=192.168.2.100,end=192.168.2.150  flat-provider-network 192.168.2.0/24
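
To confirm it works end to end, you can boot a test instance directly on this network; the image and flavor names here are just examples:

# neutron net-list
# nova boot --flavor m1.small --image cirros --nic net-id=<flat-provider-network-id> test-vm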

That’s it. Now VMs will get an IP from the specified range and will be directly connected to our network via Open vSwitch.