OpenStack lab on your laptop with TripleO and director

This setup lets us experiment with OSP director, play with the existing Heat templates, create new ones and understand how TripleO is used to install OpenStack, all from the comfort of your laptop.

VMware Fusion Professional is used here, but this will also work in VMware Workstation with virtually no changes, and in vSphere or VirtualBox with an equivalent setup.

This guide follows the official Red Hat documentation, in particular the Director Installation and Usage guide.

Architecture


Architecture diagram

Standard RHEL OSP 7 architecture with multiple networks, VLANs, bonding and provisioning from the Undercloud / director node via PXE.

Networks and VLANs

No special setup is needed to enable VLAN support in VMware Fusion; we just configure the VLANs and their networks in RHEL as usual.

DHCP and PXE

DHCP and PXE are provided by the Undercloud VM.

NAT

VMware Fusion NAT will be used to provide external access to the Controller and Compute VMs via the provisioning and external networks. The VMware Fusion NAT configures 10.0.0.2 on your Mac OS X as the default gateway for the VMs, and this is the IP we will use as the default gateway in the TripleO templates.

VMware Fusion Networks

The networks are configured in the VMware Fusion menu in Preferences, then Network.

The provisioning (PXE) network is set up in vmnet9, the rest of the networks in vmnet10.
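
For reference, the host side of these networks can also be inspected from the command line. The sketch below is illustrative only: the file path and key names are assumptions based on a typical VMware Fusion install (the Preferences GUI remains the supported way to change them), with vmnet9 as the 10.0.0.0/24 provisioning network (Fusion DHCP off, since DHCP and PXE come from the Undercloud) and vmnet10 as the 192.168.100.0/24 external network.

$ cat "/Library/Preferences/VMware Fusion/networking"
answer VNET_9_DHCP no
answer VNET_9_NAT yes
answer VNET_9_HOSTONLY_NETMASK 255.255.255.0
answer VNET_9_HOSTONLY_SUBNET 10.0.0.0
answer VNET_10_DHCP no
answer VNET_10_NAT yes
answer VNET_10_HOSTONLY_NETMASK 255.255.255.0
answer VNET_10_HOSTONLY_SUBNET 192.168.100.0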

The above describes the architecture of our laptop lab in VMware Fusion. Now, let’s implement it.

Step 1. Create 3 VMs in VMware Fusion


VM specifications

VM          vCPUs  Memory   Disk   NICs  Boot device
Undercloud  1      3000 MB  20 GB  2     Disk
Controller  2      3000 MB  20 GB  3     1st NIC
Compute     2      3000 MB  20 GB  3     1st NIC

Disk size

You may want to increase the disk size of the Controller to be able to test more or larger images, and of the Compute node to be able to run more or larger instances. 3 GB of memory is enough if you include a swap partition on the Compute and Controller nodes.

VMware network driver in .vmx file

Make sure the network driver for the three VMs is vmxnet3 and not e1000, so that RHEL detects all of the NICs:

$ grep ethernet[0-9].virtualDev Undercloud.vmwarevm/Undercloud.vmx
ethernet0.virtualDev = "vmxnet3"
ethernet1.virtualDev = "vmxnet3"

ethX vs enoX NIC names

By default, the OSP director images have the kernel boot option net.ifnames=0, which names the network interfaces ethX instead of enoX. This is why the Undercloud (installed from a standard RHEL image with the default net.ifnames=1) has interface names eno16777984 and eno33557248, while the Controller and Compute VMs have eth0, eth1 and eth2. This may change in RHEL OSP 7.2.

Undercloud VM Networks

This is the mapping of VMware networks to OS NICs. An OVS bridge, br-ctlplane, will be created automatically by the Undercloud installation.

Network       VMware Network  RHEL NIC
External      vmnet10         eno33557248
Provisioning  vmnet9          eno16777984 / br-ctlplane

Copy the MAC addresses of the controller and compute VMs

Make a note of the MAC addresses of the first vNIC in the Controller and Compute VMs.

Step 2. Install the Undercloud


Install RHEL 7.1 in your preferred way in the Undercloud VM and then configure it as follows.

Network interfaces

First, set up the network. 192.168.100.10 will be the external IP on eno33557248 and 10.0.0.10 the provisioning IP on eno16777984.

In /etc/sysconfig/network-scripts/ifcfg-eno33557248

TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
NAME=eno33557248
DEVICE=eno33557248
ONBOOT=yes
IPADDR=192.168.100.10
PREFIX=24
GATEWAY=192.168.100.2
DNS1=192.168.100.2

And in /etc/sysconfig/network-scripts/ifcfg-eno16777984

TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
NAME=eno16777984
DEVICE=eno16777984
ONBOOT=yes
IPADDR=10.0.0.10
PREFIX=24

Once the network is set up, SSH from your Mac OS X to 192.168.100.10 and not to 10.0.0.10: the latter will be automatically reconfigured during the Undercloud installation to become the IP of the br-ctlplane bridge, so you would lose access during the reconfiguration.

Undercloud hostname

The Undercloud needs a fully qualified domain name, and that name also needs to be present in the /etc/hosts file. For example:

# sudo hostnamectl set-hostname undercloud.osp.poc

And in /etc/hosts:

192.168.100.10 undercloud.osp.poc undercloud

Subscribe RHEL and Install the Undercloud Package

Now, subscribe the RHEL OS to Red Hat’s CDN and enable the required repos.
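
As a rough sketch, the registration and repo setup looks like the following. The repo IDs below are my recollection of what the OSP 7 director documentation enables and the pool ID is a placeholder; check both against your subscription:

# subscription-manager register
# subscription-manager attach --pool=<pool_id>
# subscription-manager repos --disable='*'
# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-openstack-7.0-rpms \
  --enable=rhel-7-server-openstack-7.0-director-rpms
# yum update -y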

Then, install the OpenStack client plug-in that will allow us to install the Undercloud:

# yum install -y python-rdomanager-oscplugin

Create the user stack

After that, create the stack user, which we will use to install the Undercloud and, later, to deploy and manage the Overcloud.
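
A minimal sketch, following the director documentation, with passwordless sudo for the new user:

# useradd stack
# passwd stack
# echo "stack ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack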

Configure the director

The following undercloud.conf file is a working configuration for this guide and is mostly self-explanatory.

For a reference of the configuration flags, there’s a documented sample in /usr/share/instack-undercloud/undercloud.conf.sample

Become the stack user and create the file in its home directory.

# su - stack
$ vi ~/undercloud.conf
[DEFAULT]
image_path = /home/stack/images
local_ip = 10.0.0.10/24
undercloud_public_vip = 10.0.0.11
undercloud_admin_vip = 10.0.0.12
local_interface = eno16777984
masquerade_network = 10.0.0.0/24
dhcp_start = 10.0.0.50
dhcp_end = 10.0.0.100
network_cidr = 10.0.0.0/24
network_gateway = 10.0.0.10
discovery_iprange = 10.0.0.100,10.0.0.120
undercloud_debug = true
[auth]

The masquerade_network flag is optional here because VMware Fusion already provides NAT, as explained above, but it might be needed if you use VirtualBox.

Finally, install the Undercloud

We will run the installation as the stack user we created:

$ openstack undercloud install

Step 3. Set up the Overcloud deployment


Verify the undercloud is working

Load the environment first, then run the service list command:

$ . stackrc
$ openstack service list
+----------------------------------+------------+---------------+
| ID                               | Name       | Type          |
+----------------------------------+------------+---------------+
| 0208564b05b148ed9115f8ab0b04f960 | glance     | image         |
| 0df260095fde40c5ab838affcdbce524 | swift      | object-store  |
| 3b499d3319094de5a409d2c19a725ea8 | heat       | orchestration |
| 44d8d0095adf4f27ac814e1d4a1ef9cd | nova       | compute       |
| 84a1fe11ed464894b7efee7543ecd6d6 | neutron    | network       |
| c092025afc8d43388f67cb9773b1fb27 | keystone   | identity      |
| d1a85475321e4c3fa8796a235fd51773 | nova       | computev3     |
| d5e1ad8cca1549759ad1e936755f703b | ironic     | baremetal     |
| d90cb61c7583494fb1a2cffd590af8e8 | ceilometer | metering      |
| e71d47d820c8476291e60847af89f52f | tuskar     | management    |
+----------------------------------+------------+---------------+

Configure the fake_pxe Ironic driver

Ironic doesn’t have a driver for powering VMware Fusion VMs on and off, so we will do that manually. For this we need to configure the fake_pxe driver.

Edit /etc/ironic/ironic.conf and add fake_pxe to the enabled_drivers list:

enabled_drivers = pxe_ipmitool,pxe_ssh,pxe_drac,fake_pxe

Then restart ironic-conductor and verify the driver is loaded:

$ sudo systemctl restart openstack-ironic-conductor
$ ironic driver-list
+---------------------+--------------------+
| Supported driver(s) | Active host(s)     |
+---------------------+--------------------+
| fake_pxe            | undercloud.osp.poc |
| pxe_drac            | undercloud.osp.poc |
| pxe_ipmitool        | undercloud.osp.poc |
| pxe_ssh             | undercloud.osp.poc |
+---------------------+--------------------+

Upload the images into the Undercloud’s Glance

Download the images that will be used to discover and deploy the OpenStack nodes into the directory specified by image_path in undercloud.conf, /home/stack/images in our example, and untar them as described here. Then upload them into Glance on the Undercloud:

$ openstack overcloud image upload --image-path /home/stack/images/
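
To confirm the upload, list the images in the Undercloud’s Glance; you should see the overcloud and deploy/discovery images (the exact names may vary slightly between OSP 7 releases):

$ openstack image list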

Define the VMs into the Undercloud’s Ironic

TripleO needs to know about the nodes, in our case the VMware Fusion VMs. We describe them in the file instackenv.json which we’ll create in the home directory of the stack user.

Notice that here is where we use the MAC addresses we took from the two VMs.

{
 "nodes": [
 {
   "arch": "x86_64",
   "cpu": "2",
   "disk": "20",
   "mac": [
   "00:0c:29:8f:1e:7b"
   ],
   "memory": "3000",
   "pm_type": "fake_pxe"
 },
 {
   "arch": "x86_64",
   "cpu": "2",
   "disk": "20",
   "mac": [
   "00:0C:29:41:0F:4E"
   ],
   "memory": "3000",
   "pm_type": "fake_pxe"
 }
 ]
}
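
Before importing, it doesn’t hurt to check that the file is well-formed JSON:

$ python -m json.tool instackenv.json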

Import them to the undercloud:

$ openstack baremetal import --json instackenv.json

The command above adds the nodes to Ironic:

$ ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provision State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+
| 111cf49a-eb9e-421d-af05-35ab0d74c5d6 | None | 941bbdf9-43c0-442e-8b65-0bd531322509 | power off   | available       | False       |
| e579df9f-528f-4d14-94bc-07b2af4b252f | None | f1bd425b-a4d9-4eca-8bc4-ee31b300e381 | power off   | available       | False       |
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+

To finish the registration of the nodes we run this command:

$ openstack baremetal configure boot

Discover the nodes

At this point we are ready to discover the nodes, i.e. have Ironic power them on, boot them with the discovery image uploaded earlier, and shut them down once the relevant hardware information has been saved in the node metadata in Ironic. This process is called introspection.

Note that as we use the fake_pxe driver, Ironic won’t power on the VMs, so we do it manually in VMware Fusion. We wait until ironic node-list shows the power state as power on, and then we run this command:

$ openstack baremetal introspection bulk start
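
Introspection takes several minutes per node. A couple of ways to follow its progress (the openstack-ironic-discoverd service name is an assumption based on the OSP 7 packages; adjust it if it differs on your Undercloud):

$ sudo journalctl -u openstack-ironic-discoverd -f
$ watch -n 10 "ironic node-list"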

Assign the roles to the nodes in Ironic

There are two roles in this example, compute and control. We will assign them manually with Ironic.

$ ironic node-update 111cf49a-eb9e-421d-af05-35ab0d74c5d6 add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update e579df9f-528f-4d14-94bc-07b2af4b252f add properties/capabilities='profile:control,boot_option:local'
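
You can verify that the capabilities were stored by showing each node and checking the properties field, for example:

$ ironic node-show 111cf49a-eb9e-421d-af05-35ab0d74c5d6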

Create the flavors in Nova and associate them with the roles in Ironic

This consists of creating flavors that match the specs of the VMs and then adding the control and compute profile properties to the corresponding flavors so they match the capabilities set in Ironic in the previous step. A flavor called baremetal is also required.

$ openstack flavor create --id auto --ram 3000 --disk 17 --vcpus 2 --swap 2000 compute
$ openstack flavor create --id auto --ram 3000 --disk 19 --vcpus 2 --swap 1500 control

TripleO also needs a flavor called baremetal (which we won’t use):

$ openstack flavor create --id auto --ram 3000 --disk 19 --vcpus 2 baremetal

Notice the disk size is 1 GB smaller than the VM’s disk. This is a precaution to avoid "No valid host was found" errors when deploying with Ironic, which is sometimes a bit too sensitive.

Also, notice that I added swap because 3 GB of memory is not enough and the out of memory killer could be triggered otherwise.

Now we make the flavors match with the capabilities we set in the Ironic nodes in the previous step:

$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute" compute
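
A quick way to double-check that the properties were applied:

$ openstack flavor show control
$ openstack flavor show compute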

 

Step 4. Create the TripleO templates


Get the TripleO templates

Copy the TripleO heat templates to the home directory of the stack user.

$ mkdir ~/templates
$ cp -r /usr/share/openstack-tripleo-heat-templates/ ~/templates/

Create the network definitions

These are our network definitions:

Network             Subnet            VLAN
Provisioning        10.0.0.0/24       VMware native
Internal API        172.16.0.0/24     201
Tenant              172.17.0.0/24     204
Storage             172.18.0.0/24     202
Storage Management  172.19.0.0/24     203
External            192.168.100.0/24  VMware native

To create dedicated networks for specific services, we describe them in a Heat environment file that we will call network-environment.yaml.

$ vi ~/templates/network-environment.yaml
resource_registry:
 OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
 OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml

parameter_defaults:

 # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
 EC2MetadataIp: 10.0.0.10
 # Gateway router for the provisioning network (or Undercloud IP)
 ControlPlaneDefaultRoute: 10.0.0.2
 DnsServers: ["10.0.0.2"]

 InternalApiNetCidr: 172.16.0.0/24
 TenantNetCidr: 172.17.0.0/24
 StorageNetCidr: 172.18.0.0/24
 StorageMgmtNetCidr: 172.19.0.0/24
 ExternalNetCidr: 192.168.100.0/24

 # Leave room for floating IPs in the External allocation pool
 ExternalAllocationPools: [{'start': '192.168.100.100', 'end': '192.168.100.200'}]
 InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
 TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
 StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
 StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]

 InternalApiNetworkVlanID: 201
 StorageNetworkVlanID: 202
 StorageMgmtNetworkVlanID: 203
 TenantNetworkVlanID: 204

 # ExternalNetworkVlanID: 100
 # Set to the router gateway on the external network
 ExternalInterfaceDefaultRoute: 192.168.100.2
 # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
 NeutronExternalNetworkBridge: "br-ex"

 # Customize bonding options if required
 BondInterfaceOvsOptions: "bond_mode=active-backup"

More information about this template can be found here.

Configure the NICs of the VMs

We have examples of NIC configurations for multiple networks and bonding in /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/

We will use them as a template to define the Controller and Compute NIC setup.

$ mkdir ~/templates/nic-configs/
$ cp /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/* ~/templates/nic-configs/

Notice that they are referenced from the network-environment.yaml template above.

Controller NICs

We want this setup in the controller:

Bonded Interface  Bond Slaves  Bond Mode
bond1             eth1, eth2   active-backup

Network             VMware Network  RHEL NIC
Provisioning        vmnet9          eth0
External            vmnet10         bond1 / br-ex
Internal            vmnet10         bond1 / vlan201
Tenant              vmnet10         bond1 / vlan204
Storage             vmnet10         bond1 / vlan202
Storage Management  vmnet10         bond1 / vlan203

We only need to modify the resources section of the ~/templates/nic-configs/controller.yaml to match the configuration in the table above:

$ vi ~/templates/nic-configs/controller.yaml
[...]
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
            -
              type: ovs_bridge
              name: {get_input: bridge_name}
              addresses:
                - ip_netmask: {get_param: ExternalIpSubnet}
              routes:
                - ip_netmask: 0.0.0.0/0
                  next_hop: {get_param: ExternalInterfaceDefaultRoute}
              dns_servers: {get_param: DnsServers}
              members:
                -
                  type: ovs_bond
                  name: bond1
                  ovs_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    -
                      type: interface
                      name: nic2
                      primary: true
                    -
                      type: interface
                      name: nic3
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: StorageMgmtIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: TenantIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

Compute NICs

In the compute node we want this setup:

Bonded Interface  Bond Slaves  Bond Mode
bond1             eth1, eth2   active-backup

Network       VMware Network  RHEL NIC
Provisioning  vmnet9          eth0
Internal      vmnet10         bond1 / vlan201
Tenant        vmnet10         bond1 / vlan204
Storage       vmnet10         bond1 / vlan202

$ vi ~/templates/nic-configs/compute.yaml
[...]
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
                -
                  default: true
                  next_hop: {get_param: ControlPlaneDefaultRoute}
            -
              type: ovs_bridge
              name: {get_input: bridge_name}
              members:
                -
                  type: ovs_bond
                  name: bond1
                  ovs_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    -
                      type: interface
                      name: nic2
                      primary: true
                    -
                      type: interface
                      name: nic3
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: TenantIpSubnet}
outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

Enable Swap

Enabling the swap partition has to be done from within the OS; Ironic only creates the partition as instructed by the flavor. We can do this with the templates that allow running first-boot scripts via cloud-init.

First, the environment file that registers the first-boot userdata, /home/stack/templates/firstboot/firstboot.yaml:

resource_registry:
 OS::TripleO::NodeUserData: /home/stack/templates/firstboot/userdata.yaml

Then, the actual template with the script that enables swap, /home/stack/templates/firstboot/userdata.yaml:

heat_template_version: 2014-10-16

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: swapon_config}

  swapon_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        swap_device=$(sudo fdisk -l | grep swap | awk '{print $1}')
        if [[ $swap_device && ${swap_device} ]]; then
          rc_local="/etc/rc.d/rc.local"
          echo "swapon $swap_device " >> $rc_local
          chmod 755 $rc_local
          swapon $swap_device
        fi

outputs:
  OS::stack_id:
    value: {get_resource: userdata}

 

Step 5. Deploy the Overcloud


Summary

We have everything we need to deploy now:

  • The Undercloud configured.
  • Flavors for the compute and controller nodes.
  • Images for the discovery and deployment of the nodes.
  • Templates defining the networks in OpenStack.
  • Templates defining the nodes’ NIC configuration.
  • A first boot script used to enable swap.

We will use all this information when running the deploy command:

$ openstack overcloud deploy \
--templates templates/openstack-tripleo-heat-templates/ \
-e templates/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e templates/network-environment.yaml \
-e templates/firstboot/firstboot.yaml \
--control-flavor control \
--compute-flavor compute \
--neutron-tunnel-types vxlan --neutron-network-type vxlan \
--ntp-server clock.redhat.com
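
As with introspection, the fake_pxe driver does not control power, so you will most likely have to power on the Controller and Compute VMs manually in VMware Fusion once Ironic expects them to PXE boot. The deployment can be followed from another shell, for example:

$ heat stack-list
$ heat resource-list overcloud
$ ironic node-list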

After a successful deployment you’ll see this:

Deploying templates in the directory /home/stack/templates/openstack-tripleo-heat-templates
[...]
Overcloud Endpoint: http://192.168.100.100:5000/v2.0/
Overcloud Deployed

An overcloudrc file with the credentials is created for you to start using the new OpenStack environment deployed on your laptop.

Step 6. Start using the Overcloud


Now we are ready to start testing our newly deployed platform.

$ . overcloudrc
$ openstack service list
+----------------------------------+------------+---------------+
| ID                               | Name       | Type          |
+----------------------------------+------------+---------------+
| 043524ae126b4f23bd3fb7826a557566 | glance     | image         |
| 3d5c8d48d30b41e9853659ce840ae4fe | neutron    | network       |
| 418d4f34abe449aa8f07dac77c078e9c | nova       | computev3     |
| 43480fab74fd4fd480fdefc56eecfe83 | cinderv2   | volumev2      |
| 4e01d978a648474db6d5b160cd0a71e1 | nova       | compute       |
| 6357f4122d6d41b986dab40d6fb471e3 | cinder     | volume        |
| a49119e0fd9f43c0895142e3b3f3394a | keystone   | identity      |
| b808ae83589646e6b7033f2b150e7623 | horizon    | dashboard     |
| d4c9383fa9e94daf8c74419b0b18fd6e | heat       | orchestration |
| db556409857d4d24872cdc1b718eee8f | swift      | object-store  |
| ddc3c82097d24f478edfc89b46310522 | ceilometer | metering      |
+----------------------------------+------------+---------------+
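
As a first smoke test you can upload an image, create a tenant network and boot an instance. This is only a sketch: the CirrOS file name, the subnet range and the net-id placeholder are illustrative, and it assumes the default m1.tiny flavor exists:

$ glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.4-x86_64-disk.img
$ neutron net-create private
$ neutron subnet-create private 192.168.50.0/24 --name private-subnet
$ nova boot --image cirros --flavor m1.tiny --nic net-id=<private_net_id> test1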

Set up iSCSI Storage for ESXi Hosts From The Command Line

The esxcli command line tool can be extremely useful for setting up an ESXi host, including iSCSI storage.

1. Enable iSCSI:

~ # esxcli iscsi software set -e true
Software iSCSI Enabled

2. Check the adapter name, usually vmhba32, vmhba33, vmhba34 and so on.

~ # esxcli iscsi adapter list
Adapter  Driver     State   UID            Description
-------  ---------  ------  -------------  ----------------------
vmhba32  iscsi_vmk  online  iscsi.vmhba32  iSCSI Software Adapter

3. Connect your ESXi iSCSI adapter to your iSCSI target:

~ # esxcli iscsi adapter discovery sendtarget add -A vmhba32 -a 10.230.5.60:3260

~ # esxcli iscsi adapter get -A vmhba32
vmhba32
Name: iqn.1998-01.com.vmware:ch02b03-65834587
Alias:
Vendor: VMware
Model: iSCSI Software Adapter
Description: iSCSI Software Adapter
Serial Number:
Hardware Version:
Asic Version:
Firmware Version:
Option Rom Version:
Driver Name: iscsi_vmk
Driver Version:
TCP Protocol Supported: false
Bidirectional Transfers Supported: false
Maximum Cdb Length: 64
Can Be NIC: false
Is NIC: false
Is Initiator: true
Is Target: false
Using TCP Offload Engine: false
Using ISCSI Offload Engine: false

4. Now, on your iSCSI server, assign a volume of the SAN to the IQN of your ESXi host. For example, for an HP StorageWorks:

CLIQ>assignVolume volumeName=racedo-vSphereVolume initiator=iqn.1998-01.com.vmware:ch02b01-01e26a74;iqn.1998-01.com.vmware:ch02b02-20d3e33b;iqn.1998-01.com.vmware:ch02b03-65834587

The above command assigns three IQNs to the volume: two that we already had and the new one we are setting up. This is specific to the HP StorageWorks CLI; other storage arrays work differently.

5. Back on the ESXi host, discover the targets:

~ # esxcli iscsi adapter discovery rediscover -A vmhba32

Finally, check with the df command that the datastore has been added. If not, try rediscovering again.
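
If the datastore doesn’t show up, a rescan of the software iSCSI adapter may also help before rediscovering (a sketch, using the adapter name from above):

~ # esxcli storage core adapter rescan -A vmhba32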

This is the simplest configuration possible from the command line. NIC teaming or other more complex setups can also be done from the command line of the ESXi hosts.

Deploying vSphere Remotely From the Command Line

If all we have is remote access (ssh) to an ESXi host and we still need to install vCenter Server to get vSphere up and running, we can do it with ovftool. The ovftool comes with VMware Workstation and can also be downloaded separately if needed.

The idea is simple: download the vCenter Server OVA appliance to the ESXi host datastore, copy the ovftool to the ESXi host datastore and use it to install the appliance.

This process assumes that the ESXi host has the ssh service enabled.

1. Download the vCenter Server OVA appliance, for example from a Dropbox folder we share. ESXi comes with wget installed, so we can do:

# cd /vmfs/volumes/datastore1
# wget https://dropbox-url/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova

2. Copy the ovftool (with its libraries) to the ESXi server. Just replace bash with sh as the interpreter and copy it from any Linux box with Workstation installed:

# vi /usr/lib/vmware-ovftool/ovftool

Replace:

#!/bin/bash

by:

#!/bin/sh

Copy its directory to the ESXi host:

# scp -r /usr/lib/vmware-ovftool/ root@esxi-ip:/vmfs/volumes/datastore1

3. Now all that’s left is to install the vCenter Server appliance with it:

Find the actual path to the datastore where the appliance and the ovftool were copied:

# ls -l /vmfs/volumes/datastore1
lrwxr-xr-x    1 root     root            35 Nov 26 03:29 /vmfs/volumes/datastore1 -> 52900c65-73c1bf38-6469-001f29e4ff20

Using the full path, run the ovftool to install the vCenter Server appliance:

# /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/vmware-ovftool/ovftool -dm=thin /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova "vi://root:password@localhost"

The output should be similar to this:

Opening OVA source: /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova
The manifest validates
Source is signed and the certificate validates
Accept SSL fingerprint (B3:DC:DF:58:00:68:A3:92:A9:A4:65:41:B2:F6:FF:CF:99:2A:3E:71) for host localhost as target type.
Fingerprint will be added to the known host file
Write 'yes' or 'no'
yes
Opening VI target: vi://root@localhost:443/
Deploying to VI: vi://root@localhost:443/
Transfer Completed
Completed successfully

When this is finished, the vCenter Server VM is created in the ESXi host.

4. Power on the vCenter Server VM.

First, find out its VM ID:

# vim-cmd vmsvc/getallvms

Assuming the VM ID is 1, power it on with:

# vim-cmd vmsvc/power.on 1

And once it’s up and running, find its IP running:

# vim-cmd vmsvc/get.summary 1|grep ipAddress

Point your browser to that IP on port 5480 to start configuring the vCenter Server you just deployed.

Enable Console Access to vSphere instances in OpenStack

Instance consoles do not work by default and require configuration on both the ESXi hosts and the Nova Compute / Nova API nodes.

1. The Nova API and Nova Compute nodes (usually the same node when using vSphere as the compute back end with OpenStack) need the following in /etc/nova/nova.conf (this assumes the node’s IP is 192.168.2.7):

vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.2.7
novncproxy_base_url=http://192.168.2.7:6080/vnc_auto.html
vnc_enabled=True

Restart the services:

$ sudo restart nova-compute
$ sudo restart nova-api
$ sudo restart nova-console
$ sudo restart nova-consoleauth
$ sudo restart nova-novncproxy

2. ESXi setup.

SSH into the ESXi host and check which ports the launched instances are listening on; these are the ports the embedded VNC server listens on:

~ # esxcli network ip connection list|grep vmx
tcp         0       0  192.168.2.200:6111  192.168.2.7:50754   ESTABLISHED    434739  vmx-mks:92901823-a03c-4cdd-bbb6-616a8742388a
tcp         0       0  0.0.0.0:6111        0.0.0.0:0           LISTEN         434735  vmx
tcp         0       0  0.0.0.0:6102        0.0.0.0:0           LISTEN         250526  vmx
tcp         0       0  0.0.0.0:6101        0.0.0.0:0           LISTEN          11204  vmx

This can be confirmed by checking the .vmx files of the instances (this is set up by the VMwareVCDriver):

~ # grep vnc.port /vmfs/volumes/datastore1/*/*vmx
/vmfs/volumes/datastore1/52c84203-ce3d-47b4-ab22-1d30b2816298/52c84203-ce3d-47b4-ab22-1d30b2816298.vmx:RemoteDisplay.vnc.port = "6102"
/vmfs/volumes/datastore1/92901823-a03c-4cdd-bbb6-616a8742388a/92901823-a03c-4cdd-bbb6-616a8742388a.vmx:RemoteDisplay.vnc.port = "6111"
/vmfs/volumes/datastore1/c4e7264e-a4f7-4dea-87c2-6561b86fb85d/c4e7264e-a4f7-4dea-87c2-6561b86fb85d.vmx:RemoteDisplay.vnc.port = "6101"

In general, you will notice these two config flags in the .vmx files:

RemoteDisplay.vnc.enabled = TRUE
RemoteDisplay.vnc.port = port_number

Now you need to open these ports:

~ # chmod 644 /etc/vmware/firewall/service.xml
~ # chmod +t /etc/vmware/firewall/service.xml
~ # vi /etc/vmware/firewall/service.xml

And append this:

<service id='0033'>
  <id>VNC</id>
  <rule id='0000'>
    <direction>inbound</direction>
    <protocol>tcp</protocol>
    <porttype>dst</porttype>
    <port>
      <begin>5900</begin>
      <end>6199</end>
    </port>
  </rule>
</service>

Close vi with:

:x!

Refresh the firewall rules:

~ # esxcli network firewall refresh
~ # esxcli network firewall ruleset set --ruleset-id VNC --enabled true

Done.

Note: there are multiple ways to keep the firewall configuration after ESXi reboots; please review them and choose one to make this change permanent.

Using the vSphere SDK/API in Ubuntu

The Perl vSphere SDK lets you use esxcli and vmware-cmd from Linux, which can be very convenient for scripting tasks and remotely managing vSphere VMs.

This is the setup for Ubuntu 12.04:

1. Install its requirements:

$ sudo apt-get install ia32-libs build-essential gcc uuid uuid-dev perl libssl-dev perl-doc liburi-perl libxml-libxml-perl libcrypt-ssleay-perl cpanminus
$ cpan -i LWP::UserAgent
$ cpan GAAS/Net-HTTP-6.03.tar.gz
$ cpan install GAAS/libwww-perl-6.03.tar.gz

2. Download the package VMware-vSphere-Perl-SDK-5.5.0-1292267.x86_64.tar

3. Untar it and install it:

$ sudo ./vmware-vsphere-cli-distrib/vmware-install.pl

4. Test it.

Run it to list VMs:

$ vmware-cmd -H 192.168.2.202 -U root -P ubuntu1! -l

/vmfs/volumes/51fe1a85-bb18fa16-4836-c8cbb8c706f5/vCenter 5.1/vCenter 5.1.vmx

Power on a VM:

$ vmware-cmd -H 192.168.2.202 -U root -P ubuntu "/vmfs/volumes/51fe1a85-bb18fa16-4836-c8cbb8c706f5/vCenter 5.1/vCenter 5.1.vmx" start

Test the esxcli command on the ESXi hosts to see the existing network connections:

$ esxcli -s 192.168.2.22 -h 192.168.2.200 -u root -p ubuntu1! network ip connection list|head
Proto  Recv Q  Send Q  Local Address       Foreign Address     State        World ID  World Name
-----  ------  ------  ------------------  ------------------  -----------  --------  --------------------------------------------
tcp         0       0  127.0.0.1:5988      127.0.0.1:60509     FIN_WAIT_2       3513  sfcb-HTTP-Daemo
tcp         0       0  127.0.0.1:60509     127.0.0.1:5988      CLOSE_WAIT       2937  hostd-worker
tcp         0       0  192.168.2.200:5989  192.168.2.22:49411  FIN_WAIT_2       3509  sfcb-HTTPS-Daem
tcp         0       0  127.0.0.1:8307      127.0.0.1:58948     ESTABLISHED    251019  hostd-worker
tcp         0       0  127.0.0.1:58948     127.0.0.1:8307      ESTABLISHED      3183  rhttpproxy-work
tcp         0       0  127.0.0.1:80        127.0.0.1:56343     ESTABLISHED      3183  rhttpproxy-work
tcp         0       0  127.0.0.1:56343     127.0.0.1:80        ESTABLISHED      3715  sfcb-vmware_bas
tcp         0       0  192.168.2.200:6102  192.168.2.7:55644   ESTABLISHED    250530  vmx-mks:52c84203-ce3d-47b4-ab22-1d30b2816298

Using virsh to Manage VMware VMs

By default, libvirt and its virsh command are not compiled with the esx driver:

$ virsh -c esx://192.168.2.202
error: failed to connect to the hypervisor
error: unsupported configuration: libvirt was built without the 'esx' driver

Fortunately, this PPA has a version with the esx driver built in. Just add it and install libvirt in Ubuntu 12.04 or Ubuntu 13.10:

$ sudo add-apt-repository ppa:zulcss/esx
$ sudo apt-get update
$ sudo apt-get install libvirt-bin

In an ESXi host:

$ virsh -c esx://root@192.168.2.200?no_verify=1 list --all
Enter root's password for 192.168.2.200:
 Id    Name                           State
----------------------------------------------------
 -     077c7a49-a328-482b-9a9e-e0e0bfbdebd5 shut off
 -     1b6ea2ac-a24e-4497-b75e-0c53db246e16 shut off
 -     maas-node-1                    shut off
 -     nova-compute                   shut off

Start a VM:

$ virsh -c esx://root@192.168.2.202?no_verify=1 start "vCenter 5.1"
Enter root's password for 192.168.2.202:
Domain vCenter 5.1 started

In the vCenter Server:

$ virsh -c 'vpx://root@192.168.2.22/Fusion%20Datacenter/Fusion%20Cluster/192.168.2.202?no_verify=1' list --all
Enter root's password for 192.168.2.22:
 Id    Name                           State
----------------------------------------------------
 171   vCenter 5.1                    running

More info in the libvirt ESX hypervisor driver documentation web page.