Deploying vSphere Remotely From the Command Line

VMware Command Line Interface

If all we have is remote access (ssh) to an ESXi host and we still need to install vCenter Server to get vSphere up and running, we can do it with ovftool. The ovftool comes with VMware Workstation and can also be downloaded separately if needed.

The idea is simple: download the vCenter Server OVA appliance to the ESXi host datastore, copy the ovftool there as well, and use it to deploy the appliance.

This process assumes that the ESXi host has the ssh service enabled.
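If it is not, it can be enabled from the vSphere Client, or (with local console access) from the ESXi Shell with something along these lines:

# vim-cmd hostsvc/enable_ssh
# vim-cmd hostsvc/start_ssh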

1. Download the vCenter Server OVA appliance, for example from a Dropbox folder we share. ESXi comes with wget installed, so we can do:

# cd /vmfs/volumes/datastore1
# wget https://dropbox-url/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova

2. Copy the ovftool (with its libraries) to the ESXi server. Just replace bash with sh as the interpreter and copy it over from any Linux box with Workstation installed on it:

# vi /usr/lib/vmware-ovftool/ovftool

Replace:

#!/bin/bash

by:

#!/bin/sh

Copy its directory to the ESXi host:

# scp -r /usr/lib/vmware-ovftool/ root@esxi-ip:/vmfs/volumes/datastore1

3. Now, all that’s left is to install the vCenter Server appliance with the ovftool.

Find the actual path to the datastore where the appliance and the ovftool were copied:

# ls -l /vmfs/volumes/datastore1
lrwxr-xr-x    1 root     root            35 Nov 26 03:29 /vmfs/volumes/datastore1 -> 52900c65-73c1bf38-6469-001f29e4ff20

Using the full path, run the ovftool to install the vCenter Server appliance:

# /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/vmware-ovftool/ovftool -dm=thin /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova "vi://root:password@localhost"

The output should be similar to this:

Opening OVA source: /vmfs/volumes/52900c65-73c1bf38-6469-001f29e4ff20/VMware-vCenter-Server-Appliance-5.1.0.5300-947940_OVF10.ova
The manifest validates
Source is signed and the certificate validates
Accept SSL fingerprint (B3:DC:DF:58:00:68:A3:92:A9:A4:65:41:B2:F6:FF:CF:99:2A:3E:71) for host localhost as target type.
Fingerprint will be added to the known host file
Write 'yes' or 'no'
yes
Opening VI target: vi://root@localhost:443/
Deploying to VI: vi://root@localhost:443/
Transfer Completed
Completed successfully

When this is finished, the vCenter Server VM has been created on the ESXi host.

4. Power on the vCenter Server VM.

First, find out its VM ID:

# vim-cmd vmsvc/getallvms

Assuming the VM ID is 1, power it on with:

# vim-cmd vmsvc/power.on 1

And once it’s up and running, find its IP by running:

# vim-cmd vmsvc/get.summary 1|grep ipAddress

Point your browser to that IP on port 5480 to start configuring the vCenter Server you just deployed.


Add a Swap Partition to Nodes Deployed by MAAS

MAAS, or Metal as a Service, is a service that, combined with Juju, makes it extremely easy to provision physical machines and even VMs via PXE. Setting up MAAS on an Ubuntu Server (I recommend 12.04) takes literally 5 minutes. I recommend installing the version from the Tools pocket in the Cloud Archive, and configuring maas-dns and maas-dhcp from the Web UI.

MAAS deploys machines without a swap partition by default. This may cause problems when deploying some services, especially if we use multiple LXC containers or Docker on the same machine.

To have MAAS configure a swap partition, all we need to do is edit the main preseed file:

vi /etc/maas/preseeds/preseed_master

And right after this line that says:

d-i     partman-auto/method string regular

We add the following:

d-i     partman-auto/text/atomic_scheme ::              \
        500 10000 1000000 ext3                          \
                $primary{ }                             \
                $bootable{ }                            \
                method{ format }                        \
                format{ }                               \
                use_filesystem{ }                       \
                filesystem{ ext3 }                      \
                mountpoint{ / } .                       \
        64 512 300% linux-swap                          \
                method{ swap }                          \
                format{ } .
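
Once a node is redeployed with this preseed, a quick way to confirm the swap partition was created is to check on the node itself, for example:

$ swapon -s
$ free -m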

References: Altering the Preseed File

Setting Up a Flat Network with Neutron

Data Network

This setup will allow the VMs to use an existing network. In this example, eth2 is connected to the pre-existing network (192.168.2.0/24) that we want to use for the OpenStack VMs.

All the configuration is done on the node dedicated to networking.

1. Set up the Open vSwitch bridge:

# ovs-vsctl add-br br-eth2
# ovs-vsctl add-port br-eth2 eth2
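
We can verify that the bridge and the port were created with:

# ovs-vsctl show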

2. Set up /etc/network/interfaces (node’s IP is 192.168.2.7):

auto eth2
iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down

auto br-eth2
iface br-eth2 inet static
    address 192.168.2.7
    netmask 255.255.255.0
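
To apply this without rebooting (assuming ifupdown manages the interfaces), something like this should do:

# ifup eth2
# ifup br-eth2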

3. Tell Open vSwitch to use the bridge connected to eth2 (br-eth2) and map physnet1 to it in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth2

4. Tell Neutron not to use the metadata proxy in /etc/nova/nova.conf (otherwise you get an HTTP 400 when querying the metadata service):

service_neutron_metadata_proxy=false

Note that the metadata service is managed by Neutron as usual via the neutron-metadata-agent service anyway.
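
After changing /etc/nova/nova.conf, restart the Nova API service so the new flag is picked up (assuming Upstart-managed services, as on Ubuntu):

$ sudo restart nova-api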

5. Create the network, telling it to use the physnet1 mapped above to br-eth2:

# neutron net-create flat-provider-network --shared  --provider:network_type flat --provider:physical_network physnet1

6. Create the subnet:

# neutron subnet-create --name flat-provider-subnet --gateway 192.168.2.5 --dns-nameserver 192.168.1.254  --allocation-pool start=192.168.2.100,end=192.168.2.150  flat-provider-network 192.168.2.0/24

That’s it. Now VMs will get an IP from the specified range and will be directly connected to our network via Open vSwitch.
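
To double-check, the new network and subnet should show up in the Neutron CLI:

# neutron net-list
# neutron subnet-show flat-provider-subnet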

Default User and Password in Ubuntu Cloud Images

Backdoor

Ubuntu Cloud Images don’t have any password set for the root or ubuntu users by default, relying only on ssh keys. For many tenants it is more convenient to just have a user and a password to log in rather than only an ssh key.

This is how to set a password for the ubuntu user:

1. Download the image:

$ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

2. Download backdoor-image:

$ bzr branch lp:~smoser/+junk/backdoor-image

3. Set the user and password:

$ sh backdoor-image --user ubuntu --password ubuntu --password-auth precise-server-cloudimg-amd64-disk1.img

Now we can deploy the image, for example in our OpenStack infrastructure, and ssh into it with ubuntu as both user and password, or log in via the console.
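
As a quick sketch of deploying it on OpenStack (the image, flavor and instance names below are just examples), we can upload it to Glance and boot it:

$ glance image-create --name precise-backdoor --disk-format qcow2 --container-format bare --is-public True --file precise-server-cloudimg-amd64-disk1.img
$ nova boot --flavor m1.small --image precise-backdoor test-vm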

Enable Console Access to vSphere instances in OpenStack

vSphere Instance Console

Instance consoles do not work by default and require configuration on both the ESXi hosts and the Nova Compute / Nova API nodes.

1. The Nova API and Nova Compute nodes (usually the same node when using vSphere as the compute backend) need the following in /etc/nova/nova.conf (this assumes the node’s IP is 192.168.2.7):

vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.2.7
novncproxy_base_url=http://192.168.2.7:6080/vnc_auto.html
vnc_enabled=True

Restart the services:

$ sudo restart nova-compute
$ sudo restart nova-api
$ sudo restart nova-console
$ sudo restart nova-consoleauth
$ sudo restart nova-novncproxy

2. ESXi setup.

ssh into the ESXi host and check which ports the launched instances are listening on; these are the ports the embedded VNC server listens on:

~ # esxcli network ip connection list|grep vmx
tcp         0       0  192.168.2.200:6111  192.168.2.7:50754   ESTABLISHED    434739  vmx-mks:92901823-a03c-4cdd-bbb6-616a8742388a
tcp         0       0  0.0.0.0:6111        0.0.0.0:0           LISTEN         434735  vmx
tcp         0       0  0.0.0.0:6102        0.0.0.0:0           LISTEN         250526  vmx
tcp         0       0  0.0.0.0:6101        0.0.0.0:0           LISTEN          11204  vmx

This can be confirmed by checking the .vmx files of the instances (this is set up by the VMwareVCDriver):

~ # grep vnc.port /vmfs/volumes/datastore1/*/*vmx
/vmfs/volumes/datastore1/52c84203-ce3d-47b4-ab22-1d30b2816298/52c84203-ce3d-47b4-ab22-1d30b2816298.vmx:RemoteDisplay.vnc.port = "6102"
/vmfs/volumes/datastore1/92901823-a03c-4cdd-bbb6-616a8742388a/92901823-a03c-4cdd-bbb6-616a8742388a.vmx:RemoteDisplay.vnc.port = "6111"
/vmfs/volumes/datastore1/c4e7264e-a4f7-4dea-87c2-6561b86fb85d/c4e7264e-a4f7-4dea-87c2-6561b86fb85d.vmx:RemoteDisplay.vnc.port = "6101"

In general, you will notice these two config flags in the .vmx files:

RemoteDisplay.vnc.enabled = TRUE
RemoteDisplay.vnc.port = port_number

Now you need to open these ports:

~ # chmod 644 /etc/vmware/firewall/service.xml
~ # chmod +t /etc/vmware/firewall/service.xml
~ # vi /etc/vmware/firewall/service.xml

And append this:

<service id='0033'>
  <id>VNC</id>
  <rule id='0000'>
    <direction>inbound</direction>
    <protocol>tcp</protocol>
    <porttype>dst</porttype>
    <port>
      <begin>5900</begin>
      <end>6199</end>
    </port>
  </rule>
</service>

Close vi with:

:x!

Refresh the firewall rules:

~ # esxcli network firewall refresh
~ # esxcli network firewall ruleset set --ruleset-id VNC --enabled true
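
A quick way to confirm the ruleset is present and enabled:

~ # esxcli network firewall ruleset list | grep VNC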

Done.

Note: there are multiple ways to keep the firewall configuration across ESXi reboots; please review them and choose one to make this change permanent.

Using the vSphere SDK/API in Ubuntu

The Perl vSphere SDK allows using esxcli and vmware-cmd from Linux, which can be very convenient for scripting tasks and remotely managing vSphere VMs.

This is the setup for Ubuntu 12.04:

1. Install its requirements:

$ sudo apt-get install ia32-libs build-essential gcc uuid uuid-dev perl libssl-dev perl-doc liburi-perl libxml-libxml-perl libcrypt-ssleay-perl cpanminus
$ cpan -i LWP::UserAgent
$ cpan GAAS/Net-HTTP-6.03.tar.gz
$ cpan install GAAS/libwww-perl-6.03.tar.gz

2. Download the VMware-vSphere-Perl-SDK-5.5.0-1292267.x86_64.tar package from VMware.

3. Untar it and install it:

$ sudo ./vmware-vsphere-cli-distrib/vmware-install.pl

4. Test it.

Run it to list VMs:

$ vmware-cmd -H 192.168.2.202 -U root -P ubuntu1! -l

/vmfs/volumes/51fe1a85-bb18fa16-4836-c8cbb8c706f5/vCenter 5.1/vCenter 5.1.vmx

Power on a VM:

$ vmware-cmd -H 192.168.2.202 -U root -P ubuntu "/vmfs/volumes/51fe1a85-bb18fa16-4836-c8cbb8c706f5/vCenter 5.1/vCenter 5.1.vmx" start
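
Other subcommands work the same way; for example, checking the power state of that VM:

$ vmware-cmd -H 192.168.2.202 -U root -P ubuntu "/vmfs/volumes/51fe1a85-bb18fa16-4836-c8cbb8c706f5/vCenter 5.1/vCenter 5.1.vmx" getstate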

Test the esxcli command on the ESXi hosts to see the existing network connections:

$ esxcli -s 192.168.2.22 -h 192.168.2.200 -u root -p ubuntu1! network ip connection list|head
Proto  Recv Q  Send Q  Local Address       Foreign Address     State        World ID  World Name
-----  ------  ------  ------------------  ------------------  -----------  --------  --------------------------------------------
tcp         0       0  127.0.0.1:5988      127.0.0.1:60509     FIN_WAIT_2       3513  sfcb-HTTP-Daemo
tcp         0       0  127.0.0.1:60509     127.0.0.1:5988      CLOSE_WAIT       2937  hostd-worker
tcp         0       0  192.168.2.200:5989  192.168.2.22:49411  FIN_WAIT_2       3509  sfcb-HTTPS-Daem
tcp         0       0  127.0.0.1:8307      127.0.0.1:58948     ESTABLISHED    251019  hostd-worker
tcp         0       0  127.0.0.1:58948     127.0.0.1:8307      ESTABLISHED      3183  rhttpproxy-work
tcp         0       0  127.0.0.1:80        127.0.0.1:56343     ESTABLISHED      3183  rhttpproxy-work
tcp         0       0  127.0.0.1:56343     127.0.0.1:80        ESTABLISHED      3715  sfcb-vmware_bas
tcp         0       0  192.168.2.200:6102  192.168.2.7:55644   ESTABLISHED    250530  vmx-mks:52c84203-ce3d-47b4-ab22-1d30b2816298

Using virsh to Manage VMware VMs

By default, libvirt and its virsh command are not compiled with the VMware (esx) driver:

$ virsh -c esx://192.168.2.202
error: failed to connect to the hypervisor
error: unsupported configuration: libvirt was built without the 'esx' driver

Fortunately, this PPA has a version with the esx driver built in. Just add it and install libvirt on Ubuntu 12.04 or Ubuntu 13.10:

$ sudo add-apt-repository ppa:zulcss/esx
$ sudo apt-get update
$ sudo apt-get install libvirt-bin
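
To confirm that the new build includes the driver, virsh can print its compiled-in hypervisor support; ESX should now appear in the list:

$ virsh -V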

On an ESXi host:

$ virsh -c esx://root@192.168.2.200?no_verify=1 list --all
Enter root's password for 192.168.2.200:
 Id    Name                           State
----------------------------------------------------
 -     077c7a49-a328-482b-9a9e-e0e0bfbdebd5 shut off
 -     1b6ea2ac-a24e-4497-b75e-0c53db246e16 shut off
 -     maas-node-1                    shut off
 -     nova-compute                   shut off

Start a VM:

$ virsh -c esx://root@192.168.2.202?no_verify=1 start "vCenter 5.1"
Enter root's password for 192.168.2.202:
Domain vCenter 5.1 started
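
Other generic virsh subcommands work against the esx driver too; for example, showing the details of a VM:

$ virsh -c esx://root@192.168.2.202?no_verify=1 dominfo "vCenter 5.1"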

In the vCenter Server:

$ virsh -c 'vpx://root@192.168.2.22/Fusion%20Datacenter/Fusion%20Cluster/192.168.2.202?no_verify=1' list --all
Enter root's password for 192.168.2.22:
 Id    Name                           State
----------------------------------------------------
 171   vCenter 5.1                    running

More info on the libvirt ESX hypervisor driver documentation page.