vSphere Integrated Containers Part 2 – Prerequisites and Deployment

 
The second part of the vSphere Integrated Containers series focuses on the prerequisites and the deployment. I will list the requirements for the Docker client and the VCH in vSphere, and walk through the process of deploying the VCH. We will also run a couple of tests at the end to make sure that VIC is working correctly. If you haven’t had a chance to go through vSphere Integrated Containers Part 1 – Getting started, I suggest taking a look to understand the components of VIC; otherwise, let’s get started!
 

Prerequisites

 
vic-machine

  • Windows 7 or 10
  • Mac OS X 10.11 (El Capitan). Note: I’m running Sierra without any issues.
  • Ubuntu 16.04 LTS

 
vSphere requirements

  • vSphere 6.0 or higher
  • Enterprise Plus licensing

One of the following configurations:

  • DRS-enabled cluster
  • Standalone hosts managed by vCenter
  • Standalone hosts unmanaged by vCenter

 
ESXi Requirements

  • Allow outbound TCP traffic on port 2377 from each host to the VCH management interface
  • Allow inbound HTTPS/TCP on port 443 (used to upload the ISO files to the datastore)
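
There is no one-click toggle for the outbound 2377 rule on ESXi, so one option is the custom-ruleset mechanism ESXi provides under /etc/vmware/firewall. The sketch below follows that mechanism; the file name vic.xml and the service name vicoutgoing are my own choices, and note that rulesets added this way do not survive a host reboot unless you make them persistent.

# Run on each ESXi host (SSH enabled). A sketch only: the ruleset
# name "vicoutgoing" and file name "vic.xml" are illustrative.
cat > /etc/vmware/firewall/vic.xml <<'EOF'
<ConfigRoot>
  <service id="0100">
    <id>vicoutgoing</id>
    <rule id="0000">
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>2377</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>
EOF

# Reload the firewall and confirm the new ruleset is active.
esxcli network firewall refresh
esxcli network firewall ruleset list | grep vicoutgoing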

 

Download and Install Docker Toolbox for Mac or Windows

 
Click here to download Docker Toolbox and install it on your Windows machine or Mac to use as the Docker client.
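
Once installed, a quick sanity check from a terminal confirms the client is on your path and shows its API version, which will matter later when we connect to the VCH:

# Verify the Docker client installed by Toolbox; note the client
# API version in the output for the connectivity section below.
docker version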
 

Deploying the VCH

 
It’s possible to deploy a VCH to a standalone ESXi host, but I don’t see many use cases for that, so I will deploy it to a vCenter Server cluster.
 
First, download the VIC engine bundle; the link is at the bottom of Part 1 – Getting started with VIC. Download both the engine and the registry. The engine download is a zip containing the following binaries (Credit):
 

  • appliance.iso: The Photon-based boot image for the virtual container host (VCH) endpoint VM.
  • bootstrap.iso: The Photon-based boot image for the container VMs.
  • ui/: A folder that contains the files and scripts for the deployment of the vSphere Web Client plug-in for vSphere Integrated Containers Engine.
  • vic-machine-darwin: The OSX command line utility for the installation and management of VCHs.
  • vic-machine-linux: The Linux command line utility for the installation and management of VCHs.
  • vic-machine-windows.exe: The Windows command line utility for the installation and management of VCHs.
  • vic-ui-darwin: The OSX executable for the deployment of the vSphere Web Client plug-in. NOTE: Do not run this executable directly.
  • vic-ui-linux: The Linux executable for the deployment of the vSphere Web Client plug-in. NOTE: Do not run this executable directly.
  • vic-ui-windows.exe: The Windows executable for the deployment of the vSphere Web Client plug-in. NOTE: Do not run this executable directly.
  • README: Contains a link to the vSphere Integrated Containers Engine repository on GitHub.
  • LICENSE: The license file for vSphere Integrated Containers Engine.

 
 
 
Step 1. Create a VDS port group named “vic-bridge”. Optionally, create port groups for the client, management, and public networks. If you prefer the command line, see the sketch below.
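
If you would rather script the port group than click through the Web Client, here is a minimal sketch with govc (the govmomi CLI). It assumes govc is installed with GOVC_URL and credentials exported, and a VDS named “DSwitch”, which is an assumption; substitute your own switch name.

# Create the bridge port group on an existing VDS (names are assumptions).
govc dvs.portgroup.add -dvs DSwitch -type earlyBinding -nports 32 vic-bridge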
 

 
Step 2. Once you have unzipped the engine bundle, navigate to the directory in a terminal and run the following command. On Windows or Linux, change the binary to vic-machine-windows or vic-machine-linux. For a full list of create options, click here. I have specified static IPs, hence the much longer command; it wasn’t the most fun to input and took a lot of playing around. Using DHCP would cut this to about one line, since you would only need to specify the bridge network port group (see the sketch after the command below).
 

./vic-machine-darwin create -t vcsa1.corp.local -u "administrator@vsphere.local" -p VMware1! \
  -n harbor -r mgmt-edge-compute -i drobo1 -vs "drobo1/ContainerVolumeStore:default" \
  -b vic-bridge --bnr 10.10.0.0/12 -cln vic-all --dns-server 172.16.10.2 \
  -pn vic-all -mn vic-all -cn vic-all \
  --public-network-ip 172.16.10.48 --public-network-gateway 172.16.10.1/24 \
  --registry-ca ./HarborCert/ca.crt --no-tlsverify -f
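
For comparison, here is a minimal DHCP-based sketch using only flags from the command above; the VCH picks up its public address from DHCP, so none of the static networking options are needed:

./vic-machine-darwin create -t vcsa1.corp.local -u "administrator@vsphere.local" -p VMware1! \
  -n harbor -r mgmt-edge-compute -i drobo1 \
  -b vic-bridge --no-tlsverify -f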

 

Resolved Errors

 
Note: I hit several errors along the way. I’ve collected the resolutions here in case anyone else runs into the same issues.
 
Issue 1:

ERRO[2017-03-31T11:57:33-06:00] Unable to load certificates: cname option doesn't match existing server certificate in certificate path virtual-container-host 
ERRO[2017-03-31T11:57:33-06:00] --------------------                         
ERRO[2017-03-31T11:57:33-06:00] vic-machine-darwin failed: cname option doesn't match existing server certificate in certificate path virtual-container-host

 
Resolution: After a couple of failed attempts, I needed new certificates because of naming changes. I navigated to the folder “virtual-container-host” and moved both certificates to a folder called “Old Certs” so that vic-machine would generate new ones.
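
Something along these lines did the trick (a sketch; the certificate file names vic-machine generated in your output folder may differ):

# Move the previously generated certificates aside so that the next
# create run regenerates them (file names may differ on your system).
mkdir -p "Old Certs"
mv virtual-container-host/*.pem "Old Certs/"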
 
Issue 2:

ERRO[2017-03-31T12:37:48-06:00] --------------------                         
ERRO[2017-03-31T12:37:48-06:00] Static IP on network sharing port group with public network - Configuration ONLY allowed through public network options 
ERRO[2017-03-31T12:37:48-06:00] Failed to configure static IP for additional networks using port group "vic-public-and-mgmt" 
ERRO[2017-03-31T12:37:48-06:00] client network gateway specified without at least one routing destination 
ERRO[2017-03-31T12:37:48-06:00] Firewall must permit dst 2377/tcp outbound to the VCH management interface 
ERRO[2017-03-31T12:37:48-06:00] Create cannot continue: configuration validation failed 
ERRO[2017-03-31T12:37:48-06:00] --------------------                         
ERRO[2017-03-31T12:37:48-06:00] vic-machine-darwin failed: validation of configuration failed

 
Resolution: Edit the firewall on each ESXi host to permit outbound TCP 2377, as described in the ESXi requirements above.
 
Issue 3:

ERRO[2017-03-31T13:22:44-06:00] client network gateway specified without at least one routing destination 
ERRO[2017-03-31T13:22:44-06:00] Create cannot continue: configuration validation failed 
ERRO[2017-03-31T13:22:44-06:00] --------------------                         
ERRO[2017-03-31T13:22:44-06:00] vic-machine-darwin failed: validation of configuration failed

 
Resolution: Specify at least one routing destination for the client gateway, using the following syntax:
 

--client-network-gateway routing_destination_1/subnet_mask,routing_destination_2/subnet_mask:gateway_address/subnet_mask
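
As a concrete sketch, in my lab this would look like the line below; the routing destination and gateway are assumptions based on my 172.16.x.x addressing, so substitute your own networks:

--client-network-gateway 172.16.0.0/16:172.16.10.1/24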

 
Issue 4:

ERRO[2017-03-31T13:33:08-06:00] Failed to collect 1bbaa091-57e3-4987-b86e-3d38262eb2c2 vpxd.log: Post https://vcsa1.corp.local/sdk: net/http: request canceled while waiting for connection 
WARN[2017-03-31T13:33:08-06:00] No log data for 1bbaa091-57e3-4987-b86e-3d38262eb2c2 vpxd.log 
ERRO[2017-03-31T13:33:08-06:00] --------------------                         
ERRO[2017-03-31T13:33:08-06:00] vic-machine-darwin failed: Create timed out: if slow connection, increase timeout with --timeout

 
Resolution: Delete the failed VCH instance and specify the bridge network range in CIDR notation with a minimum of a /12. I am not sure why it needs this many addresses, but I’ll save the pain for anyone who tries to specify /16 or smaller. I also needed to move the bridge network range, since the default of 172.16.0.0/12 overlapped with my public network and caused the same error.
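
To remove the failed instance, vic-machine has a delete subcommand. A sketch using the same target and VCH name as the create command above:

# Delete the failed VCH before retrying the create (sketch only).
./vic-machine-darwin delete -t vcsa1.corp.local -u "administrator@vsphere.local" -p VMware1! \
  -n harbor --force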
 
 
Phew… got through all the errors with some log review and patience. Once the create completes, you will see output like the following.
 

INFO[2017-03-31T15:47:50-06:00] Initialization of appliance successful       
INFO[2017-03-31T15:47:50-06:00]                                              
INFO[2017-03-31T15:47:50-06:00] VCH Admin Portal:                            
INFO[2017-03-31T15:47:50-06:00] https://172.16.10.45:2378                   
INFO[2017-03-31T15:47:50-06:00]                                              
INFO[2017-03-31T15:47:50-06:00] Published ports can be reached at:           
INFO[2017-03-31T15:47:50-06:00] 172.16.10.45                                 
INFO[2017-03-31T15:47:50-06:00]                                              
INFO[2017-03-31T15:47:50-06:00] Docker environment variables:                
INFO[2017-03-31T15:47:50-06:00] DOCKER_HOST=172.16.10.45:2376               
INFO[2017-03-31T15:47:50-06:00]                                              
INFO[2017-03-31T15:47:50-06:00] Environment saved in virtual-container-host/virtual-container-host.env 
INFO[2017-03-31T15:47:50-06:00]                                              
INFO[2017-03-31T15:47:50-06:00] Connect to docker:                           
INFO[2017-03-31T15:47:50-06:00] docker -H 172.16.10.45:2376 --tls info      
INFO[2017-03-31T15:47:50-06:00] Installer completed successfully

 

Verify VCH Connectivity with Docker Client

To verify connectivity, run the following command.
 

docker -H 172.16.10.45:2376 --tls info

 
If you get the error message below, run the command “export DOCKER_API_VERSION=1.23” as shown. On Windows, the command is “SET DOCKER_API_VERSION=1.23”.
 

swhitney-m01:vic swhitney$ docker -H 172.16.10.45:2376 --tls info
Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)
swhitney-m01:vic swhitney$ export DOCKER_API_VERSION=1.23
swhitney-m01:vic swhitney$ docker -H 172.16.10.45:2376 --tls info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: v0.8.0-7315-c8ac999
Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
VolumeStores: 
vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING
 VCH mhz limit: 28352 Mhz
 VCH memory limit: 113.5 GiB
 VMware Product: VMware vCenter Server
 VMware OS: linux-x64
 VMware OS version: 6.0.0
Plugins: 
 Volume: 
 Network: bridge vic-container
Swarm: 
 NodeID: 
 Is Manager: false
 Node Address: 
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 28352
Total Memory: 113.5 GiB
Name: virtual-container-host
ID: vSphere Integrated Containers
Docker Root Dir: 
Debug Mode (client): false
Debug Mode (server): false
Registry: registry-1.docker.io
Experimental: false
Live Restore Enabled: false

 
We can also see the VCH deployed in a vApp in vCenter Server!
 

 
Let’s pull down a container image and deploy it to ensure everything is functioning correctly. Run the following command to pull busybox through the VCH. Mine took a few minutes.
 

swhitney-m01:vic swhitney$ docker -H 172.16.10.45:2376 --tls pull busybox
Using default tag: latest
Pulling from library/busybox
7520415ce762: Pull complete 
a3ed95caeb02: Pull complete 
Digest: sha256:8d7fe3e157e56648ab790794970fbdfe82c84af79e807443b98df92c822a9b9b
Status: Downloaded newer image for library/busybox:latest

 
Then, you can run the container!
 

docker -H 172.16.10.45:2376 --tls run -it --name containersftw busybox
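
When you exit the busybox shell the container stops; a quick check from the Docker client shows its state (same -H and --tls options as before):

# List all containers on the VCH, including stopped ones.
docker -H 172.16.10.45:2376 --tls ps -a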

 
We can see the VM in the vSphere Web Client as shown below.
 

 
There are many commands to play around with from the Docker client. Not all of the commands are supported yet, and we are continually adding more. For now, the list of supported commands should be sufficient for enterprise deployments and for developers who are already using or looking into Docker. I will write additional posts when I get the chance to play around with the Docker commands, as well as a networking deep dive covering how to connect containers to external networks for testing.
 

Posted by:

Sean Whitney
