What are vSphere Integrated Containers (VIC)?
vSphere Integrated Containers was announced as GA back in December of 2016. The solution provides a platform for developers to deploy containers in a vSphere environment. On the front end, developer teams use the Docker client on their laptops, while the vSphere administrator manages the underlying infrastructure: creating Virtual Container Hosts (VCHs), handling life-cycle management, and applying network virtualization and security via NSX. Containers are provisioned as virtual machines, one container = one virtual machine, which gives the containers native vSphere capabilities like vMotion, HA, DRS, and more.

I know, I know, I was thinking the same thing: doesn't that sort of defeat the purpose of a container? Well, to be honest, there are several things to consider, including resource usage, packaging, and security. On resource usage, the container VMs are deployed with a stripped-down version of Photon, a minimal OS. From a vSphere administration perspective, I don't believe deploying multiple minimal container VMs is going to eat up resources, and more importantly, I would assume the main reason developers use containers is packaging and the ability to have an application run anywhere. Even more importantly, because each container runs as a virtual machine, we can use NSX to secure the vNIC of each and every VM, in other words, every container that is deployed. The ability to natively use both vSphere clustering technology and NSX for networking and security functionality should be a huge benefit! Let's talk about the VIC components.
What are the VIC Components?
VIC has three major components: the Engine, the Registry (Harbor), and the Management Portal (Admiral). Admiral is currently in beta; I will cover it in more detail in future posts after it reaches GA. All of these components are available as free, open source projects. For download information, skip to the bottom of the page.
vSphere Integrated Containers Engine and Registry
The VIC engine serves several functions. It allows a vSphere admin to provide a Docker endpoint as a service to developers. Since the vSphere admin provides this functionality, they retain control of the back-end infrastructure. The benefits include not having to provision large Linux compute boxes out to developers, and, again, the ability to apply native vSphere functionality (HA, DRS, and virtual networking and security services) to the containers. Providing all of this per container is much better than providing it per a few large Linux VMs that each house multiple containers.
Included in the engine is a utility, vic-machine, used to deploy an appliance that exposes a secure remote Docker API. Developers or users then only need the IP, port, and a certificate to access it. A good visualization of how the Docker client integrates with vSphere functionality is below.
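To give a feel for what the vSphere admin side looks like, here is a sketch of deploying a VCH with vic-machine. The vCenter address, credentials, cluster, datastore, and network names are all placeholders for your own environment, and `--no-tlsverify` is used only to keep the example short; in production you would let vic-machine generate or consume proper certificates.

```shell
# Illustrative only: target, credentials, and resource names are placeholders.
# Deploys a Virtual Container Host against vCenter; on success, vic-machine
# prints the Docker API endpoint (IP and port) to hand to developers.
vic-machine-linux create \
  --target 'vcenter.example.com' \
  --user 'administrator@vsphere.local' \
  --compute-resource 'Cluster01' \
  --image-store 'datastore1' \
  --bridge-network 'vic-bridge' \
  --public-network 'VM Network' \
  --name 'vch-01' \
  --no-tlsverify
```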
In the image above, the deployed containers are VMs that behave like containers. Again, they run a stripped-down Photon OS designed to be "just enough kernel" for image deployment. A Virtual Container Host (VCH) is a logical construct that pools resources together for deploying container VMs. You can think of it as a Linux Docker host, but one that can span multiple physical ESXi hosts. Each VCH runs an endpoint VM that provides the API to clients: it takes the Docker commands and converts them to vSphere API calls. The VCH also provides network setup for the container VMs (covered in the networking section below), life-cycle management, logging, and monitoring. Life-cycle management of the VCHs themselves is performed by the vic-machine utility; as you can see from the picture above, it lives outside the VCH and handles creation, deletion, configuration, and certificate management for the VCHs.
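From the developer's side, the VCH endpoint looks like any remote Docker host. A rough sketch of pointing a standard Docker client at it (the IP below is a placeholder, and port 2376 is the conventional TLS Docker API port):

```shell
# Placeholder endpoint: substitute the IP/port reported by vic-machine.
export DOCKER_HOST=tcp://192.0.2.10:2376

docker --tls info            # the VCH endpoint answers standard Docker API calls
docker --tls run -d nginx    # behind the scenes, this provisions a container VM on vSphere
```

The key point is that nothing changes in the developer workflow; the vSphere-specific work happens behind the API.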
The container registry, Harbor, is essentially a Docker image registry with additional capabilities such as replication and role-based access control (RBAC).
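Because Harbor implements the standard Docker registry API, the usual login/tag/push workflow applies unchanged. The registry hostname and project name below are hypothetical:

```shell
# Hypothetical Harbor instance; "library" is a project created in Harbor's UI.
docker login harbor.example.com
docker tag nginx:latest harbor.example.com/library/nginx:latest
docker push harbor.example.com/library/nginx:latest
```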
How does networking work in VIC?
When deploying VCHs, you pick which networks will be available for provisioning. Each network is a DVS port group, and from the documentation it appears a VSS (standard switch) port group can also be used.
As you can see from the picture, there are four different types of networks for the VCHs (highlighted in green): the Management, Public, Client, and Bridge networks. Docker networks are shown in blue. Unfortunately, a VCH is currently limited to three interfaces, which means two or more of the networks shown in green have to be combined. I am not sure why this limitation exists, but it appears that we are working on removing it so the networks can be split across four interfaces.
- Management Network: communication between the VCH, ESXi hosts, and vCenter Server
- Public Network: communication from container VMs to the internet
- Client Network: network on which developers or users make Docker API calls
- Bridge Network: container VM to container VM communication
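Container-to-container traffic rides the bridge network. From the Docker client, that plumbing is invisible; a user-defined Docker network (backed by the bridge on the VCH) lets containers reach each other by name, as in this sketch with illustrative container names:

```shell
# Create a user-defined network; on a VCH this maps onto the bridge network.
docker network create app-net

# Two container VMs on the same network can resolve each other by name:
docker run -d --name db  --net app-net redis
docker run -d --name web --net app-net -p 8080:80 nginx
```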
Downloading vSphere Integrated Containers
There are two options for downloading VIC: official releases from VMware, and open source builds.
Note: Open source builds are not supported by GSS (VMware Global Support); please use the official releases to ensure you can get help when you need it. Also, support requires vSphere Enterprise Plus licensing.
Official Release Download