In part I of this two-part article, our objective was to understand containers – what they are, how they came to be, and how they compare to other technologies. In part II, we will look at containers in the cloud and CaaS, asset management best practices, and licensing considerations. Read Virtual Containers Part I.
Containers in the Cloud and CaaS
Containers are a great fit for the cloud. As covered in part I, containers are much less resource-intensive to run than virtual machines (VMs). This means that containers are less expensive to run in the cloud than VMs.
Financial savings are not the only benefit of running containers in the cloud. The cloud is touted for its scalability and elasticity; however, dynamically scaling traditional Infrastructure as a Service (IaaS) workloads (be it right-sizing instances or deploying them based on need) is much easier said than done. Containers, on the other hand, were built with this functionality in mind, and orchestration makes scaling to meet demand simple.
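To illustrate how little configuration that scaling takes, here is a minimal Kubernetes autoscaling sketch. It assumes a Deployment named "web" already exists; the names and thresholds are illustrative, not a recommendation:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # assumes an existing Deployment called "web"
  minReplicas: 2
  maxReplicas: 10            # scale out to as many as 10 replicas under load
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

With a policy like this in place, the orchestrator adds and removes container replicas on its own as demand changes – the dynamic scaling that is so difficult with traditional IaaS instances.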
Often, when talking about containers in the cloud we hear the term CaaS (Containers as a Service). CaaS could be regarded as a sub-category of IaaS, except we don’t need to manage the OS itself; we are just managing the containers and container runtime.
With IaaS, the cloud provider is providing the infrastructure and we just manage the OS and applications. With CaaS, the cloud provider is also managing the container platform, so we are just managing the containers themselves along with any orchestration.
Public cloud providers including Google, Amazon Web Services (AWS), IBM, and Rackspace all have some type of CaaS offering. These services offer the ability to run the entire container platform – including registry, orchestration, etc. – in the cloud.
IT Asset Management – Virtual Containers
Once containers are in use, they must be managed and licensed properly. Unfortunately, the very features that make containers so technologically compelling also make them difficult to manage.
The first thing that should always be considered when managing IT assets is ‘Trustworthy Data’. You simply cannot manage what you are unaware of. So, before anything else, you must have a way to gather data from your existing physical and virtual infrastructure and ensure that the data is trustworthy. This means putting the proper tools in place as well as processes to check the accuracy and completeness of the data gathered.
So how do we get the reliable data needed to effectively manage containers?
Due to the nature of containers, traditional data collection methods may not work. Physical machines and VMs have an installed OS that supports SSH, WMI, or an agent through which information can be gathered. Containers don't offer this same level of access, so a tool has to interface directly with the container platform (e.g., Docker) and any orchestrators (e.g., Kubernetes).
However, one benefit for ITAM is that containers can have audit trails in place. If you use a private registry, that registry contains a master catalog of the container images used in the environment, and containers are typically tagged and grouped, which helps identify what is running in each container. Some registries, such as Docker Hub and Azure Container Registry (ACR), keep a log of each deployment of an image as well as who built it; such logs are an invaluable resource for an ITAM manager. The container manifest will also reference any parent or child images and show how the container is built, which can be used to determine what software is running inside.
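As a sketch of how a tool might mine this metadata, the snippet below parses a simplified, hypothetical excerpt of the kind of image configuration `docker image inspect` returns, pulling out the tags, labels, and recorded build steps. The field names mirror Docker's output, but the image name, labels, and steps are invented for illustration:

```python
import json

# Simplified, hypothetical excerpt of `docker image inspect` output;
# real output contains many more fields.
image_config = json.loads("""
{
  "RepoTags": ["internal-registry/billing-app:2.4"],
  "Config": {
    "Labels": {
      "maintainer": "platform-team",
      "com.example.app": "billing"
    }
  },
  "History": [
    {"created_by": "FROM nginx:1.25"},
    {"created_by": "RUN apt-get install -y python3"},
    {"created_by": "COPY app/ /srv/app"}
  ]
}
""")

# Tags and labels identify what (and whose) software the image carries.
print("Tags:  ", image_config["RepoTags"])
print("Labels:", image_config["Config"]["Labels"])

# The recorded build steps show how the image was assembled, which helps
# infer what software ended up inside it.
for step in image_config["History"]:
    print("Step:  ", step["created_by"])
```

In practice an inventory tool would pull this data from the registry or Docker API for every image in use, rather than from a hard-coded sample.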
People & Processes
During the annual Docker conference in 2017, the keynote speech included the following hypothetical situation:
Two engineers come back from vacation to discover that they need to quickly stand up an application that also requires an Oracle Database. Using containers, they quickly deploy the Oracle database by downloading the container image from the official Oracle Container Registry and turning what would normally be an onerous and multi-day endeavor into a process that only takes a few minutes.
While the scenario highlights the benefits of containers it also shows how an incredibly powerful benefit could quickly become a nightmare if the proper policies and procedures are not in place.
Containers lower the barrier to entry in deploying enterprise-grade software. One consequence of this is that it becomes much easier for admins and others, who may not be aware of the cost and licensing implications, to install costly commercial software. Proper policies and procedures can greatly mitigate such risks. It is therefore crucial that these policies and procedures not only exist but that employees are educated and trained accordingly so that such a mistake does not happen.
Virtual Containers Licensing Considerations
Licensing can also be complicated by running applications in containers. In this section we will look at specific licensing considerations for Microsoft, Oracle, and IBM.
As mentioned, containers are typically ephemeral: if you start a container, stop it, and restart it, the first running instance is not the same as the second. It's analogous to being handed a physical server, throwing it away, and starting a brand-new physical server with the exact same configuration. This is not an issue with concurrent licensing, but for any license that is tied to the specific instance it runs on and carries re-assignment rules, you must carefully consider how the licensing applies if the terms contain no container-specific language.
A Dockerfile (which is used to build a container image) may be offered under a permissive open-source license, however that is the license for the Dockerfile itself, not necessarily the software that is running within the container. The software provided within the container may be licensed under a different, or even incompatible, license.
It’s also important to note that the running container is not the entire picture. The software within each container image layer is still being distributed even if it does not end up running in the resulting container. To illustrate this point:
We start with a base container image containing NGINX, Python 3.7, and zlib. A second layer removes zlib and modifies the Python install, upgrading it to 3.8. A third layer then adds OpenSSL, producing the end view – what we actually see running – of NGINX, Python 3.8, and OpenSSL. The issue is that the licensing for the software in each layer still needs to be adhered to, including Python 3.7 (which was modified/updated) and zlib (which was removed), because that software is still being distributed.
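Expressed as a hypothetical Dockerfile, the scenario might look like the sketch below. The base image name and package commands are placeholders; the point is that each instruction creates a new layer, and the base layer's contents remain in the distributed image even when a later layer removes or upgrades them:

```dockerfile
# Base layer: ships NGINX, Python 3.7 and zlib (illustrative base image name)
FROM example/nginx-python:3.7

# Layer 2: removes zlib and upgrades Python to 3.8 -- but the Python 3.7 and
# zlib files still exist in the base layer and are still being distributed
RUN remove-pkg zlib && upgrade-pkg python-3.8    # placeholder package commands

# Layer 3: adds OpenSSL to the final running view
RUN install-pkg openssl                          # placeholder package command
```

A later `RUN` only masks files from earlier layers in the merged filesystem; it does not strip them from the image that is pushed, pulled, and distributed.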
Containers and the ISO ITAM Standards
While the specifics of containers may make management more challenging, the ISO standards support their use-case. Containers and the software running within them are IT Assets and can be effectively managed by an ISO/IEC 19770-1 ITAM management system.
Any Software Identification (SWID) tags (ISO/IEC 19770-2) or SPDX documents (ISO/IEC 5962) should also be distributed with the container and can be ingested by tools to help identify the software running within. As discussed, however, care should be taken to ensure that the tags for every container image layer are ingested and considered, not just those of the running container itself.
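A sketch of that layer-aware approach, using made-up package lists in place of real SWID/SPDX data: the set of distributed software is the union of every layer's entries, not just what survives into the running container.

```python
# Hypothetical per-layer software inventories, e.g. as parsed from SWID tags
# or SPDX documents associated with each image layer.
layers = [
    {"nginx-1.25", "python-3.7", "zlib-1.2"},   # base layer
    {"python-3.8"},                              # layer 2 (also removes zlib)
    {"openssl-3.0"},                             # layer 3
]
removed_later = {"python-3.7", "zlib-1.2"}       # masked/replaced by layer 2

# What is actually running in the final container...
running = set().union(*layers) - removed_later

# ...versus everything distributed with the image, layer by layer.
distributed = set().union(*layers)

print("Running:    ", sorted(running))
print("Distributed:", sorted(distributed))
```

Tooling that only inventories `running` would miss the Python 3.7 and zlib entries that are still present in `distributed` and still subject to their licenses.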
While the container platform providers have not adopted the standard, containers are an ideal use case for ISO/IEC 19770-4, the ISO standard for Resource Utilization Measurement (RUM). A RUM provides usage information for an IT asset – including when, and for how long, the asset was in use – and this data is critical for an ITAM team.
Publisher-Specific Licensing Considerations
Microsoft Virtual Containers
Microsoft has two different kinds of containers: Windows Server containers and Hyper-V containers.
Windows Server containers provide application isolation through process and namespace isolation technology. Like traditional Linux containers, Windows Server containers share their kernel (the core instructions/functions of an operating system) with the container host OS. Because they share the kernel, these containers require the same kernel version and configuration as the host. From a licensing standpoint, you can run unlimited Windows Server containers without additional licensing considerations.
To date, Hyper-V containers have been essentially optimized virtual machines. The kernel of Hyper-V containers is not shared with the host, meaning that the configurations and versions do not need to match. However, these containers will be much larger. Because the containers are redistributing the OS kernel, the container OS must be licensed. From a licensing standpoint, Microsoft treats these containers as if they were VMs. This means that licensing the container OS is straightforward for those who are already familiar with licensing Windows Servers on VMs: Once the physical host’s cores have been licensed, Windows Server Standard can cover up to two containers (and be stacked multiple times to cover additional containers) while Windows Server Datacenter can cover an unlimited number of containers running on that host.
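A rough worked example of that stacking rule, with illustrative core and container counts (always check your agreement's terms and Microsoft's minimum core-licensing requirements):

```python
import math

host_cores = 16          # all physical cores must be licensed (minimums apply)
hyperv_containers = 7    # Hyper-V containers needed on this host

# Each full licensing of the host's cores with Windows Server Standard
# covers up to two Hyper-V containers; stack licenses to cover more.
standard_stacks = math.ceil(hyperv_containers / 2)
standard_core_licenses = standard_stacks * host_cores

print(f"Standard: license all {host_cores} cores {standard_stacks} times "
      f"({standard_core_licenses} core licenses in total)")
print("Datacenter: license the host's cores once for unlimited containers")
```

At some container density the stacked Standard licenses cost more than a single Datacenter license, which is the usual break-even analysis for dense hosts.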
Microsoft also has two Windows Server operating system editions that are commonly used in containers – Windows Server Core and Nano Server. These editions are ideal for containers because of their smaller footprint. Nano Server in particular is designed for scenarios that require “fewer patches, faster restarts, and tighter security”. Because Windows Server 2016 Nano Server receives updates through the Semi-Annual Channel, Software Assurance (SA) is required on both the server licenses and the CALs (Client Access Licenses).
Just as with the Windows Server operating system, Microsoft treats containers like VMs when licensing applications. Here’s an example: when licensing SQL Server within containers, as with VMs, you can license the subset of CPU cores dedicated to that container rather than all of the physical cores supporting the container platform. Additionally, if all of the host’s cores are licensed with SQL Server Enterprise with SA, an unlimited number of SQL Server containers can be run. Keep in mind, however, that unlike VMs, where we explicitly assign CPU, RAM, etc., containers will by default use all available host resources unless limits are explicitly configured.
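To make a core-subset license position defensible, the container's CPU allocation has to be pinned down explicitly. A minimal sketch using Docker Compose (the image tag and limit values are illustrative, not licensing advice):

```yaml
services:
  sql:
    image: mcr.microsoft.com/mssql/server:2022-latest   # illustrative tag
    deploy:
      resources:
        limits:
          cpus: "4.0"      # cap the container at 4 cores to match the license
          memory: 16g
```

Without a cap like this, the container can schedule onto every core of the host, which undermines any claim to be licensing only a subset of them.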
Oracle Virtual Containers
Oracle takes more of a hardline approach with its products and virtualization technologies. Oracle has a ‘partitioning policy’ that lists supported technologies; when using a supported technology, we only need to pay the licensing costs associated with the CPUs supporting those partitioned workloads. This is referred to as ‘hard partitioning’. Everything else falls under the definition of ‘soft partitioning’.
Containers, such as Docker, are considered ‘soft partitioning’ by Oracle. Should an Oracle product be deployed within a container, all physical infrastructure that sits underneath that container – including all servers within the cluster, farm, etc. – must be licensed. There is one exception: Solaris containers on Oracle Solaris 10 and later, also known as Solaris Zones, are recognized as a ‘hard partitioning’ technology if they are configured as ‘capped zones’.
While it’s not possible to license the subset of CPUs supporting a workload with Docker, once the infrastructure is licensed, we can run an unlimited number of container instances on it without additional licensing considerations. And because containers are lighter weight than VMs, we can also typically run more of them.
IBM Virtual Containers
IBM has taken a very similar route as Microsoft in that it treats containers like it treats VMs. This is great for those already familiar with IBM’s licensing models, especially its PVU metric-based licensing which allows customers to take advantage of subcapacity licensing. However, there is some specific guidance from IBM on ensuring eligibility for PVU licensing:
“Docker is not a sub-capacity eligible virtualization, but it can be used in combination with a sub-capacity virtualization. … Apart from discovering IBM software that is installed in Docker containers, License Metric Tool also reports its license metric utilization. When the Docker is deployed on a physical host, license metric utilization is calculated on the level of the host. When it is deployed on a virtual machine, utilization is calculated on the level of the virtual machine.”
This means that if we deployed containers on a physical host, we would need to license the PVU equivalent for all of the host’s cores, regardless of how many were accessible to the container. However, if we deployed the container runtime in a VM using a virtualization technology that is eligible for sub-capacity licensing, then we would only need to license the cores assigned to the VM. This also requires ILMT or BigFix clients on the host or VM. ILMT added Docker software scanning support in December 2017, and it is available from version 9.2.5 onward.
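A simplified worked example of the difference. The 70 PVU-per-core figure is common for many x86 processors in IBM's PVU table but varies by processor type, and the core counts are illustrative:

```python
pvu_per_core = 70        # common x86 value from IBM's PVU table (varies by CPU)
host_cores = 32          # physical host running the Docker engine
vm_cores = 8             # VM hosting the Docker engine (sub-capacity eligible)

# Docker on a bare-metal host: license every core of the physical host.
full_capacity_pvus = host_cores * pvu_per_core

# Docker inside a sub-capacity-eligible VM (with ILMT in place):
# license only the cores assigned to that VM.
sub_capacity_pvus = vm_cores * pvu_per_core

print(f"Bare-metal host: {full_capacity_pvus} PVUs")
print(f"Inside VM:       {sub_capacity_pvus} PVUs")
```

In this sketch, moving the container runtime into a sub-capacity-eligible VM reduces the licensable footprint from 2,240 to 560 PVUs, which is why the placement of the Docker engine matters so much under IBM's rules.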
Containers are only going to continue to grow in usage and popularity, and a solid grasp of how they operate is required to manage them effectively. Containers, cloud-based services, and other serverless functions necessitate a mature SAM and ITAM program, with supporting processes and reporting. They also increase the need for tools and solutions that can provide real-time, actionable information to optimize costs and mitigate security risks. If you have any questions regarding SAM and containers, reach out to us and we’ll be happy to help.