Virtual Containers: asset management best practices and licensing considerations
Containers are seeing tremendous adoption and growth industry-wide, yet it is also clear that they are a black hole for our clients in terms of IT asset management. Unfortunately, there is very little information available about managing virtual containers or about addressing the emerging SAM and ITAM challenges they create, including licensing.
Due to this lack of public information, we have decided to publish some of the things we have learned about navigating the world of virtual containers, with an emphasis on asset management and licensing.
In this article we will define what a container is, cover the different technologies that make up the container ecosystem, explain what makes containers so attractive, discuss the specific challenges related to managing container assets, and more.
What is a Container?
As containers continue to evolve, we have noticed an interesting trend: many people conflate the term with the various technologies that make up the container ecosystem. To shed some light and avoid that pitfall, let's look at what a modern container is at the most fundamental level.
Consider a traditional operating system with several different processes (applications) installed and running. These processes all share the same environment (namespaces, in Linux terms) and can interact with each other. A container simply isolates a single process and wraps it up in, just as it sounds, a container. The container is isolated from the host operating system and can only "see" and interact with what it is explicitly allowed to.
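As a concrete sketch of that isolation (assuming Docker is installed and the public `alpine` image is available), compare the process list inside a container with the process list on the host:

```shell
# Run a throwaway Alpine Linux container and list its processes.
# The container sees only its own process tree (essentially just the
# ps command itself), not anything running on the host.
docker run --rm alpine ps aux

# On the host, ps shows every process on the machine, including the
# container's process, which is just an ordinary isolated process here.
ps aux
```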
Let's look at the traditional model, where we install applications directly on the OS: it runs a process such as an NGINX web server, along with a number of dependencies installed to support our main application. As a hypothetical, suppose we also want to install Node.js, which requires some of the same dependencies but at different versions. We now have to go through complicated configuration to make sure each top-level application points to the right versions of its dependencies, and hope those configuration changes survive the next update to our applications or dependencies.
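A minimal sketch of how containers sidestep this conflict (assuming Docker; the image tags shown are real public images on Docker Hub): each application runs against its own bundled runtime and dependencies, side by side on the same host, with no shared configuration to break.

```shell
# Two different Node.js versions coexist on one host because each
# container carries its own runtime and dependency tree.
docker run --rm node:16 node --version
docker run --rm node:20 node --version

# NGINX runs alongside them from its own self-contained image,
# detached and mapped to port 8080 on the host.
docker run --rm -d -p 8080:80 nginx
```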
But with containers, we bundle not only the process but also any dependencies it relies on. Everything ships in one simple container, so we don't have to worry about version conflicts: it is all isolated, and nothing is actually installed on the OS. You just run the container, and when you stop or delete it, it is all gone.

This is especially useful when developing applications. Someone might develop on a laptop, test on a server, then deploy to the cloud or to a co-worker's desktop, and all of these environments are likely different: a different version of a dependency is installed, or the hardware configuration differs slightly, which means additional troubleshooting. Containers are platform-agnostic because they abstract these layers away. You can run the same container on a laptop, a server, or in the cloud; it doesn't matter, it will run the same. In the traditional model, migrating an application from on-premises to the cloud, or across cloud platforms, is an onerous process; with containers, that process is streamlined and greatly simplified.
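The portability point can be sketched in a few commands (a hedged example, assuming Docker and a Dockerfile in the current directory; the image name `myapp` and the registry host are hypothetical): the same image artifact is built once and runs unchanged on any host with a container runtime.

```shell
# Build the image once from a Dockerfile in the current directory.
docker build -t myapp .

# Run the identical image anywhere a container runtime is available:
# a developer laptop, a test server, or a cloud VM all behave the same.
docker run --rm myapp

# Ship the exact same artifact to other environments via a registry
# (hypothetical registry host shown for illustration).
docker push registry.example.com/myapp
```

Because the image bundles the application and its dependencies, "migrating" it between environments reduces to pulling and running the same image, rather than reinstalling and reconfiguring software on each host.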