All right. Welcome everyone, and thanks for joining us today for our Virtual Containers – ITAM Best Practices and Licensing Considerations webinar. As always, we have a couple of housekeeping items to take care of before we get started. First, we will be recording this session, and it will be distributed at the conclusion of this webinar in a follow-up email.
So be looking out for that. And lastly, we want to encourage participation throughout the webinar. All questions will be anonymous, so if you have any during the presentation, please type them into the question-and-answer box at the bottom of your screen. If we don't get to your question immediately, we will address it at the end during the question-and-answer portion of the presentation.
That wraps up the announcements. We hope you enjoy the webinar and I’ll hand it over to Ben Wilcox to introduce Anglepoint.
Hey, thanks Alex. A big welcome to everybody for joining us today for this webinar. We're pretty excited about it. It's a continuation of our series, and we're going to be able to delve deep into virtual containers today. My name is Ben Wilcox. I'm part of the business development team at Anglepoint. I focus on working with customers and partners on developing client-specific solutions, understanding clients' issues and challenges, and matching our expertise to achieve their goals. So that's my role, but a little bit about Anglepoint: we're a 10-year-old company built on dedicated expertise in the software asset management and ITAM space. Many of our team bring experience from Big Four consultancies or from major publishers, and this allows us to bring deep expertise around software licensing, project management, and other aspects of IT to our customers to meet their objectives.
We work with many Fortune 500 companies, as the previous slide noted. Our services are really focused around software licensing and software asset management, and as this slide highlights, we have multiple practice areas, from single-publisher issues delivered through our software publisher and licensing team, to SAM tooling, SAM program governance, SAM managed services, and our expanding groups around IT security and certified training. So we have multiple areas where we can bring expertise to work with our clients on managing their various challenges in this space. A real aim is to assist clients in establishing a vision around software asset management and IT asset management, achieving long-term objectives, and really helping them drive strategic decisions, mitigating certain risks, but also containing or reducing costs. IT environments continue to shift; they've evolved significantly over the last several years with virtualization and cloud solutions.
Other technologies will come into play. Really, these solutions are allowing organizations to drive efficiencies, scalability, and flexibility. But it influences the challenges our customers face, and it influences the scope of services we deliver. So today we have the opportunity to delve into virtual containers.
And this is an area where, from a cloud and virtualization standpoint, containers play into IT asset management and software asset management. Let me introduce Trent Allgood. He's a principal consultant who's going to provide a fantastic overview of virtual containers. Trent is a graduate of BYU. He joined Anglepoint in 2012 and has been an integral part of our services delivery, working with many customers. He's got a wide range of experience, from single-publisher licensing to SAM technologies, ISO standard frameworks, and cloud migration. He's also a member of the ISO working group focused on the 19770 ITAM standards. So he's involved from an industry standpoint, and I think he's going to be able to share a great deal of knowledge with everybody today.
So I hope you enjoy the webinar, and as Alex mentioned, please do not hesitate to ask questions. We'll have a dedicated time for them at the tail end, and we'll remind you then. Following this webinar, if you have follow-up questions, you can reach out to us; we'd love to connect with you.
Trent, it’s all yours.
Hey, thank you Ben. So as a brief agenda of what we'll go over today: we'll first talk about the technologies that support the container ecosystem. I think it's important to have a certain depth of understanding of how the different container technologies work together in order to understand the advantages, not just from a technological standpoint but from an IT asset management perspective, as well as the particular challenges you would have as an IT asset manager in managing the software and the IT assets that are containers. We'll also look at the advantages of containers and why they're becoming such a popular solution both on premises and in the cloud. Then we'll talk about the specific challenges containers pose for licensing and its management, and also cover some of the specific licensing considerations with some of the larger publishers such as Microsoft, Oracle, and IBM.
So I think it's actually interesting that we're talking about what a container is in 2019, when containers have been around for over a decade. But there's actually a good reason for this: even though containers have been around for over a decade, it's really only within the last few years that they've become a viable solution for enterprise organizations that require a certain level of SLA and stability.
The beginnings of containers really do date back to 2008 with LXC, or Linux Containers. And it's important to note that it's an open-source, Linux-based solution, so the container code base is still being contributed to and is largely open source. It wasn't until 2013, though, that Docker entered the market with really enterprise usage in mind. In 2014, Docker actually open-sourced its own code, which is still available today. And then in 2015, Google open-sourced Kubernetes, which it had been using internally along with its own variation of LXC.
Another important piece of history here is that Docker, Google, and others joined what was created and called the CNCF, or Cloud Native Computing Foundation, which is actually creating the standards for containers and orchestration to ensure that no matter what technology you're using for containers, it has a similar feel and function, to allow for cross-compatibility. In 2016, Microsoft, VMware, IBM, and Amazon all joined the CNCF as well. And in 2017, they actually made good on some of the commitments they had made and created container infrastructure support in their top-tier cloud services. So that's again why we can still talk about what a container is today, because it really wasn't until 2017 that containers were considered viable for enterprise usage.
So let's talk about what a container is. Here we have a traditional model. This could be a VM, a physical machine, a cloud IaaS instance; it doesn't really matter. In the traditional model, we have an operating system installed, along with a number of individual processes.
These processes can interact with each other. These could be programs such as SQL Server, Microsoft Office, whatever. And what a container really is, in theory, is just putting a box around an individual process and isolating it. When you do that, it's isolated; it's not able to see any other process unless explicitly allowed.
In the Linux world, this is called namespacing, where you're telling it exactly what folders it can see, opening ports individually, and defining exactly what is seen in that container. So it's an isolated process. Now let's actually open up one of these containers and look inside. As an example, we have NGINX, which is a popular web server like IIS or Apache.
Not only does it include the software process, but it also includes and packages any of its dependencies. So in this example, NGINX requires OpenSSL, zlib, and PCRE for how we're using it, and it actually bundles all of these into a single unit, or container. This is actually fairly revolutionary in terms of the implications, because nothing is actually installed per se, like it would be on a traditional operating system. It's solely when a container runs that it's up and running, it's alive. It's an individual, unique instance as well. And it's also, again, isolated from any of these other processes. Now here's a common problem in IT: let's say we actually want to run two different applications in a traditional model, but their dependencies conflict.
For instance, our NGINX requires OpenSSL 1.0, but we're also running a Node instance that requires OpenSSL 1.1. In the traditional model, that's very complicated to set up. You have to change the configuration files and settings files, and if you ever upgrade, good luck making sure that conflict doesn't revert.
It can cause a lot of issues. However, with containers, because each is independently isolated, there is no conflict. We can run multiple instances of a container, and there simply isn't that sort of configuration issue. So it creates a standard sandbox, and it also means containers are platform agnostic.
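The conflict scenario above can be sketched as two container manifests. This is a hedged illustration: the image tags and file names are hypothetical, and the exact OpenSSL version each base image ships depends on the image, not on anything shown here.

```dockerfile
# Hypothetical manifest for the NGINX service (web/Dockerfile),
# pinned to a base image built against OpenSSL 1.0.
FROM nginx:1.14
COPY nginx.conf /etc/nginx/nginx.conf

# Hypothetical manifest for the Node service (api/Dockerfile),
# pinned to a base image built against OpenSSL 1.1. Each container
# carries its own dependency tree, so the two OpenSSL versions
# never collide on one filesystem.
FROM node:10
COPY server.js /app/server.js
CMD ["node", "/app/server.js"]
```

Because each image bundles its own libraries, both containers can run side by side on the same host without any shared-library juggling.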
When you run a container, it will run the same on a laptop, in the cloud, on a VM, or on premises; it doesn't really matter. In the traditional development world, another common problem is how you've set up your environment and its individual dependencies, ensuring it matches exactly across test, staging, and production.
But that problem is greatly ameliorated with containers. If you're hearing your organization talk about DevOps or microservices, containers are a natural fit, philosophically and technologically, with those models because of this isolation and sandboxing.
Another thing that's unique about containers is that, unlike a traditional VM, which is typically running 24/7, the container typically lives and dies with the process. As soon as you spin up that application, that container is brought to life, and again, it's a unique instance of that container.
If you spin up 10 containers, shut them down, and spin up another 10 containers, there's really no persistence across those containers themselves. You can look at them as unique instances. So containers are typically stateless, or at least work best with stateless applications, where you're not typically saving or storing data.
Now, you can actually set them up to be persistent, so they can store data that lives across the container lifecycle. But by and large, you're using them today with stateless applications. So if we actually look at the average container lifetime in this chart here, we can see that the average is two days.
Again, much different than a VM, which can live for a long time.
So looking at this chart, the average container lives for two days, and it actually breaks that out further. An orchestrated container means it's being managed by an orchestrator, and we'll talk a little later about what that does. When it is managed and orchestrated, that lifespan reduces further, to half a day.
Whereas when it's unorchestrated, it's probably being used more like a traditional VM, so it's living for a much longer period of time. This offers some unique use cases. Google, for instance, who created and open-sourced Kubernetes, that orchestrator, spins up and kills 2 billion containers every single week.
When you spin up an instance of YouTube or Google Search, that's actually a containerized process. As soon as you close out of it, that container goes away, and they're able to dynamically scale and manage their use because of containers. And you can imagine how those container lifecycles are especially short, given that they're tied to web applications, which don't have a long lifespan.
So let's compare containers to VMs; I think that's a worthwhile exercise. At the bottom here, when we're talking about VMs, we have our infrastructure, the host operating system, and our hypervisor. Then we're installing the guest OS on each VM, and then we have our applications and dependencies installed as well. Comparing that to a container, we again have our same infrastructure; this could be an on-premises server, an individual VM, or a cloud instance. Then we have the host OS again, and then the container runtime. But we don't have to worry about installing an operating system on top of this, because containers actually share the core of the operating system. So we're saving space; for instance, a Windows Server OS is probably 30 gigs, so with three VMs we'd actually be saving 90 gigabytes in this instance. But we're not only saving space; we're also saving compute resources.
We don't have to worry about virtualizing the RAM or the CPU or other aspects that are actually being emulated in a traditional VM setting. So not only are containers leaner from a size perspective, they're also lighter and leaner from a compute perspective, which means they can spin up faster, they can start and stop faster, they're smaller, and you can run more of them on a dedicated instance than you could traditional VMs.
So let's look at some of the different technologies and how they interact within the container ecosystem. Again, we have our container, that packaged standalone unit of software along with its dependencies we talked about. Then you also have what's called a container image. A container image is essentially the static binaries; it becomes a container when it's running, so the image is the container in its off state. It's important to distinguish the two because there's actually a hierarchical nature to container images: images are built with Git-like semantics.
There are push, pull, and commit operations, where you're writing changes or pulling changes and then committing them, and that creates a natural audit trail, which, thinking down the line as an ITAM manager, can be very helpful. It actually tracks the changes over time.
And each of these container images is immutable, so whenever you make a change to the container image, it actually creates a new version, or what's called a layer in container language. Then you run that container image, which becomes the container. You also have a container manifest.
This is the Dockerfile in the Docker world. We're going to be mainly referencing Docker, which is that container platform, and Kubernetes, which is that orchestrator, because they have a beyond-dominating market share; there really aren't many solutions other than Docker and Kubernetes.
So we will be heavily referencing them in our examples. The container manifest, though, is the set of instructions that builds the container. And I mentioned that hierarchical model: you can actually create parent and child container images, and you do that through this manifest.
In our previous example, we had that NGINX container, that web server. Now let's say we also wanted to have PHP installed on it. Instead of building a brand-new container image that includes both NGINX and PHP, we can create a new Dockerfile, or container manifest, and use that NGINX container image as our parent image, and then in our manifest include PHP in addition to that parent.
This is useful; think about updates. Let's say that, oh no, Heartbleed just hit: OpenSSL 1.0 is susceptible, and we need to upgrade all of our applications to OpenSSL 1.1. In the traditional VM or physical server model, that's a fairly onerous process. Say we have a hundred different servers that we now have to patch. Hopefully we have an automated way of doing so, but even if that's true, some of those will fail and we have to go verify. Again, it's a very onerous process. With containers, however, you just need to update that one container image with the new version, and it doesn't matter whether you're running one instance, a hundred instances, or 2 billion instances; they're all updated automatically. And with this hierarchical nature, if you change the parent image, any child container images are also updated with that change. From a development and management standpoint, this is pretty revolutionary in terms of updating applications and managing standardized software.
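A child manifest along those lines might look like the sketch below. The registry path and parent image name are hypothetical, and the package-install step assumes a Debian-based parent:

```dockerfile
# Hypothetical child manifest: inherit the team's maintained NGINX
# image (the parent) and layer PHP on top of it.
FROM registry.example.com/web/nginx:1.16

# Add PHP in this child layer; the parent's contents are untouched.
RUN apt-get update \
 && apt-get install -y php-fpm \
 && rm -rf /var/lib/apt/lists/*
```

When the parent image is rebuilt, say with a patched OpenSSL, rebuilding this child picks up the fix automatically; there's no per-server patching.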
And finally, we have our container registry. This is the repository, or catalog, of container images. There are public catalogs such as Docker Hub, where you can pull official Docker images; organizations such as Oracle and Microsoft have official Docker images they've published that you can pull.
But you can also have an internal, private repository where you're creating your own container images and updating them. Some other benefits come from this container registry: because we're using a Git model for the container images and this registry, there's again an audit trail of not only the changes being made to the containers, but who made those changes, who the owner of a container is, and even some deployment information. So from an IT asset management standpoint, from an audit standpoint, there's this natural audit trail occurring through this very lifecycle. There's also tagging. You would tag your images, and most organizations do that at some sort of application level, because containers are built around specific applications or processes.
There's another question: what software sits between the OS and the container to make it OS agnostic? It's the container runtime; that's the standard piece. So that would be Docker, for instance. Docker is handling all of the intelligence that makes it agnostic across environments, and even operating systems to an extent.
Some continued terms we should know about how the container ecosystem works. When we talk about a node, this is a single unit of computing hardware. This could be a physical server or a VM, on premises or in the cloud, et cetera. When you group these nodes together, that makes a cluster.
These are the resources that are pooled together to support the workload. So when you actually spin up a container, it can land on, basically be spun up on, any of the nodes within the cluster it's a part of. And from a licensing perspective, that may have certain implications, or is at least something to be aware of.
There's also the term pod, which is what orchestrators actually manage. They don't manage containers directly; they manage pods. You could have a single container in a pod that's being managed, or you could have multiple containers being managed by the orchestrator.
And then again, there's the orchestrator, which is the intelligence that can dynamically and automatically start up or stop container processes based on availability, need, or demand. It can not only do things like starting and stopping services based on demand, but it can also self-heal.
So if we have some problems with our applications and the process is being terminated early, it'll actually automatically remediate that. It also does things like failover automation. This is how Google is able to dynamically stop and start 2 billion containers every week to meet demand across a global infrastructure: through this orchestration.
It really does allow for that scalability and elasticity the cloud promises and, based on use, delivers on. It was really built around this premise of meeting dynamic demand.
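The self-healing, demand-driven behavior described above is usually expressed declaratively. A minimal Kubernetes Deployment sketch might look like this; the names, labels, and replica count are illustrative, not prescriptive:

```yaml
# Hypothetical Deployment: ask the orchestrator to keep three NGINX
# pods running at all times. If a container crashes or a node dies,
# Kubernetes replaces the missing pods to match the declared state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
```

Scaling up or down is then just a matter of changing `replicas`, or attaching an autoscaler to adjust it automatically with demand.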
So let's talk about the benefits of containers real quick. Again, they're resource efficient, allowing for scalability and elasticity. They're reliable and platform agnostic. They make deploying and developing applications much easier because they offer that consistent sandbox.
And again, they're lightweight, reliable, and secure. So we can stop for a second here before we move on to the IT asset management portion: are there any questions about containers from a technological perspective?
So if we actually look at what's running in containers today, here are the 12 most popular pieces of software, and you'll notice that these are all open source. Again, because containers came from the Linux world, a lot of them are still those open-source, Linux-based applications. And these are also, if you look, mostly backend or frontend web services, which I think is interesting.
To some extent, with open-source applications it doesn't really matter if I'm running one container or a billion; that won't change the attribution requirements. So for open-source software, there may not be as many implications, depending on the use rights.
For commercial applications, however, there certainly are, and we are seeing a greater push for commercial applications within containers. That's something being pushed by the actual software publishers themselves, as well as demanded by customers. We'll talk a little more about that later.
Looking at the actual adoption of this, Docker specifically, I think it's interesting to see that between 2015 and 2018, 25% of companies adopted Docker in some manner, either in non-production or production environments. There's actually a current projection by Gartner that by 2020, 50% of organizations will be running containers in their environment, either in a non-production or production capacity.
Also, within organizations that have already adopted container usage, the deployment size has increased: between 2017 and 2018, organizations that had been using containers increased their footprint by 75%. And again, we see this trend continuing.
If we look at this from a revenue perspective, in 2019 containers will be supporting $2 billion worth of revenue applications, and by 2020, $2.7 billion. So we're not only seeing container usage increase, but we're seeing it increase year over year. And it does make sense, because containers traditionally may not make sense for hosting hugely monolithic applications.
It does make sense, though, for these stop-and-start applications to meet demand. We have a question: are these containers end-use or hosted? I think we'll answer that in a few of these slides. When organizations were asked what they're actually deploying containers on, interestingly, a large portion of organizations are actually deploying these on VMs themselves, which to an extent almost doesn't make sense, because you are adding another layer of abstraction and compute: you are not only emulating the CPU in the VM, but you're abstracting that further by putting the container environment on top of it.
But it also makes sense if you're in the early stages of container adoption; you probably can't dedicate an entire bare-metal server or rack to containers while you're dipping your toes and getting your feet wet in their use. So I imagine this will change over time. There's also large use within public and private clouds, which I think is where containers really do shine.
So here's an illustration, again using our earlier view: we have our hypervisor, guest OS, and then a container runtime on top of it. And then we have a survey question: do you intend to replace virtual machines with containers? About 50% yes, 50% no. I think this is interesting, and I actually like this response, because containers are not the end-all, be-all killer of VMs.
VMs still largely make sense where you do need to be running an application 24/7 and there's not going to be much in the way of dynamic start-up and shut-down scaling. VMs have advantages there that containers don't. However, there are other use cases where containers make far more sense, especially those web-based applications, those on-demand use cases where that scalability and dynamism are needed.
So not only are containers offered as a cloud solution, but they have a specific offering. You may hear the term CaaS, or Containers as a Service, and wonder where that fits in with traditional IaaS and PaaS solutions. It may really be considered as sitting in the middle.
It is a subcategory of IaaS: the infrastructure is still being served, but instead of just being served the OS, without having to manage the virtualization or anything beneath that, you're handed the container runtime, that Docker instance, and can deploy your containers from there.
Whereas with PaaS, you're really just being handed the software directly and managing the software and its data. So it really does sit in the middle, between PaaS and IaaS. And again, Azure, IBM, AWS, Google, and OpenStack all offer end-to-end container solutions. You can run not only your Docker instances and containers in the cloud, but you can also run your registry with Azure's registry services and Azure Kubernetes services.
You can also do your orchestration in the cloud and really have an end-to-end cloud solution for your container stack. OpenStack, which is Rackspace, is probably the only one with a competing solution to the traditional Docker-Kubernetes one-two punch.
They're pushing what's called Apache Mesos, which is very similar to containers but has a few unique features that push it more into the serverless space. So we might be hearing more about them in the upcoming years. But right now, the market share is 95-plus percent Docker and essentially 100% Kubernetes.
It's interesting: when you look at the top five orchestrators, it's Kubernetes across the board, because the other four are just offshoots of Kubernetes. Again, because it's open source, you can easily create a fork of the code.
So from an IT asset management standpoint, we're probably wondering how to manage something like a container that can spin up and spin down in a matter of minutes, with 2 billion instances being spun up over a week. There are obviously some inherent difficulties in managing containers.
But even beyond that, nothing is actually installed in a container; it's just sitting there, and once that container lives, it is running. And it can't be scanned via traditional methods. If it's a Windows container, you can't connect to it with WMI; it's not AD-joined. With Linux-based containers, you wouldn't be able to SSH into it or scan it via traditional methods.
Also, if you scanned the container image sitting on whatever node it's on, you may actually misinterpret it as being installed on that host, rather than being part of a container image that could be running any number of instances. So this is a particular challenge today, especially when we look at the fact that there really is no support, especially from a software discovery perspective, in traditional SAM and ITAM tools.
ServiceNow actually does offer some basic ITAM capabilities for tracking the use and lifecycle of containers. But by and large, you're out of luck from a discovery standpoint, with a few exceptions. IBM's ILMT and BigFix are actually able to scan Docker containers for IBM software, if the BigFix or ILMT agent is on the host.
But beyond that, software discovery is something that largely doesn't exist yet, and we're waiting for tool vendors to catch up. It isn't all bad, though, and in some cases it's a double-edged sword. When we talk about the ease of deployment, this is going to make updating software easier.
Even deploying and getting rid of software becomes easier, which from an IT asset management standpoint is great. But this ease of deployment also makes mistakes easier. For example, in the DockerCon 2017 keynote, Oracle had just been onboarded as a Docker partner; they had just released their Docker images.
As part of that keynote, they talked about a hypothetical scenario where two engineers went on vacation and came back to find the company had been acquired and was now required by its new owner to spin up an application that required Oracle's enterprise database.
And not only that, they only had a week to do it. So in the scenario, they scrambled and then determined: oh, we can actually just pull an Oracle DB container and spin it up. And so they had enterprise-grade Oracle database software running within five minutes. Now that's great.
That's amazing from a technology standpoint. But let's also think about the licensing implications: they just spun up potentially a hundred thousand dollars' worth of software in a matter of five minutes, with little in the way of checks, balances, or oversight. Because it's so much easier now, it lowers the barrier to entry for this commercial software, and we have to be far more vigilant about ensuring the proper policies and procedures are in place, as well as training our developers and IT admins not to do that, because they have such easy access to this software now. It really does demand mature SAM processes and procedures.
But the good news, again, is that this container technology stack has that audit trail built into the ecosystem. You have the registry, which is keeping track of who's pushing and pulling, who's making these changes, who created these containers. I can't tell you the number of times we've gone into a customer's organization and found SQL Servers they didn't know about, licensable SQL instances. And the first thing we say is: okay, who owns these? Let's go see if they actually need them. And crickets; they don't know who owns them. So even just having that piece of information is incredibly valuable. Also, the container manifest itself should reference either the parent container or, typically, the application it's built to support.
Again, because containers are typically built around a single application, that actually makes discovery a bit easier. It's not like you have ten different things installed, as you would on a VM, and have to worry about all of that; it's really all living around that single application it's built for. Also, within the registry, these container images are tagged, and hopefully within your organization you've tagged them in a way that helps you with this process. If not, you can create tags yourself and say, hey, we're creating an NGINX tag, or an open-source tag, or however you want to do it, so you can easily group and manage these containers.
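To illustrate that grouping idea, here's a minimal Python sketch. All of the image names, container IDs, and fields below are invented for the example; in practice this data would come from your registry, container runtime, and orchestrator APIs, not hard-coded lists.

```python
# Hypothetical sketch: merge container data from the runtime and the
# orchestrator, then group containers by image tag for license review.
# All names and fields below are illustrative, not a real API.

runtime_containers = [
    {"id": "c1", "image": "registry.local/oracle-db:19c", "node": "node-01"},
    {"id": "c2", "image": "registry.local/nginx:oss", "node": "node-02"},
]

orchestrator_state = {
    "c1": {"started": "2019-03-01T08:00:00Z", "replicas": 3},
    "c2": {"started": "2019-03-01T09:30:00Z", "replicas": 1},
}

def merged_inventory(runtime, orchestrator):
    """Join each runtime container with its orchestrator scheduling data."""
    inventory = []
    for container in runtime:
        record = dict(container)
        record.update(orchestrator.get(container["id"], {}))
        inventory.append(record)
    return inventory

def group_by_tag(inventory):
    """Bucket container IDs by the tag portion of their image reference."""
    groups = {}
    for container in inventory:
        tag = container["image"].rsplit(":", 1)[-1]
        groups.setdefault(tag, []).append(container["id"])
    return groups

inventory = merged_inventory(runtime_containers, orchestrator_state)
print(group_by_tag(inventory))  # {'19c': ['c1'], 'oss': ['c2']}
```

The point of the sketch is simply that consistent tagging turns the registry into a grouping key you can report against.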
But you will have to pull data not only from the container runtime, in terms of how these containers are actually being used and spun up, but also from the container orchestrator. You really do have to go to both sources to get that usage information. So, a question: could you make a dash-2 ID tag part of the manifest?
So, the dash-2. For those who may not know what that refers to: it's the ISO 19770-2 standard, the software identification or SWID tag. Those are typically created and pushed by the manufacturer, the software publisher. So it would be the software publisher who is supposed to include that in the image, and hopefully they do. If not, you could certainly create your own dash-2 tag, it just wouldn't be signed, and include it as part of the manifest. So yes, you could do that. One interesting thing about the dash-2 tag, and containers in particular, is that each container is a unique instance.
So if I stop a container and start it again, and then stop and start it again, each one of those is its own life, its own instance. It's unique, and so you'd actually be creating a new dash-2 tag every single time. In that Google example of two billion containers a week, that's two billion new dash-2 tags every week, compounding.
Which is an interesting thing about how the dash-2 tag works in reality versus how it was intended to work; by and large, the ISO standards weren't built to account for this use case. You could still bring those dash-2 tags in from an inventory perspective, although you'd need some intelligence behind them as well, to understand how the software is actually being used and to take into account the amount of time it's running, which the dash-2 tag really wouldn't tell you. That would be more the RUM tag, the dash-4 or resource utilization measurement tag, which would give you more of that information.
So again, by itself the dash-2 tag is helpful, but not everything you'd need, because it really wouldn't describe the life cycle of that container.
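As a rough sketch of what a self-issued dash-2 tag could look like, here's a minimal Python example. The entity name and product values are made-up placeholders, and a real publisher-issued tag would be signed and carry many more elements; note how each call mints a fresh tagId, which is exactly the compounding behavior just described.

```python
# Illustrative sketch: generate a minimal, UNSIGNED ISO 19770-2 (SWID) tag
# for a container instance. Real publisher tags are signed and richer;
# the entity and product names here are placeholders.
import uuid
import xml.etree.ElementTree as ET

def make_swid_tag(product_name, version):
    """Return a minimal unsigned SWID tag as an XML string.

    Each container start would mint a new instance, so a new tagId is
    generated on every call -- the 'compounding tags' effect for
    short-lived containers.
    """
    tag = ET.Element("SoftwareIdentity", {
        "xmlns": "http://standards.iso.org/iso/19770/-2/2015/schema.xsd",
        "name": product_name,
        "version": version,
        "tagId": str(uuid.uuid4()),
    })
    ET.SubElement(tag, "Entity", {
        "name": "Internal SAM Team",  # assumption: self-issued, not publisher-signed
        "role": "tagCreator",
    })
    return ET.tostring(tag, encoding="unicode")

print(make_swid_tag("Oracle Database", "19.3"))
```

Generating one of these per container start is trivial; the hard part, as discussed, is the downstream intelligence that correlates all those tags with runtime duration.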
Not a problem, and let me know afterward if there are additional questions on that. It's an interesting implication. I think the main takeaway for me, from this discussion of how to manage containers and these sorts of upcoming serverless applications, is that we need not only mature software asset management solutions and processes in place, but also more real-time data.
Especially with software as a service or IaaS, where you're actually being billed per minute or per hour, it's no longer good enough to have quarterly reports looking retroactively, because by then you may have accidentally deployed a hundred thousand dollars' worth of software or gone over your IaaS budget.
We really do need more real-time solutions, which is what we're trying to help companies implement and create, although from a tool perspective the options are still fairly limited. So let's talk about some specifics from a licensing perspective. We have about fifteen minutes left, but if there are any questions, I'm willing to stay over afterward.
So Microsoft has two different types of containers. They have...
All right, I'm going to hardwire in, apologies. So, Microsoft has two different types of containers. The first are Windows Server Containers, or WSCs, and these work like traditional Linux containers: they share the host's kernel, so you can actually run as many of them as you'd like without any further licensing implications.
The caveat here is that the container OS has to match the host kernel OS exactly. So if you're building a Windows Server 2008 R2 container, it can only be deployed on a Windows Server 2008 R2 host. But again, the advantage is that you can run any number of them with no additional licensing requirements.
Hyper-V containers have the benefit that the versions don't have to match; I can run a Windows Server 2008 container on Windows Server 2016, or whatever. But there are additional licensing requirements: you basically have to license each of those container operating systems. Luckily, Microsoft by and large treats containers like VMs.
So if you know Microsoft's VM licensing already, you pretty much automatically know the container licensing. A Windows Server Standard license can cover up to two VMs; Datacenter can cover unlimited VMs; and the same holds for containers. With applications that holds true as well. Take SQL Server as an example: just like with VMs, there's a four-core minimum, and once you license the entire host with SQL Server Enterprise, you can run an unlimited number.
One thing to be cautious about, though, is that when you're spinning up a container, it can come up on any node in the cluster; you don't dictate that. And with VMs you explicitly configure each one, assigning, hey, I want this VM to have two virtual cores and that VM to have eight. With containers, by default, a container uses all resources available to it, and you'd have to explicitly restrict it, which you can do, but that's a deliberate step you'd have to take. So by default it's going to use everything available to it; keep that in mind.
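As a back-of-the-envelope illustration of that default, here's a small Python sketch of the worst-case core exposure. The four-core floor mirrors SQL Server's per-core minimum, but the cluster sizes and the function itself are purely illustrative, not licensing advice.

```python
# Sketch of the worst-case exposure described above: an unrestricted
# container can be scheduled on any node and use every core there, so a
# conservative position licenses all cores in the cluster. Numbers are
# illustrative only; this is not licensing advice.

CORE_MINIMUM = 4  # SQL Server's per-core licensing floor

def cores_to_license(cluster_nodes, restricted_cores=None):
    """cluster_nodes: {node_name: physical_cores}.
    restricted_cores: cores explicitly pinned to the container, if any."""
    if restricted_cores is not None:
        # Explicit restriction: license the pinned cores, subject to the floor.
        return max(restricted_cores, CORE_MINIMUM)
    # Default: the container may land anywhere and use everything.
    return sum(cluster_nodes.values())

cluster = {"node-01": 16, "node-02": 16, "node-03": 8}
print(cores_to_license(cluster))                      # 40 -- the whole cluster
print(cores_to_license(cluster, restricted_cores=2))  # 4  -- the floor applies
```

The difference between 40 cores and 4 is exactly why explicitly restricting container resources matters for licensing.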
Also, with Windows Server containers, there are two stripped-down operating systems Microsoft has created that are probably the most commonly used: Windows Server Core and Nano Server. Because they are smaller, stripped-down versions, that's what you'd want to be using in containers.
However, with Nano Server, because it's offered only via the semi-annual update channel, you'd have to have Software Assurance on both the OS and the CALs (client access licenses) that are accessing or interacting with it. So again, Microsoft actually makes things fairly easy overall, but you do have to be conscientious about some of these differing factors with containers.
Now, with Oracle: as you probably know if you have any familiarity with Oracle licensing, Oracle takes a pretty hard-line approach to virtualization. And unsurprisingly, Docker is considered soft partitioning, as opposed to hard partitioning. With hard partitioning, you're able to use a subset of the environment and license only that subset.
With soft partitioning, which again is what Docker is, you basically have to license the entire infrastructure supporting that container. So that would be every single node within the cluster. Perhaps the good news, though, is that once you have licensed that entire infrastructure, you can typically run more containers than you could VMs.
So you could still get more bang for your buck that way. There is an exception: Oracle's own Solaris 10 containers can be configured as capped zones, and in that instance you actually can license just the subset. But with Docker itself that's not a possibility; you're always licensing the entire infrastructure supporting those containers.
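To put rough numbers on that, here's a small Python sketch of the soft-partitioning math. The 0.5 core factor is just the common value for mainstream x86 processors; always check Oracle's current core factor table, and treat the whole thing as an illustration rather than advice.

```python
# Sketch of the Oracle soft-partitioning exposure described above: with
# Docker, every node that could host the container counts, so processor
# licenses are the sum of each node's cores times its core factor.
# The 0.5 factor is typical for common x86 chips; verify against the
# current Oracle core factor table. Illustrative only.

def oracle_processor_licenses(nodes, core_factor=0.5):
    """nodes: {node_name: physical_cores}. Returns processor licenses needed
    when the whole cluster must be licensed (soft partitioning)."""
    return sum(cores * core_factor for cores in nodes.values())

cluster = {"node-01": 16, "node-02": 16, "node-03": 16}
print(oracle_processor_licenses(cluster))  # 24.0 licenses for the whole cluster
```

A single pulled container on a 48-core cluster can therefore carry the same license requirement as running the database on every node.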
So again, in that keynote speech example we gave, they deployed that container, and they'd have to license every single node in the cluster they deployed it on. And we have another question: for vendor-provided containers, Oracle for example, does the vendor have an audit trail of who downloaded the licensed container?
So the vendor itself would not have visibility into what you've pulled from the container registry, at least not from Docker Hub. However, if you had cloned that container image into your own registry, then you would know who had pulled the container, along with some of that deployment information.
Does that answer your question?
And then IBM. Similar to Oracle, Docker is not sub-capacity-eligible virtualization; however, you can use it in conjunction with sub-capacity. What that means is that if you were running Docker directly on a host, with your containers on top, you'd have to license the entire host. However, if you had Docker installed on a virtual machine, which earlier I said was a silly thing to do, in this case it can make sense, because then you can license a subset: you only have to license the cores that the VM actually has access to and is using, because you're licensing the VM itself rather than the entire host.
So again, while you cannot run Docker directly on a host without licensing the entire host, you can run it on a sub-capacity-eligible virtualization technology, and in that case you can use sub-capacity licensing. To do that, you'd have to have either a BigFix or ILMT agent installed on the individual VM.
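Here's a simple Python sketch of that full-capacity versus sub-capacity contrast in PVU terms. The 70 PVU-per-core figure is a common value for x86 processors, but everything in this snippet is illustrative only; actual PVU ratings come from IBM's processor value unit table.

```python
# Sketch of the IBM PVU contrast described above: Docker directly on a
# host means full-capacity licensing of all host cores, while Docker
# inside a sub-capacity-eligible VM (tracked by ILMT/BigFix) licenses
# only the VM's cores, capped at the host's. Numbers are illustrative.

PVU_PER_CORE = 70  # common x86 rating; check IBM's PVU table for your CPU

def pvus_required(host_cores, vm_cores=None):
    """vm_cores=None models Docker on bare metal (full capacity)."""
    if vm_cores is None:
        chargeable = host_cores
    else:
        chargeable = min(vm_cores, host_cores)  # sub-capacity cap
    return chargeable * PVU_PER_CORE

print(pvus_required(32))              # 2240 -- Docker on the host itself
print(pvus_required(32, vm_cores=4))  # 280  -- Docker inside a 4-core VM
```

The same container workload can thus cost a fraction of the PVUs simply by sitting inside an eligible, ILMT-monitored VM.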
And again, ILMT is able to discover IBM software on Docker as of version 9.2.0.5, which came out in December of 2017. Now, a question on Oracle: if the infrastructure supporting the Docker environment is on approved hard-partitioning technology, do you still license the entire physical host or cluster? No. In that case, you would only have to license that capped zone, that hard-partitioned subset.
However, the only way Oracle says that's allowed is via Oracle Solaris 10 containers in a capped zone.
So that's the content that I had. Any questions that we didn't go over?
Okay, we'll give everyone a minute to type if needed. There are some upcoming events: we'll put out a version of this presentation as a blog post, and this webinar will be released on demand tomorrow so you can view it again. We also have some training offerings available, and some of our new offerings are around mainframes, as well as SAM and ITAM certification trainings, including many of the IAITAM offerings such as CHAMP and CSAM. So if that's something you're looking for, Anglepoint is able to provide it. I don't see any questions at this time, so I'll stick around for another minute or two in case any come up.
But otherwise, I appreciate everyone who joined. And again, feel free to reach out afterwards if you have other questions or comments you'd like to discuss; we'd be happy to. So thank you all, I appreciate your time.
Yeah, awesome. Thank you, Trent, for that great presentation. Like Trent just said, feel free to connect with Trent or Ben, and don't hesitate to contact us on our website or email us.
But that is it from us, and it looks like we're done with the questions. So thank you for attending today, and we'll see you next time.