

The Artificial Intelligence (AI) Revolution & IT Asset Management (ITAM)

We are currently living in what can only be described as the first artificial intelligence (AI) revolution. It may be hard to recognize such changes when you are living through them day to day, but make no mistake: AI is having the largest impact on the world since the mobile revolution that put an always-connected device into everyone’s hand and pocket.

That may sound like a lofty proclamation, but AI is, or soon will be, impacting nearly every aspect of our lives. Self-driving vehicles are in use all around us and getting better every day. AI-enabled assistants like Siri, Alexa, Cortana, AliGenie, and Google Assistant are ubiquitous. You can have AI services transcribe your video conference in real time, including translation into over 100 languages. You can enable real-time AI-assisted noise suppression so people won’t hear you snacking on chips or your dog barking in the background during a call. Apple’s FaceTime uses an AI-based filter to make it look like the person you are FaceTiming is looking at you instead of their screen. You can even employ AI services to write posts and articles like this one that are indistinguishable from human-written ones. “Wait, are you saying an AI bot wrote this article?” It might have…and you would never know the difference.

This First Great AI Revolution is being driven, in part, by a newly lowered barrier to entry for developing and utilizing AI solutions. Tools like TensorFlow, Keras, and PyTorch allow developers to train models and create AI-based applications without being specialized data scientists or expert machine learning engineers. And while most AI services run in the cloud, hardware too is starting to incorporate AI accelerators—silicon dedicated to boosting the performance of AI training and inference—such as NVIDIA’s Tensor Cores or Apple’s Neural Engine. These and other factors have allowed AI performance to increase at a truly unprecedented rate over the last nine years. As noted by researchers in Stanford’s 2019 AI Index Report, AI compute power has significantly outpaced Moore’s Law, doubling every 3.4 months since 2012. Let the reality of that statement sink in. To put a finer point on it, AI performance has been accelerating roughly 25x faster (1054% CAGR vs. 41% CAGR) than the miniaturization of the integrated circuits at the heart of the modern digital revolution. AI stands to dramatically enhance nearly every aspect of technology as we know it, or rather knew it. Companies, organizations, industries, even countries that do not adapt to make use of AI innovation will simply be left behind.
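For readers who want to check the arithmetic behind that comparison, a doubling period converts directly into a compound annual growth rate. The short Python sketch below assumes the figures cited above (AI compute doubling every 3.4 months) plus a commonly quoted Moore’s Law doubling period of roughly 24 months; it is an illustration, not part of the cited report.

```python
# Convert a doubling period into an implied compound annual growth rate (CAGR).
def annual_growth_rate(doubling_period_months: float) -> float:
    """Return the CAGR implied by a given doubling period (1.0 == 100%/year)."""
    doublings_per_year = 12 / doubling_period_months
    return 2 ** doublings_per_year - 1

ai_cagr = annual_growth_rate(3.4)      # assumed AI compute doubling period
moore_cagr = annual_growth_rate(24.0)  # assumed Moore's Law doubling period

print(f"AI compute CAGR:  {ai_cagr:.0%}")    # roughly 1055%
print(f"Moore's Law CAGR: {moore_cagr:.0%}") # roughly 41%
print(f"Ratio:            {ai_cagr / moore_cagr:.0f}x")  # roughly 25x
```

Running this reproduces the orders of magnitude quoted above: about 1,055% annual growth versus about 41%, a ratio of roughly 25x.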

Certainly, the ITAM industry is no exception. After all, ITAM as a discipline exists largely because it is difficult to track & control IT assets, monitor their use, and optimize their cost throughout their lifecycle. The rate of technological change (e.g., ephemeral infrastructure, XaaS, BYOL, etc.) has always presented effective ITAM with an ever-evolving set of challenges. However, some of these challenges, as well as some of the more traditional issues that have plagued effective ITAM, are problems that AI is particularly well suited to help solve.

What Exactly Is AI Again?

Artificial intelligence is a broad term describing the techniques a machine uses to learn and perform tasks without being explicitly told how to perform them. The term machine learning (ML) is sometimes used synonymously, although it is technically a subset of AI; it is simply the primary method used today to enable computer algorithms to improve automatically through experience. The field of ML has several branches, but what we will be focusing on is deep structured learning, which uses a deep neural network (DNN)—an approach to cognition inspired by how the human brain works with its network of carbon-based neurons. Machine learning breaks down into two phases: “training” a model on existing data with specified results, and “inference,” which uses the trained model to predict results within a new dataset.

Training
You first need to train a deep learning model by giving it large sets of structured data. Typically, the data has the end result specified so that the machine can check its results as it “learns” to solve a given problem. An artificial “neuron” is the basic unit of computation: a mathematical construct that receives a number of inputs and has “synapses,” or connections, to other neurons, forming a network. Each connection has a “weight” representing its strength, and these weights change over the course of training as the machine’s neural network recognizes patterns and optimizes against the specified results in the training data.
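As a rough illustration of what that training loop looks like in practice, here is a minimal sketch using PyTorch, one of the frameworks mentioned earlier. The toy dataset, network shape, and hyperparameters are illustrative assumptions, not a model we actually use.

```python
# A toy training loop: the weights on connections between artificial neurons
# are repeatedly adjusted so the network's predictions match labeled examples.
import torch
import torch.nn as nn

# Illustrative labeled data: two input features per example, binary label.
X = torch.rand(256, 2)
y = (X.sum(dim=1) > 1.0).float().unsqueeze(1)   # the "specified results"

model = nn.Sequential(            # a small network of artificial neurons
    nn.Linear(2, 8), nn.ReLU(),   # each Linear layer holds trainable weights
    nn.Linear(8, 1), nn.Sigmoid(),
)
loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

for epoch in range(200):
    pred = model(X)               # forward pass through the network
    loss = loss_fn(pred, y)       # how far predictions are from the labels
    optimizer.zero_grad()
    loss.backward()               # measure how each weight contributed to the error
    optimizer.step()              # nudge the weights to reduce that error
```

Each pass nudges the weights slightly; over many passes the network settles on a set of weights that reproduces the labeled results.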

FIGURE 1: Simple Neural Network vs. Deep Neural Network

FIGURE 2: An animated illustration of training a machine learning model. The model uses a DNN to identify patterns and assign weights to training data over time to solve a problem. The data points start off in a random distribution before organizing into a distinct shape/relationship. Some of the movements may appear aberrant because the visualization of the relational data is limited to a three-dimensional view (XYZ axes), while this particular model actually has 16 dimensions. The training also takes place over several hours or days but is sped up for illustrative purposes.

Inference

Once the model is trained, you then submit new data to it to predict new results. This is called “inference” because the model attempts to infer an answer to the new problem, based on its training, by sending the input through the neural network and assessing the results. Typically, layers pass analyzed data in one direction, but more advanced networks let layers influence one another’s analysis in multiple directions. The more layers, and the more multidirectional the analysis flowing between those layers, the longer it takes to train a network, but the more complex the inferences we can make, and therefore the more complex the problems we are able to solve.
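Continuing the toy PyTorch sketch from the training section above (again, purely illustrative), inference is simply a forward pass through the already-trained network with weight updates switched off:

```python
# Inference: send new, unseen data through the trained network and read off
# its predictions. `model` is the toy network trained in the sketch above.
new_samples = torch.tensor([[0.9, 0.8],
                            [0.1, 0.2]])

model.eval()               # disable training-only behavior
with torch.no_grad():      # no gradients or weight updates during inference
    probabilities = model(new_samples)

predictions = (probabilities > 0.5).int()   # threshold into class labels
print(probabilities.squeeze().tolist(), predictions.squeeze().tolist())
```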

Perhaps the most advanced DNN today is OpenAI’s GPT-3 model, which is used for language processing. GPT-3 is trained on almost 500 billion words from English language books and websites and currently has 175 billion weights. With this staggering amount of multidirectional complexity and layered depth, GPT-3 can create poetry, write articles, and develop code. As you can imagine, training an advanced DNN model like this can be very computationally expensive, and those costs increase exponentially as you increase the size of the training set, the amount of multidirectional data flow, and/or the number of neurons and layers.

ELEVATE & AI

ELEVATE is Anglepoint’s proprietary managed services delivery platform, developed entirely in-house and applied to ITAM. It’s a cloud-based platform we use to manage work breakdown structures, distribute custom tasks & procedures, securely transmit data, and perform complex data analysis. We have always been strong believers in innovative automation as a way to stay competitive in the marketplace. Automation allows us to reduce the time to value of our services, while increasing our accuracy, consistency, and quality of results for our clients. It also allows our consultants to focus more on the finer points of ITAM like governance, policies, controls, processes, organizational change, and other complex problems, rather than on rote analysis.

What is interesting, though, is that there are tasks which are very easy for humans to perform but very difficult for machines. One example is optical character recognition (OCR), e.g., differentiating “1” from “l” from “|” within an image of text. The human brain is great at visual computation, in part because it has had millions of years of “training” in the form of evolutionary selection and optimization. Computers, on the other hand, have historically been very bad at OCR, which is why all the early CAPTCHA programs relied on it. (Ironically, everyone entering CAPTCHAs was, and still is, creating datasets that help train ML models which can then be used to solve CAPTCHAs.)

In the field of ITAM, one of these “easy for humans but hard for computers” problems has been distinguishing human names from service accounts in datasets such as Active Directory exports. This is a common analysis needed to ensure an organization isn’t assigning Software as a Service (SaaS) subscriptions, Client Access Licenses (CALs), etc. to Robotic Process Automation (RPA) and service accounts. It is also a very tedious and manual task. To try to automate it, we initially created some sophisticated rule sets to help identify obvious offenders, but even those rules would produce significant false positives, especially as we have a global client base with offices all over the world. Even a simple rule such as, “IF Name contains ‘admin’ or ‘user’, THEN…”, falls apart when you have transliterated German, Thai, Mandarin, and Hindi names, for example.
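To make that failure mode concrete, here is a deliberately simplified sketch (in Python) of the keyword-rule approach; the keywords and account names are invented examples, not our actual rule set.

```python
# A simplified keyword-based rule set for flagging service accounts.
# Real-world, multilingual account names quickly break this approach.
SERVICE_KEYWORDS = ("admin", "user", "svc", "service", "bot")

def looks_like_service_account(account_name: str) -> bool:
    """Flag an account if its name contains any service-like keyword."""
    name = account_name.lower()
    return any(keyword in name for keyword in SERVICE_KEYWORDS)

print(looks_like_service_account("svc_backup_01"))    # True  (correct)
print(looks_like_service_account("Matthias Hauser"))  # True  (false positive:
                                                      #  the surname contains "user")
```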

Unsatisfied with the accuracy of this rule-set approach, we then tried a classic statistical ML approach that relied on regression analysis to solve the problem. We ended up with 97% accuracy, which might sound pretty good. But when you have 100,000 users, as some of our clients do, that means on average 3,000 will still be classified incorrectly. We went back to the drawing board and implemented a DNN model that could look at far more features & patterns and consume data at a scale & complexity that would be much harder for humans to handle. After training, our model ended up with 16 layers and 10,000 weights. Even though training takes a significant amount of compute power and time, it would be nearly impossible to manually create a set of rules that does the same. The model initially had an accuracy of 99.5%. However, because we constructed it as a DNN within our ELEVATE platform, our consultants are able to manually check the results in which the model has low confidence, fix any problems, and feed the corrections automatically back into retraining the model. As the model has continuously improved over time, the DNN’s accuracy in categorizing user accounts has since risen to 99.7%.
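The human-in-the-loop pattern described above can be sketched as follows. This is a conceptual illustration only, not ELEVATE’s actual code; the `featurize` helper, confidence threshold, and model interface are assumptions made for the example.

```python
# Sketch of confidence-based triage for a trained account classifier.
# `model` is assumed to return P(account belongs to a human); `featurize`
# is an assumed helper that turns an account name into model inputs.
CONFIDENCE_THRESHOLD = 0.90  # illustrative cutoff for automatic acceptance

def triage(account_names, model, featurize):
    """Split predictions into auto-accepted results and low-confidence reviews."""
    auto_accepted, needs_review = [], []
    for name in account_names:
        p_human = float(model(featurize(name)))          # inference
        label = "human" if p_human >= 0.5 else "service"
        confidence = max(p_human, 1.0 - p_human)
        record = {"name": name, "label": label, "confidence": confidence}
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_accepted.append(record)
        else:
            needs_review.append(record)                  # routed to a consultant
    return auto_accepted, needs_review

# Consultant-corrected records from `needs_review` are appended to the
# training set, and the model is periodically retrained on the enlarged data.
```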

FIGURE 3: ELEVATE’s DNN for user account classification. An animated illustration of a three-dimensional slice of an early version of this classification model which shows the relative relationships of various strings. The system account “admin” shows up on one side with its like counterparts, while the account with a human name “matt” shows up on the opposite side with its own like counterparts.

This human/system account classification is just one problem—there are many others that deep learning is well suited to help solve. For example, we have also applied DNN models to software normalization. For Microsoft, we have over 60,000 unique installation strings for discretely licensable products, which traditionally have been normalized either manually or through logical rule sets. For IBM, the problem is even more complex, with over 100,000 unique installation strings for a single product. Using DNNs instead of more traditional techniques, we are able to quickly normalize the names of discretely licensable products with an extremely high degree of accuracy and automatically retrain the model for continuous improvement. As most ITAM/SAM managers know, IBM software also has the additional problem of product bundling, in which a DB2 license, for example, can be bundled with an overwhelming number of other products, which don’t even have to be installed on the same machine. The IBM License Metric Tool (ILMT) uses traditional mapping techniques that rely on calculating simplistic confidence intervals, which leaves the user at a loss when trying to figure out which products to bundle together. Because ILMT and bundling are used to determine license consumption, getting it wrong even a small percentage of the time often results in significant underlicensing of, or overspending on, IBM software. By applying a series of complex DNNs, as well as other proprietary innovations, to this persistent problem, Anglepoint is able to perform IBM bundling accurately, quickly, and efficiently to the benefit of our clients.
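To give a feel for the input/output shape of the normalization task, here is a heavily simplified sketch that frames it as text classification. It uses a classical scikit-learn pipeline as a stand-in for the DNN approach, and the installation strings, normalized product names, and library choice are illustrative assumptions rather than our production method.

```python
# Framing software normalization as text classification: map raw installation
# strings to canonical, discretely licensable product names.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

raw_strings = [
    "Microsoft SQL Server 2019 (64-bit) Standard Edition",
    "MS SQL Srv 2019 STD x64",
    "Microsoft Visio Professional 2021 - en-us",
    "Visio Pro 2021",
]
normalized_names = [
    "SQL Server 2019 Standard",
    "SQL Server 2019 Standard",
    "Visio Professional 2021",
    "Visio Professional 2021",
]

# Character n-grams tolerate abbreviations and vendor-specific spellings.
normalizer = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
normalizer.fit(raw_strings, normalized_names)

print(normalizer.predict(["Microsoft SQL Server Standard Ed. 2019 64bit"]))
```

In practice the training set is tens of thousands of strings per publisher, and low-confidence predictions are reviewed and fed back into retraining, as with the account classifier above.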

The Future of ITAM

Effectively managing IT assets is getting harder, not easier. While this might sound discouraging, it is the result of some incredible developments that have made IT infrastructure deployment and configuration far more responsive, dynamic, and scalable. Significant recent developments in DevOps, particularly in the areas of IT configuration and management, have allowed for increased innovation with less time being spent on maintenance. Containers make dynamic scaling of applications and multi-cloud deployment easier than ever before. Software-defined networking (SDN) and storage (SDS) allow for centralized data delivery and vastly easier management. The Internet of Things (IoT) is poised to revolutionize manufacturing and supply chains, with every piece of equipment in the system becoming a networked device that sends and receives telemetry data. Such innovations and changes not only make ITAM more challenging, but they also increase the need for a new level of dynamic and real-time solutions.

The trend toward everything “as a Service” (XaaS) has also sped up as the maturity and flexibility of such service offerings has increased. Already, cloud services are being billed by the minute or even by the second. Geolocated services can be provisioned and de-provisioned by the thousands within seconds. Traditional ITSM systems & processes, and quarterly baselines & license positions, simply won’t cut it in this new world of XaaS and hyper-converged infrastructure (HCI). By-the-minute billing necessitates by-the-minute reporting and the next-gen processing engines to support it accurately. IT asset managers must be able to offer real-time insights to real-time decision makers if we are to remain relevant to the organizations we serve. And the only way that will be possible is with AI-driven automation.

As an ITAM industry, we are still in the early stages of harnessing the power of this First Great AI Revolution, but we must demand from ourselves the higher level of excellence that AI makes possible. We must leverage AI to create the solutions necessary to monitor, report, reconcile, secure, and provide actionable asset intelligence in real time. At Anglepoint, we’ve already tackled a host of problems that previously seemed impossible to solve through automation, and in the full light of this new era of accelerated AI for the masses, we’re excited for what the future holds.
