
Security & ITAM in the Public Sector

Listen to “Security & ITAM in the Public Sector” on Spreaker.

Host: Philippe de Raet, VP of Business Development at Anglepoint

Speaker: Blake DeShaw, Security Manager at Anglepoint

As the world becomes more connected, digital security is paramount—not simply for maintaining positive customer relations but for global security and safety. This is one of, if not the most important, parts of a robust ITAM & Security strategy. However, the task has never been more monumental. Not only do your organization’s tens of thousands of networked devices need to be tracked, but even employees’ personal devices, such as their phones and personal computers, need to be taken into consideration.

Protecting information from cyber-attacks is so crucial that extensive Executive Orders are in place to guide what companies are accountable for. Executive Order 14028: Improving the Nation’s Cybersecurity, released in 2021, was a direct result of the SolarWinds incident. Based on this EO, Memorandum M-22-18: Enhancing the Security of the Software Supply Chain through Secure Software Development Practices was also recently released. Listen in as Blake helps dive into what this means for your organization.

We discuss:

  • 3 major breach case studies
  • The triggers you need to keep an eye on
  • Importance of accurate data, inventories, and usage
  • How to reduce and mitigate security exposures

If you’re interested in learning more about Blake, connect with him on LinkedIn.

Listen in on our latest podcasts by checking out the ITAM Executive.

Dig into more insights from ITAM executives by subscribing on Apple Podcasts, Spotify, or wherever you listen to podcasts.

Webinar/Podcast Transcript

Security & ITAM in the Public Sector – Transcript

The following is an excerpt from an Anglepoint webinar. To watch the entire webinar, please visit anglepoint.com/webinar. You’re listening to the ITAM Executive, a podcast for ITAM leaders and practitioners. Make sure to hit subscribe in your favorite podcast player and give us a rating. In each episode, we invite seasoned leaders to share their tips on how to define your strategy, promote the value of ITAM in your organization, and align your program with the latest IT trends and industry standards.

Let’s dig in.

Philippe de Raet: We are excited today to share with you some of our capabilities as we celebrate Cybersecurity Awareness Month. With me, I’ve got Blake DeShaw. Together, we’ll go over a few initiatives that have come out of an executive order from last year and some of the guidelines resulting from it. We at Anglepoint are a software asset management provider.

We provide consulting services around licenses, but with a heavy emphasis on security, to ensure that your estate is safe as it relates to your licenses. Without further ado, I’ll hand it over to Blake.

Blake DeShaw: Awesome. Thank you sir. And thank you everyone for joining today. And so for the agenda, we will get to the federal requirements, but we’ll give some background before that.

So we’ll dive into three cybersecurity attacks and how they relate to the ITAM world, and then we’ll dive into IT asset management and how it can work with security to better protect organizations, both government and private institutions. Then we’ll go through an overview of the federal requirements and really get into the action steps.

It is 90 days from the memorandum before the first deadline has to be completed, so I think you need to know what comes next, and we’ll dive into that as well. So first of all, there have been several cyberattacks. They’re always happening, probably right now as I speak, but we’re going to focus on three and really why they happened, the damage they caused, and how they could have been prevented.

So the three cybersecurity attacks that we’re going to look at today: the first is the Equifax data breach, which happened back in 2017. We’ll also dive into SolarWinds as well as the Log4j incident, which happened in 2020 and 2021, respectively. The Equifax timeline: this was around Apache Struts and a vulnerability that was identified by Apache on February 14th, 2017.

Really, what the vulnerability was: a remote command could be passed via the HTTP header. They published a fix on March 6th, and so then there was a window to patch. Ideally, a company would begin patching immediately. They would know all their IT assets and be able to start identifying what needs to be patched and fixed accordingly.
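As a concrete illustration of that last point, here is a minimal sketch, assuming a very simple inventory format, of how an up-to-date IT asset inventory could be cross-referenced against a published advisory to produce a patch list. The record shape, hostnames, and version numbers are hypothetical, not Equifax’s actual environment.

```python
# Minimal sketch: cross-reference an IT asset inventory against a published
# advisory to find what still needs patching. Records and versions are illustrative.
from dataclasses import dataclass

@dataclass
class Asset:
    hostname: str
    software: str
    version: tuple  # e.g. (2, 3, 5) for version 2.3.5
    owner: str

def needs_patch(asset: Asset, product: str, fixed_in: tuple) -> bool:
    """An asset needs the patch if it runs the affected product below the fixed version."""
    return asset.software == product and asset.version < fixed_in

inventory = [
    Asset("web-01", "Apache Struts", (2, 3, 5), "appteam@example.gov"),
    Asset("web-02", "Apache Struts", (2, 3, 32), "appteam@example.gov"),
    Asset("db-01", "PostgreSQL", (13, 7), "dba@example.gov"),
]

# Advisory: affected product fixed in 2.3.32 (illustrative numbers only).
for a in inventory:
    if needs_patch(a, "Apache Struts", (2, 3, 32)):
        version = ".".join(map(str, a.version))
        print(f"PATCH NEEDED: {a.hostname} ({a.software} {version}) -> notify {a.owner}")
```

The point of the sketch is simply that, with an accurate inventory, the patch window becomes a lookup rather than an investigation.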

The attack on Equifax actually began on May 14th. So there were approximately two months where they had a window in which they could have patched, and the attack lasted for approximately two and a half months until Equifax was finally able to identify what was vulnerable and patch accordingly.

On the right-hand side: almost 150 million customer data sets were compromised, over a billion dollars in costs. Unfortunately, the CIO, CSO, and CEO were all forced to retire, and of course, no company or their employees want to be part of such an attack. Really quickly, this is just how the attack happened: they found vulnerable servers and were unfortunately able to locate those via Equifax’s dispute portal servers.

These contained PII, personally identifiable information. The attackers collected the login credentials, excuse me, to additional servers and databases, remained hidden while maintaining a presence, and were able to capture almost those 150 million customer records before being noticed, and the servers therefore patched.

What is interesting, and why I think this is relevant to today’s discussion, if not just cybersecurity in general, is that the US Senate was involved, of course, so they did an investigation. And the reason why Equifax faced this vulnerability was a lack of a comprehensive information technology (IT) asset inventory.

That led, of course, to the inability to do patch management, because what were you fixing? Second of all, they scanned their servers and found that, of course, there was no threat. Once again, they were probably scanning servers that were up to date and unfortunately not scanning the servers that were vulnerable.

When we look at their largest competitors, they were able to react quickly and effectively due to having functional IT asset management and software asset management programs.

Philippe de Raet: Yeah. And Blake, if I can just jump in real quickly on that note, having myself come from Experian, where I spent almost six years, and a couple of years at TransUnion, I can assure you that it was critical to put these systems in place, these processes in place, to mitigate against such vulnerabilities.

Clearly, Equifax paid a heavy price, but having been on that team at Experian, ensuring that these were in place was critical. And so we here at Anglepoint certainly are there to assist any agency, any entity, with putting those critical types of measures in place. Back to you, Blake.

Blake DeShaw: Thank you so much.

Really diving into that comparison, just as with that TransUnion and Experian experience: they also were faced with the same vulnerability, and due to their ITAM programs, they were able to at least mitigate it and not become the Equifax in this situation. Diving in now to the SolarWinds software supply chain attack. Briefly, this affected 18,000 of its customers.

That included 425 of the US Fortune 500, the top 10 US telcos, all five branches of the US military, the top five US accounting firms, various government departments, and even the White House. What happened here was that the threat actor accessed the platform and inserted malicious code into it.

SolarWinds sent out an update and therefore infected all of those customers. What is important, and why I included this example in this webinar, is to point to the obvious ways an appropriate ITAM program can help: knowing what has been affected is the first step in reacting, as is having all vendors who hold your data identified and appropriately managed.

And the third reason, which leads to our later topic, is that SolarWinds Orion was widely used in the federal government to monitor network activity. So really this is what prompted the federal response and is a key driver of the executive order, and therefore the memorandum, that we will discuss later.

And the third cybersecurity attack is the Log4j vulnerability, which some have called the most serious security breach ever. Millions of attempts were made to exploit the software, and obviously the Cybersecurity and Infrastructure Security Agency was instantly involved; it was described as the single biggest, most critical vulnerability of the last decade.

That is because Log4j is an open-source logging tool for the Java programming language. It’s open source, which means it is much cheaper than other alternatives, and second, it works very well. The vulnerability was published on December 9th, 2021, and allowed hackers to tell the logging utility to fetch certain information.

And unfortunately return it right back to the hacker. This was because, when it was developed, there were good intentions: to make life easier for developers. But unfortunately, and of course knowing the cybersecurity world, it was weaponized. If I remember correctly, when this came out, they patched it and there was still a vulnerability found, so they had to have another patch come out.

And this just goes right back to the conversation about ITAM and not just tracking server A and server B, but also identifying the versions and the patch numbers. It’s really critical, when a vulnerability like this is discovered, to be able to react appropriately. With both SolarWinds and Log4j, my team here at Anglepoint lost hours of sleep dealing with these vulnerabilities as we were helping our clients reach out to all of their critical software vendors to understand: A, are you using Log4j or SolarWinds?

And B, have you started fixing it? It may seem like an easy task; you’re just emailing and having that conversation. However, all of these vendors are now also checking all of their critical vendors. And maybe they would give you an answer like, “we think we’re not using it, but let us get back to you.”

And we saw that cycle through; it took probably three months for some of our critical vendors to really give a concrete yes or no answer, which in the world of cybersecurity attacks is exactly the opposite of what you want to have to deal with. Moving on from cybersecurity attacks, we can dive into IT asset management.

I’m not going to teach everyone how to do IT asset management; what I’m going to focus on is IT asset management and integrating it with security into an overall IT strategy. So if we look at this from a very high level and start to work into the details, some organizations make the mistake of putting IT asset management into its own silo and separating it from other business functions.

And while ITAM is siloed, these other business functions are typically doing similar work. All of these different cogs, or processes, are dependent on this information, and if it is siloed, they are unfortunately relying on their own information gathering, which leads to inconsistency across the board.

In the case of software development, you should have a software bill of materials for any developed project that ITAM is aware of, and obviously ITAM should feed back into change management. So really, IT asset management, if deployed correctly, should be at the center and one of the most important pieces of the overall IT strategy, for it to function properly and effectively.

What do ITAM and IT security look like separately? On the left-hand side here, we see what a typical ITAM team would work with: the IT asset inventory, what are our assets, and of course that can be software and hardware licensing, understanding versioning, and where these servers are located.

Of course, at times of renewal you’re dealing with contracts, and at the end of the day, who is responsible for these assets? We have security policies and procedures, which usually define how assets should be inventoried and owned, and in some cases can also govern license procurement. Risk management needs to know where outdated servers are located, and the location of hardware assets around the world, as perhaps global conflicts intensify, in the event of a disaster.

Now, where all of these components belong becomes super relevant. Really, the risks of these being separated, as well as being performed by two different teams, are limited visibility into the enterprise as a whole, obvious security gaps, and teams working in silos. And really what kills me is that it’s usually redundant effort.

These teams are doing similar things, but being siloed creates a competitive environment, and rather than working together, they deliberately work apart and then grow those projects separately. Going back: with those two separate buckets, what we would hope is that they combine, and this can all be fed by IT asset management information.

So what is that? An up-to-date IT asset inventory. And how can that help all of these processes? It’s critical for every process in a business to function properly by being able to take action on the most up-to-date information available. Also, to avoid the redundancy I was discussing, accurate ITAM can help save the time spent doing the same work, letting teams actually make the data work for their own purposes rather than spending all that effort just getting the data.

It is a nightmare trying to just get that IT asset inventory, especially when you can’t work with a team dedicated to focusing on ITAM. I think most of these are obvious examples, but if we look at financial management and procurement, of course, they’re the ones buying the licenses.

Instead of beginning the pipeline, they should be using the IT asset inventory for guidance; it doesn’t make sense for them to be buying new software licenses when maybe they already have an overabundance. Typically what I see in the industry is that all these teams function separately, and at the end of the day, maybe in a review of the ITAM data, they wonder why the assets are so all over the place.

Not updated, with more than enough licenses; it’s because they’re not feeding the ITAM information and an updated IT asset inventory to the rest of the processes. Just real quickly, the security and BCP teams, of course, need to know where all the IT assets fit, and they for sure end up doing that work by themselves.

So why not make ITAM the source of truth? And then of course IT architecture and configuration management need an up-to-date list to do their jobs effectively. So, some challenges that both ITAM and cybersecurity face: the first is the asset inventory. Having all assets identified is task enough in hand, but it’s not a one-time thing.

This is something that has to be living and breathing constantly. It should be updated every day, every week, every hour if necessary. And so it’s really important that it is not only drawn up and created, but also maintained. Going to the next one: asset ownership. You can have these assets identified, but without an owner, in terms of disaster recovery events or even at billing times, if you don’t know who owns these assets, the boulder starts to roll down the hill.

Who’s responsible for these? Once again, with employee turnover, that is not something that’s easy to maintain, and so having that asset ownership defined and working correctly is in itself a very difficult task. The rest of these are very basic security principles that are nevertheless very difficult to execute: acceptable use of assets, defining how these assets should be used.

Of course, this is only possible if you have the correct information; then classification of information, asset handling, and removable media. And so if we had to create a list of security-critical processes for ITAM, this is what it would look like. Tracking hardware and software.

Tracking the associated patches and upgrades. Data mapping: what data is involved with this hardware and software. The return of an asset, so once again, asset ownership. Removal of an asset: now that we have an up-to-date asset list, are assets also being removed from it? You can have an up-to-date asset list in terms of new assets, but are assets actually being retired, and on the appropriate timeline?

Network and data flow diagrams: this is one of the key things to understanding the data flow of a company as a whole, which you can then get into, and it should be specific down to each piece of hardware and software, and then magnified out to a 10,000-foot view of how this makes our software and hardware work appropriately.

Tracking of removable storage, and then tracking of bring-your-own devices, number seven here, BYOD. Of course, you can have the greatest asset inventory in the world, but if a BYOD policy is not defined, now you have everyone bringing their own devices from home, and now you have an entire problem you didn’t know was there in the first place.
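To make those process areas concrete, here is a minimal sketch of what a single inventory record might capture so that each of the processes above (ownership, patches, data mapping, removal, removable media, BYOD) has a field to hang off of. The field names are hypothetical illustrations, not a prescribed schema.

```python
# Minimal sketch of one asset record supporting the security-critical ITAM
# processes listed above. Field names are illustrative, not a standard.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AssetRecord:
    asset_id: str
    asset_type: str                  # "hardware", "software", "removable_media", "byod"
    name: str
    version: Optional[str]           # software/firmware version, for patch tracking
    patch_level: Optional[str]       # last applied patch or build number
    owner: str                       # accountable person or team
    data_classification: str         # e.g. "public", "internal", "pii"
    network_segment: Optional[str]   # ties into the network / data-flow diagram
    acquired: date
    retired: Optional[date] = None   # set when the asset is returned or removed

    def is_active(self) -> bool:
        """Removal tracking: an asset stays active until it has a retirement date."""
        return self.retired is None

laptop = AssetRecord(
    asset_id="HW-0042", asset_type="hardware", name="Engineering laptop",
    version="BIOS 1.14", patch_level="2022-09 cumulative", owner="jdoe",
    data_classification="internal", network_segment="corp-vlan-12",
    acquired=date(2021, 3, 1),
)
print(laptop.is_active())  # True until a retired date is recorded
```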

Security can drive the enforcement and compliance of these processes, while ITAM really drives the data behind the critical information provided to each of them. Really quickly, just to tie this back in with the cybersecurity attacks that we discussed earlier: if security and ITAM are functioning properly, this is what an excellent patch management process would look like.

A vulnerability is identified, you can test the patch, and then you know where to deploy your patch. With the examples we discussed, that is where that huge gap came from. Maybe they were able to identify what they thought were the correct assets, but they weren’t; they were missing assets. And then when they went to deploy the patch, of course they’re not getting full coverage.

And in these attacks, that is exactly what the exploiting hacker is looking for: the hope that 85% have been patched, and they’ll attack the remaining 10 to 15%. And then once that patch is deployed, it’s about being able to scan and ensure that what you thought you sent out in terms of a patch has actually been deployed, and you are now effectively hardened against any attacks coming your way.
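A rough sketch of that final verification step, assuming you have the intended target list from the inventory and a post-deployment scan result, might look like the following; the host names and the coverage figure are hypothetical.

```python
# Minimal sketch: verify patch deployment coverage by comparing the intended
# targets (from the ITAM inventory) with hosts a scan has confirmed as patched.
# Host names and percentages are illustrative only.

intended_targets = {"web-01", "web-02", "web-03", "app-01", "app-02"}
confirmed_patched = {"web-01", "web-03", "app-01", "app-02"}  # from a post-deployment scan

missed = intended_targets - confirmed_patched
coverage = len(confirmed_patched & intended_targets) / len(intended_targets)

print(f"Coverage: {coverage:.0%}")  # the remaining gap is what attackers go after
for host in sorted(missed):
    print(f"Still exposed, re-deploy patch: {host}")
```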

Unfortunately, time is the only thing that matters, and we have a choice to reduce and minimize risk, but we are never completely exempt from it, and both ITAM and security practices should really be based on that fact. So that covers ITAM, and now we can dive into the exciting world of federal requirements and executive orders.

So we are going to discuss an executive order here. We’ll give a high-level overview of what is going to be required of agencies and some other departments as a result of this memorandum. Executive Order 14028, titled Improving the Nation’s Cybersecurity, was released in 2021, and it focuses on the security and integrity of the software supply chain. It was a direct result of the SolarWinds incident and the federal response that it led to.

Based on this executive order, last month, on September 14th, the government released Memorandum M-22-18, which is called Enhancing the Security of the Software Supply Chain through Secure Software Development Practices. That is a mouthful, but what it means is that they’re going to require all federal government agencies to maintain a software asset inventory and then reach out to those vendors to ensure that there are secure software development practices and an initiative going on within each software vendor.

And that’s something that we’ll dive into further. The memorandum uses two different documents from the National Institute of Standards and Technology, otherwise known as NIST. You can see the NIST guidance in the middle there. This is actually composed of two NIST documents, one being the Secure Software Development Framework, SP 800-218, as well as the NIST Software Supply Chain Security Guidance.

So when they’re looking at controls and what’s really guiding this, it is these two NIST documents that will be commonly referred to by agencies. The next step is that this memorandum was issued by the Office of Management and Budget, the OMB, and it sets out requirements for government agencies, the Office of Management and Budget itself, the Cybersecurity and Infrastructure Security Agency (CISA), and then even NIST.

I will dive into the specifics of the agency requirements when we really get into what to do next, but just to briefly cover the other shared responsibilities laid out in this memorandum: the first is the Office of Management and Budget. Within 90 days, they will allow for waivers or extensions.

So if a government agency is having difficulty creating their initial inventory, they’re able to then get an extension. I believe it’s a 90-day extension, but that is still, I believe, not completely concrete. Obviously, they still have 90 days to do that, and this is all 90 days from the memorandum being signed on September 14th, 2022.

The OMB also, within 180 days, must establish requirements for a centralized repository for software attestations. So the agencies will reach out to all of their software vendors, and they will have them basically fill out and then provide evidence of a self-attestation saying that they’re meeting certain controls and processes, and then they will be required to upload those to a government repository.

So the OMB is both responsible for building that repository and, obviously, for providing the instructions on how to upload to it. At the end of the day, if we’re looking one or two years out, the idea is to have one general repository with all of the government’s software either self-attested to, or with a plan mitigating the risks that vendors were unable to attest to, which we’ll dive into.

The CISA responsibility, I would say, is more of an admin role, of course, but they will establish the standard self-attestation form. Unfortunately, that is not available today; within 120 days it should be released to the public, or at least to the government agencies.

Within one year, they need to establish the program to have that government-wide repository that I was discussing earlier. And then the third is that they will come out with software bill of materials guidance for federal agencies. So in some cases, especially if an agency has designed their own software, the government wants to know, and of course they’ll have to report on, what that software consists of.

Maybe they were using Log4j, maybe they’re using different components; that is what a software bill of materials is. And so CISA will provide additional guidance on that. And then the final NIST responsibility is just that they will continue to update their guidance as appropriate. And here is the overview of the agency requirements.

To give a quick overview, stage one is to create a software inventory and also to determine the criticality of said software. Stages two and three deal with creating the process and training to be able to validate the self-attestation letters received from the software vendors. Stages four and five are the collection of these self-attestation letters, for critical software at stage four and then, by stage five, for all software subject to these requirements.

And the next question you should be asking me is: what does critical software mean? In another executive order, critical software was defined. You can see they have several bullet points, but really, it is any software with elevated privileges that is performing a function considered critical to trust and/or the function of the agency.

Of course that can be interpreted in various ways, but it’s really important to focus on the elevated privileges. Before we dive into the specifics, it’s important to note that this memorandum applies to all forms of software: SaaS, software that is integral to devices or hardware components, standalone software, and even government-created software that may rely on other software to function.

Now we’ll go into each agency requirement: what do we do next, and how do we meet the new government regulation? So let’s look at stage one. This is the beginning. I do have the date brought in because it is this year: December 13th is approximately 90 days from the signing on September 14th, and the requirement is that agencies shall inventory all software subject to the requirements of this memorandum, with a separate inventory for critical software.

So first is identifying all software being used across the government agency and creating an inventory. That is one task here, but it is not an easy task at all. Of course, depending on how their inventory has been maintained in the past, they may be starting off fresh, and it is a challenge, as Anglepoint can speak to.

And we’ve been in the ITAM industry for a while; if it were easier, everyone would already have this inventory ready to go. The second part of this is the associated criticality. What are we super dependent on? What has elevated privileges? So this is going to take not only the list and figuring out all the software deployed, but also understanding how it’s being utilized within each government agency.

Point number two is also identifying all software used in government-created or agency-created software and creating an inventory based on that as well. And then the third here is really that software bill of materials I was speaking about earlier, which may be required based on criticality or for that government-created software.
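For reference, a software bill of materials is essentially a machine-readable parts list for a piece of software. The sketch below builds a minimal SBOM-style document as a Python dictionary, loosely modeled on the CycloneDX JSON layout; the application name, components, and versions are illustrative only, not anything mandated by the memorandum.

```python
# Minimal sketch of an SBOM-style document, loosely following the CycloneDX
# JSON layout. Components and versions are illustrative only.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "metadata": {"component": {"type": "application", "name": "agency-portal", "version": "3.2.0"}},
    "components": [
        # Each third-party piece the application ships with, so a question like
        # "are we using Log4j anywhere?" becomes a lookup instead of an investigation.
        {"type": "library", "name": "log4j-core", "version": "2.17.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"},
        {"type": "library", "name": "jackson-databind", "version": "2.13.4"},
    ],
}

print(json.dumps(sbom, indent=2))

# Answering "do we use Log4j?" directly from the SBOM:
uses_log4j = any(c["name"].startswith("log4j") for c in sbom["components"])
print("Uses Log4j:", uses_log4j)
```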

Now that we’re past the within-90-days requirement, we’re looking at stage two and stage three here. Stage two is within 120 days: agency CIOs shall develop a consistent process to communicate the relevant requirements in this memorandum to vendors and ensure attestation letters are collected in one central agency system.

Stage three, within 180 days, is that agency CIOs shall assess training needs and develop training plans for the review and validation of software attestations and artifacts. So, what does this mean exactly? Now that we have this software inventory list, how are we going to communicate? Do we have all the contact addresses for all of these vendors?

I know from experience that typically, when you’re dealing with software vendors, I have the accounts payable email and contact information, but if you reach out for something security related, they instantly send you to someone else, if you’re lucky enough to get that sort of response.

Typically, anything outside of POs and money will just go straight to their junk file. So having up-to-date communication emails and phone numbers is going to be a huge task. Stage two deals with creating that overall plan on how to communicate the requirements to each vendor and their associated products. Here, if we take Google for example, it’s not just Google as one vendor.

We want to look at Google proper as a vendor, but what are we using? Gmail, we’re using Workspace, we’re using their cloud system. So it’s interesting how it isn’t just one vendor, it is also the products that you’re using, and once we’re talking about Microsoft and some of these big vendors, of course there are many products within the same umbrella vendor space.

So that also will be an interesting task at hand. Once the plan has been communicated, okay, this is how we’re going to reach out to these vendors and of course make sure that the letters we’re collecting end up in this centralized system; the next question is, how is our team validating? So, the creation of training plans for reviewing and validating software attestations.

The software attestation form is not available yet, as we can see; I believe it’s 120 days out, so probably early next year. But of course, this will include questions and perhaps evidence requests around understanding the vendor’s software development life cycle. Do they deploy asset management? Probably very high-level controls, but we’ll be able to determine: do they have a security program in place?

What is the maturity of their program, and really what is the maturity of their software development lifecycle? And so in these government agencies, do people know how to understand this information? And really, whatever the software attestation form ends up being, now we need to plug in the information that the vendor is sending over, whether it’s publicly available information or perhaps their most recent audit, to be able to get that information into the self-attestation letter.

So this stage three really deals with assessing whether their staff can do it, and then developing the training so that they can review and validate these software attestations and artifacts. And that brings us to stages four and five here. So here, it’s all within one year, but from my consulting lifestyle at least, we have a year, right?

So within 270 days, agencies shall collect attestation letters for critical software subject to the requirements of this memorandum. And then finally, stage five: at the one-year mark, approximately September 14th, 2023, agencies shall collect attestation letters for all software subject to the requirements of this memorandum.

So the steps are easy here, right? The first is outreach and collection of all critical vendor attestations. This ensures that contact information and contact has been established, it ensures that their attestation is confirmed and is going to be approved, and then it is stored in that centralized repository.

The second step is then the outreach and collection of all vendor attestations. Of course, hopefully by the time critical vendors have been reached out to, this process should at least be able to function a little bit better. And when you’re doing all vendors, I’m guessing a list of critical vendors will account for 10 to 15% of your overall inventory, with the remainder falling into the non-critical grading.
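One way to picture stages four and five is as a simple status ledger per vendor product: who has been contacted, whether an attestation letter has come back, and whether it has been validated and uploaded. The sketch below is a hypothetical tracking structure, with made-up vendors and statuses, not anything prescribed by the memorandum.

```python
# Minimal sketch of tracking attestation collection per vendor product.
# Vendors, contacts, and statuses are hypothetical, not taken from the memorandum.
from dataclasses import dataclass

@dataclass
class AttestationStatus:
    vendor: str
    product: str
    critical: bool                         # critical software is due at the 270-day mark
    contact: str
    letter_received: bool = False
    validated: bool = False
    uploaded_to_repository: bool = False

tracker = [
    AttestationStatus("ExampleSoft", "Monitoring Suite", True, "security@examplesoft.test"),
    AttestationStatus("ExampleSoft", "Backup Agent", False, "security@examplesoft.test",
                      letter_received=True, validated=True, uploaded_to_repository=True),
]

# Critical-software deadline comes first (stage four), then everything else (stage five).
for t in tracker:
    if t.critical and not t.uploaded_to_repository:
        print(f"Follow up with {t.vendor} / {t.product} via {t.contact}")
```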

The interesting part of this, which isn’t really defined so far in this memorandum but I’m sure will come out, is that some vendors will not be able to attest. And so if they say that they’re not able to attest to certain practices laid out in this form, then the vendor is going to have to identify which practices they cannot attest to and start documenting what we consider in security the classic risk mitigation plan.

Here, I believe they call it the plan of action and milestones. And really what this is going to be developed for is to make sure, one, that the risks identified are going to be mitigated sometime in the future, and of course, that will involve reaching out at certain points.

Of course, I’m sure there will be different remediation timelines depending on what cannot be attested to, and that will probably kick off a whole other stage six, seven, and eight to this memorandum. I’ll pass it over.

Philippe de Raet: Just a few final words. Thank you, Blake. It’s been great. And just to close the loop on all these requirements, which can be overwhelming:

Anglepoint has been working very closely with NIST for years now, and in conjunction with CSUN and others, we’re really at the forefront of all these standards and requirements. We have put together an offering to ensure that agencies can meet these milestones, so please let us know if you need any help with some or all of these.

We’ve got the appropriate vehicles to ease procurement and fulfillment, and please feel free to contact us with any questions. My name is Philippe; I head the public sector here at Anglepoint. We look forward to working with you on these initiatives. Blake, thank you again for your time.

Blake DeShaw: Thank you everyone.

You’ve been listening to the ITAM Executive brought to you by Anglepoint. Make sure to hit subscribe in your favorite podcast player and give us a rating. Thanks for being part of the ITAM community. Until next time.
