SAM Tool - Agent or interface to existing solutions - Pros + Cons

We are planning to implement a Software Asset Management tool… (yes, we are still working with spreadsheets to manage our licenses :-)) …but our hardliners insist on configuring interfaces to existing tools and databases. Additional workload should be avoided.
I would prefer to use every available out-of-the-box feature of the SAM tool to gather information that is as complete and accurate as possible. Every tool sitting between the SAM tool and the source carries the risk of distorting the information, and then the outcomes are not reliable.
What is your opinion? Please tell me about your experiences.
Thanks!

I think I can offer some options and insight, having been an ITAM consultant for over 17 years and having led many projects like the one you describe.

The simplest and perhaps least useful answer is “it depends” - on the SAM tool, on other tools in your environment, the specifics of your hardware and software assets, and perhaps even your organization’s tolerance for inefficiency and inaccuracy.

You seem to be describing two ends of the spectrum of an implementation strategy and I have seen both work on occasion, but for most organizations the solution seems to be something in the middle.

For example, most organizations find the data coming from existing tools such as SCCM and JAMF to be consistent and accurate, so why deploy a new agent in that environment? However, Linux/Unix is often not so well understood, so that may be an area where a new agent adds value.

Another common scenario is when there is good data in general but specific publishers or products lack detail - databases, for example. Perhaps your current tools do not clearly show all options and management packs for Oracle DB; deploying an agent in that environment may fill that gap.

Similarly, you may have invested in a CMDB and already have clean data about your environment - do you really need to pull all of that data again from separate sources into your SAM tool? It depends…

So I would recommend taking another look at your environment and really assessing where your existing strengths are and see if there is a “hybrid” solution that will work for your organization and offer a win-win for all involved.

There are many other knowledgeable and experienced contributors here that will offer sage advice as well, but hopefully what I have shared will be of some help to you.


Thank you very much! It is a great help. I hadn't considered such an approach yet. :-)

Great suggestions offered already by @kpanteli. Every organization will have multiple discovery/inventory tools. Those sources will never 100% agree with each other, and each source will serve a different purpose. Some of them can be useful data points to ingest into your SAM tool. Some of them can be used as checks and balances to make sure your SAM tool agent is where it needs to be. But not all of them can be leveraged for the calculations a SAM tool needs to do to measure license compliance.
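
To make that "checks and balances" idea concrete, here is a minimal sketch in Python of the kind of cross-check I mean: comparing the device list from an existing inventory export against the devices reporting in through the SAM tool agent. The file names and the "hostname" column are placeholders for illustration, not any particular tool's export format.

```python
# Minimal sketch: cross-check SAM agent coverage against an existing inventory export.
# The CSV file names and the "hostname" column are assumptions for illustration only.
import csv

def hostnames(path: str, column: str = "hostname") -> set[str]:
    """Read one column from a CSV export and return it as a normalized set."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f) if row.get(column)}

existing_tool_devices = hostnames("sccm_export.csv")    # devices known to the existing tool
sam_agent_devices = hostnames("sam_agent_export.csv")   # devices reporting via the SAM agent

missing_agent = existing_tool_devices - sam_agent_devices   # candidates for agent deployment
unknown_to_existing = sam_agent_devices - existing_tool_devices

print(f"{len(missing_agent)} devices have no SAM agent yet")
print(f"{len(unknown_to_existing)} devices are unknown to the existing inventory tool")
```

Neither list "wins" by default; each gap is a prompt to ask which source is right for that device.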

Understanding the license types you need to track vs the capabilities of your existing deployed discovery tools vs the capabilities of your SAM tool agent will help drive that conversation with those who don't want to put yet another agent on endpoints or servers. For example, if you need to measure sub-capacity/PVU or Oracle database introspection, only certain SAM tool agents can do that (and do it in a way that potentially meets contractual obligations). However, if you're tackling SAM on desktops only, SCCM may give you sufficient data to configure and track the majority of your licenses.

An approach you might consider is doing a few tests with your hardliners. Set up some licenses to track in your SAM tool and compare the license compliance data the tool can provide both before and after installation of your SAM tool agent on a handful of devices. Being able to show how the agent finds richer data for installed applications and provides financial risk mitigation via license tracking may help sway the "not on my server" crowd.
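
If you run that kind of pilot, even a simple before/after comparison of the install counts your SAM tool reports per product can make the value visible. A rough sketch with entirely made-up numbers, not data from any real tool:

```python
# Sketch: compare the install counts a SAM tool reports per product before and after
# the agent is installed on a handful of pilot devices. All figures are hypothetical.
before = {"Oracle Database EE": 4, "Visio Professional": 120, "Acrobat Pro": 310}
after  = {"Oracle Database EE": 4, "Oracle Diagnostics Pack": 3,
          "Visio Professional": 134, "Acrobat Pro": 310}

for product in sorted(set(before) | set(after)):
    delta = after.get(product, 0) - before.get(product, 0)
    if delta:
        print(f"{product}: {before.get(product, 0)} -> {after.get(product, 0)} ({delta:+d} installs found)")
```

The products that only appear (or only change) after the agent goes on are exactly the ones to show the skeptics.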


Hi @The_German

Great recommendations already, but a bit of a twist from me.

Leveraging existing discovery sources is ideal, but the key to leveraging those sources is whether you feel confident the data is credible, consistent, and reliable. Making decisions based on flawed data can be costly in a SAM program.

That said, software data is not standardized, so every discovery source will collect it differently and with variations in its software signature files; normalization and deduplication are therefore key to making sure you are working with a single record rather than multiple records of the same software.
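
As a toy illustration of what that normalization and deduplication looks like (the raw titles and the alias mapping below are made up; real SAM tools rely on large signature catalogues rather than a hand-written mapping):

```python
# Toy illustration of normalizing and deduplicating software records coming from
# several discovery sources. The titles and alias table are made-up examples.
raw_records = [
    {"source": "SCCM", "title": "Microsoft SQL Server 2019 (64-bit)", "host": "db01"},
    {"source": "JAMF", "title": "MS SQL Server 2019", "host": "db01"},
    {"source": "SCCM", "title": "Adobe Acrobat DC", "host": "ws042"},
]

def normalize(title: str) -> str:
    """Collapse spelling and edition variants into one canonical product name."""
    t = title.lower().replace("(64-bit)", "").strip()
    aliases = {"ms sql server 2019": "microsoft sql server 2019"}
    return aliases.get(t, t)

deduped = {(normalize(r["title"]), r["host"]) for r in raw_records}
print(deduped)  # one record per (product, host), regardless of which source reported it
```

The point is not the code but the principle: without that step, the same installation shows up as two different products and inflates your counts.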

As you start your SAM journey, make sure the data is credible and usable.

Thanks!
Lisa


Hi Lisa,

I much appreciate your expertise, and your feedback is second to none. Thank you.

Betty Ann

I agree with @kpanteli and @EliseCocks. For me it really depends on the overall balance. I like to start by looking at the relationship between the actual costs of the software vs the license metrics vs what inventory I have, then use additional tools to plug the gaps. The reason I mention the costs is that they really drive the success of the implementation. Taking some time to analyse what your metrics require you to collect in terms of inventory is key before you make any decision: can the tool collect that data point? Ask your vendors specifically.

We prefer not to use agents; our tools are all agentless. Agents can put excess load on a system. For example, you put an agent on a system and it takes, say, only 0.5% of the system's resources, but you may already have many agents for security, AV, threat detection, updates, etc. So in real terms we are seeing systems use 1-5% of their resources just for management and agents. And this has a direct impact in licensing terms: all those resources are consumed at the VM/host and physical level, so under most software vendors' licensing you are paying for that overhead.
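
As a back-of-the-envelope illustration of how that per-agent overhead stacks up (the percentages below are assumptions for illustration, not measurements of any product):

```python
# Back-of-the-envelope sum of per-agent CPU overhead on a single endpoint.
# The percentages are illustrative assumptions only, not measurements.
agents = {"security": 1.0, "antivirus": 0.8, "threat detection": 0.7,
          "patch management": 0.5, "SAM agent": 0.5}  # average % CPU each

total = sum(agents.values())
print(f"Combined management overhead: {total:.1f}% of the endpoint's CPU")
# On a virtualised host that overhead repeats inside every guest and is ultimately
# consumed at the host/physical level - the capacity many licence metrics charge for.
```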

As Kevin says, SCCM won't really help you with an Oracle licensing position, but you may want to use it to identify the targets that your tools need to analyse further; SCCM can of course find Oracle.exe on Windows.
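
For example, something as simple as filtering a file-inventory export for Oracle.exe gives you a target list to hand to whatever Oracle-capable tooling you use. A sketch, where the CSV file name and column names are assumptions rather than SCCM's actual export format:

```python
# Sketch: build a target list for deeper Oracle analysis from a file-inventory export.
# File name and column names are placeholders, not SCCM's real export schema.
import csv

oracle_targets = set()
with open("file_inventory_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("FileName", "").lower() == "oracle.exe":
            oracle_targets.add(row.get("ComputerName", "").lower())

print(f"{len(oracle_targets)} hosts to analyse with an Oracle-aware tool:")
print(sorted(oracle_targets))
```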

Getting an out-of-the-box ELP (Effective License Position) is the holy grail of SAM; for most SAM tools there needs to be a data-cleansing step between data collection and creation of the ELP for some products. Improving those processes and the data collection is a lot of work, but worth it in the long run.

I see that this discussion thread is aging, but I will jump in anyway. There are a few potential issues that have not been discussed here yet. Aside from the debates about accuracy, there are some technical issues to consider.

There is the issue of processor overhead at the endpoints, and similarly, there is the issue of network traffic overhead. In the battle of "local client agent" versus "agentless," over the years (and I'm talking nearly 30 years of observation), some local client agents have shown themselves to be very noisy at the endpoint, demanding a lot of processor time. This, of course, is not good. Other local client agents operate much more efficiently, with far less demand on resources. These things vary a lot, and the only way to know is to measure processor demand during the various operations the client agent performs. This issue can exist both with a tool vendor's dedicated client agent and with a generic agent collecting data in general.
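
If you want to measure rather than guess, even a quick sampling of the agent process gives a first impression. A minimal sketch using the third-party psutil package, where the process name is a placeholder for whatever agent you are evaluating:

```python
# Minimal sketch: sample the CPU demand of an inventory agent process over time.
# Requires the third-party psutil package; "inventory_agent" is a placeholder name.
import psutil

TARGET = "inventory_agent"

procs = [p for p in psutil.process_iter(["name"]) if p.info["name"] == TARGET]
if not procs:
    print(f"No process named {TARGET!r} found")

for _ in range(10):                      # ten one-second samples
    for p in procs:
        try:
            # cpu_percent(interval=1) blocks for one second and reports usage over it
            print(f"pid {p.pid}: {p.cpu_percent(interval=1):.1f}% CPU")
        except psutil.NoSuchProcess:
            pass
```

Run it while the agent is idle and again during a full inventory scan, and you have actual numbers to put in front of the "no more agents" crowd.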

Then there is the potential issue of network overhead. The causes behind this type of overhead are clearer. Processes that communicate over HTTP tend to create much more noise on the network than processes that simply stream their data over a persistent raw TCP connection. HTTP is a request/response protocol layered on top of TCP: whenever an HTTP-based process needs to communicate with the server, it typically builds a connection and session first, then sends its payload, which is usually broken up into multiple requests, with session-maintenance or keep-alive traffic between them, and finally tears the session down. A process that holds a single persistent TCP connection does that setup once and then just sends its payloads, so it is much quieter and much more efficient across the network. I've been using the word "process" here because this can be true either with an agent or with an "agentless" system.
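
To put a rough number on just the framing overhead (ignoring connection setup, keep-alives, and teardown), here is a toy comparison of one small report sent as a bare payload over an already-open TCP connection versus the same report wrapped in an HTTP request; the host, path, and payload are made up:

```python
# Toy comparison of per-message framing overhead. Host, path, and payload are
# made-up examples; a fresh HTTP connection would also pay for TCP handshake
# and teardown packets on top of the header bytes shown here.
payload = b'{"host": "srv01", "installed_apps": 214}'

headers = (
    "POST /api/inventory HTTP/1.1\r\n"
    "Host: sam.example.com\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(payload)}\r\n"
    "\r\n"
)
http_request = headers.encode() + payload

print(f"Bare payload over an open TCP socket:   {len(payload)} bytes")
print(f"Same payload framed as an HTTP request: {len(http_request)} bytes")
```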

Over the years, I’ve heard silly arguments about agentless systems being more secure because “there is no agent at the endpoint that can do nefarious things.” To this, I say “poppycock.”

I’ve also encountered a misinformed belief from time to time that an “agentless” method is so-called “server-driven,” meaning that there is no agent at the endpoint. This is often not true. One common method that has been used over the years for so-called “agentless” processes is that when the client and server make contact, the first action is for the server process to load a temporary agent process into the local RAM of the endpoint. That temporary agent then lives there until the endpoint machine is rebooted or logged off or some such similar action.

On the topic of local machine overhead at the endpoint, this too can vary a great deal. Although it is more common when an agent is on the endpoint machine, there have been cases in years past where poorly engineered agentless processes jammed the network with noise and placed great demand on endpoint processors. One particular example that I encountered probably 10 years ago was the worst I had ever seen. A vendor's agent, consuming approximately 4% of processor time during an audit data collection process, was installed separately inside each virtual OS hosted on the endpoint. I don't remember now how many virtual OSs were running, but suppose there were 10: 4% x 10 = 40%. The math actually wasn't quite that simple, but you get the idea.

The last topic I'll touch on is "visibility." Simply put, an agent installed at root level has total visibility across the endpoint computer, except in the case of virtual OSs, where the agent has to be installed within each virtual OS. Conversely, an "agentless" process's visibility is strictly limited by the permissions of the end user who is currently logged in. Thus, the temporary agent loaded into RAM can only see what that end-user account can see.

I am afraid to go back and read what I wrote for editing and grammatical correction. I hope I’ve made sense here. In closing, much of this may have become a part of our industry’s history as engineering improvements by product developers have fixed many of these problems.
