Our Windows Introscope Agent shows very high CPU usage.
It was clarified that the peaks do not affect the 5% average CPU usage stated in the documentation.
It was explained that the Perfmon Agent collects 28 metrics for each of the 40 worker processes, and this is what causes the peaks.
It was also clarified that the collection affects only one of the 4 CPU cores on the VMs, as shown by the per-core CPU monitoring graphs that Windows provides.
The agent collects the 28 metrics at the same time so it can deliver them to the back-end in one go, and it does this for 40+ applications. Taking into account all the Perfmon API calls this requires, the collection causes a short CPU spike on one core, after which usage returns to normal. The spike does not last long, the other 3 CPU cores are not affected, and the average CPU usage over time remains <= 5%.
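To illustrate the mechanism, here is a minimal sketch in C against the Windows PDH (Perfmon) API. This is not the Introscope Perfmon Agent's actual code; the process name "w3wp", the two example counters and the reduced metric count are assumptions made for the example. It only shows the pattern the text describes: many counters registered up front, then sampled in one burst per interval.

    /*
     * Illustrative sketch only - not the Introscope Perfmon Agent's actual code.
     * It shows how querying many Perfmon (PDH) counters in one burst per interval
     * runs on a single thread, which is why only one core shows the spike.
     * The process name "w3wp" and the two example counters are assumptions;
     * the real agent collects 28 metrics per worker process.
     */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    #pragma comment(lib, "pdh.lib")

    #define NUM_PROCESSES 40   /* 40+ monitored worker processes            */
    #define NUM_METRICS    2   /* reduced for the example; the agent uses 28 */

    int main(void)
    {
        PDH_HQUERY   query;
        PDH_HCOUNTER counters[NUM_PROCESSES * NUM_METRICS];
        const char  *metricFmt[NUM_METRICS] = {
            "\\Process(w3wp#%d)\\%% Processor Time",
            "\\Process(w3wp#%d)\\Private Bytes"
        };
        char path[PDH_MAX_COUNTER_PATH];
        int  i, m, n = 0;

        if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
            return 1;

        /* Register every counter for every worker process up front. */
        for (i = 0; i < NUM_PROCESSES; i++)
            for (m = 0; m < NUM_METRICS; m++) {
                snprintf(path, sizeof(path), metricFmt[m], i);
                if (PdhAddCounterA(query, path, 0, &counters[n]) == ERROR_SUCCESS)
                    n++;
            }

        /* One collection cycle: all registered counters are sampled in a single
         * burst on the calling thread. Rate counters such as "% Processor Time"
         * need two samples, so collect, wait one second, collect again. */
        PdhCollectQueryData(query);
        Sleep(1000);
        PdhCollectQueryData(query);

        for (i = 0; i < n; i++) {
            PDH_FMT_COUNTERVALUE value;
            if (PdhGetFormattedCounterValue(counters[i], PDH_FMT_DOUBLE,
                                            NULL, &value) == ERROR_SUCCESS)
                printf("counter %d = %.2f\n", i, value.doubleValue);
        }

        PdhCloseQuery(query);
        return 0;
    }

Because all of the PdhCollectQueryData work happens on one thread, the spike appears on whichever core that thread is scheduled on while the other cores stay idle, which matches the per-core graphs mentioned above.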
This is normal behavior for any running program. If the work completes very quickly, Perfmon will barely register the CPU spike. However, because we want the "current" values, the metrics are not cached: the agent has to query the Perfmon API for each value from the appropriate place every single time, so it cannot take advantage of any caching and the collection takes correspondingly longer.
If the collection does not complete within 1 second (the usual Perfmon data collection interval), Perfmon will record the higher CPU usage required to perform the task (what you describe as a peak).
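A rough, illustrative calculation (the 250 ms burst length and the 15-second reporting interval are assumptions for the example, not measured values):

    250 ms busy on 1 core / 1000 ms sample   = 25% on that core for that second (the visible "peak")
    25% on 1 of 4 cores                      = ~6% of total machine CPU for that second
    250 ms busy / 15000 ms reporting period  = ~1.7% average on that core, well below 5%

The same burst can therefore look like a sharp peak in a per-core, per-second graph and still be negligible in the averaged view.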
The CPU peak/load on a system is also influenced by the number of processes running and the overall system load and CPU usage. CPU "load" depends on how many applications are waiting to get the CPU's attention, while "usage" is how many CPU cycles are actually required to do the requested work.
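As an illustration using standard Windows counters (the numbers are hypothetical): "\System\Processor Queue Length" reflects the load side (threads ready and waiting for a CPU), while "\Processor(_Total)\% Processor Time" reflects the usage side (how busy the cores actually are). On a 4-core VM, 8 ready threads give a queue length of 8 even if each only needs a few milliseconds of CPU, so usage can stay low while load is high; conversely, a single long-running thread can show 25% total usage with a queue length of 0.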
The speed at which these tasks can be performed depends on the available RAM (is paging required?), current interrupt and I/O activity, the available CPU frequency, and the generation of motherboard, RAM and CPU in use. The longer the system needs to execute a task, the more likely Perfmon is to report higher CPU usage for that process.
On top of that, your server runs in a virtual machine. The host it runs on can also slow down all processes inside the VMs running on it (they become sluggish) if the host is oversubscribed and CPU and/or I/O resources go to another VM (for example, one running a backup that causes high CPU usage for compression and heavy I/O to read and write the data) or to a backup job running on the ESX host itself. The problem is that from inside the VMs you cannot see that these resources are unavailable; the jobs simply take longer to process, and Perfmon reports higher CPU usage for more applications because the same work takes longer than it would on an unloaded and/or faster host.
So, to answer your question: yes, the machine will use resources to process the data, but no, the Perfmon collector does not use excessive resources. It uses just what is required to do the job, and the average CPU usage remains below 5%. Peaks may show up depending on the overall load of the host system and how quickly the data collection can execute.