Troubleshooting NSE processing issues

Article ID: 205352


Products

IT Management Suite, Client Management Suite

Issue/Introduction

This article is based on the following use cases:

  1. A large number of NSEs is accumulating in the EvtQueue folder
  2. The NSEs are not processing fast enough
  3. You want to know where these NSEs are coming from

Environment

ITMS 8.x

Resolution

Overview

The Symantec Management Platform (SMP) uses Notification Server Events (NSEs)—small XML files—to communicate data from endpoints to the server. Slow or stalled NSE processing is a common cause of outdated inventory, delayed policy execution, and overall server performance degradation. 

ITMS 8.5 RU3 and 8.5 RU4 added multiple improvements to NSE processing stability and performance. If you haven't upgraded to the most recent version of ITMS and NSE processing issues are a common problem, we recommend that you upgrade and take advantage of those improvements.

In ITMS 8.6, the Dev team made additional changes. Previously, to find candidates to process, the event engine used stored procedures to pick the single oldest NSE per computer, then the oldest NSE for the next computer, and so on, which is quite an expensive SQL query. Starting with 8.6, the SMP retrieves all NSEs for a computer and performs the processing order at the application level, reducing the impact on SQL.

ITMS 8.8 includes further enhancements that provide a better picture of what could be triggering all those incoming events.

Background:

The most common reason for NSE processing issues is that client machines are sending too many NSEs at once. Most cases are related to very aggressive Inventory policies (sending delta or full inventory too frequently), or to machines that were disconnected from the internal network for a while (because they are not using Cloud-enabled Management (CEM), or because some other network issue with agent connectivity caused NSEs to accumulate in the local queue folder under ...\program files\Altiris\Altiris Agent\Queue). As soon as these machines reconnect, they try to send everything that they were holding.

The NS logs should be the starting point of your troubleshooting efforts.

Suggestions:

The following are suggestions on how you could troubleshoot most NSE processing issues.

The PerformanceSensor entries in the NS logs provide internal statistics that are essential for diagnosing the health of the NSE queues. Understanding the EventQueueDispatcher statistics is the first step in diagnosing server performance issues.

Finding the NSE Processing entries from PerformanceSensor

  1. Open the Altiris Log Viewer on the SMP Server (Start > Symantec > Altiris Log Viewer).
  2. Identify the PerformanceSensor entries. 
  3. Locate the [EventQueueDispatcher] information. This section is the most critical area for determining the health of NSE processing. Similar to this:

[EventQueueDispatcher] [running, enabled]
[612.03 k / 2.89 GB] => [32 / 96 / 6.41 m @ 46(0) t, 53.1 i/s, 2.18:03:03]
[Queues]
[0: 37.35 k / 1.45 GB, full] => [0: 16 / 48 @ 1(0,0) c / 600.95 k @ 16(0) t, 30.3 i/s, 14:02:09] [priority .. 19.07 MB]
[1: 574.68 k / 1.44 GB, full] => [1: 16 / 48 @ 16(0,34) c / 5.79 m @ 16(0) t, 22.2 i/s, 13:12:09] [fast .. 244.14 KB]
[2: 3 / 1.53 MB] => [2: 0 / 0 @ 4(0,0) c / 18.15 k @ 8(0) t, 0.5 i/s, 01:23:44] [default .. 4.77 MB]
[3: 0 / 0 B] => [3: 0 / 0 @ 1(0,0) c / 4 @ 4(0) t, 0.0 i/s, 1.11:16:13] [slow .. 19.07 MB]
[4: 0 / 0 B] => [4: 0 / 0 @ 0(0,0) c / 0 @ 2(0) t, 0.0 i/s, 2.18:03:03] [large, 19.07 MB +]
[Overall]
[threads: 32 @ 0, queue: 32 (max: 0), done: # 6.41 m (48.33 GB), speed: 0.0 i/s (0 Bps)]
succeeded: # 6.40 m (47.55 GB), 0.1 i/s (19.98 KBps), 0.0 / 0.0 / 0.3 / 0.0
failed: # 1.51 k (793.38 MB), 0.0 i/s (554.2 Bps), 0.0 / 0.0 / 0.0 / 0.0
-----------------------------------------------------------------------------------------------------
Date: 10/30 6:33:40 AM, Tick Count: 237845984 (2.18:04:05.9840000), Host Name: SMPServer, Size: 1.11 KB
Process: AeXSvc (10096), Thread ID: 45, Module: Altiris.NS.dll
Priority: 4, Source: PerformanceSensor


Interpreting EventQueueDispatcher Statistics

The [EventQueueDispatcher] section provides a snapshot of the messages waiting for processing and the engine's current speed.

    1. Review the Overall Dispatcher Line:
      • Example: [612.03 k / 2.89 GB] => [32 / 96 / 6.41 m @ 46(0) t, 53.1 i/s, 2.18:03:03]
      • The first part, [612.03 k / 2.89 GB], represents the Total Pending NSEs (count / size) waiting for the system to process.
    2. Review the [Queues] Sub-Section: This breaks down the pending NSEs by internal queue.
      • Example: [1: 135.71 k / 644.61 MB] => [1: 15 / 39 @ 16(14,38) c / 192.88 k @ 16(16) t, 43.2 i/s] [fast .. 244.14 KB]
    3. Identify High Queue Count: Focus on the first number in the queue sample: [1: 135.71 k / 644.61 MB]
      • A count over 50,000 to 80,000 in any single queue is a strong indicator of a backlog. This forces the system to process messages from the same resource sequentially, leading to heavy database query load and overall slowdown. A query to cross-check these counts directly in the database is sketched after this list.
      • Note: The number of items (count) is usually a more dramatic indicator of performance issues than the total size (MB).
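
To cross-check the pending counts that PerformanceSensor reports against what is actually sitting in the database, you can group the EventQueueEntry table by its queueId column (0 = priority, 1 = fast, 2 = default, 3 = slow, 4 = large, as described later in this article). This is a minimal sketch and only assumes the queueId column referenced elsewhere in this article:

--Pending NSE count per internal queue (0=priority, 1=fast, 2=default, 3=slow, 4=large)
select QueueId, count(*) as PendingCount
from EventQueueEntry
group by QueueId
order by QueueId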

Log Entry Analysis:

      • [EventQueueDispatcher]: This is the main NSE processing queue. The logs clearly show the "priority" and "fast" queues are marked as "full", having reached their size limits of ~1.45 GB each.
        [0: 37.35 k / 1.45 GB, full] => [0: 16 / 48 @ 1(0,0) c / 600.95 k @ 16(0) t, 30.3 i/s, 14:02:09] [priority .. 19.07 MB]
        [1: 574.68 k / 1.44 GB, full] => [1: 16 / 48 @ 16(0,34) c / 5.79 m @ 16(0) t, 22.2 i/s, 13:12:09] [fast .. 244.14 KB]
      • This line "[612.03 k / 2.89 GB] => [32 / 96 / 6.41 m @ 46(0) t, 53.1 i/s, 2.18:03:03]" means:
        [pending count / pending size] => [processing now / loaded in memory / total done] @ [total active threads(current) / max threads], speed, uptime
      • This line "[1: 574.68 k / 1.44 GB, full] => [1: 16 / 48 @ 16(0,34) c / 5.79 m @ 16(0) t, 22.2 i/s, 13:12:09] [fast .. 244.14 KB]" means:
        [queue id: pending count / pending size, status] => [queue id: processing / pending in memory @ chains(locked, podcast) / total processed @ active slots(active threads), speed, last activity] [queue name .. max file size]
      • The high number of pending items (over 600k) and the "full" status are the key indicators. The system is unable to process events as fast as they are arriving.

Verifying Incoming NSE Delivery

  1. Locate the [PostEvent] entries. This section reports statistics on the engine that receives NSEs from agents and delivers them to the EventQueueDispatcher.
    Here is an example of this type of entry:
    [PostEvent] [file system]
     succeeded: # 11.48 k (3.76 GB), 0.1 i/s (24.05 KBps), 0.0 / 0.0 / 0.4 / 0.0
     failed: # 1.48 k (308.13 MB), 18.3 i/s (3.85 MBps), 0.0 / 0.1 / 33.8 / 39.3
    -----------------------------------------------------------------------------------------------------
    Date: 10/30 6:33:40 AM, Tick Count: 237845984 (2.18:04:05.9840000), Host Name: SMPServer, Size: 410 B
    Process: AeXSvc (10096), Thread ID: 45, Module: Altiris.NS.dll
    Priority: 4, Source: PerformanceSensor

  2. Check the "failed" statistics. The failure count indicates how many NSEs were not delivered.
    • A significant failure count here can be a sign that the EventQueueDispatcher is full or failing (e.g., due to the queue count exceeding an internal core setting like EvtQueueMaxCount), causing the server to reject new NSEs:
       failed: # 1.48 k (308.13 MB), 18.3 i/s (3.85 MBps), 0.0 / 0.1 / 33.8 / 39.3

When the server rejects NSEs, the Symantec Management Agent (SMA) will attempt to resend them later, but the agents may appear disconnected or stale in the console (KB 208217).


After having a better picture of what the NS logs are saying, you can now move to troubleshooting the NSE queue issues.  

1. Understand what those NSEs are and where they are coming from.

Try to identify what type of NSEs they are (Basic Inventory, Hardware Inventory, login/logoff events, etc.), as well as whether they are coming from specific machines.

There are two ways to identify these incoming NSEs:

Using "Event Data Analytics":


With the ITMS 8.8 release, there is a new feature for System Health: Metadata Statistics (for Event Data Analytics). There are new reports that should help you narrow down patterns and identify which policies may need adjustments.

Refer to "Using Event Data Analytics for understanding SMP Server performance


Using SSETools:

You can use the SSETools "NSE diagnostics" feature, which can help you see the NSE type (displayed under Scenario Counts) and which machines they are coming from (under Resource Counts).

 

NOTE: SSETools only analyzes file events in EvtQueue. However, these may be just a small fraction of all events; smaller NSEs are kept directly in the database as inline messages.

Another tool that can be used when a deeper analysis is needed is: Evaluating NSE data using SQL.

NOTE: In some situations, where too many NSEs are received and they are being processed faster than you can review them, you can capture them and save a copy in a different folder. See: How to Capture processed NSEs on the Notification Server.
You can also capture "bad" NSEs that are being ignored. See: Enable collection of bad NSEs for review.

If you prefer to use the information available in the database, you can use queries to show what may be happening:

NOTE: ITMS 8.8 includes the following reports under Reports > Notification Server Management > Server > Event Queue:

      • Pending Events
      • Processed Events Summary
      • Processed Events Timeline

--Average NSE count per computer over the last 24 hours
declare @compcount as int = (select count (*)  from vcomputer)

select ItemName, count (ResourceGuid) / @compcount as AvgPerComputer
from Evt_NS_Event_History h
where _eventTime >= GETDATE ()-1
group by ItemName
order by 2 desc

--Find which machines have the most NSEs in the queue (above 500)
select c.Name, e.[Source], count(*) as EventCount
from EventQueueEntry e
join vRM_Computer_Item c on c.Guid = e.[Source]
group by c.Name, e.[Source]
having count(*) > 500
order by 3 desc

--Machines and their NSE totals for a specific period of time:

select c.Guid, c.Name, count(*) as EventCount
from Evt_NS_Event_History h
join vRM_Computer_Item c on c.Guid = h.ResourceGuid
where 1 = 1
and h._eventTime between '2024-12-22 06:00:00.00' and '2024-12-23 11:00:00.00' 
group by c.Guid, c.Name
order by 3 desc

--Take one of the GUIDs from the previous query with the highest counts:

select _eventTime, ItemGuid, ItemName, ResourceName
from  Evt_NS_Event_History 
where ResourceGuid = 'Add computer GUID here' 
and _eventTime between '2024-12-22 06:00:00.00' and '2024-12-23 11:00:00.00' 
order by _eventTime
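
To see when the bursts of events arrive within the same window, a variant of the queries above (a minimal sketch using only the Evt_NS_Event_History columns already shown) can group the counts by event type and hour:

--NSE counts per event type and hour for a specific period of time
select ItemName, datepart(hour, _eventTime) as HourOfDay, count(*) as EventCount
from Evt_NS_Event_History
where _eventTime between '2024-12-22 06:00:00.00' and '2024-12-23 11:00:00.00'
group by ItemName, datepart(hour, _eventTime)
order by 3 desc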

Knowing which machines may be the biggest offenders and what type of NSEs are being sent, you should be able to narrow down why those machines are sending that many NSEs (for example, if they are sending Basic Inventory more than once a day, collecting Inventory too frequently, etc.).

If you notice that multiple machines are sending a large number of NSEs and their local queues (under ...\program files\Altiris\Altiris Agent\Queue) still contain many NSEs, you can use the "FlushAgentEvents" core setting to instruct client machines to stop sending those NSEs and clear out their own queues. See KB: Clearing queued events on endpoints.

 

 

2. Verify PauseActivities is not Enabled on the SMP server.

If you notice that multiple NSEs are coming in but nothing seems to be processing, check whether the SMP services (Altiris Services service, Altiris File Receiver Service, and Altiris Client Message Dispatcher service) are stopped.
Also check whether the following registry values are set to 1 (1 = activities are paused, 0 = processing normally):

HKEY_LOCAL_MACHINE\SOFTWARE\Altiris\eXpress\Notification Server\PauseActivities
HKEY_LOCAL_MACHINE\SOFTWARE\Altiris\eXpress\Notification Server\PausedNSMessaging

3. Enable extra verbosity on the NS logs for NSE processing.

Open the Altiris Log Viewer on the SMP server and enable the desired options under Options > Extended verbosities.

This reveals a large amount of additional statistics in the logs for analysis.

 

4. Verify that there is no issue with poor SMP or SQL Server performance.

This is a more complicated step to validate since you will need to monitor the current state of your SQL server and depend on a DBA to do some troubleshooting. 
With recent versions of the SMP (8.1 and later), the NS logs should show you a quick snapshot of what your systems are doing. Look for "PerformanceSensor" source in the NS logs. It should look like this:

[SYSTEM]
 [app cpu: 0%, ram: 301.34 MB / 1%, uptime: 57.11:50:59.1137164]
 [ns cpu: 3%, ram: 4.70 GB / 24%, uptime: 55.18:31:50.3437500]
 [sql cpu: 4%, ram: 9.15 GB / 58.5% (Available physical memory is high), cpu history %: 23 / 3 / 3 / 3 / 3 / 17 / 4 / 3 / 3]
 [ns machine: SMP-MAIN (V), ram: 19.53 GB, cpu: 1x1995Mhz, versions: 8.5.5032.0, assembly: 8.5.5032.0]
 [sql machine: sql-main (V), ram: 15.62 GB, cpu: 1x1, affinity: 2 (AUTO), version: 13.0.5026.0 / Enterprise Edition (64-bit) / SP2, trip: 320]
 [pc physical: 0, virtual: 5, managed: 5, connectivity: 5, hierarchy: 0, ps: 1, ts: 2]
 [.NET 4.0.30319.42000]
-----------------------------------------------------------------------------------------------------
Date: 12/18 11:16:37 AM, Tick Count: 672484485 (7.18:48:04.4850000), Host Name: SMP-MAIN, Size: 835 B
Process: AeXSvc (3064), Thread ID: 47, Module: AeXSVC.exe
Priority: 4, Source: PerformanceSensor

Vital information about CPU and memory usage on both the SMP and SQL servers is displayed, as well as allocated memory, whether the servers are virtual or physical, and more.

Using the same "PerformanceSensor" source in the NS logs, you should be able to see queue information:

[Queues]
 [0: 0 / 0 B] => [0: 0 / 0 / 275 @ 16(0) t, 0.0 i/s, 04:18:00] [priority .. 19.07 MB]
 [1: 0 / 0 B] => [1: 0 / 0 / 4.85 k @ 16(0) t, 0.5 i/s, 02:42:10] [fast .. 244.14 KB]
 [2: 0 / 0 B] => [2: 0 / 0 / 24 @ 8(0) t, 0.0 i/s, 3.17:16:11] [default .. 4.77 MB]
 [3: 0 / 0 B] => [3: 0 / 0 / 0 @ 4(0) t, 0.0 i/s, 57.11:53:08] [slow .. 19.07 MB]
 [4: 0 / 0 B] => [4: 0 / 0 / 0 @ 2(0) t, 0.0 i/s, 57.11:53:08] [large, 19.07 MB +]
[Lifetime]
 [t=0, a=0, q=0, peak=0, done=5,146, speed=0.00, bps=0]
-----------------------------------------------------------------------------------------------------
Date: 12/18 11:20:32 AM, Tick Count: 672719407 (7.18:51:59.4070000), Size: 817 B
Process: AeXSvc (3064), Thread ID: 46, Module: AeXSVC.exe
Priority: 4, Source: PerformanceSensor

This should give you an idea of how busy the queues are, which queue is the busiest, and whether the default or custom values are used for the queue processing core settings. The example entry above shows a normal state: no busy queues, using the default core setting values.

NOTE: We have 5 queues (represented by the queueId column in EventQueueEntry table and the Id column in EventQueue table):

      • 0 - priority queue
      • 1 - fast
      • 2 - normal
      • 3 - slow
      • 4 - large

MaxConcurrentPriorityMsgsThreadPoolSize is for the priority queue  
MaxConcurrentFastMsgsThreadPoolSize is for the fast queue  
MaxConcurrentDefaultMsgsThreadPoolSize is for the normal/default queue
MaxConcurrentSlowMsgsThreadPoolSize is for the slow queue
MaxConcurrentLargeMsgsThreadPoolSize is for the large queue

After you have an understanding of the resources available and how busy the servers are:

a) You may need to reboot the SQL Server or restart its SQL services.
b) You may need to follow Troubleshoot NSE Processing in 8.x, which provides guidance on truncating the EventQueue tables.

NOTE:  Example of a bad queue processing configuration (from an ITMS 8.7.2 SMP Server having NSE processing issues):

[EventQueueDispatcher] [running, enabled]
 [76.91 k / 3.25 GB] => [300 / 462 / 57.08 k @ 500(300) t, 1.2 i/s, 07:00:24]
[Queues]
 [0: 21.25 k / 1021.10 MB] => [0: 100 / 156 @ 1(0,0) c / 1.91 k @ 100(100) t, 0.0 i/s, 00:00:25] [priority .. 20 MB]
 [1: 52.02 k / 862.45 MB] => [1: 100 / 150 @ 100(98,148) c / 52.42 k @ 100(100) t, 1.0 i/s] [fast .. 244.14 KB]
 [2: 3.64 k / 1.41 GB] => [2: 100 / 156 @ 71(50,0) c / 2.51 k @ 100(100) t, 0.2 i/s, 00:00:11] [default .. 4.77 MB]
 [3: 0 / 0 B] => [3: 0 / 0 @ 16(0,0) c / 245 @ 100(0) t, 0.0 i/s, 00:01:06] [slow .. 20 MB]
 [4: 0 / 0 B] => [4: 0 / 0 @ 1(0,0) c / 2 @ 100(0) t, 0.0 i/s, 02:08:40] [large, 20 MB +]
[Overall]
 [threads: 300 @ 300, queue: 300 (max: 301), done: # 57.08 k (3.21 GB), speed: 1.2 i/s (126.55 KBps)]
 [succeeded: # 57.08 k (3.21 GB), 1.2 i/s (126.55 KBps), 1.1 / 2.6 / 0.3 / 0.9]
 [failed: # 8 (1.47 MB), 0.0 i/s (1.54 KBps), 0.0 / 0.0 / 0.0 / 0.0]
-----------------------------------------------------------------------------------------------------
Date: 4/9/2025 5:47:30 AM, Tick Count: 25236859 (07:00:36.8590000), Size: 1.11 KB
Process: AeXSvc (6416), Thread ID: 261, Module: Altiris.NS.dll
Priority: 4, Source: PerformanceSensor

This server is using 100 threads for each event queue (see the @ 100(100) t values above).
That is too many NSEs being processed at the same time and causes problems rather than performance improvements.
More threads mean more deadlocks.

If you look at their [SYSTEM] log entry:

[SYSTEM]
 [ns cpu: 3%, ram: 9.25 GB / 14%, uptime: 6:47:20]
 [ns machine: SMPNS01 (V), ram: 64.00 GB, cpu: 32x2394Mhz, assembly: 8.7.3391.0, versions: 8.7.3391.0 (4/30/2024) / 8.7.1273.0 (5/4/2023) / 8.6.3268.0 (3/8/2022) / 8.6.1119.0 (2/18/2021) / 8.5.5713.0 (11/16/2020)]
 [ns os: Microsoft Windows Server 2016 Standard, 10.0.14393, en-US, TZ -420]
 [pc physical: 41179, virtual: 94, managed: 25085, policied in 24h: 17687, in cem: 9394, ps: 25, ts: 26]
 [licensing status: Expired: 3, Ok: 6]
 [fixes: 8.5 POST RU4, 8.5 POST RU4 ECV (v2), 8.5 POST RU4 ULM (v1), 8.5 POST_RU4 SMA_SMP (3), 8.6 POST_RU2 SMA_SMP (1), 8.6 POST_RU2 SMP_TS (1), 8.7 POST_RTM SMA_SMP (4), 8.7.2 POST SMA_SMP (9)]
-----------------------------------------------------------------------------------------------------
Date: 4/9/2025 5:47:30 AM, Tick Count: 25236796 (07:00:36.7960000), Size: 914 B
Process: AeXSvc (6416), Thread ID: 261, Module: Altiris.NS.dll
Priority: 4, Source: PerformanceSensor

This SMP server has a total CPU count of 32 (see the cpu: 32x2394Mhz entry above), so a suggestion would be to set the threading like this:

priority queue: 4
fast queue: 4
default queue: 4
slow queue: 2
large queue: 2

Total: 16 threads, which is half of the system power (32 CPUs). 

 

5. Check the index fragmentation on common EventQueue tables

In some scenarios, especially in environments with constant Inventory collection or heavy daily NSE traffic, the EventQueue tables may need to be re-indexed.
Make sure a SQL maintenance plan for the Symantec_CMDB database is in place and that it fits the needs of your environment.

Common KB articles suggested are:

SQL Server Implementation Best Practices and Performance Tuning
SQL Maintenance script for the Symantec Management Platform database
Maintenance of your CMDB - analyzing the defragmentation level of CMDB and performing the defragmentation

Some of the tables whose index fragmentation you should watch are:

EventQueue
EventQueueEntryMetaData
EventQueueStatus 

Especially these two:

EventQueueEntry
EventQueueProcess

If NSE processing is slow, you can try the SQL Server "Rebuild All" and "Reorganize All" functionality on the indexes used by the Event Queue tables.

Note: In some situations, index defragmentation helps only a little and only for a short period of time. The improvement is insignificant with a high volume of NSEs from clients, when the SMP processes them in large quantities, because entries are added and removed right away. However, even that small improvement can allow a good number of NSEs to be processed and get you out of a bottleneck.
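
If you want to check the current fragmentation level of these indexes before rebuilding or reorganizing them, you can query the standard SQL Server DMVs. This is a minimal sketch (run it against the Symantec_CMDB database); adjust the table list and thresholds to your environment:

--Index fragmentation for the Event Queue tables
select object_name(ips.object_id) as TableName, i.name as IndexName,
       ips.avg_fragmentation_in_percent, ips.page_count
from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'LIMITED') ips
join sys.indexes i on i.object_id = ips.object_id and i.index_id = ips.index_id
where object_name(ips.object_id) in ('EventQueue', 'EventQueueEntry', 'EventQueueEntryMetaData', 'EventQueueStatus', 'EventQueueProcess')
order by ips.avg_fragmentation_in_percent desc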

6. Review the current queue status

Check whether there is a discrepancy between how many NSEs the database reports and what the EventQueue folder actually contains. If the EventQueue folder (under C:\ProgramData\Symantec\SMP\EventQueue\EvtQueue) has, for example, 10,000 NSE files but the database shows many more pending, something has usually gone out of sync; for example, the SQL Server is not processing incoming NSEs or it is hung.

You can use a query like this one to get an idea of how many NSEs are in the queue:

--How many NSEs are referenced on the database

select count (*) from EventQueueEntryMetadata

Another test is to see whether an NSE is stuck in the database for processing. Run the following query about once every minute:

select min(id) as Oldest, max(id) as Newest
from EventQueueEntry

If the "oldest" ID is not moving, then it is most likely something is stuck. If that is the case, it is time to follow the recommendations from Troubleshoot NSE Processing in 8.x where you will need to stop services and truncate tables so the NSEs in the queue can start processing again.

7. Check if there is a possible issue with Disk I/O

In most cases, you may need to use Perfmon on your SMP and/or SQL server and analyze how the disks are performing. Issues with the RAID configuration, disk speed, type of disk, etc. can add slowness to how NSEs are written to the physical queues and how that data is read.
It is also essential that common practices like disk defragmentation are in place.

Refer to Microsoft documentation on Perfmon and how to analyze Disk usage.

Also see similar KBs like these:

Create a Performance Monitor counter set for Altiris support
Common Performance Monitor counter thresholds
Creating a Performance Monitor counter set for Notification Server

NOTE: Another area to check is storage drivers, especially if "Page I/O Latch" waits are too high.
 

If the SQL Server is a VMware virtual machine, check that VMware Tools is up to date.

8. Lower the NSE Count that is allowed in the EventQueue folder on the SMP

Having many hundreds of thousands of NSEs in the EventQueue slows down processing because the NS has to search through the database tables as well as the file system. More than 50,000 is not recommended due to the resulting slowness.

NOTE: MaxFileQSize (Default 20,000) has been deprecated and is no longer used to limit the size of the Event Queue. Use Core Setting - EvtQueueMaxCount instead.

9. Review whether Persistent Connections (WebSockets) are used

If Persistent Connections / Time Critical Management / Endpoint Management Workspaces have been configured, be advised that Persistent Connections uses a lot of CPU threads to keep connections open on the SMP. If you don't need Persistent Connections, it is advised to turn them off. If you want to use them, it is advised to make the following changes to the core settings in the Console (Settings > Notification Server > Core Settings). These items appear in the Console if you search the Active Settings for "msgsthreadpoolsize".

NOTE: It is recommended to make these changes on any system where NSE processing is backing up; the values below are appropriate for an SMP with 32 CPUs. Keep the total thread count under 16 if the SMP has 32 CPUs.

Make the following changes:

    • MaxConcurrentPriorityMsgsThreadPoolSize  --> 4
    • MaxConcurrentFastMsgsThreadPoolSize      --> 4
    • MaxConcurrentDefaultMsgsThreadPoolSize   --> 4
    • MaxConcurrentLargeMsgsThreadPoolSize     --> 2
    • MaxConcurrentSlowMsgsThreadPoolSize      --> 2

10. Items that you should collect for troubleshooting this type of issue

Here are some items that should help Support and Engineering get a better idea of what could be triggering a performance issue:

    • Copy of NSEs from C:\ProgramData\Symantec\SMP\EventQueues
    • There is a newer feature in the Console, Core Performance (Settings > Notification Server > Internals > Core Performance).
      • This can be used to keep track of NSE Processing, resource usage, etc. 
    • Collect the evidence as the Altiris Administrator:
      • a) full NS Logs
      • b) profiling session (using Altiris Profiler) for some minutes when the issue is present
      • c) detailed description of hardware used to install SMP + SQL
      • d) results of performance monitoring of the SQL server:  RAM usage, number of instances on the server and their load, HDD queue depth, IOPS performance of temp-db, etc.
      • e) list of tasks/policies and their schedules, that can be a source of the NSE flood
      • f) for each virtualization environment - detailed info about resource preallocation, hardware used, hardware status, etc.

Additional Information