Re-building the NAS tables from scratch and repopulating them from the database.db and transactionlog.db on the nas should resolve GUI synchronization and performance issues.
It is possible to have multiple nas probes, especially in large deployments or at the remote sites of an MSP. In this case, only one nas probe can be the 'master' that provides alarm data to the UMP USM portlet. The nas keeps a copy of all alarms in the database server for the sake of retrieval speed, but in some cases these copies can fall out of sync. This may cause problems when opening the nas probe GUI in Infrastructure Manager, because conf_gui on the client end also checks whether the two data sources (the local nas flat files and the SQL database) are in sync.
Release: UIM all versions
To clear the NAS SQL database tables:
1. Raw configure the nas probe, change activity_logging = no to activity_logging = yes if it is not already set, and save the change
2. Make sure the nas log level is set to at least 3
3. Ensure NiS Bridge is enabled in the nas using the GUI
4. Deactivate the nas probe
5. Issue the following queries in this order, one at a time, using the SLM portlet in UMP (recommended) or SQL Management Studio / SQL Workbench / the database query tool of your choice:
DELETE FROM NAS_VERSION;
DELETE FROM NAS_ALARMS;
DELETE FROM NAS_TRANSACTION_SUMMARY;
DELETE FROM NAS_TRANSACTION_LOG;
DELETE FROM NAS_NOTES;
DELETE FROM NAS_ALARM_NOTE;
6. Activate the nas probe
7. Wait about 10 minutes, then check the size of the tables as before and verify whether the NAS tables have grown large again. You can use the Status tab of the nas GUI to create a test alarm.
8. Open the nas GUI to make sure it opens in a reasonable time without getting stuck on the "synchronizing NiS bridge" message.
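To check whether the tables are repopulating at a reasonable rate after reactivation, a simple row count per table works from the SLM portlet or any SQL query tool. A minimal sketch using the table names cleared above:

```sql
-- Row counts for the NAS tables cleared earlier; run after the nas has
-- been active for ~10 minutes and compare against the pre-cleanup sizes.
SELECT COUNT(*) AS alarm_rows   FROM NAS_ALARMS;
SELECT COUNT(*) AS txn_rows     FROM NAS_TRANSACTION_LOG;
SELECT COUNT(*) AS summary_rows FROM NAS_TRANSACTION_SUMMARY;
SELECT COUNT(*) AS note_rows    FROM NAS_NOTES;
```

If NAS_TRANSACTION_LOG grows back toward its previous size within minutes, that points to excessive alarm/transaction volume rather than a simple desynchronization.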
Note that the nas will repopulate the tables from the information in its local database.db, so no information should be lost; this will also force a "sync" of the alarm state across systems.
If the tables have re-grown to the same size as before, attach the nas.log and _nas.log to the case immediately and contact support again to dig deeper. (NOTE: when the issue is resolved, remember to set your log level back to 1.)
If the above solution doesn't resolve the problem, you may need to first set aside (rename) the following nas db files, then continue with the rest of the solution described above:
- database.db (live alarms)
- transactionlog.db (alarm transaction history)
...especially if the transactionlog.db is greater than 300 MB in size.
Note also that you may need to decrease the nas transaction summary setting in the GUI if it is above the out-of-the-box default, or if your current setting simply results in a large transactionlog.db file.
9. Lastly, if the nas GUI is stuck on synchronizing, you can use the Probe utility to run the get_info callback: select the nas probe and press Ctrl-P to open the utility. If you see a variable called "Ready" with a value of "1", the NiS bridge sync has completed and the GUI itself is hung, so it is safe to 'End Task' the GUI process (conf_nas.exe) or otherwise kill it if you need to.
- The inventory of alarms sent to any one nas probe can be significantly reduced through the use of secondary nases, as each secondary nas probe then manages a subset of the overall alarm inventory.
- Make sure that no custom probes are generating unnecessary noise or traffic to the nas, increasing its overhead. To analyze a custom probe or script, take a close look at what it is doing in terms of alarm processing and decide whether that is optimal. In one case we found a large number of clear messages being sent unnecessarily, which added to nas overhead: the nas Auto Operator (AO) was auto-acknowledging them and adding unneeded events to the transactionlog.db.
- If you choose to deploy remote/secondary nases to cut down on the primary nas overhead, then once the secondary nas probes have been distributed and the Auto Operator logic has been migrated to pre-processor logic (as needed), you will need to set up forwarding/replication rules on each nas probe:
a. On each secondary nas probe, go to Setup -> Forwarding / Replication and create a new rule to forward 'All events to destination (one direction)', with the destination alarm server being the primary nas probe.
b. On the primary nas probe, create a similar replication rule to forward 'As event responder' back to each secondary nas probe. (In reality, these replication queues should be built automatically from step 'a', but double-check that they are constructed correctly.)