Enable cdm to monitor disks under the robot instead of the remote server so the remote/NFS disks appear correctly in USM.


Article ID: 4154


Updated On:

Products

DX Unified Infrastructure Management (Nimsoft / UIM)
CA Unified Infrastructure Management SaaS (Nimsoft / UIM)
Unified Infrastructure Management for Mainframe

Issue/Introduction

This document shows how to enable the switch in cdm that gives remote/NFS disks the same dev_id as the robot hosting the cdm probe, so that the disks appear underneath the robot in USM rather than underneath the remote server. This allows NFS-mounted disks to appear in USM on the servers they are mounted on. The configuration listed also ensures that no historic data is lost. Use this solution when QoS metrics for remote mounted partitions on the machine in question appear in the database (for example, via the SLM portlet) but do not appear in USM.

Environment

UIM 7.x and later
cdm probe version 2.9 and later

Resolution

Metrics in S_QOS_DATA are linked to a dev_id by their ci_metric_id. When the dev_id changes, the ci_metric_id for a metric changes as well; however, the QOS/source/target does not change, so the RN/HN/etc. data tables and table_ids do not change. This ensures that the data remains continuous.
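As a quick check before making any changes, the current linkage can be inspected directly in S_QOS_DATA with a query such as the following (a sketch only; 'robotname' is a placeholder for the robot hosting the cdm probe):

select table_id, qos, source, target, ci_metric_id from S_QOS_DATA where probe = 'cdm' and qos like '%DISK%' and robot like '%robotname%';

The table_id values returned here remain stable; the ci_metric_id values are what will be re-aligned in the steps below.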

The following steps should allow you to "re-align" the existing metrics with the new dev_id. 

1.   Mount the network file systems that need to be monitored. The screenshot below highlights the network devices that have been configured; several local drives have also been mounted.

2.   Configure the allow_remote_disk_info key so that the dev_id is the same for both the NFS and the local drives.

Highlight the cdm probe, press CTRL + right-click, and select Raw Configure.
On the Setup tab, set the key allow_remote_disk_info to 'no' (the equivalent cdm.cfg setting is shown below).
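For reference, this Raw Configure change corresponds to the following key/value pair in the probe's cdm.cfg; the placement under the setup section is how it typically appears, so verify it against your own configuration file:

<setup>
   allow_remote_disk_info = no
</setup>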

3.   Verify in Dr. NimBUS that the dev_id is now the same for both the local and the NFS drives. Search for the disks that have been configured to verify this information. Refer to the images below for additional information.

4.   To monitor NFS file systems for selected servers, open the cdm probe GUI in Infrastructure Manager (IM), right-click the NFS share, select "Enable space monitoring", and Apply the settings. This allows the NFS disk information to be reported to USM.


5.   Clear the niscache folder on the robot where the cdm probe was configured and restart the robot.


6.   Issue a query like the following: 

update S_QOS_DATA set ci_metric_id = NULL where probe = 'cdm' and qos like '%DISK%' and robot like '%robotname%'; 

where 'robotname' is the name of the robot on which the cdm configuration was performed.
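To confirm the update took effect (again a sketch; 'robotname' is a placeholder), the cleared rows can be counted:

select count(*) from S_QOS_DATA where probe = 'cdm' and qos like '%DISK%' and robot like '%robotname%' and ci_metric_id is null;

The count should match the number of disk QoS metrics reported for that robot.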



7.   Deactivate and activate data_engine 

For a few minutes after this, the metrics in question will disappear completely from USM. Once data_engine records a new sample, the ci_metric_id is repopulated with the value linking the metric to the new dev_id, and the metrics should then appear underneath the proper robot/server in USM.
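Once new samples have arrived, the same rows can be re-checked (sketch only; substitute the robot name); the ci_metric_id column should no longer be NULL:

select qos, target, ci_metric_id from S_QOS_DATA where probe = 'cdm' and qos like '%DISK%' and robot like '%robotname%';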

 

8.   Finally, in the UMP interface the NFS drives are populated under the robot instead of under the remote server.

Additional Information

If NFS-mounted file systems do not appear in the list of discovered file systems in the cdm Configure GUI, make sure that they were not excluded by the Filesystem Type Filter defined for the cdm probe.



This is equivalent to the following key value setting in the <disk> section of the cdm.cfg file:

filesystem_type_regex = /^(?!.*?(tmpfs|nfs|iso9660)).*/


This regular expression excludes discovery of filesystem types whose names contain tmpfs, nfs, or iso9660; because nfs matches, NFS mounts are filtered out of discovery.
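If NFS file systems should be discovered, the exclusion can be relaxed by removing nfs from the pattern, for example (an illustrative variant, not a required setting):

filesystem_type_regex = /^(?!.*?(tmpfs|iso9660)).*/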

Attachments