A robot was observed sending cdm QoS data far more often than configured, sometimes every minute or even more frequently. Upgrading to a newer version of the cdm probe did not change this behavior.
For example, here is one disk drive metric from the database: the samplerate is 720 seconds (12 minutes), but the sampletimes are 13:05, 13:04, 13:03, 13:02, and so on.
table_id  sampletime               samplevalue  samplestdev  samplerate  samplemax  tz_offset
10388186  2020-05-25 13:05:41.000  76.14        0.00         720.00      100.00     -36000
10388186  2020-05-25 13:04:30.000  76.14        0.00         720.00      100.00     -36000
10388186  2020-05-25 13:03:58.000  76.14        0.00         720.00      100.00     -36000
10388186  2020-05-25 13:02:49.000  76.14        0.00         720.00      100.00     -36000
10388186  2020-05-25 13:00:29.000  76.14        0.00         720.00      100.00     -36000
10388186  2020-05-25 12:59:58.000  76.14        0.00         720.00      100.00     -36000
10388186  2020-05-25 12:52:31.000  76.18        0.00         720.00      100.00     -36000
10388186  2020-05-25 12:51:58.000  76.18        0.00         720.00      100.00     -36000
10388186  2020-05-25 12:50:49.000  75.97        0.00         720.00      100.00     -36000
10388186  2020-05-25 12:48:29.000  75.93        0.00         720.00      100.00     -36000
10388186  2020-05-25 12:47:58.000  75.93        0.00         720.00      100.00     -36000
10388186  2020-05-25 12:45:30.000  75.93        0.00         720.00      100.00     -36000
10388186  2020-05-25 12:40:30.000  75.93        0.00         720.00      100.00     -36000
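A quick way to confirm the symptom is to compare consecutive sampletimes against the configured samplerate. The sketch below does this for a list of sampletimes like the extract above; it assumes GNU date (available on the Linux hosts in question) and that the rows are fed newest-first, as the database returned them.

```shell
# check_gaps SAMPLERATE_SECONDS -- reads sampletimes (newest first,
# "YYYY-MM-DD HH:MM:SS", one per line) on stdin and warns when two
# consecutive samples are closer together than the samplerate allows.
check_gaps() {
  samplerate="$1"
  prev=""
  while IFS= read -r ts; do
    cur=$(date -d "$ts" +%s) || continue   # GNU date: timestamp -> epoch seconds
    if [ -n "$prev" ]; then
      gap=$((prev - cur))                  # rows are newest-first, so prev > cur
      if [ "$gap" -lt "$samplerate" ]; then
        echo "WARN: ${gap}s gap at $ts (samplerate is ${samplerate}s)"
      fi
    fi
    prev="$cur"
  done
}

# Example with the first two rows above (71 seconds apart, not 720):
printf '%s\n' '2020-05-25 13:05:41' '2020-05-25 13:04:30' | check_gaps 720
```

For the extract above this flags nearly every pair of rows, which is consistent with a second source writing into the same QoS series.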
This happens for most cdm metrics: CPU, Disk, Memory, and Uptime.
Release: 9.0.2/20.x/20.4.x
Component: UIM - CDM WITH IOSTAT
OS: Linux 2.6.32-754*
- cdm data originating from a cloned robot.
cdm QoS messages were still visible in the DrNimbus message sniffer even with the cdm probe stopped, which confirms that the data is coming from a cloned robot elsewhere in the environment.
To resolve the issue, identify the clone and stop it, or clear the niscache on the cloned robot so it generates a new identity.
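Once the clone is identified, the fix on that host can be sketched as below. This is a sketch, not an exact procedure: it assumes a default Linux robot install (commonly /opt/nimsoft) with the niminit control script in bin/; the install path and service name can differ per environment.

```shell
# clear_niscache ROBOT_HOME -- stop the cloned robot, remove its cached
# identity files, and restart it so it registers with a fresh identity.
# ROBOT_HOME is the robot install directory (assumed, e.g. /opt/nimsoft).
clear_niscache() {
  home="$1"
  if [ ! -d "$home/niscache" ]; then
    echo "no niscache at $home/niscache (is this a robot host?)" >&2
    return 1
  fi
  "$home/bin/niminit" stop     # stop the cloned robot
  rm -f "$home/niscache/"*     # clear the cached identity files
  "$home/bin/niminit" start    # restart; the robot rebuilds niscache
}
```

Usage would be `clear_niscache /opt/nimsoft` on the cloned host. After the restart, the duplicate QoS rows should stop appearing for the original robot's metrics.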