mysql data inconsistencies after purging counters table


Article ID: 243402


Updated On:

Products

CA API Gateway

Issue/Introduction

After restarting all nodes (2 DB + 6 processing), we are seeing a lot of errors related to duplicate entries from all of these nodes. Why are we getting these errors, and how do we resolve them?

The following errors appear in /opt/SecureSpan/Gateway/node/default/var/logs/ssg_0_0.log:

2022-02-11T12:26:16.074-0500 SEVERE 1211 org.hibernate.util.JDBCExceptionReporter: Duplicate entry '<IP>-0462348ddc7eac5f04a5bfe3380fba83' for key 'countername'

2022-02-11T12:18:18.120-0500 SEVERE 1338 org.hibernate.util.JDBCExceptionReporter: Duplicate entry '<IP>-0462348ddc7eac5f04a5bfe3380fb608' for key 'countername'

 

Environment

Release : 10.0

Component :

Cause

This is caused by manually deleting entries in the counters table. Purging needs to be handled more gracefully than truncating the table.

Resolution

Purging records can be done from a policy/scheduled task. A policy/service that works on Gateway 10.x with MySQL 8 has been uploaded (see Attachments).

The attached backend service can be added as a scheduled task that deletes records from the counters table.

The service configuration can be modified. As uploaded, the service purges 4000 records at a time that have not been updated in 3 months, looping 50 times for a total of 200,000 records per run.
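The batched delete the service performs can be sketched as follows. This is a simulation against SQLite rather than the Gateway's MySQL database, and the two-column counters schema here is a simplified assumption; only the batch size (4000), iteration count (50), and 3-month cutoff come from the service description above.

```python
import sqlite3
import time

BATCH = 4000        # rows deleted per iteration (service default)
ITERATIONS = 50     # loop count (service default)
# Cutoff of roughly 3 months ago; last_update is assumed to be stored
# in milliseconds since the epoch, as in the Gateway counters table.
CUTOFF_MS = int((time.time() - 90 * 86400) * 1000)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (countername TEXT, last_update INTEGER)")

# Seed 10,000 stale rows last updated ~100 days ago.
stale_ms = int((time.time() - 100 * 86400) * 1000)
conn.executemany(
    "INSERT INTO counters VALUES (?, ?)",
    ((f"counter-{i}", stale_ms) for i in range(10_000)),
)
conn.commit()

total_deleted = 0
for _ in range(ITERATIONS):
    # SQLite lacks DELETE ... LIMIT by default, so the batch is selected
    # by rowid; on MySQL this is simply DELETE ... WHERE ... LIMIT 4000.
    cur = conn.execute(
        "DELETE FROM counters WHERE rowid IN "
        "(SELECT rowid FROM counters WHERE last_update < ? LIMIT ?)",
        (CUTOFF_MS, BATCH),
    )
    conn.commit()
    total_deleted += cur.rowcount
    if cur.rowcount == 0:  # nothing stale left; stop early
        break
```

Deleting in small batches like this keeps each transaction short, which avoids long locks on a table the processing nodes are actively updating.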

A Gateway restart to clear memory should not be needed, since these appear to be stale entries.

HOW TO:

Three objects need to be created: a JDBC connection to the SSG database, the service, and the scheduled task.

1. Create a JDBC connection to the SSG database.

2. Create a policy.

3. Import the attached XML.

4. Modify the JDBC assertions to use the JDBC connection created in step 1 (choose one connection).

5. Go to Tasks -> Global Settings -> Manage Scheduled Tasks.

6. Click Add.

Additional Information

The attached SSG-Counters service also performs OPTIMIZE TABLE counters.

Default: deletes 5000 records per iteration, looping 10 times. One run deletes up to 50,000 rows whose last_update field has not been updated for 15 days.

DELETE FROM counters WHERE last_update / 1000 < UNIX_TIMESTAMP(NOW() - INTERVAL 15 DAY) LIMIT 5000

These settings are configurable and should be adjusted for your environment; for example, the goal might be for the task to delete all rows older than 10 days.

Context variable: counter_purging_iterations (number of loop iterations)

Context variable: timeFrame  15  (days)

Context variable: limitValue  5000  (controls how many rows to delete per "Run Assertions for Each Item" iteration)
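How the three context variables could fit together can be sketched as below. The variable names come from the policy; the helper function and string substitution are illustrative assumptions, since the real service wires these values in through policy assertions.

```python
def build_purge_query(time_frame_days: int, limit_value: int) -> str:
    """Build the per-iteration DELETE, parameterized by timeFrame and limitValue."""
    return (
        "DELETE FROM counters "
        f"WHERE last_update / 1000 < UNIX_TIMESTAMP(NOW() - INTERVAL {time_frame_days} DAY) "
        f"LIMIT {limit_value}"
    )

# Defaults from the article:
counter_purging_iterations = 10   # loop count
timeFrame = 15                    # days
limitValue = 5000                 # rows per iteration

query = build_purge_query(timeFrame, limitValue)
# Upper bound on rows deleted in one scheduled run:
max_rows_per_run = counter_purging_iterations * limitValue  # 10 * 5000 = 50,000
```

Raising limitValue deletes more rows per statement (longer locks), while raising counter_purging_iterations deletes more rows per run in the same small batches; tune the pair to your table size and write load.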

Audit Details with timers are also included to help track the number of records deleted and how long each run takes.

 

Attachments

1695224076228__SSG_Counters.xml