SRM records purge is not working as configured - archive event_staging

Article ID: 255521

Products

CA Spectrum DX NetOps

Issue/Introduction

In the archive preferences, the purge option is enabled and the retention time is configured to 90 days.

However, data older than 90 days still remains in the database.
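
A quick way to confirm the symptom is to check for event rows older than the retention window. This is a minimal verification sketch, assuming the events live in the reporting.event table with a `time` column (matching the event_staging definition shown later in this article):

-- Count event rows older than the 90-day retention window
SELECT MIN(`time`) AS oldest_event,
       COUNT(*)    AS rows_older_than_90_days
FROM reporting.event
WHERE `time` < NOW() - INTERVAL 90 DAY;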

Looking at the SRM Tomcat logs, the following error is displayed repeatedly:

------------------------------

Nov 12, 2022 00:00:00.008 (CrystalOnlyUserManager) (SRMArchiverLog) - (INFO) - SRM Event threshold data archiver task started

Nov 12, 2022 12:00:00 AM (CrystalOnlyUserManager) (org.quartz.core.JobRunShell) - (ERROR) - Job DEFAULT.eventArchiverJob threw an unhandled Exception: 
org.springframework.scheduling.quartz.JobMethodInvocationFailedException: Invocation of method 'run' on target class [class com.aprisma.spectrum.app.repmgr.dc.archiver.ThresholdDataArchiver] failed; nested exception is org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL [INSERT INTO archive.archive_activity_log (  log_id,  logger_name,  process_description,  process_start_time,  process_end_time,  num_records_loaded,  parent_log_id ) VALUES ( NULL, ?, ?, NOW(), NULL, NULL, ? )]; Column 'process_end_time' cannot be null; nested exception is java.sql.SQLIntegrityConstraintViolationException: Column 'process_end_time' cannot be null
 at org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean$MethodInvokingJob.executeInternal(MethodInvokingJobDetailFactoryBean.java:276) ~[spring-context-support-5.3.23.jar:5.3.23]
 at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:75) ~[spring-context-support-5.3.23.jar:5.3.23]
 at org.quartz.core.JobRunShell.run(JobRunShell.java:202) [quartz-2.3.2.jar:?]
 at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [quartz-2.3.2.jar:?]
Caused by: org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL [INSERT INTO archive.archive_activity_log (  log_id,  logger_name,  process_description,  process_start_time,  process_end_time,  num_records_loaded,  parent_log_id ) VALUES ( NULL, ?, ?, NOW(), NULL, NULL, ? )]; Column 'process_end_time' cannot be null; nested exception is java.sql.SQLIntegrityConstraintViolationException: Column 'process_end_time' cannot be null

------------------------------

Environment

Release: 22.x

Cause

A failure when inserting a record into the SRM archive database prevents the purge from completing: the INSERT into archive.archive_activity_log supplies NULL for the process_end_time column, which the archive schema does not allow.
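
To confirm the constraint on the column, you can inspect the table definition from the MySQL prompt. This is only a verification sketch; the exact column definition may vary by release:

-- Show the archive_activity_log definition and check whether
-- process_end_time is declared NOT NULL
SHOW CREATE TABLE archive.archive_activity_log\G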

Resolution

The solution is to drop the SRM archive DB so that it is recreated empty. Since the customer is using the purge option instead of archive, the archive DB is no longer used or needed (confirm this with the customer before proceeding, as dropping it deletes any previously archived data).

1. On the OC/SRM machine, connect to the MySQL prompt:

(Windows):

bash -login

cd $SPECROOT\mysql\bin\

./mysql -uroot -p<password for mysql>


(Linux): 

cd $SPECROOT/mysql/bin/ 

./mysql --defaults-file=../my-spectrum.cnf -uroot -p<password for mysql>

 

2. Select the archive database:

mysql> use archive


3. Drop the archive database:

mysql> drop database archive;


4. Restart the OC Tomcat service.


Restarting Tomcat recreates the archive DB, now empty; this is required for SRM to work. You can verify the result with the check shown after these steps.

5. The next purge attempt should run at the following midnight.
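
As an optional check after the restart, you can confirm from the MySQL prompt that the archive database was recreated. This is a quick verification sketch, not part of the documented procedure:

-- Confirm the archive database exists again and list its recreated tables
SHOW DATABASES LIKE 'archive';
SHOW TABLES IN archive;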

Additional Information

After restarting the OC Tomcat, you may get the following exception:

Dec 27, 2022 12:57:38.711 (SRM/Startup/Container) (SRMArchiverLog) - (ERROR) - Exception creating archive staging table:event_staging
Caused by: org.springframework.jdbc.BadSqlGrammarException: StatementCallback; bad SQL grammar [ALTER TABLE archive.event_staging ENGINE = MyISAM]; nested exception is java.sql.SQLSyntaxErrorException: The storage
 engine for the table doesn't support native partitioning
        at org.springframework.jdbc.support.SQLExceptionSubclassTranslator.doTranslate(SQLExceptionSubclassTranslator.java:93)

This happens because the table does not exist, and SRM tries to create it like the reporting.event table, which is an InnoDB partitioned table.
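
You can confirm the engine mismatch with a query like the following. This is a sketch based on the reporting and archive schema names used in this article:

-- Compare storage engines: reporting.event is expected to be InnoDB
-- (partitioned), while the archive tables are expected to be MyISAM
SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE (table_schema = 'reporting' AND table_name = 'event')
   OR table_schema = 'archive';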

Since all archive tables, including event_staging, use the MyISAM engine, you need to drop the table and re-create it as below:

1. Stop Tomcat service.

2. Use the archive database:

mysql> use archive

3. Drop the archive.event_staging table:

mysql> drop table archive.event_staging;

4. Create the event_staging table:

CREATE TABLE `event_staging` (
  `event_key` bigint unsigned NOT NULL AUTO_INCREMENT,
  `event_id` binary(26) NOT NULL,
  `landscape_h` int unsigned NOT NULL,
  `model_key` int unsigned NOT NULL DEFAULT '0',
  `time` datetime NOT NULL,
  `type` int unsigned NOT NULL,
  `creator_id` int unsigned NOT NULL,
  `event_msg` text,
  `server_precedence` int unsigned DEFAULT '0',
  PRIMARY KEY (`event_key`,`time`),
  UNIQUE KEY `event_id_model_key_idx` (`event_id`,`model_key`,`time`),
  KEY `landscape_idx` (`landscape_h`),
  KEY `model_key_idx` (`model_key`),
  KEY `time_idx` (`time`),
  KEY `landscape_and_time_idx` (`landscape_h`,`time`),
  KEY `type_idx` (`type`),
  KEY `creator_id_idx` (`creator_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb3 MAX_ROWS=1000000000 AVG_ROW_LENGTH=500;

5. Restart the Tomcat service.
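
Optionally, once Tomcat is back up, you can verify that the new table was created with the expected engine. This is a quick check against the table created above:

-- Confirm event_staging exists in the archive schema with ENGINE=MyISAM
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = 'archive' AND table_name = 'event_staging';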