CCS Single versus Multiple Asset Collection, Evaluation, and Reporting (CER) Workflow Differences

Article ID: 181002


Products

Control Compliance Suite Exchange, Control Compliance Suite Windows


Resolution

This article helps you understand the workflow differences between single-asset and multiple-asset jobs.

Note: For ease of understanding, this article only covers a single UNIX asset type.
This article covers the following topics to help you understand the workflow: data collection, evaluation, reporting synchronization, and troubleshooting tips.

Data collection

Assets or asset groups for a selected standard are scoped to a collection job. The assets that are relevant to the selected standard form the asset scope for that standard.
Note: Data collection is check-based.
The data collection is performed in the following manner:
  1. Only the fields that are part of the data collection query, as contained in the check or algorithm definition, are selected for collection.
  2. The fields of an entity that span multiple checks are combined into a single data collection query. Depending on the query combination logic used, there can be one or more queries per entity (datasource) per collection job.
For example:
Two checks that use the same entity might be scoped to two different target types. One check can be scoped to AIX 5.x machines and the other to AIX 6.1 machines. This is typical of patch queries, which can be version-specific. Such queries are not combined, and the applicable assets are scoped accordingly.
  3. All applicable assets are scoped in the query scope section. The maximum number of assets scoped to a query depends on which of the following limits is reached first:
  • The maximum query length of 512 KB (affected by filters, the number of assets, and the number of fields).
  • The default asset batch size that is set in the DPS settings.

[Image: Asset Batch Size setting in the DPS settings]

If either limit is reached, a new query with the same definition is created and scoped to the remaining set of assets. This is typical of large jobs.
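The interaction of these two limits can be pictured with a short sketch. The 512 KB query-length limit and the DPS asset batch size setting come from this article; the function name, the default batch value, and the per-asset size estimate are hypothetical stand-ins for internal CCS logic.

MAX_QUERY_BYTES = 512 * 1024          # documented query-length limit
DEFAULT_ASSET_BATCH_SIZE = 200        # assumption: the real default comes from DPS settings

def batch_assets(query_definition, assets, batch_size=DEFAULT_ASSET_BATCH_SIZE):
    """Split the asset scope into batches; a new query with the same
    definition starts whenever either limit is reached first."""
    batches, current = [], []
    size = len(query_definition.encode())
    for asset in assets:
        asset_bytes = len(asset.encode()) + 16   # rough per-asset filter overhead (assumed)
        if current and (size + asset_bytes > MAX_QUERY_BYTES
                        or len(current) >= batch_size):
            batches.append(current)
            current = []
            size = len(query_definition.encode())
        current.append(asset)
        size += asset_bytes
    if current:
        batches.append(current)
    return batches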
  4. The queries are run by the RMS data collector, and the results are returned to the DPS collector.
  5. The data is returned in the form of reply XMLs that map to the specific queries being executed.
  6. The returned data contains successful data rows and/or error/warning messages, which are handled as follows:
    • Messages of type information or warning are bypassed and displayed in the Failures tab of the data collection job.
    • If an error or critical message appears, the asset associated with the error is used to determine that, due to a catastrophic failure, the data collected from the asset might be incomplete, leading to inaccurate compliance results.
    • If an asset returns an error or critical severity message from RMS, the collected data is not stored in the B_Imports table in the production database. Future evaluations against this asset run on older data, or, if no data is present, the asset is not evaluated against the given standard.
    • The same asset can have a successful collection against one standard in the job but not against another, depending on where the failures occur. For example, an error can occur in a datasource that is queried by one standard and not by another.
  7. The returned data is stored in a zipped binary blob known as a query result set (QRS). For each asset in a successful collection job, a new row is added to B_Imports. The zipped binary itself is a collection of query result sets.
  8. A manifest XML maintains a mapping between the checks and the QRS in the zipped binary that contains the data for each check. The manifest is contained in the zipped binary.
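A minimal sketch of the storage decision and packaging described in steps 6 through 8. The severity rules, the B_Imports table, and the manifest/QRS layout are from this article; the function, the message fields, and the zip entry names are assumptions for illustration.

import io
import zipfile

ERROR, CRITICAL = "Error", "Critical"

def store_collection_result(messages, query_result_sets, manifest_xml):
    # Information/warning messages only surface in the Failures tab; they
    # do not block storage. Any error or critical message means the data
    # may be incomplete, so nothing is written to B_Imports for the asset.
    if any(m["severity"] in (ERROR, CRITICAL) for m in messages):
        return None
    # Package the query result sets (QRS) together with the manifest XML
    # that maps each check to the QRS holding its data.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as blob:
        blob.writestr("manifest.xml", manifest_xml)
        for name, qrs in query_result_sets.items():
            blob.writestr(name, qrs)
    return buf.getvalue()   # stored as one new B_Imports row for the asset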
Evaluation

Successful evaluation depends on the presence of collected data in the B_Imports table for that asset for the standard against which it is being evaluated.
  1. Evaluation against a single asset first checks for the presence of data for that standard in the B_Imports table. If multiple assets are scoped to the job, the same process is repeated for every asset against every standard scoped in the job.
  2. If the collected data is absent, an error indicating this is shown in the Failures tab of the job.
  3. When the collected data is found, the asset is evaluated against the checks in that standard on the DPS evaluator.
  4. During evaluation, assets are not combined. For example, if there are 10 assets in the job to be evaluated against 2 standards, 20 sub-jobs are created, and each DPS worker process thread handles one sub-job.
  5. If there are multiple evaluation data processing servers, the evaluation jobs created by the load balancer are spread across them in a round-robin manner (see the sketch after this list).
  6. The evaluation result set (ERS) is then stored in the database in the R_AssetSummary, R_AssetStandardSummary, R_Checks, and R_CheckResults tables.
  7. Whether the job contains a single asset or multiple assets does not affect data storage, because results are stored by Evaluation ID, Target ID, Check ID, Standard ID, Standard Version, and Check Version, which together form a unique record for every check that was evaluated.
  8. The entry date identifies the latest evaluation information.
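The fan-out in steps 4 and 5 can be sketched as follows; the function and server names are hypothetical, and the real load balancer is internal to CCS.

from itertools import cycle

def create_evaluation_sub_jobs(assets, standards, evaluation_dps):
    # One sub-job per (asset, standard) pair, spread round-robin across
    # the available evaluation data processing servers; each DPS worker
    # process thread then handles one sub-job.
    servers = cycle(evaluation_dps)
    return [{"asset": a, "standard": s, "dps": next(servers)}
            for a in assets for s in standards]

jobs = create_evaluation_sub_jobs([f"asset{i}" for i in range(10)],
                                  ["STD-A", "STD-B"],
                                  ["dps1", "dps2"])
assert len(jobs) == 20   # 10 assets x 2 standards, as in step 4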
Reporting synchronization

Reporting synchronization starts at the end of the evaluation job.
  1. There is no difference between how a single asset and multiple assets are synchronized.
  2. The evaluation extractor extracts all the job-specific evaluation results and prepares XML data for synchronization.
  3. This data is synchronized into the reporting database in batches, independent of the asset count.
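A minimal sketch of this batching, assuming a hypothetical batch size and push function; the actual values and mechanism are internal to CCS.

SYNC_BATCH_SIZE = 500   # assumption: the real batch size is a CCS internal setting

def synchronize_to_reporting(evaluation_xml_rows, push_batch):
    # Batches depend only on the number of result rows, not on how many
    # assets were in the job, so single- and multiple-asset jobs
    # synchronize the same way.
    for start in range(0, len(evaluation_xml_rows), SYNC_BATCH_SIZE):
        push_batch(evaluation_xml_rows[start:start + SYNC_BATCH_SIZE])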
Troubleshooting tips

  1. Large number of unknown outcomes in the evaluation results:
    • Verify the data collection date from the standard-based view in the evaluation result viewer.
    • N/A indicates that no successful data collection has happened. Such assets return a “Failed to retrieve collected data from SQL” error in the Failures tab.
    • An older data collection date points to potentially obsolete data and can contribute to unknowns.
  2. If the data collection date of a standard for an asset shows up as N/A in the evaluation result viewer:
    • Check the Failures tab of the data collection or CER job.
    • Group the failure messages by ‘State’.
    • Any critical or error level messages result in the data collection for that asset being rejected.
    • Look for “Asset X is unavailable. Verify that it is available or reachable” messages in the collection job and trace errors for that asset in the same job. This helps reduce the unknowns in the evaluation results that are caused by failed data collection.
    • All warnings and information messages are ignored.
