Consolidating History Data into a Central SCDS


Article ID: 50684



Products: JARS, JARS Resource Accounting, JARS SMF Director



What is CA's recommended approach for the following production/QA z/OS environment, where all SMF data must be loaded or forwarded into a central SMF Director (SMFD) SCDS?

  1. There are multiple SYSPLEXes - one for the production LPARs (PRODPLEX) and another for three QA LPARs (QASPLEX).

  2. A single non-SMS DASD volume, called QSHR00, is shared between the QASPLEX LPARs and the PRODPLEX; however, the catalogs are separate, and any QSHR00 dataset allocation from PRODPLEX must specify VOL=SER=QSHR00.

  3. There is no JES connectivity between the QASPLEX LPARs and the PRODPLEX LPARs; however, it is possible to submit JCL jobstreams via FTP using the z/OS FTP command "site filetype=jes".

  4. The PRODPLEX has eight (8) active LPARs, the busiest of which dumps SMF data at roughly 8-10 minute intervals during the business day.

  5. All SMF data for the PRODPLEX and QASPLEX LPARs must be accessible from the central SCDS and SMFD; any type of "SMF data forwarding" suggested by CA is acceptable.

  6. Within PRODPLEX, the SYSA LPAR has a limited DASD pool, shared with the other PRODPLEX LPARs, specifically for data forwarding (this pool is also SMS-managed).
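For item 3, the FTP-based job submission can look roughly like the following session. The "site filetype=jes" command is the standard z/OS FTP JES interface; the host name, user ID, and file name below are illustrative assumptions only.

```
ftp> open prodplex.example.com        (hypothetical host name)
ftp> user QAUSER                      (enter the password when prompted)
ftp> quote site filetype=jes          (switch the FTP server to JES mode)
ftp> put consol.jcl                   (the JCL is submitted as a job, not stored)
ftp> quote site filetype=seq          (optional: return to normal file transfers)
ftp> quit
```

After "site filetype=jes", a PUT causes the FTP server to submit the transferred JCL to JES and return the assigned job ID rather than writing a dataset.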

We are considering having SMFD installed on each of the QASPLEX LPARs, a separate SCDS for SYSA, and then the centralized SCDS for the other PRODPLEX LPARs.

What we want to understand, with CA's guidance, is how SMFD is best implemented in an "SMF data forwarding" scenario. The QASPLEX LPARs and the SYSA LPAR will each have their own SCDS, but the SMF data dumped on those "remote" LPARs (on a time-interval basis via "I SMF" commands) and archived by each LPAR's SMFD must then be loaded into the central SCDS, possibly with periodic SMFD executions during the day - this is where we need guidance. How do we supply the correct SOURCE/DUMP parameters and associated input DD allocations when doing the remote forwarding of SMF data?

We will definitely be able to allocate datasets for SYSA (on the SMS-managed, cataloged DASD pool accessible from PRODPLEX) and for the QASPLEX LPARs on the QSHR00 DASD volume (no space problems are expected - we can manage this requirement).

So, has CA worked with clients who need to "push" SMF data from detached LPARs to a central SCDS, so the SMF data can be made available on the centralized PRODPLEX for MICS and other SMF-related application processing, under CA SMF Director control, most often with EXTRACT processing and the MICS interface? This is where we are looking for guidance, particularly with the remote-LPAR SMF data "push" to a central SCDS.


What the summary describes can be boiled down to the following. The data center has a central production system and one or more satellite systems. There are a limited number of DASD devices to which both the production and the satellite systems have write access (in the case described, there was just one). The customer wishes to consolidate all the SMF data collected from the satellite systems into the database referenced by the production system's SCDS.

The course that we recommended, which the customer implemented, was as follows. Each SMF dump on the satellite systems includes a SPLIT statement that writes ALL the SMF data to a SPLIT file located on the volume to which both the production system and the satellite systems have access. The SPLIT file DD statement is coded with DISP=MOD, and each satellite system has its own SPLIT file for this purpose. Periodically, a job on the production system reads each satellite system's SPLIT file with a SOURCE DUMP, which moves the data into the production database. That job then clears the SPLIT files.
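The two halves of this setup might be sketched roughly as below. All dataset names, DD names, and the SMFD program name are illustrative placeholders, and the exact SPLIT and SOURCE DUMP statement syntax must be taken from the CA SMF Director documentation; only the IEBGENER step is standard z/OS JCL (copying a DUMMY input over the file writes an EOF at the start, which is one conventional way to clear a sequential dataset in place).

```
//* --- On each QASPLEX satellite LPAR: SMF dump with a SPLIT file ---
//* (SMFDPGM and the control-statement wording are placeholders)
//SMFDUMP  EXEC PGM=SMFDPGM
//SPLITDD  DD DSN=QA1.SMFD.SPLIT,DISP=MOD,          append, never reset
//            UNIT=SYSDA,VOL=SER=QSHR00             the shared volume
//SYSIN    DD *
  ... normal archive/dump statements ...
  SPLIT statement naming SPLITDD, writing ALL SMF data
/*
//* --- Periodic job on the production system: absorb the SPLIT data ---
//CONSOL   EXEC PGM=SMFDPGM
//QA1IN    DD DSN=QA1.SMFD.SPLIT,DISP=OLD,
//            UNIT=SYSDA,VOL=SER=QSHR00             uncataloged on PRODPLEX
//SYSIN    DD *
  SOURCE DUMP statement naming QA1IN, loading into the central SCDS base
/*
//* --- Then clear the SPLIT file (write an EOF at the start) ---
//CLEAR    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DUMMY,DCB=(RECFM=VBS,BLKSIZE=32760)
//SYSUT2   DD DSN=QA1.SMFD.SPLIT,DISP=OLD,
//            UNIT=SYSDA,VOL=SER=QSHR00
```

One QA1IN-style DD (and one clear step) would be repeated per satellite system, since each has its own SPLIT file. The VOL=SER=QSHR00 coding on the production side reflects requirement 2: the SPLIT files are not cataloged on PRODPLEX.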

The history files on the satellite systems can thereby be set up with minimal retention periods, just long enough to be able to run an EXTRACT in the event of a SPLIT file failure.


Component: JARSSM