With the release of the CA CMDB Connector for z/OS, it is possible to do discoveries on mainframe components and import them into CA CMDB.
On very large mainframe sites, the number of configuration items (CIs) and relationships to discover can run into the tens of millions, particularly when discovery goes down to the component level.
If this large number of discovered items is then imported into CA CMDB, the data load process can run for hours or days.
This problem is not unique to mainframe environments and can occur on other platforms, but mainframes are where very large numbers of discovered items are most likely to be encountered, given the hardware complexity of these sites.
The long run time is an expected result of a very large data load. Given enough time, the load will complete.
The solution is to manage and test the integration process, starting with the question of whether all of the discovered data needs to be loaded into CA CMDB at all. One design principle is that just because data can be loaded into CA CMDB does not mean that it should be. Considerations such as the practical business requirement, or using specific tools for specific purposes, come into play. In other words, does loading the data into CA CMDB align with CMDB principles and business requirements? If the answer is no, then the data may not belong in this repository.
On the other hand, if the answer is yes, then appropriate consideration should be given to planning out the very large data load well in advance.
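One practical way to apply this principle is to pre-filter the discovery output before it is loaded, keeping only the CI classes the business actually requires. The sketch below is a minimal illustration of that idea in Python; the XML element names (`cis`, `ci`, `name`, `class`) and the class values are assumptions for demonstration, not the actual discovery or GRLoader schema.

```python
# Hypothetical sketch: pre-filter a discovery export before loading it
# into CA CMDB, so that only required CI classes are imported.
# The XML structure and class names here are assumed for illustration.
import xml.etree.ElementTree as ET

# Classes the business actually requires in the CMDB (assumed example values).
REQUIRED_CLASSES = {"zOS.Sysplex", "zOS.LPAR", "zOS.Subsystem"}

def filter_cis(xml_text: str, keep_classes: set) -> str:
    """Return the export with <ci> elements outside keep_classes removed."""
    root = ET.fromstring(xml_text)
    for ci in list(root.findall("ci")):
        cls = ci.findtext("class", default="")
        if cls not in keep_classes:
            root.remove(ci)
    return ET.tostring(root, encoding="unicode")

# A tiny sample export (assumed format) with one low-level component CI.
sample = """<cis>
  <ci><name>PLEX1</name><class>zOS.Sysplex</class></ci>
  <ci><name>DATASET.A</name><class>zOS.Dataset</class></ci>
  <ci><name>LPAR01</name><class>zOS.LPAR</class></ci>
</cis>"""

filtered = filter_cis(sample, REQUIRED_CLASSES)
print(filtered.count("<ci>"))  # 2 of the 3 CIs survive the filter
```

Filtering at this stage means the data load only processes items that serve a defined business purpose, which can reduce a multi-day load considerably.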
For background material on the CA CMDB Connector for z/OS, see the product briefing notes at: http://www.ca.com/files/ProductBriefs/mp-33520-cmdb-connect-pb_web_201723.pdf