Symantec Directory: Sizing DXgrid datastore and RAM requirement



Article ID: 50201


Products

CA Directory CA Identity Manager CA Identity Governance CA Identity Portal CA Risk Analytics CA Secure Cloud SaaS - Arcot A-OK (WebFort) CLOUDMINDER ADVANCED AUTHENTICATION CA Secure Cloud SaaS - Advanced Authentication CA Secure Cloud SaaS - Identity Management CA Secure Cloud SaaS - Single Sign On CA Security Command Center CA Data Protection (DataMinder) CA User Activity Reporting

Issue/Introduction

This technical document guides the user through the process of estimating their DXgrid datastore size on disk, as well as how much system memory (RAM) is required to host the data when the DSA starts up.

Environment

Release: CAPUEL99000-12.5-Identity Manager-Blended upgrade to Identity &-Access Mgmt Ente
Component:

Resolution

For CA Directory, the optimal RAM and datastore (DXgrid db file) sizing depends on a number of factors, including the expected year-on-year growth in data and the amount of data that can be returned in a given request. As such, there is no specific formula or hard-line approach as to what is required for CA Directory (DXgrid datastore).

Some best practices are as follows:

  1. Choose 64-bit hardware, OS and DSA.
  2. Follow the rules of thumb outlined below and verify them in a preproduction environment.
  3. Where possible, over-specify the physical RAM - the additional RAM will still be used by the operating system to cache files (zdb, LDIF, logs, etc.), which will improve overall performance.

In summary, the size requirements cannot be determined exactly from a description such as "12 million users with 14 attributes, each with a maximum of 40 characters per attribute".
The best we can do is provide 'rules of thumb'.

A general approach is as follows:

  1. Create an LDIF that mimics what the user entries will look like. If this LDIF will also be used for performance testing, try to avoid indexing highly repetitive attribute values. Utilities such as 'makeldif' can be helpful in generating the LDIF.
  2. Once you have an appropriate LDIF file, use the dxloaddb command-line tool in "dry run" mode (-n switch) together with the "generate statistics" mode to obtain an accurate disk sizing.
  3. Take the "Total Datasize in MB" figure and inflate it to leave room for future growth. Then multiply this figure by 2.5 to get the size in RAM when all the attributes are fully indexed in memory. Round up to the next logical physical memory size.

For example, if the "Total Datasize in MB" figure from dxloaddb is 900, and the number of users (and hence data) is expected to double over the next 3 years, choose a datastore of 1800 MB. Multiplying 1800 by 2.5 gives a provisional physical memory requirement of 4500 MB. The next logical physical memory size is 8 GB, so this would be the recommended physical memory for the DSA.
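The arithmetic above can be sketched as a short Python helper. The growth factor, the 2.5 multiplier, and the rounding to the next common physical memory size follow the rules of thumb in this article; the function name and defaults are illustrative, not part of any CA Directory tooling.

```python
def size_dsa(total_datasize_mb, growth_factor=2.0, ram_multiplier=2.5):
    """Estimate DXgrid sizing from dxloaddb's 'Total Datasize in MB' figure.

    growth_factor  -- expected data growth over the planning period (rule of thumb)
    ram_multiplier -- in-memory size with all attributes fully indexed (x2.5)
    """
    datastore_mb = total_datasize_mb * growth_factor
    provisional_ram_mb = datastore_mb * ram_multiplier
    # Round up to the next "logical" physical memory size (powers of two, in GB).
    ram_gb = 1
    while ram_gb * 1024 < provisional_ram_mb:
        ram_gb *= 2
    return datastore_mb, provisional_ram_mb, ram_gb

# Worked example from the text: 900 MB, doubling over 3 years.
print(size_dsa(900))  # (1800.0, 4500.0, 8)
```

Running it with the example's 900 MB figure reproduces the 1800 MB datastore, the 4500 MB provisional requirement, and the recommended 8 GB of RAM.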

NOTE:

  1. If you are unsure of the 'dxloaddb' command-line options, simply run 'dxloaddb --help' at the system prompt to see the syntax.
  2. If you already have an LDIF file to work with, you can start from step (2) above.

Once you perform a "dry run", the output will look similar to:

DB Files Statistics:

Total Datasize in MB: 6052
Number of entries read: 12521481
Number of entries loaded: 12521481
Amount of db file padding in KB: 36851
Average number of entries per MB: 24489

This is the best method of calculating the size requirements of a DXgrid datastore.
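If you size datastores regularly, the key figure can be pulled out of the dry-run output with a few lines of Python. This is a sketch, not part of the product; the field name matches the statistics block shown above, and the sample text is taken from it.

```python
import re

def total_datasize_mb(dxloaddb_output):
    """Return the 'Total Datasize in MB' value from dxloaddb dry-run output."""
    match = re.search(r"Total Datasize in MB:\s*(\d+)", dxloaddb_output)
    if match is None:
        raise ValueError("'Total Datasize in MB' not found in output")
    return int(match.group(1))

# Sample statistics block from this article.
sample = """DB Files Statistics:
Total Datasize in MB: 6052
Number of entries read: 12521481
Number of entries loaded: 12521481
"""
print(total_datasize_mb(sample))  # 6052
```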

IMPORTANT:

Remember to allow headroom in the DXgrid datastore for new users being added. The amount of headroom depends on the expected growth as well as how long you wish to wait before having to extend the datastore size (see dxextenddb).