Team Center/Webview Sluggish

Article ID: 144892

Updated On:

Products

CA Application Performance Management Agent (APM / Wily / Introscope)
CA Application Performance Management (APM / Wily / Introscope)
INTROSCOPE
DX Application Performance Management

Issue/Introduction

A very busy APM database can grow large very quickly. See https://ca-broadcom.wolkenservicedesk.com/external/article?articleId=143000 for general guidance.

This Knowledge Document explains:
1) Why running SQL Tools did not reduce the DB size.
2) What to do to stabilize the environment for the next 6 months.
3) How to monitor the Team Center sluggish/no-data condition.

Cause

A large and under-optimized database.

Environment

Release : 10.7.0

Component : APM Agents

Resolution

1) SQL Tools
The output provided appears to come from an old version of the product. These log messages reflect a bug which has since been fixed.


The output does not mean the tool failed in its execution; it was exposing an issue which has been fixed.

2) How to stabilize APM for the next 6 months
- Upgrade to HF 54


- Clean up DB monthly

https://ca-broadcom.wolkenservicedesk.com/external/article?articleId=143000

vacuum full analyze appmap_edges;
vacuum full analyze appmap_vertices;
vacuum full analyze appmap_attribs;

Also clean up on CEM tables as appropriate.


- Reduce the data-preserving times


From 
introscope.apm.data.preserving.time=60 DAYS
To:
introscope.apm.data.preserving.time=30 DAYS

From 
introscope.apm.alert.preserving.time=30 DAYS
To:
introscope.apm.alert.preserving.time=15 DAYS
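
These properties are typically set in the Enterprise Manager's IntroscopeEnterpriseManager.properties file (this is the standard APM location; verify the file in your installation). For example:

# IntroscopeEnterpriseManager.properties (on the MOM)
introscope.apm.data.preserving.time=30 DAYS
introscope.apm.alert.preserving.time=15 DAYS

A restart of the Enterprise Manager is generally required for property changes to take effect.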


Other things worth mentioning are:

a) The performance of the cluster - Monitor how the collectors are performing. Support needs to look at the perflog and other resources such as CPU, memory, response times, and any log entries for errors.

Monitor collector load distribution: check whether agents are evenly distributed and the collectors are equally loaded.

b) The performance of the database - Monitor the size of the database and the rate at which it is growing. The following queries help identify what is overwhelming the database.

select count(*), type from at_evidences group by type;

select min(START_TIME),min(END_TIME) from apm.AT_EVIDENCES;
select min(START_TIME),min(END_TIME) from apm.AT_STORIES;

select count(*) from appmap_vertices;
select count(*) from appmap_attribs;
select count(*) from appmap_edges;

select bytes/1024/1024 MB from user_segments where segment_name='APPMAP_ATTRIBS';
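
Note that the user_segments query above is Oracle-specific. On a PostgreSQL-backed APM database, a comparable size check (a sketch, assuming the table is visible on the search path) is:

select pg_size_pretty(pg_total_relation_size('appmap_attribs'));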

 

The above queries help Support understand how the DB is performing.


3) How to monitor the Team Center sluggish/no-data condition

 

Team Center sluggishness occurs for many hidden reasons. The most common include:

1) DB performance (often visible as DB stalls)

2) Oversized tables

3) Periodic full-map fetches caused by the update interval time. The larger the interval, the more changes accumulate in the map, causing lag and sometimes blank screens. This is fixed in later versions.

4) A sluggish MOM that is busy harvesting data from the collectors.
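
For reason 1), on a PostgreSQL-backed database the pg_stat_activity view can expose currently long-running statements (a sketch; available with these column names in PostgreSQL 9.2 and later):

select pid, now() - query_start as duration, state, query
from pg_stat_activity
where state = 'active'
order by duration desc;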

Additional Information

DB Performance:

We often see that the database is undersized or not properly configured. It is very important to give the database an ample amount of available resources such as RAM for caching underlying data (at least 1 GB). Do a health check first to see whether the database is configured properly. Over time the shared memory gets exhausted, causing sluggishness that is often visible in the Investigator as DB stalls and queries taking longer. This also applies to shared_buffers, which specifies how much memory is dedicated to the database.
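
For a PostgreSQL-backed APM database, these memory settings live in postgresql.conf. A minimal sketch (the values below are illustrative assumptions; size them against the host's available RAM):

# postgresql.conf
shared_buffers = 1GB          # memory dedicated to the database's buffer cache
effective_cache_size = 4GB    # planner hint for combined OS and database caching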

 

Secondly, oversized tables.

These mostly include the AT tables (AT_EVIDENCES, AT_STORIES) and the application map tables such as appmap_vertices, appmap_attribs, and appmap_edges. If these tables are huge, retrieving data from them consumes a significant amount of time.

Configuring the query log will record the lag.
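
On PostgreSQL the query log can be configured in postgresql.conf via log_min_duration_statement, which logs any statement that runs longer than the given threshold (the 1000 ms value below is an illustrative assumption):

# postgresql.conf
log_min_duration_statement = 1000   # log statements taking longer than 1 second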

 

Fetching full map:

Periodic requests for the ATC live-mode map are of two types: partial and full. A full map is requested when the difference between the versions is too large. If many changes are happening, the versions will always differ significantly, so a full map is always sent. Lengthening the update interval can lead to this problem. It is fixed in later versions.