Reaper file threshold


Article ID: 48820


Updated On:

Products

APPLICATION DELIVERY ANALYSIS SUPERAGENT
CA Infrastructure Performance

Issue/Introduction

Description:

ISSUE

A customer is getting an error stating:

FilePumpManager: Fetching of data files from Harvesters will resume when the number of RPR files in input/staging directories (101) falls below ReaperFileThreshold (100).

DETAILS

You can increase that threshold value, but it probably won't help. The default threshold is 100 files.

The reason this happens is that the data pump in RA pulls RPR files from the harvesters, but something interrupts processing toward the DSA. Usually this is a datashare access issue, and so the DSA does not pull down the CSV files generated from those RPR files.

The data pump eventually stops processing new RPR files but keeps pulling them from the harvesters, so RPR files pile up in the staging folder on the RA master.

Once the backlog hits the reaper file threshold, the data pump stops pulling files from the harvesters altogether and data flow to the DSA stops completely. You will still see 1-minute data in this scenario.
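Conceptually, the gate behaves like the minimal Python sketch below. This is illustration only: the staging path is a placeholder, and the real check is internal to the pump service, which reads its threshold from the .exe.config file described later in this article.

import glob
import os

# Placeholder values for illustration; the actual FilePumpManager logic is
# internal to the RA data pump service.
STAGING_DIR = r"D:\netqos\rpr_staging"   # hypothetical input/staging path
REAPER_FILE_THRESHOLD = 100              # default value

def should_fetch_from_harvesters():
    # Count the RPR files currently waiting on the RA master.
    backlog = len(glob.glob(os.path.join(STAGING_DIR, "*.rpr")))
    # Fetching from the harvesters resumes only once the backlog
    # falls below the threshold.
    return backlog < REAPER_FILE_THRESHOLD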

This causes RPR files to back up on the harvesters as well. When you finally resolve the problem with the DSA and the CSV files are pulled off the RA master, the data pump will try to pull down all outstanding files from the harvesters again. If, because of the earlier backlog, the harvesters now hold more files than the threshold, the data pump hits the threshold again and you are back to square one.

Solution:

You can increase the threshold to a very large value, but that risks filling the drive on the RA master with RPR files because the data pump will pull so many at once. The right approach depends on how many harvesters there are, how long the system has been in this state, and how many files are backed up.

What you need to do is temporarily move the RPR files out of the input folder on the harvesters to another location, and let the data pump work through the RPR files on the RA master until the count falls below the threshold. Alternatively, move some RPR files out of the staging folder on the RA master to get below that number, then put them back as the data pump processes the remaining files.
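If you want to script the "move files out" step on the RA master, a rough sketch looks like this. The staging and holding paths are placeholders for your environment, and the threshold must match the value in the pump service config:

import glob
import os
import shutil

STAGING_DIR = r"D:\netqos\rpr_staging"    # placeholder; use your real input/staging path
HOLDING_DIR = r"D:\temp\rpr_holding"      # any temporary folder with enough free space
THRESHOLD = 100                           # keep in sync with ReaperFilesThresholdPerHarvester

os.makedirs(HOLDING_DIR, exist_ok=True)
rpr_files = sorted(glob.glob(os.path.join(STAGING_DIR, "*.rpr")))

# Keep the oldest files (by name) in staging so they are processed first,
# and park the rest in the holding folder until the backlog clears.
for path in rpr_files[THRESHOLD - 1:]:
    shutil.move(path, HOLDING_DIR)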

Once all the RPR files on the RA master are processed, the data pump will try to pull new ones from all the harvesters. At that point, move the RPR files you set aside (on the harvesters or on the RA master) back into the input/staging folder in small batches, so the data pump does not hit the threshold again.
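The re-feed can be done by hand or scripted along these lines. This is again a sketch with placeholder paths; the batch size and wait interval are arbitrary choices, not product settings:

import glob
import os
import shutil
import time

STAGING_DIR = r"D:\netqos\rpr_staging"    # placeholder input/staging path on the RA master
HOLDING_DIR = r"D:\temp\rpr_holding"      # where the RPR files were parked earlier
THRESHOLD = 100
BATCH_SIZE = 25                           # small enough to stay well under the threshold

held = sorted(glob.glob(os.path.join(HOLDING_DIR, "*.rpr")))

for start in range(0, len(held), BATCH_SIZE):
    # Wait until the data pump has worked the current backlog down far enough
    # that adding another batch will not trip the threshold.
    while len(glob.glob(os.path.join(STAGING_DIR, "*.rpr"))) + BATCH_SIZE >= THRESHOLD:
        time.sleep(60)
    for path in held[start:start + BATCH_SIZE]:
        shutil.move(path, STAGING_DIR)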

Keep doing this until the RA master has retrieved and processed all files from all the harvesters, and the DSA has pulled down the resulting CSV files and loaded them into the DSA database. At that point file flow is back to normal and RA should work normally again.

We have seen this situation play out after RA upgrades that ran into a DSA issue, or after the password for the DSA credentials was reset and that broke RPR file processing.

The reaper file threshold can be increased by editing the NetQoS.ReporterAnalyzer.PumpService.exe.config file located in:

D:\netqos\reporter\NetQoS.ReporterAnalyzer.PumpService\bin

Open the file and look for this entry, and increase the value:

<add key="FilePumpManager.ReaperFilesThresholdPerHarvester" value="100"/>

Restart the RA data pump service.

Environment

Release: RAIB1H99000-9.1-Network Flow Analysis-Interface Bundle-Hardware
Component: