Spectrum 10.4.x, 21.2.x
The CA Spectrum documentation explains how Spectrum Trap Storm detection works under the "How Trap Storm Detection Works" section:
Broadcom TechDocs : CA Spectrum 21.2.x - Trap Management Subview
What it does not explain is the underlying code used to make that determination.
According to the references noted above, you can enable trap storm detection at the SpectroSERVER level or at the level of an individual modelled device. If devices modelled in CA Spectrum legitimately send more than 20 traps per second, adjust traps_per_sec_storm_threshold so that trap storm detection does not limit the SpectroSERVER's ability to receive those traps.
When the traps received from any one device reach the configured thresholds, the SpectroSERVER identifies that rate as a trap storm. The SpectroSERVER stops handling traps from that device; traps from other devices are not blocked. Trap storm detection is evaluated per IP address of each managed or unmanaged device (trap source) that sends traps to the SpectroSERVER, so you can configure each device to send traps at an appropriate rate.
The underlying check reduces to the following expression, where trap_storm_size corresponds to the configured traps_per_sec_storm_threshold:
in_storm = ( sum/TrapStormLength >= trap_storm_size ) ? TRUE : FALSE;
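For illustration, the expression can be wrapped in a small, self-contained C function. This is a sketch based only on the formula above, not Spectrum's actual source; the name in_trap_storm and the macro names are assumptions, while the default values (a 5-second window and a 20 traps-per-second threshold) come from the worked example below.

#include <stdbool.h>

/* Defaults taken from the worked example in this article. */
#define TRAP_STORM_LENGTH              5UL   /* TrapStormLength, in seconds */
#define TRAPS_PER_SEC_STORM_THRESHOLD 20UL   /* traps_per_sec_storm_threshold */

/* Hypothetical helper mirroring the expression above. SpectroSERVER
 * evaluates this per trap-source IP address; "sum" is the number of
 * traps counted for one source over the sampling window. The division
 * is integer division, matching the C-style expression quoted above. */
static bool in_trap_storm(unsigned long sum)
{
    return (sum / TRAP_STORM_LENGTH) >= TRAPS_PER_SEC_STORM_THRESHOLD;
}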
The "sum" is the number of traps received over a time period. Using the above formula above and the default values for traps_per_sec_storm_threshold and TrapStormLength, if the device received 100 (sum) traps in 3 seconds, the calculation would be as follows:
100 / 5 >= 20
In the above scenario, even though the sample of traps arrived within a 3-second period, the formula averages the count over the full 5-second TrapStormLength window. Because that average (100 / 5 = 20 traps per second) meets the 20 traps-per-second threshold, Spectrum detects a trap storm, asserts an alarm, and stops processing traps from that device until the rate falls back below the configured parameters.
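Plugging the article's numbers into a standalone version of that check reproduces the result (again a sketch; the variable names are illustrative):

#include <stdio.h>

int main(void)
{
    const unsigned long sum = 100;             /* traps counted in the window */
    const unsigned long trap_storm_length = 5; /* TrapStormLength, in seconds */
    const unsigned long threshold = 20;        /* traps_per_sec_storm_threshold */

    /* 100 / 5 = 20, which meets the 20 traps-per-second threshold. */
    int in_storm = (sum / trap_storm_length) >= threshold;
    printf("in_storm = %s\n", in_storm ? "TRUE" : "FALSE"); /* prints TRUE */
    return 0;
}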