Shaft service will not run

Article ID: 223728

Products

Security Analytics

Issue/Introduction

The shaft process was updated with a new routine to better identify applications. This update exposed a weakness in the packet-to-flow code that can cause the shaft process to crash with a segmentation fault (SEGV), leaving the solera-shaft service in a failed state.
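
To confirm that this is the issue, check the state of the solera-shaft service and look for the SEGV fault in its journal (standard systemd commands; the service name is taken from the output shown under Additional Information):

systemctl status -l solera-shaft.service
journalctl -u solera-shaft.service --no-pager | grep -i segv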


Environment

Release: 8.2.2-55137

Resolution

An updated shaft process for the 8.2.2 release is available. Contact Support to obtain the update.

The fix is in the 8.2.5 release.
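
After the updated shaft binary has been installed, restart the Solera services and confirm that the shaft service reaches and stays in the active (running) state. A minimal check, using the scotus helper and service name shown in the session below:

scotus stop
scotus start
systemctl status solera-shaft.service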

Additional Information

[root@yourhost ~]# systemctl status solera-shaft.service
● solera-shaft.service - Solera Shaft Service
   Loaded: loaded (/etc/systemd/system/solera-shaft.service; enabled; vendor preset: disabled)
   Active: failed (Result: signal) since Fri 2021-09-10 18:49:11 EDT; 2 days ago
  Process: 13685 ExecStopPost=/bin/sh -c systemctl unset-environment SHAFT_HIGHEST_FLOW_ID (code=exited, status=0/SUCCESS)
  Process: 13181 ExecStart=/usr/sbin/shaft $SHAFT_OPTIONS -S $SHAFT_HIGHEST_FLOW_ID (code=killed, signal=SEGV)
  Process: 13167 ExecStartPre=/bin/sh -c /usr/bin/systemctl set-environment SHAFT_HIGHEST_FLOW_ID=$(/sbin/calculate_high_flow_id) (code=exited, status=0/SUCCESS)
  Process: 13004 ExecStartPre=/usr/local/bin/clean_shared_memory.sh (code=exited, status=0/SUCCESS)
  Process: 13001 ExecStartPre=/bin/sh -c if [ ! -d $FLOWS_DIR ]; then mkdir -p $FLOWS_DIR; else /bin/true; fi (code=exited, status=0/SUCCESS)
  Process: 12993 ExecStartPre=/usr/sbin/shaft -L (code=exited, status=0/SUCCESS)
 Main PID: 13181 (code=killed, signal=SEGV)

Sep 10 18:49:05 yourhost shaft[13181]: (sa.core.shaft.rules) Found mask of 0 sub mask = 0 for Simple rul...d64a3
Sep 10 18:49:05 yourhost shaft[13181]: (sa.core.shaft.rules) adv attribute mask count for rule = 5c86ecd...false
Sep 10 18:49:05 yourhost shaft[13181]: (sa.core.shaft.rules) RULE CLASS [0] : RULE 5c86ecdc-15e4-488d-9b...ULLY.
Sep 10 18:49:05 yourhost shaft[13181]: initializing interface 5 0 reindexing
Sep 10 18:49:06 yourhost shaft[13181]: *** FAULT *** pgm=shaft sig=SEGV addr=0x7de8f5e01000 code=2 (perm...nied)
Sep 10 18:49:06 yourhost shaft[13181]: SEGV --- begin stack ---
Sep 10 18:49:06 yourhost shaft[13181]: SEGV frame 00: +0x00084 shaft::sa::sys::crashlog::on_signal(int, ...oid*)
Sep 10 18:49:11 yourhost systemd[1]: solera-shaft.service: main process exited, code=killed, status=11/SEGV
Sep 10 18:49:11 yourhost systemd[1]: Unit solera-shaft.service entered failed state.
Sep 10 18:49:11 yourhost systemd[1]: solera-shaft.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
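
As the hint indicates, the status output above is truncated. The full crash backtrace and fault address can be read with either of the following:

systemctl -l status solera-shaft.service
journalctl -u solera-shaft.service --no-pager
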
[root@yourhost ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/sda9                        7.6G  3.7G  3.6G  51% /
devtmpfs                         126G     0  126G   0% /dev
tmpfs                            126G   43G   84G  34% /dev/shm
tmpfs                            126G  4.1G  122G   4% /run
tmpfs                            126G     0  126G   0% /sys/fs/cgroup
/dev/sda2                        1.5G   34M  1.4G   3% /boot
/dev/sda6                        1.9G  249M  1.6G  14% /gui
/dev/sda4                        7.6G  2.4G  4.9G  33% /ds
/dev/sda8                        7.2T  2.3G  6.8T   1% /home
/dev/sda3                         68G   16G   49G  24% /var
/dev/mapper/captureVG-captureLV  240T  228T   12T  96% /pfs
/dev/mapper/indexVG-indexLV       44T   37T  7.3T  84% /var/lib/solera/meta
gaugefs                          240T  228T   12T  96% /etc/solera/flows
tmpfs                             26G     0   26G   0% /run/user/0
tmpfs                             26G     0   26G   0% /run/user/48
[root@yourhost ~]# scotus restart
Usage: scotus [-iqcnvDh] [<action>]
        [-i]            ignore return values on commands
        [-q]            quiet flag
        [-c]            also call solera service 'clear'
        [-n]            show commands but do not execute
        [-v]            verbose (show commands)
        [-D]            debugging output, show even more commands
        [-h]            show this usage
        <action>        start|stop [Default: stop]
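
Note that restart is not a supported scotus action; the usage text above lists only start and stop, so "scotus restart" simply prints the help. Restart the stack by stopping and then starting, as done in the rest of this session:

scotus stop
scotus start
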
[root@yourhost ~]# scotus stop
scotus: stop Solera and related services
root: scotus: stop Solera and related services
/bin/systemctl stop monit
/bin/systemctl stop shine.timer
/bin/systemctl stop adm-connector
/bin/systemctl stop solera-reindexerd
/bin/systemctl stop offboxd
/bin/systemctl stop tonicd
/bin/systemctl stop httpd.service
/bin/systemctl stop crond
/bin/systemctl stop solera-state
/bin/systemctl stop solera-extractord.service
/bin/systemctl stop solera-pcapexport
/bin/systemctl stop solera-ipfixexport
/bin/systemctl stop solera-ruleEngine
/bin/systemctl stop solera-adl
/bin/systemctl stop solera-gui
/bin/systemctl stop solera-orcfwd
/bin/systemctl stop solera-orcuploader
/bin/systemctl stop solera-mswd
/bin/systemctl stop solera-shaft
/bin/systemctl stop solera-gaugefs
/bin/systemctl stop solera-captured
/bin/systemctl stop solera-fmerge
/bin/systemctl stop solera-accolade
/bin/systemctl stop solera
/bin/systemctl stop prelert-engine
umount /var/lib/solera/meta
umount: /var/lib/solera/meta: target is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
scotus: command 'umount /var/lib/solera/meta' failed with exit value '32'
root: scotus: command 'umount /var/lib/solera/meta' failed with exit value '32'
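
The umount of /var/lib/solera/meta fails because a process still has the index filesystem open; in this session the subsequent scotus start succeeded despite it. If you need to identify what is holding the mount, the fuser and lsof tools mentioned in the error text can be used, for example:

fuser -vm /var/lib/solera/meta
lsof /var/lib/solera/meta
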
[root@yourhost ~]# scotus start
scotus: start Solera and related services
root: scotus: start Solera and related services
/bin/systemctl start  prelert-engine
/bin/systemctl start  solera
/bin/systemctl start  solera-captured
/bin/systemctl start  solera-accolade
/bin/systemctl start  solera-gaugefs
/bin/systemctl start  solera-shaft
/bin/systemctl start  solera-mswd
/bin/systemctl start  solera-fmerge
/bin/systemctl start  solera-orcfwd
/bin/systemctl start  solera-orcuploader
/bin/systemctl start  solera-gui
/bin/systemctl start  solera-adl
/bin/systemctl start  solera-ruleEngine
/bin/systemctl start  solera-ipfixexport
/bin/systemctl start  solera-pcapexport
/bin/systemctl start  solera-extractord
/bin/systemctl start  solera-state
/bin/systemctl start  crond
/bin/systemctl start  httpd
/bin/systemctl start  tonicd
/bin/systemctl start  offboxd
/bin/systemctl start  solera-reindexerd
/bin/systemctl start  adm-connector
/bin/systemctl start  shine.timer
/bin/systemctl start  monit
scotus: start completed
root: scotus: start completed
[root@yourhost ~]# ^C
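
After the services come back up, re-check the shaft service. Immediately after a start it may still show activating (start-pre) while clean_shared_memory.sh runs, as in the first status check below; shortly afterwards it should report active (running):

systemctl status solera-shaft.service
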
[root@yourhost ~]# systemctl status solera-shaft.service
● solera-shaft.service - Solera Shaft Service
   Loaded: loaded (/etc/systemd/system/solera-shaft.service; enabled; vendor preset: disabled)
   Active: activating (start-pre) since Mon 2021-09-13 08:29:13 EDT; 2s ago
  Process: 18653 ExecStopPost=/bin/sh -c systemctl unset-environment SHAFT_HIGHEST_FLOW_ID (code=exited, status=0/SUCCESS)
  Process: 18036 ExecStart=/usr/sbin/shaft $SHAFT_OPTIONS -S $SHAFT_HIGHEST_FLOW_ID (code=killed, signal=SEGV)
  Process: 17806 ExecStartPre=/bin/sh -c /usr/bin/systemctl set-environment SHAFT_HIGHEST_FLOW_ID=$(/sbin/calculate_high_flow_id) (code=exited, status=0/SUCCESS)
  Process: 18729 ExecStartPre=/bin/sh -c if [ ! -d $FLOWS_DIR ]; then mkdir -p $FLOWS_DIR; else /bin/true; fi (code=exited, status=0/SUCCESS)
  Process: 18722 ExecStartPre=/usr/sbin/shaft -L (code=exited, status=0/SUCCESS)
 Main PID: 18036 (code=killed, signal=SEGV);         : 18733 (clean_shared_me)
   CGroup: /system.slice/solera-shaft.service
           └─control
             ├─18733 /bin/bash /usr/local/bin/clean_shared_memory.sh
             ├─18736 /bin/bash /usr/local/bin/clean_shared_memory.sh
             ├─18738 find /dev/shm/var/lib/solera/meta/flows -type f ! -name *.dbread*
             └─18739 /bin/bash /usr/local/bin/clean_shared_memory.sh

Sep 13 08:29:13 yourhost systemd[1]: Starting Solera Shaft Service...
[root@yourhost ~]# systemctl status solera-shaft.service
● solera-shaft.service - Solera Shaft Service
   Loaded: loaded (/etc/systemd/system/solera-shaft.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-09-13 08:33:14 EDT; 15s ago
  Process: 25348 ExecStopPost=/bin/sh -c systemctl unset-environment SHAFT_HIGHEST_FLOW_ID (code=exited, status=0/SUCCESS)
  Process: 25618 ExecStartPre=/bin/sh -c /usr/bin/systemctl set-environment SHAFT_HIGHEST_FLOW_ID=$(/sbin/calculate_high_flow_id) (code=exited, status=0/SUCCESS)
  Process: 25376 ExecStartPre=/usr/local/bin/clean_shared_memory.sh (code=exited, status=0/SUCCESS)
  Process: 25372 ExecStartPre=/bin/sh -c if [ ! -d $FLOWS_DIR ]; then mkdir -p $FLOWS_DIR; else /bin/true; fi (code=exited, status=0/SUCCESS)
  Process: 25363 ExecStartPre=/usr/sbin/shaft -L (code=exited, status=0/SUCCESS)
 Main PID: 26096 (shaft)
   CGroup: /system.slice/solera-shaft.service
           └─26096 /usr/sbin/shaft -m 10 -S 112370590488

Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats packets: Total(t/d/f/w): 0/0/0/...0/0/0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats flows exported: Flows(total/del...: 0/0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats tonic: Total(rq/rp/d/f/nr/df/sf...0/0/0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats queues size(32): 0/0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats queues(32): Packets(l/m/a): 0(0...(0)/0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats reindex flows(t/i/w): 0/0/0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats reindex packets: Total(t/w): 0/...t): 0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats reindex queues(32): Packets(l/m...(0)/0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats reindex pause: Total(s/e/f): 0/...0/0/0
Sep 13 08:33:27 yourhost shaft[26096]: (sa.core.shaft.stats) Shaft stats cache: size: 87, pending delete...te: 0
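
With the service reporting active (running) and the shaft stats messages appearing in the journal, the process has recovered. To watch for any recurrence of the segmentation fault, follow the journal for the unit:

journalctl -u solera-shaft.service -f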