From the user's org / space where the Dataflow service instance (SI) was created, run the following command:
cf t -o p-dataflow -s $(cf service dataflowSI --guid)
where dataflowSI is the name of the Dataflow service instance.
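To confirm the switch worked, you can check the current target and list the apps in the space; the dataflow and skipper backing apps should be visible (a sketch, assuming the org / space names from the step above):

```shell
# Show the currently targeted org and space
cf target

# List apps in the targeted space; the dataflow and skipper
# backing apps should appear here
cf apps
```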
You should now be in the org / space containing the dataflow and skipper backing apps. Next, run the commands below; in this example, the value is increased to 30m. If the error appears in the Dataflow logs, run these commands:
cf set-env dataflow JAVA_OPTS '-XX:MaxDirectMemorySize=30m'
cf restage dataflow
If the error appears in the Skipper logs, run these commands instead:
cf set-env skipper JAVA_OPTS '-XX:MaxDirectMemorySize=30m'
cf restage skipper
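After the restage completes, you can verify that the variable was applied (a sketch; the app names match the commands above):

```shell
# Print the app's environment and show the JAVA_OPTS entry;
# the User-Provided section should contain the new value
cf env dataflow | grep -A1 JAVA_OPTS
cf env skipper | grep -A1 JAVA_OPTS
```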
If memory errors persist after increasing the direct buffer memory, you can enable DEBUG-level logging as shown below and contact Tanzu Support with the Dataflow and/or Skipper logs.
cf set-env dataflow JAVA_OPTS '-Dlogging.level.cloudfoundry-client=DEBUG -Dlogging.level.reactor.ipc.netty=DEBUG'
cf restage dataflow
cf set-env skipper JAVA_OPTS '-Dlogging.level.cloudfoundry-client=DEBUG -Dlogging.level.reactor.ipc.netty=DEBUG'
cf restage skipper
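Note that cf set-env replaces the previous value of JAVA_OPTS rather than appending to it, so if you raised the direct buffer memory earlier, include both settings in a single value (a sketch reusing the 30m value from above):

```shell
# Combine the memory setting and the DEBUG logging flags in one
# JAVA_OPTS value, since set-env overwrites any earlier value
cf set-env dataflow JAVA_OPTS '-XX:MaxDirectMemorySize=30m -Dlogging.level.cloudfoundry-client=DEBUG -Dlogging.level.reactor.ipc.netty=DEBUG'
cf restage dataflow
```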