Both domain orchestrators in the cluster were stopped. Services were stopped and set to manual, the servers were even restarted, and the load balancer was showing the nodes as down as well.
On the installer screen for the load balancer configuration, both the node information and the load balancer hostname were correct.
In the installation.log (located in /PAM/server/c2o/), the only information was the following:
<INFO 2017-05-17 14:32:54,956 DomainLoadBalancerScreen::info> Validating load balancer settings
<INFO 2017-05-17 14:32:54,956 OasisUtil::info> OasisUtil.checkHostName: nodeHostName: myserver.example.com
The problem was that there were network issues on the servers (the Process Automation server and the load balancer).
Use the following URL, substituting the load balancer hostname:
https://loadbalancerhostname/itpam/ServerConfigurationRequestServlet?type=c2oServer&clusterHostName=loadbalancerhostname&process=clusterServer
This should return a page that states the website cannot be reached.
What we saw instead was an error message from the load balancer that it was unable to resolve the address.
This was due to the network issues in this environment.
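For reference, the same distinction can be tested outside the installer with a short script. The following is a minimal sketch only, assuming Python 3 is available on the Process Automation server; loadbalancerhostname is the same placeholder used in the URL above.

#!/usr/bin/env python3
# Minimal sketch of the check the installer performs: request the
# ServerConfigurationRequestServlet URL and distinguish a DNS failure
# ("unable to resolve the address") from a plain connection failure
# ("website cannot be reached"), which is the acceptable outcome here.
import socket
import urllib.error
import urllib.request

LB_HOST = "loadbalancerhostname"  # placeholder: substitute the real load balancer hostname
URL = ("https://" + LB_HOST + "/itpam/ServerConfigurationRequestServlet"
       "?type=c2oServer&clusterHostName=" + LB_HOST + "&process=clusterServer")

try:
    socket.gethostbyname(LB_HOST)
except socket.gaierror:
    print("Cannot resolve " + LB_HOST + "; this matches the failure seen during the upgrade.")
else:
    try:
        urllib.request.urlopen(URL, timeout=10)
        print("Servlet responded; the load balancer check should pass.")
    except urllib.error.HTTPError as exc:
        print("Server responded with HTTP " + str(exc.code) + "; the hostname resolves and is reachable.")
    except urllib.error.URLError as exc:
        print("Connection failed (" + str(exc.reason) + "), but the hostname resolved, "
              "which is the positive result the installer check needs.")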
To get beyond this problem, we added an entry to the hosts file on the primary Process Automation server, mapping the hostname of the load balancer we were trying to reach to the IP address of a load balancer that was not affected by the network issues:
192.0.2.1 myserver.example.com
With this entry in place, the URL check attempts the connection using 192.0.2.1 and gets a positive result. In this case, "the webpage is unavailable" counts as a positive response, whereas "unable to resolve the address" does not.
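Before re-running the installer validation, a quick check like the following sketch (using the placeholder hostname and IP from the example entry above) can confirm that the override is being picked up:

#!/usr/bin/env python3
# Quick sanity check that the hosts file override is in effect before
# re-running the installer's load balancer validation.
import socket

LB_HOST = "myserver.example.com"  # hostname from the hosts file entry above
EXPECTED_IP = "192.0.2.1"         # IP of the load balancer that has no network issues

resolved = socket.gethostbyname(LB_HOST)
if resolved == EXPECTED_IP:
    print(LB_HOST + " now resolves to " + resolved + "; the override is active.")
else:
    print(LB_HOST + " resolves to " + resolved + " instead of " + EXPECTED_IP + "; check the hosts file.")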
From there, the upgrade continued, and the hosts file entry was removed once that check had passed.
It is important to remove the hosts file entry before the primary node is started.
No changes are required to move forward with the upgrade of the secondary node or nodes.