How is communication handled when connecting to a remote device through Infrastructure Manager when there is a tunnel involved between my two hubs?
Release: CNMSPP99000-7.6-Unified Infrastructure Mgmt-Server Pack-- On Prem
Below is an example of how communication is handled between two hubs that have a tunnel configured between them and an Infrastructure Manager instance running on a local workstation.
Suppose you have two hubs in an environment, with tunnels between them.
We'll say "Hub A" is our tunnel server, and "Hub B" is our tunnel client.
Furthermore, we have "Workstation C," which is a separate Infrastructure Manager (IM) workstation with no hub/robot installed.
But let's say that Workstation C and "Hub A" are on the same physical network (same side of the tunnel).
Hub A's tunnel server listens for client connections on port 48003, and so Hub B makes an outgoing connection from a random source port (assigned by the OS) to port 48003 to establish this connection.
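To illustrate the "random source port" part, here is a minimal Python sketch (not UIM code - the 48003 listener is just a local stand-in for Hub A's tunnel server): the destination port of the tunnel connection is fixed, while the client's source port is ephemeral and chosen by the OS.

```python
import socket

# Stand-in for Hub A's tunnel server listening on 48003.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 48003))
server.listen(1)

# Stand-in for Hub B dialing out to the tunnel server: no source port
# is specified, so the OS assigns an ephemeral one.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 48003))
src_host, src_port = client.getsockname()
print(src_port)  # an OS-assigned ephemeral port, not 48003

client.close()
server.close()
```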
Hub A also has some probes running on it - let's say it has 4 probes, and in Infrastructure Manager, these probes are running on ports 48004, 48005, 48006, and 48007 on Hub A.
Now, let's suppose that on Workstation C, which is logged into Hub A, someone clicks on a probe on Hub B to try to configure it.
The IM client on Workstation C will ask Hub A "I need to communicate with a probe on Hub B, can you tell me how to do this?"
Hub A will open a local port - in this case 48008, because that is the "next available" open port in the UIM range (the probes mentioned earlier are holding 48004-48007). Hub A will then say to Workstation C, "Connect to me on port 48008 and I will proxy that traffic across the tunnel to the client which is connected on my port 48003."
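The "next available port" behavior can be sketched in Python (a hypothetical illustration using the example's port numbers, not actual UIM hub code - the real hub would then relay the bytes arriving on this port across the tunnel):

```python
import socket

def next_available_port(base=48008, limit=65534):
    """Find the next free local port in the UIM range and listen on it.

    Hypothetical sketch of the hub's port-allocation step; the real hub
    would go on to proxy traffic from this port across the tunnel.
    """
    for port in range(base, limit + 1):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))
            sock.listen(1)
            return sock, port
        except OSError:
            sock.close()  # port already held (e.g. by a probe); try the next
    raise RuntimeError("no available port below 65534 - the hub would restart")

# Hold 48004-48007 the way the four probes in the example would,
# then ask for a session port starting from the probe range.
held = []
for p in range(48004, 48008):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", p))
    s.listen(1)
    held.append(s)

session_sock, session_port = next_available_port(base=48004)
print(session_port)  # skips the held probe ports

session_sock.close()
for s in held:
    s.close()
```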
Then, suppose the same user now wants to configure a different probe, which is maybe on a different robot, but that robot is still underneath Hub B.
Workstation C says "how do I talk to that robot on Hub B?" and once again, Hub A finds the next available local port, which is now 48009, and says "Connect directly to me, Hub A, on port 48009. I will then handle the rest."
And so on and so forth - let's say 10 users configure 10 different probes, and now we have 10 tunnel sessions, where Hub A has clients connecting on ports 48008 through 48017, and all those connections are then being "proxied" across the tunnel connection.
Hopefully this makes sense so far - as you can see, each individual communication session over a tunnel requires a "local port" on one side of the tunnel which gets proxied across to the remote port.
If all 10 of those tunnel sessions complete successfully, the configurations are finished, and the sessions are closed, then when an 11th user goes to open a new configuration (and thus create a new session), the hub will pick up where it left off - on port 48018, even though ports 48008 through 48017 should now be open again.
Eventually, this counter will keep increasing until the hub is restarted - or until it runs up against port 65534, at which point the hub will see that it can't get an available port and will restart itself.
When working on that issue, we noted that once this "local port" rose above a certain range, we could no longer communicate across the tunnel - and iptables appeared to be the culprit.
What this seemed to indicate, in terms of the above example, was: "Workstation C is trying to connect to Hub A on port 480xx, but iptables isn't allowing the connection."