CSI-driver does not work with new hetzner firewall feature #204
Hey @thbiela, please make sure to allow any traffic from/to 169.254.169.254 (Metadata Service) and 213.239.246.1 (Cloud API).
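As a rough sketch, the advice above could be expressed as outbound rules in the Hetzner Cloud API firewall-rule format. The comment says "any traffic", so the protocols and ports shown here are assumptions (HTTPS for the Cloud API, HTTP for the metadata service); you may need additional rules for other protocols:

```json
{
  "rules": [
    {
      "description": "Hetzner Cloud API (assumed HTTPS)",
      "direction": "out",
      "protocol": "tcp",
      "port": "443",
      "destination_ips": ["213.239.246.1/32"]
    },
    {
      "description": "Metadata service (assumed HTTP)",
      "direction": "out",
      "protocol": "tcp",
      "port": "80",
      "destination_ips": ["169.254.169.254/32"]
    }
  ]
}
```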
Hi @LKaemmerling, thanks for your reply. Unfortunately, it does not work. This is my configuration: If I add "Any IPv4" and "Any IPv6" to the incoming rules, it works, but that makes the firewall useless, of course. Any more tips?
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
@thbiela Is it possible that your Kubernetes nodes use their public IPs (and interfaces) instead of a private network for communication between the nodes?
@jekakm From your screenshots your situation looks similar. Usually, it is not desirable to expose the node ports to the public, but only to the internal network between the nodes. I suspect that your nodes also use the public IPs/interfaces instead of the private ones.
@jekakm @ekeih I do not want to expose the Kubernetes API, as I am using a private network. Everything works fine with the firewall enabled, except the CSI controller. I found out that it must be a UDP port that needs to be allowed: if I allow any incoming UDP connection, it works. I tried to find out which port exactly it is using with netstat, but I do not see any waiting or established UDP connections.
Solved: Incoming UDP port 8472 must be allowed! :-)
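For reference, UDP 8472 is flannel's VXLAN port, which is why it never showed up in netstat: VXLAN encapsulation happens in the kernel, not in a listening userspace socket. A matching inbound rule could look like the sketch below (the 203.0.113.x source IPs are hypothetical placeholders for the cluster's node IPs):

```json
{
  "description": "flannel VXLAN between nodes (placeholder node IPs)",
  "direction": "in",
  "protocol": "udp",
  "port": "8472",
  "source_ips": ["203.0.113.10/32", "203.0.113.11/32"]
}
```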
Thanks for this info! I use Rancher for cluster provisioning, and I thought that by specifying the private IPs during setup the nodes would automatically choose the private network for inter-cluster communication. I have switched the interface by editing a config map and restarting the canal/flannel pods. More information can be found here: https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#canal-network-plug-in-options
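The interface switch described above can also be pinned in the RKE cluster.yml instead of editing the config map by hand. A minimal sketch, assuming canal as the network plugin; the interface name ens10 is an assumption (a common name for a Hetzner private-network interface) and must be adjusted to your setup:

```yaml
# cluster.yml (RKE)
network:
  plugin: canal
  options:
    # Bind flannel/canal to the private-network interface
    # instead of the public one (ens10 is a placeholder).
    canal_iface: ens10
```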
@LKaemmerling How do we adapt to this announcement? Since the Hetzner API service IP will change on March 7 (tomorrow), this will cause disruption for running clusters. Or is inbound traffic not needed for the API?
Hey @mysticaltech, there is no traffic coming from the Hetzner Cloud API, so this does not affect any INBOUND rules. It does affect any OUTBOUND rules you may have configured.
Ok, thanks for the confirmation! 🙏
See issue #139
The controller container shows this log entry multiple times:
Still connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock
I have allowed all outgoing traffic. Incoming traffic is allowed on ports 80 and 443. If I detach the firewall, everything works fine.
Any help is appreciated.