
CSI-driver does not work with new hetzner firewall feature #204

Closed
thbiela opened this issue May 25, 2021 · 12 comments


thbiela commented May 25, 2021

See issue #139

The controller container shows this log entry multiple times:

Still connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock

I have allowed all outgoing traffic. Incoming traffic is allowed on ports 80 and 443. If I detach the firewall, everything works fine.

Any help is appreciated.

Member

LKaemmerling commented May 26, 2021

Hey @thbiela,

please make sure to allow any traffic from/to 169.254.169.254 (Metadata Service) and 213.239.246.1 (Cloud API).
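For reference, outbound rules like that can be sketched with the `hcloud` CLI (the firewall name `k8s-fw` is a placeholder; the metadata service speaks plain HTTP on port 80, the Cloud API HTTPS on port 443 — these are only needed if you restrict outgoing traffic at all):

```shell
# allow nodes to reach the Hetzner metadata service (HTTP)
hcloud firewall add-rule k8s-fw --direction out --protocol tcp --port 80 \
  --destination-ips 169.254.169.254/32 --description "Hetzner metadata service"

# allow the CSI driver to reach the Hetzner Cloud API (HTTPS)
hcloud firewall add-rule k8s-fw --direction out --protocol tcp --port 443 \
  --destination-ips 213.239.246.1/32 --description "Hetzner Cloud API"
```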

Author

thbiela commented May 26, 2021

Hi @LKaemmerling ,

thanks for your reply. Unfortunately, it does not work. This is my configuration:

[screenshot: firewall rule configuration]

If I add "Any IPv4" and "Any IPv6" to the incoming rules, it works, but this makes the firewall useless, of course.

Any more tips?


jekakm commented May 27, 2021

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
I think you should also open NodePorts

Currently I'm using this variant. For master nodes:
[screenshot: fw-masters firewall rules]

and for workers:
[screenshot: fw-workers firewall rules]


ekeih commented May 27, 2021

@thbiela Is it possible that your Kubernetes nodes use their public IPs (and interfaces) instead of a private network for communication between the nodes?

  • If you want to use the public IPs you need to allow those in the firewall, so your nodes can communicate.
  • If you want to use the private IPs (which is more likely) you need to configure your nodes/CNI to use the internal IPs/interfaces. In this case you wouldn't need any special firewall rules because the firewall does not block traffic from the private networks: https://docs.hetzner.com/cloud/firewalls/faq/#can-firewalls-secure-traffic-to-my-private-hetzner-cloud-networks. (You would still need firewall rules if you want to expose the Kubernetes API, etc. to the public, but not for the node communication.)

@jekakm From your screenshots your situation looks similar. Usually, it is not desirable to expose the node ports to the public, but only to the internal network between the nodes. I suspect that your nodes also use the public IPs/interfaces instead of the private ones.

Author

thbiela commented May 27, 2021

@jekakm @ekeih I do not want to expose the Kubernetes API, as I am using a private network. Everything works fine with the firewall enabled, except the CSI controller. I found out that it must be a UDP port that needs to be allowed: if I allow any incoming UDP connection, it works. I tried to find out which port exactly it is using netstat, but I do not see any waiting or established UDP connection.

Author

thbiela commented May 27, 2021

Solved: Incoming UDP Port 8472 must be allowed! :-)


jekakm commented May 27, 2021

@thbiela it's the flannel port, so it looks like your CNI uses the public IPs, as @ekeih said.
In my setup I pass -iface-regex=10\.0\.*\.* to the flannel DaemonSet.
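(For anyone finding this later, roughly what that looks like in the kube-flannel DaemonSet manifest — the image tag and the regex are examples, adjust the regex to your private subnet:)

```yaml
# excerpt from the kube-flannel DaemonSet spec
containers:
  - name: kube-flannel
    image: quay.io/coreos/flannel:v0.14.0
    args:
      - --ip-masq
      - --kube-subnet-mgr
      # make flannel bind the interface whose IP matches the private network
      - --iface-regex=10\.0\.*\.*
```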


ekeih commented May 27, 2021

@thbiela 8472 is the VXLAN port of flannel; this port should definitely not be exposed to the internet. If I remember correctly, exposing it allows anyone to access your cluster network.
The solution proposed by @jekakm is the correct approach to force flannel to use the private network.

Author

thbiela commented May 27, 2021

Thanks for this info! I use Rancher for cluster provisioning, and I thought that by specifying the private IPs during setup the nodes would automatically choose the private network for inter-cluster communication. I have switched the interface by editing a config map and restarting the canal/flannel pods.

More information can be found here: https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#canal-network-plug-in-options
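(For the RKE route, a minimal sketch of the relevant cluster.yml fragment — `canal_iface` is the option documented at the link above; the interface name `eth1` is an assumption for the private-network interface:)

```yaml
# cluster.yml (RKE)
network:
  plugin: canal
  options:
    # bind canal/flannel to the private-network interface
    canal_iface: eth1
```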

@mysticaltech

> Hey @thbiela,
>
> please make sure to allow any traffic from/to 169.254.169.254 (Metadata Service) and 213.239.246.1 (Cloud API).

@LKaemmerling How do we adapt to this announcement? Since the Hetzner API service IP will change on March 7 (tomorrow), this will cause disruption for running clusters.

Or is IN traffic not needed for the API?

Member

apricote commented Mar 7, 2023

Hey @mysticaltech,

there is no traffic coming from the Hetzner Cloud API, so this does not affect any INBOUND rules. It does affect any OUTBOUND rules you may have configured.

@mysticaltech

Ok, thanks for the confirmation! 🙏
