Define Linux Network Devices #1271
base: main
Conversation
/assign @samuelkarp

https://github.com/opencontainers/runtime-spec/blob/main/features.md should be updated too
Force-pushed 51e5104 to 3a666eb.
Updated and addressed the comments.

AI @aojea (document the cleanup and destruction of the network interfaces)
From the in-person discussion today: …
config-linux.md (outdated):

> This schema focuses solely on moving existing network devices identified by name into the container namespace. It does not cover the complexities of network device creation or network configuration, such as IP address assignment, routing, and DNS setup.
>
> **`netDevices`** (object, OPTIONAL) set of network devices that MUST be available in the container. The runtime is responsible for providing these devices; the underlying mechanism is implementation-defined.
This spec says "MUST", but I think a rootless container can't do this because it doesn't have CAP_NET_ADMIN, right?
I'm not sure we should take care of the rootless container.
Could be an error in the case of a rootless container, if the runtime is not able to satisfy the MUST condition.
> Could be an error in the case of a rootless container, if the runtime is not able to satisfy the MUST condition.

+1, but it'd be better to clarify this in the spec.
Added more explanation about the runtime and network device lifecycle and runtime checks, PTAL.
Pushed a new commit addressing those comments; the changelog is: …
@aojea can you squash the commits? Otherwise LGTM, and this is definitely something we can build upon.
Force-pushed ebe9192 to 0f9ee33.
@kolyshkin squashed, thanks
LGTM except for one nit.
Force-pushed 0f9ee33 to 0ce07ae.
The problem in my mind is: where do you draw the line? At what point does "a few netlink API calls" become "a few too many", so that it's no longer acceptable? I agree with Kir that we should keep the scope really limited here so we don't accidentally stray too far into network management in the low-level runtime, which is something we've traditionally left as a higher-level concern for a lot of good reasons.
One more comment -- why is the data structure a map? A map makes sense when we want to quickly find an element based on a key. Is it needed here? Perhaps a simple array is easier.
Interface names are unique per namespace; a map guarantees this, avoiding fat-finger problems or misconfiguration. Otherwise we would need to validate that the names in the array are unique.
I don't think the uniqueness requirement justifies using a map, and I don't think we should be concerned much about uniqueness. If a network device is listed twice, the second entry will fail with "no such device" because the first one will already have been moved. Also, we have other similar fields (Devices, Namespaces, Mounts, etc.) as arrays. I see that time namespace offsets is implemented via maps (and I think it was a mistake, for similar reasons -- an array would work just fine). Having said that, this is not a deal breaker. Also, I'm sure other maintainers may have different opinions. WDYT @opencontainers/runtime-spec-maintainers ?
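For scale, the array-side validation being debated here is only a few lines; a minimal Go sketch (the function and its names are hypothetical, not part of the spec or runc):

```go
package main

import "fmt"

// validateUnique rejects duplicate device names in a hypothetical
// array-style netDevices list -- the check that a map-keyed object
// provides for free.
func validateUnique(names []string) error {
	seen := make(map[string]struct{}, len(names))
	for _, n := range names {
		if _, dup := seen[n]; dup {
			return fmt.Errorf("duplicate network device %q", n)
		}
		seen[n] = struct{}{}
	}
	return nil
}

func main() {
	// Prints: duplicate network device "eth0"
	fmt.Println(validateUnique([]string{"eth0", "eth1", "eth0"}))
}
```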
With the IP configuration part now removed, am I correct that the "easy" solution for configuring IP networking on additional interfaces is still a custom runtime hook? In our case the containers are static and cannot be amended with IP configuration tools or the necessary networking capabilities, and I doubt we are able to inject another container into the pod without causing a stir. IP (and yes, route) configuration would have been a good and standard solution for simple secondary network needs.
My "border line" is about features and fields provided by kernel netlink api. If something is done with it, I don't see big difference between call "move interface xyz to namespace abc" vs. "set property foo to value bar on interface xyz in namespace abc". Or to what it worth, "put value 123 into cgroup foobar". |
"linux": { | ||
"netDevices": { | ||
"eth0": { | ||
"name": 23 |
what is 23 here?
An integer, to cause a bad config, since a string is expected.
To me, this sounds like a runtime should implement most of what `ip(8)` does. If that's the case, here's a crazy idea. Considering that `ip(8)` is a de facto standard on Linux, perhaps such configuration can be achieved by supplying a sequence of `ip` commands. Something like:

```json
"linux": {
    "netDevices": [
        {
            "name": "enp34s0u2u1u2",
            "ct_name": "eth0",
            "config": [
                "ip addr add 10.2.3.4/25 dev $IF",
                "ip link set $IF mtu 1350",
                "ip route add 10.10.0.0/16 via 10.2.3.99 dev $IF proto static metric 100",
                "ip route add default dev $IF"
            ]
        }
    ]
}
```

Which will result in the runtime running the host's `ip` binary for each entry. The upsides are: …
The biggest downside, I guess, is that the scope is not well defined, as …
I don't like the idea of exec'ing commands, especially in Go implementations, which are problematic with goroutines and namespaces. I still think that preventing failures, rather than detecting them at runtime, is desirable: you can argue that you should not add duplicates, but bugs exist, and if somebody accidentally adds a duplicate interface name, it will cause a problem in production that we could have avoided by simply not allowing it to be defined.
I believe that this is a really bad idea, compared to having a limited set of config options implemented. To provide more insight into our use case, here is a snippet of the code that we want to get rid of, currently implemented as an OCI runtime hook (btw, including an already unreliable call to …)
Let me move the milestone from vNext (v1.2.1?) to vNextNext (v1.3.0?)
Do you have an approximate estimate of how long 1.3.0 can take? I have some dependencies on this feature, and it would be nice to be able to account for that time.
@AkihiroSuda It looks like 1.2.1 got tagged yesterday: https://github.com/opencontainers/runtime-spec/releases/tag/v1.2.1. Is there anything blocking the merge of this PR into `main`?
I still don't think it's appropriate to expect the runtime to set up the network interfaces, and I believe we're opening a can of worms here, but not strongly enough to block it outright (especially with Kir approving it, and thus the implied maintenance of it in runc). ❤️
I don't see risk as long as we stick to network interfaces; the moment we leak networks, we start to create runtime dependencies, and I fully agree with you. That is also the reason why the last proposal removed the IP initialization bits entirely (#1271 (comment)). I have some ideas on how to solve this problem without modifying the spec, but my priority now is to solve the existing problem in the ecosystem. The problem this solves is that today there is hardware that needs to use network interfaces, mainly GPUs (but there are other cases). Since there is no way to declaratively move these interfaces into the container, everyone solves the problem from a different angle, by hooking into the Pod/Container network namespace creation using: …

All these solutions are brittle, require high privileges that expose security problems, and are hard to troubleshoot, since they run in different places during container creation; this causes fragmentation in the ecosystem and a bad user experience. The goal here is that developers can just patch the OCI spec to say "move eth2 to this container with /dev/gpu0", which is simple to do with CDI and NRI and needs no extra privileges... I personally don't feel that, as long as we stick to this definition, we will open that Pandora's box that is the network (which none of us want).
I'm just taking a look; I tried to go through all 137 comments, so sorry if I'm missing something.
I guess most concerns are solved now that there is no reference to the IP address and friends? Is there any unaddressed concern still?
The idea LGTM. I've also quickly checked the runc implementation; it seemed clean, and nothing caught my attention (like no weird magic with IPs or anything; that is just not touched at all).
@aojea I don't know this, but how does this work in practice? Is it expected that the host network interface will be configured on the host (i.e. IP address, MTU, etc.) and then moved into the container, with all of the configuration "just working" once moved into the container? Or will CNI or some other entity need to do something?
> This schema focuses solely on moving existing network devices identified by name from the host network namespace into the container network namespace. It does not cover the complexities of network device creation or network configuration, such as IP address assignment, routing, and DNS setup.
>
> **`netDevices`** (object, OPTIONAL) set of network devices that MUST be made available in the container. The runtime is responsible for providing these devices; the underlying mechanism is implementation-defined.
I think this should be an array of objects, and each object should have a string field named `name`. You want an array at some point for the net devices, but here you say object and later you say string. I think here you want to say array of objects, like here: https://github.com/opencontainers/runtime-spec/pull/1271/files#diff-048d23d864e15683f516d2c1768965d546e87f8a59b2606cf2f2d52500ba5a32R127

OHH, you want two names: the host network interface name and the name to assign in the container, right? Maybe you want an array of objects, where each object has two fields: the name of the interface on the host and the name to assign to the container interface?
I mean, you want an array, and none of the types you list (not the free text, but the name of the field with the type in parentheses) is an array. I think some array is missing :)
If you look at the previous proposal, there were multiple attributes before; leaving this as an object allows extending the API in a backwards-compatible way:
"linux": {
"netDevices": {
"eth0": {
"name": "container_eth0",
"new_field_here": "asdasd",
},
"ens4": {},
"ens5": {}
}
}
I still think an object representing a dictionary is better than an array, because it avoids duplicate names in the configuration that can cause runtime errors (#1271 (comment)).
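For illustration, a hypothetical Go binding matching the JSON shape above (type and field names are made up here, not the final specs-go definitions); the map key is what makes duplicate host device names unrepresentable:

```go
// Hypothetical Go binding for the JSON shape above.
package spec

// LinuxNetDevice holds the per-device options; new fields can be added
// here later in a backwards-compatible way.
type LinuxNetDevice struct {
	// Name of the device inside the container namespace; empty means
	// "keep the host name".
	Name string `json:"name,omitempty"`
}

// Linux is the fragment of the config carrying network devices. Using a
// map keyed by the host device name rules out duplicates by construction.
type Linux struct {
	NetDevices map[string]LinuxNetDevice `json:"netDevices,omitempty"`
}
```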
> The name of the network device is the entry key.
> Entry values are objects with the following properties:
>
> * **`name`** *(string, OPTIONAL)* - the name of the network device inside the container namespace. If not specified, the host name is used. The network device name is unique per network namespace; if a network device with the same name already exists, the rename operation will fail. The runtime MAY check that the name is unique before the rename operation.
Just curious, as I'm not very familiar with NRI, and I don't know if this concern makes sense, so please let me know. How can NRI plugins using this decide on the container interface name to use? I mean, choose one that won't clash with the ones set by potentially other plugins? Can they see what has been done so far by previous plugins? Or is this not an issue at all (in that case, can you explain briefly why? I'm curious :))
In Kubernetes, in both main runtimes (containerd and CRI-O), the name of the interface inside the container is always `eth0`, so for 95% of the cases in Kubernetes the problem is easy to solve.

There are cases where people add additional interfaces with out-of-band mechanisms, as in #1271 (comment); in that case, there are several options:

- add a randomly generated name with enough entropy (see the sketch after this list)
- inspect the network namespace and check for duplicates
- fail with a name-collision error
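A minimal sketch of the first option: generating a name with enough entropy while respecting the kernel's 15-byte (IFNAMSIZ-1) limit on interface names. This is purely illustrative; no tool in this thread is confirmed to do exactly this.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randomIfName builds a collision-resistant interface name such as
// "net4f3a2b1c". Linux caps interface names at IFNAMSIZ-1 = 15 bytes,
// so the prefix must stay short.
func randomIfName(prefix string) (string, error) {
	b := make([]byte, 4) // 32 bits of entropy -> 8 hex characters
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	name := prefix + hex.EncodeToString(b)
	if len(name) > 15 {
		return "", fmt.Errorf("interface name %q exceeds 15 bytes", name)
	}
	return name, nil
}

func main() {
	name, err := randomIfName("net")
	fmt.Println(name, err)
}
```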
Exactly, but you can't inspect the netns because it hasn't been created yet. So, how can those tools check which names were used by others before choosing a name for the interface inside the container? E.g. if NRI has several plugins and more than one adds an interface, how can the second plugin know `eth1` is already taken and avoid using that name?

A randomly generated name would be an option, but it would be nice to understand whether that is needed, or whether people can just choose names that avoid collisions.
In Kubernetes the network namespace is created by the runtime, and there will be only an `eth0` interface. If there are more interfaces, it is because some component is adding them via an out-of-band process, and that component will have exactly the same problem. This works today because cluster administrators only set up one component to add additional interfaces.

This reinforces my point in #1271 (comment): using a well-defined specification will help multiple implementations synchronize, and we need this primitive to standardize these behaviors and to build higher-level APIs... we are already doing it for Network Status (kubernetes/enhancements#4817); we need to do it for configuration based on this.
The goal is to decouple the interface lifecycle and configuration from the OCI runtime; that is the part that SHOULD NOT be handled by the OCI runtimes or the actors of pod/container creation. I think there are two scenarios: …
Okay, we were talking over Slack, and there are two things that we think we still need to answer: …

I ask 2 for two reasons: a) to be clear that it is the concern of another part of the stack; b) to understand how this will be used and whether anything else is missing here (i.e. if CNI is expected to handle it, make sure it is not running too late and that it has all the info to realize there is an extra interface to configure; if another component is expected to handle it, see that it can; etc.)
Thanks @rata for your help today. I updated the PR to implement it in runc, with integration tests that show the behavior.

The interface configuration is preserved, so users can set the interface down in the host namespace, configure it (IP address, MTU, hardware address), and the runtime will move it to the network namespace maintaining that configuration. This removes the need to include network configuration in the runtime and allows implementations to use the preparation of the device to configure it without risks (the Kubernetes use case). Users can still decide to use a process inside the container to set up the network configuration, use DHCP, or some sort of bootstrap à la cloud-init.
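As a sketch of that host-side preparation flow, again assuming the github.com/vishvananda/netlink package (the device name, address, and MTU are made-up examples; the actual move and the final "up" state are the runtime's job, per the spec text quoted below):

```go
// Host-side preconfiguration of a device before the runtime moves it.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func preconfigure(hostDev string) error {
	link, err := netlink.LinkByName(hostDev)
	if err != nil {
		return err
	}
	// Take the device down while it is reconfigured on the host.
	if err := netlink.LinkSetDown(link); err != nil {
		return err
	}
	addr, err := netlink.ParseAddr("10.2.3.4/25") // example address
	if err != nil {
		return err
	}
	if err := netlink.AddrAdd(link, addr); err != nil {
		return err
	}
	// The runtime later moves the device into the container namespace,
	// preserving these attributes and setting its state back to "up".
	return netlink.LinkSetMTU(link, 1350) // example MTU
}

func main() {
	if err := preconfigure("eth2"); err != nil { // hypothetical device
		log.Fatal(err)
	}
}
```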
The proposed "netdevices" field provides a declarative way to specify which host network devices should be moved into a container's network namespace.

This approach is similar to the existing "devices" field used for block devices, but uses a dictionary keyed by the interface name instead.

The proposed scheme is based on the existing representation of a network device by `struct net_device`: https://docs.kernel.org/networking/netdevices.html.

This proposal focuses solely on moving existing network devices into the container namespace. It does not cover the complexities of network configuration or network interface creation, emphasizing the separation of device management and network configuration.

Fixes: #1239

Signed-off-by: Antonio Ojea <[email protected]>
Rebased and added this last requirement to preserve the network config:

```diff
diff --git a/config-linux.md b/config-linux.md
index 6682e16..1f0e808 100644
--- a/config-linux.md
+++ b/config-linux.md
@@ -201,6 +201,8 @@ This schema focuses solely on moving existing network devices identified by name
 The runtime MUST check that is possible to move the network interface to the container namespace and MUST [generate an error](runtime.md#errors) if the check fails.
 
+The runtime MUST preserve the existing network interface attributes, like MTU, MAC and IP addresses, enabling users to preconfigure the interfaces.
+
 The runtime MUST set the network device state to "up" after moving it to the network namespace to allow the container to send and receive network traffic through that device.
```