This repository has been archived by the owner on May 12, 2021. It is now read-only.

The virtcontainers network interface is not needed #1179

Closed
sameo opened this issue Jan 25, 2019 · 6 comments · Fixed by #3101
sameo commented Jan 25, 2019

Description of problem

The following network interface:

```go
type network interface {
	// run runs a callback function in a specified network namespace.
	run(networkNSPath string, cb func() error) error

	// add adds all needed interfaces inside the network namespace.
	add(sandbox *Sandbox, hotplug bool) error

	// remove unbridges and deletes TAP interfaces. It also removes virtual network
	// interfaces and deletes the network namespace.
	remove(sandbox *Sandbox, hotunplug bool) error
}
```

is only implemented by the default network implementation and the noop one. The noop implementation exists purely for testing purposes, and there is no known need for another network model implementation.

Expected result

Remove the network interface and only use what's currently the default network interface implementation.
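
After the removal, callers would use the concrete implementation directly instead of going through the Go interface. A minimal sketch of what that could look like (all type and method names below are illustrative stand-ins, not the actual virtcontainers API):

```go
package main

import "fmt"

// Sandbox is a hypothetical stand-in for the virtcontainers Sandbox type.
type Sandbox struct{ id string }

// Network is a concrete type replacing the old `network` interface:
// callers invoke its methods directly, with no abstraction in between.
type Network struct{}

// Run runs a callback in the given network namespace (stubbed here;
// a real implementation would enter the namespace before calling cb).
func (n *Network) Run(networkNSPath string, cb func() error) error {
	return cb()
}

// Add adds the needed interfaces inside the network namespace (stubbed).
func (n *Network) Add(sandbox *Sandbox, hotplug bool) error {
	fmt.Printf("adding interfaces for sandbox %s (hotplug=%v)\n", sandbox.id, hotplug)
	return nil
}

// Remove tears down TAP and virtual interfaces and the namespace (stubbed).
func (n *Network) Remove(sandbox *Sandbox, hotunplug bool) error {
	fmt.Printf("removing interfaces for sandbox %s\n", sandbox.id)
	return nil
}

func main() {
	n := &Network{}
	sb := &Sandbox{id: "sb0"}
	err := n.Run("/var/run/netns/ns0", func() error { return n.Add(sb, false) })
	fmt.Println("err:", err)
}
```

Since Go interfaces are satisfied implicitly, dropping the interface type requires no change to the methods themselves, only to the field and parameter types that referenced `network`.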

@sameo sameo self-assigned this Jan 25, 2019
sameo commented Jan 25, 2019

sameo pushed a commit to sameo/runtime-1 that referenced this issue Jan 25, 2019
There's only one real implementer of the network interface and no real
need to implement anything else. We can just go ahead and remove this
abstraction.

Fixes: kata-containers#1179

Signed-off-by: Samuel Ortiz <[email protected]>

sboeuf commented Jan 25, 2019

@sameo I'm not against the removal of the Go interface since, as you mentioned it, we're only using one single implementation. But I'm concerned about the unit tests, because the noop implementation is pretty convenient to test a bunch of code that would be complex to test if we had to mimic the real network behavior. WDYT?

@mcastelino (Contributor) commented

> @sameo I'm not against the removal of the Go interface since, as you mentioned it, we're only using one single implementation. But I'm concerned about the unit tests, because the noop implementation is pretty convenient to test a bunch of code that would be complex to test if we had to mimic the real network behavior. WDYT?

@sboeuf could you not test it with the equivalent of net==none? With the noop you weren't actually exercising any real code paths, right? It was just there to satisfy the interface?


sboeuf commented Jan 25, 2019

@mcastelino

> @sboeuf could you not test it with the equivalent of net==none?

That's a good idea; I didn't think about it since net==none was introduced fairly recently. This might be enough for some cases.

> With the noop you were actually not exercising any real code paths right. It was just there to satisfy the interface?

Yes, exactly.
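
A rough sketch of the net==none idea discussed above: when the network model is "none", setup becomes a no-op, so unit tests can exercise the surrounding sandbox logic without creating real interfaces. All names here are hypothetical, not kata's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// NetworkModel is a hypothetical stand-in for kata's internetworking model.
type NetworkModel string

const (
	NetNone    NetworkModel = "none"
	NetDefault NetworkModel = "default"
)

// setupNetwork skips all interface creation for the "none" model,
// mirroring what the noop implementation provided for tests.
func setupNetwork(model NetworkModel, nsPath string) error {
	if model == NetNone {
		return nil // nothing to create; safe in unit tests
	}
	if nsPath == "" {
		return errors.New("default network model needs a namespace path")
	}
	// Real interface creation would happen here.
	return nil
}

func main() {
	// "none" succeeds with no namespace at all, so test code that calls
	// setupNetwork can run unprivileged and without mocking.
	if err := setupNetwork(NetNone, ""); err != nil {
		panic(err)
	}
	fmt.Println("none-network path exercised without real networking")
}
```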

sameo pushed a commit to sameo/runtime-1 that referenced this issue Jan 28, 2019

egernst commented Jan 30, 2019

Great! I think we should remove it; there's no reason to have an interface in this case, and we can fix the tests.

What will this mean with respect to semantic versioning?


sameo commented Jan 30, 2019

> What will this mean with respect to semantic versioning?

This is purely an internal interface; there is no reason to bump the version for this kind of change.

egernst added a commit to egernst/runtime that referenced this issue Dec 9, 2020
On pod delete, we were trying to read files that we had just deleted. In particular,
stopSandbox for QEMU was called (which cleans up the vmpath), and then QEMU's
save function was called, which immediately checks for the PID file.

Let's only update the persist store for QEMU if QEMU is actually
running. This avoids error messages being displayed when we are
stopping and deleting a sandbox:

```
level=error msg="Could not read qemu pid file"
```

I reviewed CLH, and it looks like it is already taking appropriate
action, so no changes needed.

Ideally we won't spend much time saving state to persist.json unless
there's an actual error during stop/delete/shutdown path, as the persist will
also be removed after the pod is removed. We may want to optimize this,
as currently we are doing a persist store when deleting each container
(after the sandbox is stopped, VM is killed), and when we stop the sandbox.
This'll require more rework... tracked in:
  kata-containers/kata-containers#1181

Fixes: kata-containers#1179

Signed-off-by: Eric Ernst <[email protected]>
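
The guard described in this commit message could look roughly like the following sketch (function and path names are illustrative only, not the actual kata-runtime code):

```go
package main

import (
	"fmt"
	"os"
)

// qemuRunning approximates the "is QEMU actually running" check by
// testing whether the PID file still exists; after stopSandbox has
// cleaned up the vmpath, it will not.
func qemuRunning(pidFile string) bool {
	_, err := os.Stat(pidFile)
	return err == nil
}

// saveState only invokes the persist-store update when QEMU is still
// running, so we never try to read a PID file we just deleted.
func saveState(pidFile string, persist func() error) error {
	if !qemuRunning(pidFile) {
		// Sandbox is already stopped: skip the save instead of
		// logging "Could not read qemu pid file".
		return nil
	}
	return persist()
}

func main() {
	// After deletion the PID file is gone, so persist is never called.
	err := saveState("/nonexistent/qemu.pid", func() error {
		panic("persist should be skipped once QEMU is stopped")
	})
	fmt.Println("save skipped, err:", err)
}
```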
egernst added a commit to egernst/runtime that referenced this issue Dec 10, 2020
egernst added a commit to egernst/runtime that referenced this issue Jan 13, 2021
4 participants