network: Remove CNI docs
We need to update the network docs to reflect that CNM
and CNI are handled much the same way. Start off
by removing the incorrect CNI docs first.

Fixes #678

Signed-off-by: Archana Shinde <[email protected]>
amshinde committed Aug 30, 2018
1 parent f70d6d2 commit 9a8b45f
Showing 2 changed files with 0 additions and 18 deletions.
18 changes: 0 additions & 18 deletions virtcontainers/README.md
@@ -16,7 +16,6 @@ Table of Contents
* [Container API](#container-api)
* [Networking](#networking)
* [CNM](#cnm)
* [CNI](#cni)
* [Storage](#storage)
* [How to check if container uses devicemapper block device as its rootfs](#how-to-check-if-container-uses-devicemapper-block-device-as-its-rootfs)
* [Devices](#devices)
@@ -218,23 +217,6 @@ There are three drawbacks to using CNM instead of CNI:
* Implicit way to designate the network namespace: instead of explicitly giving the netns to dockerd, we give it the PID of our runtime so that it can find the netns from this PID. This means we have to make sure we are in the right netns while calling the hook, otherwise the veth pair will be created in the wrong netns.
* No results are returned from the hook: we have to scan the network interfaces to discover which one has been created inside the netns. This introduces more latency because it forces us to scan the network in the CreateSandbox path, which is critical for starting the VM as quickly as possible (see the sketch below).
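
The sketch below illustrates, with the standard library only, the two constraints just described: deriving the netns path from the runtime PID that dockerd is given, and scanning the interfaces visible in the current netns because the hook gets no result back. All names and paths are illustrative, not taken from the virtcontainers code.

```go
package main

import (
	"fmt"
	"net"
	"os"
)

// netnsPathFromPID builds the network namespace path for a given PID,
// which is how the netns is located when only a runtime PID is passed.
func netnsPathFromPID(pid int) string {
	return fmt.Sprintf("/proc/%d/ns/net", pid)
}

func main() {
	fmt.Println("netns path:", netnsPathFromPID(os.Getpid()))

	// Since the hook receives no result, it has to scan the interfaces
	// visible in its current netns to find the veth end that was created.
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		fmt.Println("found interface:", iface.Name)
	}
}
```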


## CNI

![CNI Diagram](documentation/network/CNI_diagram.png)

__Runtime network setup with CNI__

1. Create the network namespace ([code](https://github.com/containers/virtcontainers/blob/0.5.0/cni.go#L64-L76))

2. Get CNI plugin information ([code](https://github.com/containers/virtcontainers/blob/0.5.0/cni.go#L29-L32))

3. Start the plugin (providing the previously created netns) to add a network described in the /etc/cni/net.d/ directory. At that point, the CNI plugin creates the cni0 network interface and a veth pair between the host and the created netns, and links cni0 to the veth pair before exiting (see the sketch after this list). ([code](https://github.com/containers/virtcontainers/blob/0.5.0/cni.go#L34-L45))

4. Create the bridge and TAP device, and link them all together with the previously created network interface ([code](https://github.com/containers/virtcontainers/blob/0.5.0/network.go#L123-L205))

5. Start the VM inside the netns and start the container ([code](https://github.com/containers/virtcontainers/blob/0.5.0/api.go#L66-L70))
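
A minimal sketch of steps 2 and 3 above, assuming the CNI project's `libcni` package (`github.com/containernetworking/cni/libcni`); the configuration file name, container ID, and netns path are illustrative, and exact `libcni` signatures vary between CNI releases (recent ones take a `context.Context`):

```go
package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Step 2: load a network configuration from /etc/cni/net.d/ and point
	// libcni at the directory holding the plugin binaries.
	netConf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-mynet.conflist") // illustrative file name
	if err != nil {
		log.Fatal(err)
	}
	cniConfig := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

	// Step 3: invoke the plugin against the netns created in step 1. The
	// plugin sets up cni0 and the veth pair inside that netns.
	rt := &libcni.RuntimeConf{
		ContainerID: "sandbox-1",                // illustrative ID
		NetNS:       "/var/run/netns/sandbox-1", // netns created in step 1
		IfName:      "eth0",
	}
	result, err := cniConfig.AddNetworkList(context.Background(), netConf, rt)
	if err != nil {
		log.Fatal(err)
	}
	// The returned result describes the interfaces and IPs the plugin set up.
	_ = result.Print()
}
```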

# Storage

Container workloads are shared with the virtualized environment through 9pfs.
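
As an illustration only (the share tag and mount point below are hypothetical, not the ones virtcontainers uses), a 9pfs share exported over virtio could be mounted inside the guest roughly like this:

```go
package main

import (
	"log"
	"syscall"
)

func main() {
	// Mount a virtio-9p share inside the guest. "sharedTag" and the target
	// directory are hypothetical; the real tag is set on the VMM command line.
	err := syscall.Mount("sharedTag", "/run/shared", "9p", 0,
		"trans=virtio,version=9p2000.L")
	if err != nil {
		log.Fatal(err)
	}
}
```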
Binary file removed virtcontainers/documentation/network/CNI_diagram.png
