We should allow default memory for VM to be < 256 MB #2987
Labels: bug (incorrect behaviour)
Comments
egernst added a commit to egernst/runtime that referenced this issue on Sep 24, 2020
Currently, we enforce a lower limit of 256 MB for the defaultMemorySize. In the general case with QEMU, this isn't a major problem, since we'll just eat the extra page-table overhead and only consume pages when needed. The memory cgroup in the guest should make sure it only utilizes the requested amount, not what is actually available.

However, this becomes very problematic when you use preallocated memory. In the k8s case, the VMM will get OOM-killed very quickly, since the host's memory cgroup (created by kubelet) will limit the entire sandbox to the requests + pod overhead (on the order of 140 MB).

We should allow the administrator of Kata to set a better default value, which should be aligned much more closely with what's used for PodOverhead (in the kube case). Let's lower the limit from 256 to 8.

Fixes: kata-containers#2987

Signed-off-by: Eric Ernst <[email protected]>
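As an illustration of what this enables (the value and layout below are examples only, assuming the standard Kata configuration.toml): an administrator could pick a default sized close to the PodOverhead configured for the Kata RuntimeClass.

```toml
[hypervisor.qemu]
# Example value only: a small default memory (MiB) chosen near the
# cluster's PodOverhead, now permitted because the 256 MiB floor is gone.
default_memory = 160
```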
egernst added a commit to egernst/runtime that referenced this issue on Oct 12, 2020
Currently, we enforce a lower limit of 256 MB for the defaultMemorySize. In the general case with QEMU, this isn't a major problem, since we'll just eat the extra page-table overhead and only consume pages when needed. The memory cgroup in the guest should make sure it only utilizes the requested amount, not what is actually available.

However, this becomes very problematic when you use preallocated memory. In the k8s case, the VMM will get OOM-killed very quickly, since the host's memory cgroup (created by kubelet) will limit the entire sandbox to the requests + pod overhead (on the order of 140 MB).

We should allow the administrator of Kata to set a better default value, which should be aligned much more closely with what's used for PodOverhead (in the kube case). Let's remove the artificial limit in Kata and leave it up to the end user to pick an appropriate non-default value, if desired.

Fixes: kata-containers#2987

Signed-off-by: Eric Ernst <[email protected]>

test

Signed-off-by: Eric Ernst <[email protected]>
egernst added a commit to egernst/runtime that referenced this issue on Oct 12, 2020
chavafg pushed a commit to chavafg/runtime-1 that referenced this issue on Oct 16, 2020
(cherry picked from commit ab7f18d)
jcvenegas pushed a commit to jcvenegas/runtime that referenced this issue on Oct 19, 2020
jcvenegas pushed a commit to jcvenegas/runtime that referenced this issue on Oct 19, 2020
jcvenegas pushed a commit to jcvenegas/runtime that referenced this issue on Oct 20, 2020
Description of problem
In the TOML configuration, set the default memory size to 128 MB and start a VM.
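For reference, the value in question is default_memory (in MiB) in Kata's configuration.toml, typically found under /etc/kata-containers/ or /usr/share/defaults/kata-containers/. A minimal fragment for the reproduction might look like this:

```toml
[hypervisor.qemu]
# Ask for a guest smaller than the runtime's enforced 256 MiB minimum
default_memory = 128
```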
Expected result
The VM's memory should be the default memory size plus the memory requested by the container workload(s).
Actual result
The VM gets 2048 MB plus the memory requested.
Further information
In the general case with QEMU, this isn't a major problem, since we'll just eat the extra page-table overhead and only consume pages when needed. The memory cgroup in the guest should make sure it only utilizes the requested amount, not what is actually available.
However, this becomes very problematic when you use preallocated memory (which is only supported by QEMU today?). In this case, the VMM will get OOM-killed very quickly, since the host's memory cgroup (created by kubelet) will limit the entire sandbox to the requests + pod overhead, which is on the order of 160 MB.
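To make the arithmetic concrete (a rough sketch; the request and overhead figures are illustrative, and the option names assume Kata's usual configuration.toml layout): with preallocation enabled, the guest's full default_memory is resident as soon as the VMM starts, while kubelet caps the sandbox cgroup at the container requests plus PodOverhead.

```toml
[hypervisor.qemu]
# Preallocate all guest RAM when the VMM starts (used e.g. with
# hugepages); option name as in Kata's QEMU section -- verify for your version.
enable_mem_prealloc = true
# kubelet caps the pod's memory cgroup at roughly
#   container requests + PodOverhead  (~160 MiB with no/low requests),
# so even the enforced 256 MiB minimum is resident immediately, exceeds
# that limit, and the kernel OOM-kills the VMM at startup.
default_memory = 256
```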
I think it would be safest to let the user decide the defaultMemory and not enforce a minimum. I expect many deployments will set a default memory request for containers that don't specify one, which should make this all feasible.