Improve performance for build setup #1489
To work around this, we create symlinks. cc @minz1027
It's doing the symlink here: https://github.com/screwdriver-cd/hyperctl-image/blob/master/scripts/setup.sh#L26
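The idea in that setup script is that symlinking is near-instant regardless of how large the launcher dependencies are, while a recursive copy pays a per-file cost. A minimal sketch of the approach (directory names here are hypothetical stand-ins, not the actual setup.sh paths):

```shell
# Hypothetical layout: a pre-baked, read-only launcher cache on the VM image,
# linked into the location the build expects instead of being copied.
CACHE_DIR=$(mktemp -d)   # stands in for the cache baked into the VM image
TARGET_DIR=$(mktemp -d)  # stands in for the path the build expects

printf 'launcher' > "$CACHE_DIR/launch"
mkdir -p "$CACHE_DIR/hab"

for dep in "$CACHE_DIR"/*; do
  # ln -s is O(1) regardless of dependency size, unlike cp -a
  ln -sfn "$dep" "$TARGET_DIR/$(basename "$dep")"
done

ls -l "$TARGET_DIR"
```

This only works when source and target live on the same volume, which is why the comment below notes that a VM can use symlinks while the plain k8s executor cannot.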
I see. In a VM, they are on the same volume, so we can use a symlink.
Are you talking about the init container part? https://github.com/screwdriver-cd/executor-k8s/blob/master/config/pod.yaml.tim#L37
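The init container referenced above essentially performs a recursive copy of the launcher binaries into a shared volume that the build container also mounts. A sketch of what that copy step boils down to (paths are hypothetical stand-ins for the launcher image contents and the shared emptyDir mount):

```shell
# Hypothetical init-container step: copy launcher binaries from the launcher
# image into the shared volume the build container will mount.
LAUNCHER_SRC=$(mktemp -d)   # stands in for the launcher image's install dir
SHARED_VOL=$(mktemp -d)     # stands in for the shared emptyDir mount

printf '#!/bin/sh\necho launch' > "$LAUNCHER_SRC/launch"
chmod +x "$LAUNCHER_SRC/launch"

# -a preserves permissions so the copied binaries stay executable
cp -a "$LAUNCHER_SRC/." "$SHARED_VOL/"

# the build container can now invoke the copied binary
"$SHARED_VOL/launch"
```

This per-build copy is the cost the rest of the thread tries to eliminate.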
Update 04/10: For executor-k8s, a read-only PVC with the latest launcher dependencies may be the solution. For executor-k8s-vm, this will be easier. Currently, for each vm launcher pod, it's putting the dependencies at And for the vm pod, we switch to mount
04/16: We got some interesting findings after this change.
k8s logs:
Best scenario. Config: SSD, launcher content cached, launcher image cached, hyperctl image cached, build image cached. Time breakdown: total 15 secs.
07/01: Performance with kata and executor-k8s on an SSD machine. The bottleneck here is the copy in the init container, which takes quite a long time (~16 secs). To speed it up, we can either make the emptyDir use memory or use the same technique we have for k8s-vm and mount it from the base host: give the init container permission to write to the mount, and the main container read-only permission. Total 25 secs.
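The host-mount variant described above amounts to a write-once cache: the first init container pays the copy cost, and later builds find the cache already populated. A sketch of that logic (the marker-file convention and directory names are assumptions for illustration, not the actual implementation):

```shell
# Hypothetical host-level cache: the first build populates it, later builds
# see the completion marker and skip the expensive copy entirely.
HOST_CACHE=$(mktemp -d)     # stands in for a hostPath mount
LAUNCHER_SRC=$(mktemp -d)   # stands in for the launcher image contents
printf 'bin' > "$LAUNCHER_SRC/launch"

populate_cache() {
  if [ -f "$HOST_CACHE/.complete" ]; then
    echo "cache warm, skipping copy"
  else
    cp -a "$LAUNCHER_SRC/." "$HOST_CACHE/"
    # only mark complete after a full copy, so a crashed copy is retried
    touch "$HOST_CACHE/.complete"
    echo "cache populated"
  fi
}

populate_cache   # first build: pays the copy cost
populate_cache   # later builds: near-instant
```

Mounting the cache read-only into the main container keeps builds from corrupting it.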
07/03: Reopening the issue to work on improving the setup time for executor-k8s.
POC: Did a proof of concept, and the build setup time was reduced to ~10s.
To really make it work, we need the symlink logic to link the read-only hab pkgs to

Limitation: With this method the cache will live on the host, so as time goes by we'll need a cronjob to clean up old dependencies. @catto What do you think about this approach? Let me know if you have any other ideas :D
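The cleanup job mentioned above could be a simple age-based sweep over the host cache. A sketch (the 30-day threshold and the flat top-level package layout are assumptions):

```shell
# Hypothetical cronjob body: remove cached dependency directories that
# haven't been touched in MAX_AGE_DAYS.
CACHE_ROOT=$(mktemp -d)   # stands in for the host-side cache directory
MAX_AGE_DAYS=30

mkdir -p "$CACHE_ROOT/old-pkg" "$CACHE_ROOT/new-pkg"
# backdate one entry so it looks stale (GNU touch; BSD fallback below)
touch -d '60 days ago' "$CACHE_ROOT/old-pkg" 2>/dev/null \
  || touch -t 202001010000 "$CACHE_ROOT/old-pkg"

# -mindepth/-maxdepth restrict the sweep to top-level package dirs only
find "$CACHE_ROOT" -mindepth 1 -maxdepth 1 -type d \
  -mtime +"$MAX_AGE_DAYS" -exec rm -rf {} +

ls "$CACHE_ROOT"
```

Using mtime means any package a recent build touched survives the sweep.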
@minz1027 Sounds good to me. It would be better if we provide both methods, the new one and the current one, so that users who cannot use privileged containers can continue using SD.cd. I'm curious about it taking a long time to copy launcher binaries in a kata container. Are you using kata-containers 1.7 and virtio-fs? The latest version supports virtio-fs with NEMU, and you can specify the nemu profile to use virtio-fs, which is much faster than the previous one (9pfs). ref: #818
@catto That's a nice suggestion! But unfortunately for RHEL, the latest version is 1.5... sadness. Once they provide 1.7, we can certainly try it out. But as long as we do the copy, it will take some time. Let me discuss this solution with jithin to see if we want to implement it now.
@minz1027 You can try the latest version with kata-deploy on various distros! Have you tried it?
This CSI plugin should make setup faster, though it's in an alpha stage. The initContainer that copies files from the launcher container to the build container could be replaced with volumes created directly from the launcher image using this plugin.
I've tested it.

Modify

Note:

Result

Before
FYI: Launching build pods in my production environment takes 30+ seconds even though its node has a higher-performance CPU and disk than the test environment. I guess it's because of high disk IO caused by the initContainer.
After
Also confirmed that the build can invoke the launch binary and that users can write to the launcher volume, such as /hab.
We found a new improvement point in the build setup for the k8s(-vm) executors. I created some images based on launcher and measured how long it takes just to echo.
As above, we found that it takes around 3.5s of additional time only with

These docker volumes are needed for the docker executor, but the k8s(-vm) executors never use this volume because these executors have extra volumes for kubernetes. So we created a custom launcher image whose docker volumes are removed with docker-copyedit, which can edit image metadata like

And we confirmed this change improves the build setup time. In our environment, the average queued time including pulling images can now be below 30s. The queued time was 40~60s on a daily average and had never been below 30s before this change was deployed.
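docker-copyedit rewrites image metadata without rebuilding the image, so after stripping the VOLUME entries the image config should no longer declare any volumes. Verifying that requires a docker daemon, so the sketch below simulates the relevant slice of `docker inspect` output with an embedded JSON sample (the docker-copyedit invocation shown in the comment is an assumption based on that tool's documented FROM/INTO command style):

```shell
# After something like: docker-copyedit.py FROM launcher INTO launcher-novol REMOVE ALL VOLUMES
# (invocation is an assumption), the image config should declare no volumes.
# Simulated `docker inspect --format '{{json .Config}}'` output:
inspect_json='{"Config":{"Volumes":null}}'

if printf '%s' "$inspect_json" | grep -q '"Volumes":null'; then
  echo "no volumes declared"
else
  echo "volumes still present"
fi
```

Against a real image, the same grep over live `docker inspect` output gives a quick regression check that a rebuilt launcher image didn't reintroduce the volumes.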
We can make habitat configurable and provide a flag to turn it off SD-cluster-wide. This flag, SD_HABITAT_ENABLED, when
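A minimal sketch of how such a flag could gate the habitat setup step (the variable name comes from the comment above; the gating logic and function name are assumptions for illustration):

```shell
# Hypothetical gate: skip the habitat package setup entirely when the
# cluster-wide flag is turned off; default to enabled for compatibility.
setup_habitat() {
  if [ "${SD_HABITAT_ENABLED:-true}" = "false" ]; then
    echo "habitat disabled, skipping package setup"
    return 0
  fi
  echo "installing habitat packages..."
  # ...copy /hab dependencies here...
}

SD_HABITAT_ENABLED=false setup_habitat
SD_HABITAT_ENABLED=true  setup_habitat
```

Defaulting to enabled keeps existing clusters unaffected unless an operator opts out.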
What happened:
The current launcher image has many habitat packages in it.
These files are copied to a temporary volume for each build, and it takes a while depending on the environment. In my environment with an NVMe SSD, it takes over 10 seconds to complete.
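The cost described above comes from recursively copying a tree of many small files, where `cp -a` pays a per-file overhead on top of raw byte throughput. A toy reproduction at small scale (file count and contents are arbitrary; the real habitat tree is far larger):

```shell
# Toy reproduction of the per-build copy: many small files make the copy
# per-file-overhead bound, which is what dominates the ~10s setup.
SRC=$(mktemp -d); DST=$(mktemp -d)

i=0
while [ "$i" -lt 200 ]; do
  printf 'pkg' > "$SRC/file$i"
  i=$((i + 1))
done

# -a preserves permissions/timestamps, matching what the build setup needs
cp -a "$SRC/." "$DST/"
echo "copied $(ls "$DST" | wc -l) files"
```

Scaling the file count up (and timing the copy) makes the per-file cost visible, which is why sharing or symlinking the tree beats copying it per build.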
What you expected to happen:
It takes just a few seconds to complete a setup for each build.
I tried to find a Kubernetes config, like the data volume container available on Docker, which attaches container storage directly to another container, but I haven't found one yet.