I have been trying to run a VM on some particular CPU cores. I can set cpuset with virt-install, which is reflected in the VM's XML and defines the affinity. But what about cpuset.cpus? That is inherited directly from the parent cgroup. Is there any way to set it when launching the VM? I mean, when I give the virt-install command, is there any option that can set cpuset.cpus dynamically?
virt-install doesn't support pinning particular vCPUs to the host's CPUs. For this you have to use:
either the GUI tool virt-manager
or virsh edit <domain> to edit the domain XML and set the pinning according to the CPU tuning options
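If you want to change the pinning of a running domain without editing the XML, virsh can also set the affinity directly. A minimal sketch ("myvm" is a placeholder domain name):

virsh vcpupin myvm 0 2-3          # pin vCPU 0 of domain "myvm" to host CPUs 2-3
virsh vcpupin myvm 1 4 --config   # --config persists the pinning into the domain XML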
The task is to configure the Cluster Autoscaler (CA) with a scale-down time of 1 minute in DigitalOcean DOKS.
DOKS supports CA by default, with a scale-down time of 10 minutes. I need to set the parameters below as per my requirement.
Parameters to modify:
--scale-down-unneeded-time=10s
--scale-down-delay-after-add=30s
I tried in DO, but there is no place for adding or changing these parameters (or I'm not sure if I'm missing something).
I then tried in an AWS EKS cluster, configured CA (which has no default support) with the above parameters, and it works fine.
Could anyone help me configure these parameters in DOKS?
Found the root cause after an entire day.
Adding it here in case it helps someone.
A DigitalOcean Kubernetes cluster supports the Cluster Autoscaler (CA) by default, but it does not expose an explicit option to change its configuration.
We cannot change CA behaviour by passing parameters.
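For comparison, on clusters where you deploy the CA yourself (as with EKS above), the flags go into the container command of the cluster-autoscaler Deployment. A minimal sketch of the relevant part of the manifest (the image tag is an assumption):

containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --scale-down-unneeded-time=10s
  - --scale-down-delay-after-add=30s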
There is work in progress:
https://github.com/kubernetes/autoscaler/issues/3556
https://github.com/kubernetes/autoscaler/issues/3556#issuecomment-877015122
I have a Yocto-based OS on which everything is installed to start the network.
Nevertheless, at each boot I need to run systemctl start networking to bring it up. Initially the service was even masked. I found out how to unmask it, but I can't find a way to start it automatically.
I don't know much about systemd, but networking.service is located in the generator.late folder. From what I understand, it is generated at boot time.
How can I enable it?
It depends on whether you want to enable the service on only one particular device. If so, it is simple:
systemctl enable networking
Append --now if you also want to start the service immediately.
If you want to enable the service on all your devices (i.e. it will be automatically enabled in all images coming from your build), the best way is to extend the recipe, but see below for other ways to handle the network. The process is described at NXP support, for example.
Some notes about networking.service itself: I assume your networking.service comes from the init-ifupdown recipe. If so, is there any reason to handle network configuration with an old SysV init script on a system with systemd? The service is generated from the SysV init script by systemd-sysv-generator. I would therefore suggest trying other networking services such as systemd's native systemd-networkd, NetworkManager, or connman; they integrate with systemd much better. The best choice depends on the type of your embedded system.
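For example, if you switch to systemd-networkd, a simple DHCP setup only needs one .network file (a minimal sketch; the interface name eth0 is an assumption):

# /etc/systemd/network/20-wired.network
[Match]
Name=eth0

[Network]
DHCP=yes

and then enable the service with systemctl enable --now systemd-networkd.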
Some more information on activating or enabling the services: https://unix.stackexchange.com/questions/302261/systemd-unit-activate-vs-enable
I've started using Docker swarm mode, and I couldn't find reliable information about a lot of things that were covered in traditional swarm. Does anyone know about the following?
What kinds of filters are available? Traditional swarm had constraint, health, and containerslots, but I'm not sure how to set, change, or use those filters when creating services. I got a constraint label working by passing --constraint node.labels.FOO==BAR to docker service create, but I'm not sure about the other filters.
How do you set affinity, dependency, and port? Passing -e doesn't seem to work.
Any way to set a strategy?
Not specific to swarm, but is there any way to check how much CPU or memory is reserved by containers? I couldn't find relevant information in docker info.
This question is also not specific to swarm. Is there any way to limit disk and network bandwidth?
I'm referring to this: https://docs.docker.com/swarm/scheduler/filter/ but I can't find the equivalent for swarm mode.
They should seriously work on improving the swarm mode documentation...
Questions 1, 2, and 3 are answered in the following link, I believe:
https://docs.docker.com/engine/swarm/manage-nodes/
For the 4th question:
You can do docker inspect on the containers to see the CPU and memory reserved. By default Docker doesn't assign limits for memory and CPU; a container will try to consume whatever is available on the host. If you have set limits, you can see them through docker inspect.
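For example (a minimal sketch; the service name and image are placeholders), reservations are declared at service creation time and can be read back with inspect:

docker service create --name web --reserve-cpu 0.5 --reserve-memory 256m nginx
docker service inspect web --format '{{ .Spec.TaskTemplate.Resources.Reservations }}'
docker inspect <container-id> --format '{{ .HostConfig.Memory }} {{ .HostConfig.CpuShares }}'

The first inspect shows what the service reserves on the scheduler side; the second shows the settings on an individual container (0 means no limit was set).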
I'm currently attempting to develop a sandbox using Docker. Docker spawns processes through a running daemon, and I am having a great deal of trouble making the limits set in the limits.conf file apply to the daemon. Specifically, I am running a forkbomb such that the daemon is the process that spawns all the new processes. The nproc limitation I placed on the user making this call doesn't seem to get applied, and I cannot for the life of me figure out how to make it work. I'm quite positive it will be as simple as adding the correct file to /etc/pam.d/, but I'm not certain.
The PAM limits only apply to processes playing nice with PAM. By default, when you start a shell in a container, it won't have anything to do with PAM, and setting limits through PAM just won't work.
Here are some other ways to make it happen!
Instead of starting your process immediately, you can start a tiny wrapper script, which will do the appropriate ulimit calls before executing your process.
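For example, a wrapper along these lines (a sketch; the nproc value 100 is arbitrary):

#!/bin/sh
ulimit -u 100   # cap the number of processes before handing over
exec "$@"       # replace the shell with the real command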
If you want an interactive shell, you can run login -f <username> (e.g. login -f root); that will use the normal login process to auto-log you on the machine (and that should go through the normal PAM mechanisms).
If you want all containers to be subject to those limits, you can set the limits on your system, then restart Docker with those lower limits; containers are created by Docker, and by default, they will inherit those limits as well.
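If Docker runs under systemd, one way to do that is a drop-in that lowers the daemon's own limits, which containers then inherit (the path and value here are assumptions):

# /etc/systemd/system/docker.service.d/limits.conf
[Service]
LimitNPROC=256

followed by systemctl daemon-reload and systemctl restart docker.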
I am working on a setup where I am running an Ubuntu VM on a Fedora 18 host using QEMU/KVM and libvirt. I have pinned 2 vCPUs to my VM, and I can see the pinned vCPUs using virsh. But is there any other way to find that out? Ideally, I want to write a function that returns the number/ID of the pinned vCPUs.
You can use the <vcpupin> element in the domain XML to determine which physical CPUs each vCPU is pinned to; see the CPU Tuning section of the libvirt domain XML documentation. By default, a vCPU is pinned to all physical CPUs.
For example:
<cputune>
  <vcpupin vcpu="0" cpuset="1-4,^2"/>
  <vcpupin vcpu="1" cpuset="0,1"/>
</cputune>
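If you would rather query the pinning from a script than parse the XML, virsh can report it directly (a sketch; "myvm" is a placeholder domain name):

virsh vcpupin myvm    # with no vCPU argument, lists the CPU affinity of every vCPU
virsh vcpuinfo myvm   # also shows each vCPU's state and current physical CPU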