Where does Kubernetes download images to?

I've read through this page and I'm interested in where Kubernetes downloads an image to and how long it stores it for.
For example, let's say we have a large 3GB image. When I start up a pod, will the image be downloaded to the disk of the node the pod is being deployed to, and remain there until that node is destroyed? If so, does that mean I could allocate only 400MB of memory to a pod that is using a 3GB image?

As correctly mentioned in the comments, the container runtime does this rather than Kubernetes itself. Assuming you are running Docker:
If you want to access the image data directly, it’s usually stored in the following locations:
Linux: /var/lib/docker/
Windows: C:\ProgramData\DockerDesktop
macOS: ~/Library/Containers/com.docker.docker/Data/vms/0/
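If you only want to see how much space images are taking up on a node, you do not need to dig through those directories; as a quick sketch (assuming the Docker CLI is available on the node), you can ask Docker directly:
# summarise disk usage of images, containers and volumes
docker system df
# list individual images with their sizes
docker image ls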

If you are using containerd as the runtime, the images are stored under /var/lib/containerd.
The containerd runtime is responsible for pulling the image and running containers from it.
The storage location is configured in /etc/containerd/config.toml, as shown below.
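A minimal sketch of the relevant part of /etc/containerd/config.toml (these are the default values; your file may differ):
# /etc/containerd/config.toml
version = 2
root = "/var/lib/containerd"   # where image layers and snapshots are stored
state = "/run/containerd"      # runtime state (sockets, pids)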

Related

Is there a way to calculate the total disk space used by each pod on nodes?

context
Our current context is the following: researchers are running HPC calculations on our Kubernetes cluster. Unfortunately, some pods cannot get scheduled because the container engine (here Docker) is not able to pull the images because the node is running out of disk space.
hypotheses
images too big
The first hypothesis is that the images are too big. This is probably the case, because we know that some images are bigger than 7 GB.
datasets being decompressed locally
Our second hypothesis is that some people are downloading their datasets locally (e.g. curl ...) and inflating them on the node. This would produce the behavior we are observing.
Envisioned solution
I believe that this problem is a good case for a DaemonSet that would have access to the node's file system. Typically, this pod would calculate the total disk space used by all the pods on the node and expose it as a Prometheus metric. From there it would be easy to set alert rules in place to check which pods have grown a lot over a short period of time.
How to calculate the total disk space used by a pod?
The question then becomes: is there a way to calculate the total disk space used by a pod?
Does anyone have any experience with this?
Kubernetes does not track overall storage available. It only knows things about emptyDir volumes and the filesystem backing those.
To calculate the total disk space you can use the command below:
kubectl describe nodes
In the output of that command you can grep for ephemeral-storage, which is the virtual disk size; this partition is also shared and consumed by Pods via emptyDir volumes, image layers, container logs and container writable layers.
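For example, to pull out just that field (the values shown are illustrative and will differ on your nodes):
$ kubectl describe nodes | grep ephemeral-storage
  ephemeral-storage:  104845292Ki
  ephemeral-storage:  95551679124
The first line is the node's capacity and the second the allocatable amount.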
Also check whether a process is still running and holding file descriptors, and therefore some space, that has not been released (you may have other processes and other file descriptors not being released, too); in particular, check whether that process is the kubelet.
You can verify this by running $ ps -Af | grep xxxx
With Prometheus you can calculate it with the formula below:
sum(node_filesystem_size_bytes)
Please go through Get total and free disk space using Prometheus for more information.
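As a sketch of related queries (these assume the node_exporter filesystem collector is enabled; the mountpoint filter is an example and may need adjusting for your nodes):
# total disk space per node
sum by (instance) (node_filesystem_size_bytes{mountpoint="/"})
# used disk space per node
sum by (instance) (node_filesystem_size_bytes{mountpoint="/"} - node_filesystem_avail_bytes{mountpoint="/"})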

the value of container_fs_writes_bytes_total isn't correct in k8s?

If the application in the container writes or reads files without using a volume, the value of the container_fs_writes_bytes_total metric is always 0, even when I upload one huge file and the application saves it to a folder. container_fs_reads_bytes_total is likewise always 0 when I download the uploaded file. Is there a way to get the disk IO caused by the container?
Btw, I want to know the disk usage of each container in k8s. There is a container_fs_usage_bytes metric in cAdvisor, but its value isn't correct: if there are many containers, the metrics for each container are exactly the same, and the value is the disk usage of the whole machine. Someone suggested using kubelet_volume_stats_used_bytes, but it has the same problem.

Writing to neo4j pod takes much more time than writing to local neo4j

I have Python code where I process some data, write neo4j queries and then commit these queries to neo4j. When I run the code on my local machine and write the output to a local neo4j it doesn't take more than 15 minutes. However, when I run my code locally and write the output to a neo4j pod in k8s it takes double the time, and when I build my code, deploy it to k8s, run that pod and write the output to the neo4j pod it takes around 3 hours. Since I'm new to k8s deployment it might be something in the pod configuration or settings, so I would appreciate some hints.
There could be a few reasons for that.
I would first check how many resources your pod consumes while you are processing data; you can do that using kubectl top pod.
Second, I would check if there are any limits set on the pod. You can read a great deal about them in Managing Compute Resources for Containers.
If you have a limit set then it might be too low, and that's what is causing the extended processing time (see the sketch below).
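A minimal sketch of what such requests and limits look like in a pod spec (names and values here are only examples):
apiVersion: v1
kind: Pod
metadata:
  name: data-processor                 # hypothetical name
spec:
  containers:
  - name: worker
    image: my-processing-image:latest  # hypothetical image
    resources:
      requests:
        memory: "2Gi"
        cpu: "1"
      limits:
        memory: "4Gi"                  # a value that is too low can slow processing or cause OOM kills
        cpu: "2"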
If limits are not set then it might be because of how you installed minik8s. I think by default it's installed with 4G of memory; you can look at alternative methods of installing minik8s. With multipass you can specify how much memory to allocate.
There can also be an issue with Page Cache Sizing, Heap Sizing or the number of open files. Please read the Neo4j Performance Tuning guide.

Openmaptiles-server on Docker - Config

I'm trying to customise OpenMaptiles-Server running under Docker. I've had NO Docker exposure. I've read the docs and they suggest there is a config file for Docker, but what its name is and where it lives seem to be assumed knowledge.
Is there a blog that explains this for absolute novices as the documentation from Klokantech is not very helpful if you have not used these technologies before.
Can somebody let me know where the configuration file lives, what its name is supposed to be and how I get rid of the error about the unconfigured /data directory? Thanks.
Assuming you mean the docker image at klokantech/openmaptiles-server, the method for configuration is as follows:
Pull the image.
Run the image - expose a local port you can connect to, and bind-mount a local path into the container at /data (see the example after these steps).
Run a browser and connect to the container host at the specified port. A 'first time install' interface will come up allowing you select what regions you want to download mbtile data for. Note that you can only run 1 mbtile per server - it will render 'blank' (tan) tiles for everything outside that region. Allow the container server to download the tiles.
Inspect the local path that was mounted to the container at /data. You should now see a .mbtiles and a config.json file. Save those off somewhere for posterity.
In the actual location you intend to now run the container, deploy the image, the mbtiles, and the config.json. Put the mbtiles and config.json in the same local directory, and bind mount those to /data when you run the openmap-tileserver. It will pick up on the existing mbtiles and config, skipping the install step, and boot straight into serving the tiles.
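A rough sketch of the two run steps (the host port and host paths are only examples; check the image documentation for the container's listening port):
# first run: let the server download tiles and write its config into ./data
docker run --rm -p 8080:80 -v $(pwd)/data:/data klokantech/openmaptiles-server
# later runs: reuse the saved .mbtiles and config.json from that directory
docker run --rm -p 8080:80 -v /path/to/saved/data:/data klokantech/openmaptiles-server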

AWS EB should create new instance once my docker reached its maximum memory limit

I have deployed my dockerized microservices on AWS using Elastic Beanstalk; they are written using Akka-HTTP (https://github.com/theiterators/akka-http-microservice) and Scala.
I have allocated 512MB of memory to each container and am seeing performance problems. I have noticed that CPU usage increases when the server gets more requests (e.g. 20%, 23%, 45%, depending on load), then automatically comes back down to the normal state (0.88%). But memory usage keeps increasing with every request and is not released even after CPU usage returns to normal; once it reaches 100% the container is killed and restarts.
I have also enabled the auto-scaling feature in EB to handle a large number of requests, but it creates another instance only after the CPU usage of the running instance has reached its maximum.
How can I set up auto-scaling to create another instance once memory usage reaches its maximum limit (i.e. 500MB out of 512MB)?
Please provide a solution/way to resolve these problems as soon as possible, as this is a very critical problem for us.
CloudWatch doesn't natively report memory statistics. But there are some scripts that Amazon provides (usually just referred to as the "CloudWatch Monitoring Scripts for Linux") that will get the statistics into CloudWatch so you can use those metrics to build a scaling policy.
The Elastic Beanstalk documentation provides some information on installing the scripts on the Linux platform at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-cw.html.
However, this will come with another caveat in that you cannot use the native Docker deployment JSON as it won't pick up the .ebextensions folder (see Where to put ebextensions config in AWS Elastic Beanstalk Docker deploy with dockerrun source bundle?). The solution here would be to create a zip of your application that includes the JSON file and .ebextensions folder and use that as the deployment artifact.
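A sketch of what that bundle layout could look like (the zip and config file names are only examples; Dockerrun.aws.json is the standard file for single-container Docker deployments):
my-app.zip
├── Dockerrun.aws.json
└── .ebextensions/
    └── cloudwatch.config    # hypothetical file that installs the monitoring scripts and sets custom options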
There is also one thing I am unclear on, and that is whether these metrics will be available to choose from under the Configuration -> Scaling section of the application. You may need to create another .ebextensions config file to set the custom metric, such as:
option_settings:
  aws:elasticbeanstalk:customoption:
    BreachDuration: 3
    LowerBreachScaleIncrement: -1
    MeasureName: MemoryUtilization
    Period: 60
    Statistic: Average
    Threshold: 90
    UpperBreachScaleIncrement: 2
Now, even if this works, if the application does not lower its memory usage after scaling and the load goes down, then the scaling policy will just continue to trigger and eventually reach the maximum number of instances.
I'd first see if you can get some garbage collection statistics for the JVM and maybe tune the JVM to do garbage collection more often to help bring memory down faster after application load goes down.
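As a rough sketch (flag values are only examples and need tuning for your service), you could cap the heap and enable GC logging through the JVM options your container's start command passes to the JVM:
# hypothetical JAVA_OPTS consumed by your Dockerfile ENTRYPOINT / start script
JAVA_OPTS="-Xms128m -Xmx384m -XX:+UseG1GC -verbose:gc"
With a 512MB container, keeping the maximum heap well below the limit leaves headroom for off-heap memory, threads and metaspace, and reduces the chance of the container being killed.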