Is there a way to specify a tar file of docker image in manifest file for kubernetes?

Is there a way to specify a tar file of a docker image in a deployment manifest file for kubernetes? The nodes have access to a mounted network drive that will have the tar file. There's a post where the image is loaded by docker on each node, but I was wondering if there's a way just to specify the tar file and have Kubernetes do the loading and running.
--edit--
To be more exact, say I have a mounted network drive on each node: is there a way, with just the manifest file, to instruct Kubernetes to load that image directly from the tar file and not have to put it into a docker registry?

In general, no, Kubernetes can only access container images from a registry, not from a network drive, see documentation.
However, you could have a private registry inside your cluster (see docs). You could also have the images locally on the nodes (pre-pulled images) and have Kubernetes access them from there by setting imagePullPolicy to Never (see docs).
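For illustration, a minimal pod sketch of the pre-pulled approach (the image name my-app:1.0 is just a placeholder):
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    # the image must already be present in the node's local image store
    image: my-app:1.0
    # never contact a registry; fail if the image is not found locally
    imagePullPolicy: Never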

You have provided quite limited information about your environment and what it looks like.
Two things come to my mind.
Use an initContainer to download the file using wget or similar.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
That way you can be sure that the tar file is downloaded before your application starts. An example can be found here.
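A minimal sketch of that idea (the URL, image names and paths are placeholders, not taken from your setup):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-downloaded-tar
spec:
  volumes:
  - name: workdir
    emptyDir: {}
  initContainers:
  - name: fetch-tar
    image: busybox
    # download the tar file into the shared volume before the app starts
    command: ['wget', '-O', '/work-dir/archive.tar', 'http://example.com/archive.tar']
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: workdir
      mountPath: /data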
Use a Volume Mount
In your Deployment, StatefulSet or Pod (not sure which you are using), you can mount a Volume into the pod. After that, the specified path from the volume will be available inside the pod. Please keep in mind that you have to use the proper access modes.
To work with the .tar file you can use bash commands as shown in this documentation.
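As a rough sketch of that approach, assuming the network drive is exposed as an NFS volume (server, paths and the run command are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: tar-consumer
spec:
  volumes:
  - name: share
    nfs:
      server: nfs.example.com
      path: /exports/images
  containers:
  - name: app
    image: ubuntu
    # unpack the tar from the mounted share, then start the actual workload
    command: ['sh', '-c', 'mkdir -p /opt/app && tar -xf /mnt/share/app.tar -C /opt/app && exec /opt/app/run.sh']
    volumeMounts:
    - name: share
      mountPath: /mnt/share
      readOnly: true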

Related

Deploy containers in pod using docker compose volumes

I was given a docker compose file for superset which included volumes mounted from the repo itself.
docker-compose-non-dev.yml
I have to deploy this as containers in a pod in an EKS cluster. I can't figure out how the volumes should be done because the files are mounted locally from the repo when we run:
docker-compose up
[ EDIT ]
I just built the container with the files I needed inside it.
Docker compose is a tool geared towards local deployments (as you may know), and so it optimizes its workflows with that assumption. One way to work around this is to wrap the docker image(s) that compose brings up with the additional files you have in your local environment. For example, a wrapper Dockerfile would be something like
FROM <original image>
ADD <local files to new image>
The resulting image is what you would run in the cloud on EKS.
Of course there are many other ways to work around it, such as using Kubernetes volumes and (pre-)populating them with the local files, or baking the local files into the original image from the get-go, etc.
All in all, the traditional compose way of thinking (with local file mappings) isn't very "cloud deployment friendly".
You can convert docker-compose.yaml files with a tool called kompose.
It's as easy as running
kompose convert
in a directory containing the docker-compose.yaml file.
This will create a bunch of files which you can deploy with kubectl apply -f . (or kompose up). You can read more here.
However, even though kompose will generate PersistentVolumeClaim manifests, no PersistentVolumes will be created. You have to create those yourself (the cluster may try to provision PVs by itself based on the PVCs generated by kompose, but I would not rely on that).
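If you do write the PVs by hand, a rough sketch could look like this (name, size and hostPath are assumptions; make sure accessModes, capacity and storageClassName match the PVC that kompose generated):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: superset-home-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  # hostPath is only sensible for single-node or test clusters
  hostPath:
    path: /data/superset-home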
Docker compose is mainly used for development, testing and single-host deployments [reference], which is not exactly what Kubernetes was created for (the latter being cloud-oriented).

What is the root password of postgresql-ha/helm?

I installed PostgreSQL in AWS EKS through Helm: https://bitnami.com/stack/postgresql-ha/helm
I need to perform some tasks in the deployment with root rights, but
su -
asks for a password that I don't know and don't know where to find, and without it I can't access the desired folders, such as /opt/bitnami/postgresql/:
Error: Permission denied
How do I get the necessary rights, or what is the password?
Image attached: bitnami root error
I need [...] to place the .so libraries I need for postgresql in [...] /opt/bitnami/postgresql/lib
I'd consider this "extending" rather than "configuring" PostgreSQL; it's not a task you can do with a Helm chart alone. On a standalone server it's not something you could configure with only a text editor, for example, and while the Bitnami PostgreSQL-HA chart has a pretty wide swath of configuration options, none of them allow providing extra binary libraries.
The first step to doing this is to create a custom Docker image that includes the shared library. That can start FROM the Bitnami PostgreSQL image this chart uses:
ARG postgresql_tag=11.12.0-debian-10-r44
FROM bitnami/postgresql:${postgresql_tag}
# assumes the shared library is in the same directory as
# the Dockerfile
COPY whatever.so /opt/bitnami/postgresql/lib
# or RUN curl ..., or RUN apt-get, or ...
#
# You do not need EXPOSE, ENTRYPOINT, CMD, etc.
# These come from the base image
Build this image and push it to a Docker registry, the same way you do for your application code. (In a purely local context you might be able to docker build the image in minikube's context.)
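For example, with placeholder registry and image names that match the values shown below:
docker build -t registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44 .
docker push registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44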
When you deploy the chart, it has options to override the image it runs, so you can point it at your own custom image. Your Helm values could look like:
postgresqlImage:
  registry: registry.example.com:5000
  repository: infra/postgresql
  tag: 11.12.0-debian-10-r44
# `docker run registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44`
and then you can provide this file via the helm install -f option when you deploy the chart.
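For instance, with Helm 3 syntax and an illustrative release name:
helm install my-postgres bitnami/postgresql-ha -f values.yaml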
You should almost never try to manually configure a Kubernetes pod by logging into it with kubectl exec. It is extremely routine to delete pods, and in many cases Kubernetes does this automatically (if the image tag in a Deployment or StatefulSet changes; if a HorizontalPodAutoscaler scales down; if a Node is taken offline); in these cases your manual changes will be lost. If there are multiple replicas of a pod (with an HA database setup there almost certainly will be) you also need to make identical changes in every replica.
Like they told you in the comments, you are using the wrong approach to the problem. Exec'ing into a container to make manual changes is (most of the time) pointless, since Pods (and the containers which are part of such Pods) are ephemeral entities, which will be lost whenever the Pod restarts.
Unless the path you are trying to interact with is backed by a persistent volume, all your changes will be lost as soon as the container is restarted.
Helm charts, like the bitnami postgresql-ha chart, expose several ways to refine / modify the default installation:
You could build a custom docker image starting from the one used by default, adding the libraries and whatever else you need. This way the container will already be "ready" the way you want as soon as it starts
You could add an additional Init Container to perform operations such as preparing files for the main container on emptyDir volumes, which can then be mounted at the expected path (see the sketch after this list)
You could inject an entrypoint script which does what you want at start, before calling the main entrypoint
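As a rough illustration of the init-container idea in plain pod-spec terms (image and library names are assumptions; the chart's README documents the actual hooks for injecting extra init containers and volumes):
# fragment of a pod spec
spec:
  volumes:
  - name: extra-libs
    emptyDir: {}
  initContainers:
  - name: copy-libs
    # helper image that only carries the extra .so file
    image: registry.example.com/pg-extra-libs:latest
    command: ['cp', '/libs/whatever.so', '/target/whatever.so']
    volumeMounts:
    - name: extra-libs
      mountPath: /target
  containers:
  - name: postgresql
    image: bitnami/postgresql:11.12.0-debian-10-r44
    volumeMounts:
    # mount just the single file so the rest of lib/ stays intact
    - name: extra-libs
      mountPath: /opt/bitnami/postgresql/lib/whatever.so
      subPath: whatever.so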
Check the Readme as it lists all the possibilities offered by the Chart (such as how to override the image with your custom one and more)

What is the right way to provision nodes with static content in Amazon EKS?

I have an application that loads a .conf file and some additional files on startup. Now I want to run this app in Amazon EKS. What is the best way to inject these files into a pod in Kubernetes? I tried copying them into a directory on the node and mounting that directory into the pod via hostPath. That works but doesn't feel like the right way to do it. Does EKS have any auto-provisioning tool for this?
If it's a fixed config file for your app, you can even bake it into the docker image, i.e. COPY the file in your Dockerfile.
If it needs to be configurable during deployment (e.g. it's environment-specific), then indeed, as mentioned by #anmolagrawal above, ConfigMap is the right way:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
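A minimal sketch of that approach (names, keys and the mount path are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.conf: |
    # contents of your .conf file go here
    some_setting = value
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
  - name: config
    configMap:
      name: app-config
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    # the file shows up as /etc/my-app/app.conf inside the container
    - name: config
      mountPath: /etc/my-app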
If you can modify your app to rely on env vars or command-line arguments, it will make your life a lot simpler, you can just pass those values in the Pod spec, no need for ConfigMap.
But you definitely shouldn't be managing yourself any app-specific content on the Kubernetes nodes.

Kubernetes persistent volume claim on /var/www/html problem

I have a Magento deployment on nginx which uses a persistent volume and a persistent volume claim. Everything works fine, but I am struggling with one problem. I am using an initContainer to install Magento via the CLI (which works fine), but as soon as my pod starts and mounts the PVC to /var/www/html (my webroot), the data previously installed in the initContainer is lost (or rather, replaced by the new mount). My workaround was to install Magento into /tmp/magento (in the initContainer) and, as soon as the "real" pod is up, copy the data from /tmp/magento to /var/www/html. As you can imagine this takes a while and is kind of a permission hell, but it works.
Is there any way that I can install my app directly into the target directory without "overmapping" my files? I have to use a PV/PVC because I am mounting the pod directory via NFS, and I also don't want to lose my files.
Update: The Magento deployment is inside a docker image and is installed during the docker build. So if I install the data into the target location, the kubernetes mount replaces the data with an empty mount. That's the main reason for the workaround. The goal is to have the whole installation inside the image.
If Magento is already installed inside the image and located at some path (say /tmp/magento) but you want it to be accessible at the path /var/www/html/magento, why don't you just create a symlink pointing to the existing location?
So your Magento will be installed during the image build process and in the entrypoint an additional command
ln -s /tmp/magento /var/www/html/magento
will be run before the Nginx server starts. No need for initContainers.
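A hedged sketch of such an entrypoint wrapper (script name, paths and the nginx command are assumptions about your image):
#!/bin/sh
# docker-entrypoint.sh: link the pre-installed Magento into the webroot,
# then hand off to whatever command the image normally runs
ln -sfn /tmp/magento /var/www/html/magento
exec "$@"
and the corresponding Dockerfile additions:
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]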

Is it possible to edit a file on a GCP persistent disk?

I have a node on Google Kubernetes Engine using a persistent volume. Is it possible to edit files on this volume from gcloud or Google Cloud Shell? For example, to edit a config and recreate the node? Or is it only possible from a running pod using kubectl exec?
I think you can have a look at the gsutil command; it allows you to interact with your buckets.
Guide to Gsutil
The volume would be a block device, so I'd expect it's not possible to edit it outside of the pod it's attached to. So yes, exec'ing into the pod would do it, but you could also just use kubectl cp to copy files (and directories!) directly from your local machine onto the volume mounted to the pod.
Here’s the relevant doc:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp
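Usage is along these lines (namespace, pod name and paths are placeholders):
kubectl cp ./local.conf my-namespace/my-pod:/var/www/html/local.conf
# and in the other direction, from the pod back to your machine
kubectl cp my-namespace/my-pod:/var/www/html/local.conf ./local.conf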