Restart a container from a sidecar in the same pod - kubernetes

I have a pod with two containers:
A component that takes input files and does not support hot reload; to handle a new set of files I need to restart it with the new files in a particular directory.
A sidecar that handles "events" and communicates with the other container.
What I want to do is pull specific files from my sidecar container and relaunch the other container with the new set of files.
Is this possible, or does a better solution exist?
Thanks

git-sync is a simple command that pulls a git repository into a local directory. It is a perfect "sidecar" container in Kubernetes - it can periodically pull files down from a repository so that an application can consume them.
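A rough sketch of such a setup (the repository URL, image tags, and mount path are placeholders; check the git-sync project for the current image and flags): the sidecar and the main container share an emptyDir volume, and git-sync keeps it up to date.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-git-sync
spec:
  volumes:
    - name: content
      emptyDir: {}
  containers:
    - name: app                   # the component that consumes the files
      image: my-app:latest        # placeholder
      volumeMounts:
        - name: content
          mountPath: /data
    - name: git-sync              # sidecar that keeps /data in sync with the repo
      image: registry.k8s.io/git-sync/git-sync:v4.2.3    # placeholder tag
      args:
        - --repo=https://github.com/example/config-repo  # placeholder repository
        - --root=/data
        - --period=60s
      volumeMounts:
        - name: content
          mountPath: /data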

Related

Automatically transfer files between containers using Kubernetes

I want to make a container that is able to transfer files between itself and other containers on the cluster. I have multiple containers that are responsible for executing a task, and they are waiting to get an input file to do so. I want a separate container to be responsible for handling files before and after the task is executed by the other containers. As an example:
have all files on the file manager container.
let the file manager container automatically copy a file to a task executing container.
let task executing container run the task.
transfer the output of the task executing container to the file manager container.
And I want to do this automatically, so that for example 400 input files can be processed to output files in this way. What would be the best way to realise such a process with Kubernetes? Where should I start?
A simple approach would be to set up NFS or use a managed file system like AWS EFS.
You can mount the file system or NFS share directly into the Pods with the ReadWriteMany access mode.
ReadWriteMany - multiple Pods can access the single file system.
If you don't want to use a managed service like EFS, you can also run your own storage on Kubernetes; check out MinIO: https://min.io/
All files are saved in the file system, and each Pod can simply access them from there as required.
You can create different directories to separate the outputs.
If you only need read access, meaning all Pods should only read the files, you can use the ReadOnlyMany access mode instead.
If you are on GCP, you can check out this document: https://cloud.google.com/filestore/docs/accessing-fileshares
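A minimal sketch of that setup (the storage class depends on your NFS/EFS/Filestore provisioner, and the worker image is a placeholder):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    - ReadWriteMany              # many Pods can read and write the same volume
  storageClassName: nfs-client   # placeholder: depends on your provisioner
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: task-executor
spec:
  containers:
    - name: worker
      image: my-worker:latest    # placeholder
      volumeMounts:
        - name: shared
          mountPath: /input
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-files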

Is there an option to copy image between nodes in kubernetes cluster?

I have a case where we have to patch the Docker image on a Kubernetes node and retag it to replace the old one. This process isn't so easy and obvious, because I have several nodes.
Therefore, could I do the retag process on only one node and then copy the new image to the other nodes? If that is possible, should I delete the old image before copying the retagged one?
I advise you to clone your deployment and use your retagged image for the new deployment, then scale down the old deployment with the old image tag.
It's not possible to do cluster-to-cluster copying. You'd need to use kubectl cp to copy the file locally, then copy it back:
kubectl cp <source-pod>:/tmp/test /tmp/test
kubectl cp /tmp/test <target-pod>:/tmp/test
If you are trying to share files between pods, and only one pod needs write access, you probably want to mount a read-only volume on multiple pods, or use an object store like S3. Copying files to and from pods really shouldn't be something you're doing often; that's an anti-pattern.
Best Practice:
could I do retag process only on one node and then copy a new image to other nodes?
Moreover, you can create a private image registry and push/pull your Docker images from there.
So make the change in your image, push it to the registry, and all nodes will be able to pull the new image.
Ref: Setting Up a Private Docker Registry on Ubuntu 18.04
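A rough sketch of that workflow (the registry address and image name are placeholders):
# build (or retag) the patched image once, on a workstation or CI machine
docker build -t registry.example.com:5000/my-app:1.2 .
docker push registry.example.com:5000/my-app:1.2
# every node then pulls the new image automatically when a Pod references it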
then should I delete the old image before copying retagged one
No, use image versioning.
Let's assume you are using image MyImage:1.1. Now you make some changes and create a new image with version 1.2, so your image will be MyImage:1.2.
Now change the image name in your deployment file to MyImage:1.2 and apply the deployment. Your deployment will be upgraded to the new image.
You can use the RollingUpdate strategy for a zero-downtime upgrade.
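A minimal sketch of such a Deployment (names, registry, and tag are placeholders; real image names must be lowercase, so the sketch uses my-image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # keep old Pods serving until new ones are ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com:5000/my-image:1.2   # bump this tag for every release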
Moral:
In the new IT world we mostly work with multiple clusters and many nodes. We have regular changes and customizations to meet client demands or business needs. We can't just make a change on a single node and then push it to every other node one by one; trust me, that is very hectic.

What is the root password of postgresql-ha/helm?

Installed PostgreSQL on AWS EKS through Helm: https://bitnami.com/stack/postgresql-ha/helm
I need to perform some tasks in the deployment with root rights, but when I run
su -
it asks for a password that I don't know and don't know where to find, and accessing the folders I need, such as /opt/bitnami/postgresql/, gives
Error: Permission denied
How do I get the necessary rights, or what is the password?
Image attached: bitnami root error
I need [...] to place the .so libraries I need for postgresql in [...] /opt/bitnami/postgresql/lib
I'd consider this "extending" rather than "configuring" PostgreSQL; it's not a task you can do with a Helm chart alone. On a standalone server it's not something you could configure with only a text editor, for example, and while the Bitnami PostgreSQL-HA chart has a pretty wide swath of configuration options, none of them allow providing extra binary libraries.
The first step to doing this is to create a custom Docker image that includes the shared library. That can start FROM the Bitnami PostgreSQL image this chart uses:
ARG postgresql_tag=11.12.0-debian-10-r44
FROM bitnami/postgresql:${postgresql_tag}
# assumes the shared library is in the same directory as
# the Dockerfile
COPY whatever.so /opt/bitnami/postgresql/lib
# or RUN curl ..., or RUN apt-get, or ...
#
# You do not need EXPOSE, ENTRYPOINT, CMD, etc.
# These come from the base image
Build this image and push it to a Docker registry, the same way you do for your application code. (In a purely local context you might be able to docker build the image in minikube's context.)
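For example (the registry and repository names here are placeholders, matching the values snippet shown below):
docker build -t registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44 .
docker push registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44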
When you deploy the chart, it has options to override the image it runs, so you can point it at your own custom image. Your Helm values could look like:
postgresqlImage:
  registry: registry.example.com:5000
  repository: infra/postgresql
  tag: 11.12.0-debian-10-r44
# `docker run registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44`
and then you can provide this file via the helm install -f option when you deploy the chart.
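For instance, assuming the values above are saved as values.yaml (the release name here is arbitrary):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql-ha -f values.yaml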
You should almost never try to manually configure a Kubernetes pod by logging into it with kubectl exec. It is extremely routine to delete pods, and in many cases Kubernetes does this automatically (if the image tag in a Deployment or StatefulSet changes; if a HorizontalPodAutoscaler scales down; if a Node is taken offline); in these cases your manual changes will be lost. If there are multiple replicas of a pod (with an HA database setup there almost certainly will be) you also need to make identical changes in every replica.
Like they told you in the comments, you are using the wrong approach to the problem. Executing inside a container to make manual changes is (most of the time) useless, since Pods (and the containers that are part of such Pods) are ephemeral entities that will be lost whenever the Pod restarts.
Unless the path you are trying to interact with is backed by a persistent volume, all your changes will be lost as soon as the container is restarted.
Helm charts, like the Bitnami postgresql-ha chart, expose several ways to refine / modify the default installation:
You could build a custom Docker image starting from the one used by default, adding the libraries and whatever else you need. This way the container is already "ready" the way you want it as soon as it starts.
You could add an init container to perform operations such as preparing files for the main container on emptyDir volumes, which can then be mounted at the expected path (a sketch of this pattern follows below).
You could inject an entrypoint script which does what you want at start, before calling the main entrypoint.
Check the README, as it lists all the possibilities offered by the chart (such as how to override the image with your custom one and more).
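An illustrative Pod-level fragment of the init-container approach (image names, paths, and the library file name are placeholders; the chart's own value keys for extra init containers and volumes are listed in its README):

spec:
  volumes:
    - name: extra-libs
      emptyDir: {}
  initContainers:
    - name: fetch-libs
      image: registry.example.com:5000/lib-provider:1.0   # placeholder image that ships whatever.so
      command: ["sh", "-c", "cp /src/whatever.so /staging/"]
      volumeMounts:
        - name: extra-libs
          mountPath: /staging
  containers:
    - name: postgresql
      image: bitnami/postgresql:11.12.0-debian-10-r44
      volumeMounts:
        - name: extra-libs
          mountPath: /opt/bitnami/postgresql/lib/whatever.so
          subPath: whatever.so   # mount just the one file so the rest of lib/ is not hidden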

Is there any way I can edit file in the container and restart it?

Is there any way I can exec into a container, then edit some code (e.g. add some logging, edit a configuration file, etc.) and restart the container to see what happens?
I tried to search for this but found nothing helpful.
The point is, I want to do a quick debug, not a full cluster deployment.
Some programs (e.g. nginx) support configuration reload without restarting their process; with these you can just kubectl exec in, change the config, and send a signal to the master process (e.g. kubectl exec <nginx_pod> -- kill -HUP 1). This is a feature of the software, though, so many programs will not support it.
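A small sketch of that reload pattern for nginx (the pod name and config path are placeholders):
# copy the edited configuration into the running container
kubectl cp ./nginx.conf my-nginx-pod:/etc/nginx/nginx.conf
# tell the master process to reload it without restarting the container
kubectl exec my-nginx-pod -- nginx -s reload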
Containers are immutable by design, so they restart with a clean state each time. That said, with no simple way of doing this, there are hackish ways to achieve it.
One I can think of involves modifying the image on the node that will then restart the container. If you can ssh into the node and access docker directly, you can identify the container with the modified file and commit those changes with docker commit under the same tag. At that point the node's local image with that tag has your changes baked in, so if the container restarts (not reschedules, as it could then start on a different node), it will come up with your changes (assuming you do not use imagePullPolicy: Always).
Again, not the way it's meant to be used, but achievable.
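A rough sketch of that approach, assuming the node runs the Docker runtime and you can SSH into it (names are placeholders):
# on the node that currently runs the Pod
docker ps | grep my-app                       # find the container with your modified file
docker commit <container-id> my-app:latest    # bake the change into the image under the tag the Pod uses
# a plain container restart on this node now starts from the committed image,
# as long as imagePullPolicy is not Always and the Pod is not rescheduled to another node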
Any changes to the local container file system will be lost if you restart the pod. You would need to work out whether the application stack you are using can perform an internal restart without actually exiting.
What language/application stack are you using?
You should at least consider a hostPath volume, in order to share local files on your host with your Kubernetes instance, so that you can do that kind of test.
After that, it is up to the application running within your pod to detect the file change and restart if needed (i.e., this is not specific to Kubernetes at all).
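A minimal sketch of such a hostPath mount (the path and image are placeholders, and note that this ties the Pod to a specific node):

apiVersion: v1
kind: Pod
metadata:
  name: debug-app
spec:
  containers:
    - name: app
      image: my-app:latest               # placeholder
      volumeMounts:
        - name: local-src
          mountPath: /app/config
  volumes:
    - name: local-src
      hostPath:
        path: /home/me/project/config    # placeholder directory on the node
        type: Directory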
You could put any configuration in a ConfigMap and then just apply that, obviously assuming that whatever reads the ConfigMap will re-read it.
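For example (the ConfigMap name and file are placeholders):
kubectl create configmap app-config --from-file=app.conf
# update it in place later:
kubectl create configmap app-config --from-file=app.conf --dry-run=client -o yaml | kubectl apply -f -
A ConfigMap mounted as a volume is refreshed automatically after a short delay (unless it is mounted with subPath), but as noted, the application still has to re-read the file.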
I have faced the same issue in my container as well.
I did the steps below in my Kubernetes container, and it worked.
Logged into the pod, e.g.:
kubectl exec --stdin --tty nginx-6799fc88d8-mlvqx -- /bin/bash
Once I was logged into the application pod, I ran the commands below:
#apt-get update
#apt-get install vim
Now I am able to use the vim editor in the Kubernetes container.

Docker application deployment

My web application consists of 3 Docker containers: app (the main container with the code), redis and node. I have a deployment shell script which does the following things:
clones master from git (git clone <...> $REVISION)
removes all files from document root directory (rm -rf $PROJECT_DIR)
move everything cloned into document root (mv $REVISION $PROJECT_DIR)
stop all running containers: (docker-compose stop)
remove all stopped containers (docker-compose rm -f)
build containers (docker-compose build)
run all built containers (docker-compose up -d)
run all init and start scripts inside the containers via docker exec (for example: config compilers, nginx reload)
And this works fine for me, but I have several doubts about this scheme:
In step 6, if I don't change files in the node container, it will use the already built image - that is fast. But if I change something, the container will be built again - that is slow and leaves unused images behind.
In the worst case (when I have made changes to the node code) a deployment takes maybe 2-3 minutes, in the best case about 30 seconds. But even then it is downtime for some users.
As I see it, I need the ability to build the new container (in parallel, while the old container keeps working), and only after a successful build switch the "latest" tag to the new container used by the app. How can I do this?
I will be very thankful for your comments.
What I do is tag all my images by version in addition to tagging them "latest". So I have one image with multiple tags - just tag it with more than one. When you tag by version, it lets you move the "latest" tag around without problems:
docker build -t myapp .
docker tag myapp:latest myapp:0.8.1
Now when you run docker images you'll see the same image listed twice, just with different tags (both "latest" and "0.8.1"). So when you go to build something like you mention:
# the original container is still running while this builds ...
docker build -t myapp .
# now tag "latest" to the newest version
docker tag myapp:latest myapp:0.8.2
# and now you can just stop and restart the container ...
docker rename myapp myapp-old
docker stop myapp-old
docker run -d --name=myapp -p 80:80 myapp:latest
This is something you could do, but it looks like you're really needing a way to swap containers without any downtime. Zero-downtime container changes.
There is a process I have used for a couple of years now of using an Nginx reverse proxy for your Docker containers. Jason Wilder details in this blog post the process for doing so.
I'll give you an overview of what this will do for you. The jwilder/nginx-proxy docker image will serve as a reverse proxy for your containers, and by default it round-robin load-balances inbound connections to containers based on the hostname. After you build and run a container with the same VIRTUAL_HOST environment variable, nginx-proxy automatically round-robin load-balances the two containers. This way, you can start the new container, and it will begin servicing requests. Then you can just bring down your other, old container. Zero-downtime updates.
Just some details: The nginx-proxy image uses Jason Wilder's docker-gen utility to automatically grab the docker container information and then route requests to each. What this means is that you start your normal containers with a new environment variable (VIRTUAL_HOST) and nginx-proxy will automatically begin routing inbound requests to the container. This is best used to "share" a port (e.g. tcp/80) among many containers. Also this reverse proxy means it can handle HTTPS as well as HTTP authentication, so that you don't have to handle it inside your web containers. The backend is unencrypted (HTTP) but since it's on the same host, no problem.
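A rough sketch of that flow (the hostname, container names, and image tags are placeholders):
# start the reverse proxy once; it watches the Docker socket for containers to route to
docker run -d --name nginx-proxy -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# bring up the new version next to the old one; both advertise the same VIRTUAL_HOST
docker run -d --name app-v2 -e VIRTUAL_HOST=app.example.com my-app:0.8.2
# nginx-proxy now round-robins app.example.com across both containers;
# once the new one looks healthy, retire the old container
docker stop app-v1 && docker rm app-v1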