Is it a bad idea to include Docker Compose alongside Kubernetes, Minikube and Skaffold in your team's dev environment? - kubernetes

Ideally, we would want to stick to Minikube and Skaffold only.
But there are many cases in which we would like to enable two-way syncing of volumes, so that changes in a specific container directory are reflected in a directory on the host machine.
We currently use kubectl to copy directories and files manually from the pod into a local directory, but we would like to automate this step.
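For reference, the manual copy step looks roughly like this (namespace, pod name and paths are placeholders):

```shell
# Copy a directory from a running pod to the local machine
kubectl cp my-namespace/my-pod:/some-container/path ./local-copy
# If the pod has several containers, name the one to copy from
kubectl cp my-namespace/my-pod:/some-container/path ./local-copy -c my-container
```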
Docker-Compose makes it very easy to set this up by defining a rw volume to a service:
services:
  myService:
    image: some/image
    volumes:
      - /some-host/path:/some-container/path:rw
So whenever we need to reflect changes into our local environment, we would stop skaffold, start docker-compose, and make the changes necessary on the container so that they are automatically reflected locally.
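The switch-over itself is only a couple of commands; a sketch, assuming the compose file sits next to the Skaffold config:

```shell
# Tear down the Skaffold-managed deployment
skaffold delete
# Bring the service up with the bind mount active
docker-compose up -d myService
# ...make the changes inside the container...
# Then switch back to the normal dev loop
docker-compose down
skaffold dev
```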
The issue is that whenever we want to change one of the services in the system, we now have to reflect the change both in our k8s deployments and in our docker-compose file. That includes mirroring changes to secrets, config maps, etc., and we might end up with a really complicated project.
Is this a bad idea? Is there a better approach?

You can have a look at Tilt or Telepresence. We are currently migrating our local development environment from docker-compose to a microk8s-based approach and are looking into those two tools, as we too are facing the issue of shared volumes, which are not supported out of the box in microk8s.
Just an idea; we will have to see ourselves what solution works best for us :)
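For what it's worth, the Telepresence (v2) workflow we are evaluating looks roughly like this; the deployment name and port are placeholders:

```shell
# Connect the local machine to the cluster network
telepresence connect
# Route traffic for a deployment to a process running locally,
# so code and files can be edited directly on the host
telepresence intercept my-service --port 8080:http
# Stop the intercept when done
telepresence leave my-service
```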

Related

Latest AWX version with docker-compose for production

I'm trying to configure an AWX runtime using Docker with Docker Compose. With the image quay.io/ansible/awx:21.7.0 it seems a little tricky. I don't want to set up Kubernetes and use the AWX Operator: I don't have the resources or the tasks to justify that complexity, it would just be redundant tooling. All I need is a running Docker process with some additional services in my infrastructure (for example Traefik and systemd services, AWX being one of them).
Has anyone gone down this path? I'm trying to find the production Dockerfile (I assume one is used in prod, right?) and prepare the Django environment to work inside docker-compose (env vars, networks, resources, services).
I'll be updating this post with my results. Thanks, I hope I'm not alone with this problem.

Kubernetes configMap or persistent volume?

What is the best approach to passing multiple configuration files into a POD?
Assume that we have a legacy application that we have to dockerize and run in a Kubernetes environment. This application requires more than 100 configuration files to be passed. What is the best solution to do that? Create hostPath volume and mount it to some directory containing config files on the host machine? Or maybe config maps allow passing everything as a single compressed file, and then extracting it in the pod volume?
Maybe helm allows somehow to iterate over some directory, and create automatically one big configMap that will act as a directory?
Any suggestions are welcome
Create hostPath volume and mount it to some directory containing config files on the host machine
This should be avoided.
Accessing hostPaths may not always be allowed. Kubernetes may use PodSecurityPolicies (soon to be replaced by OPA/Gatekeeper or whatever admission controller you want ...), and OpenShift has similar SecurityContextConstraints objects, allowing you to define policies for which user can do what. As a general rule: accessing hostPaths would be forbidden.
Besides, hostPath devices are local to one of your nodes. You won't be able to schedule your Pod somewhere else if there's an outage. Either you've set a nodeSelector restricting its deployment to a single node, and your application will be down for as long as that node is; or there's no placement rule, and your application may restart without its configuration.
Now you could say: "if I mount my volume from an NFS share of some sort, ...". Which is true. But then, you would probably be better using a PersistentVolumeClaim.
Create automatically one big configMap that will act as a directory
This could be an option. Although, as noted by @larsks in the comments to your post: beware that ConfigMaps are limited in size, and manipulating large objects (frequent edits/updates) could grow your etcd database.
If you really have ~100 files, ConfigMaps may not be the best choice here.
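As for the Helm idea from the question: a chart can indeed fold a whole directory into a single ConfigMap with .Files.Glob; a sketch, assuming the files live under config/ inside the chart:

```yaml
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
data:
{{ (.Files.Glob "config/*").AsConfig | indent 2 }}
```

The same caveat applies: everything under config/ still has to fit within the ConfigMap size limit.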
What next?
There's no one good answer, not knowing exactly what we're talking about.
If you want to allow editing those configurations without restarting containers, it would make sense to use some PersistentVolumeClaim.
If that's not needed, ConfigMaps could be helpful, provided you can somewhat limit their size and stick to non-critical data, while Secrets could be used to store passwords or any sensitive configuration snippets.
Some emptyDir could also be used, assuming you can figure out a way to automate provisioning of those configurations during container startup (e.g. a git clone in some initContainer, and/or a shell script contextualizing your configuration based on environment variables)
If there are files that are not expected to change over time, or whose lifecycle is closely related to that of the application version shipping in your container image: I would consider adding them to my Dockerfile. Maybe even add some startup script -- something you could easily call from an initContainer, generating whichever configuration you couldn't ship in the image.
Depending on what you're dealing with, you could combine PVC, emptyDirs, ConfigMaps, Secrets, git stored configurations, scripts, ...
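The emptyDir + initContainer option above can be sketched as follows; the image names and repository URL are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
    - name: config
      emptyDir: {}
  initContainers:
    - name: fetch-config
      image: alpine/git
      # Clone the configuration into the shared volume before the app starts
      args: ["clone", "--depth=1", "https://example.com/my-config.git", "/config"]
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: config
          mountPath: /etc/app
```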

What is the root password of postgresql-ha/helm?

Installed PostgreSQL in AWS Eks through Helm https://bitnami.com/stack/postgresql-ha/helm
I need to perform some tasks in the deployments with root rights, but
su -
asks for a password that I don't know and don't know where to find, and accessing the desired folders, such as /opt/bitnami/postgresql/, gives
Error: Permission denied
How do I get the necessary rights, or what is the password?
Image attached: bitnami root error
I need [...] to place the .so libraries I need for postgresql in [...] /opt/bitnami/postgresql/lib
I'd consider this "extending" rather than "configuring" PostgreSQL; it's not a task you can do with a Helm chart alone. On a standalone server it's not something you could configure with only a text editor, for example, and while the Bitnami PostgreSQL-HA chart has a pretty wide swath of configuration options, none of them allow providing extra binary libraries.
The first step to doing this is to create a custom Docker image that includes the shared library. That can start FROM the Bitnami PostgreSQL image this chart uses:
ARG postgresql_tag=11.12.0-debian-10-r44
FROM bitnami/postgresql:${postgresql_tag}
# assumes the shared library is in the same directory as
# the Dockerfile
COPY whatever.so /opt/bitnami/postgresql/lib
# or RUN curl ..., or RUN apt-get, or ...
#
# You do not need EXPOSE, ENTRYPOINT, CMD, etc.
# These come from the base image
Build this image and push it to a Docker registry, the same way you do for your application code. (In a purely local context you might be able to docker build the image in minikube's context.)
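Concretely, that could look like this, using a placeholder registry:

```shell
# Build the custom image and push it somewhere the cluster can pull from
docker build -t registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44 .
docker push registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44

# For a purely local setup, build straight into minikube's Docker daemon instead
eval $(minikube docker-env)
docker build -t infra/postgresql:11.12.0-debian-10-r44 .
```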
When you deploy the chart, it has options to override the image it runs, so you can point it at your own custom image. Your Helm values could look like:
postgresqlImage:
  registry: registry.example.com:5000
  repository: infra/postgresql
  tag: 11.12.0-debian-10-r44
# `docker run registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44`
and then you can provide this file via the helm install -f option when you deploy the chart.
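For example, assuming the values above are saved as values.yaml and a release name of my-postgres:

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql-ha -f values.yaml
# or, if the chart is already deployed:
helm upgrade my-postgres bitnami/postgresql-ha -f values.yaml
```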
You should almost never try to manually configure a Kubernetes pod by logging into it with kubectl exec. It is extremely routine to delete pods, and in many cases Kubernetes does this automatically (if the image tag in a Deployment or StatefulSet changes; if a HorizontalPodAutoscaler scales down; if a Node is taken offline); in these cases your manual changes will be lost. If there are multiple replicas of a pod (with an HA database setup there almost certainly will be) you also need to make identical changes in every replica.
Like they told you in the comments, you are using the wrong approach to the problem. Executing inside a container to make manual changes is (most of the time) useless, since Pods (and the containers which are part of such Pods) are ephemeral entities, which will be lost whenever the Pod restarts.
Unless the path you are trying to interact with is backed by a persistent volume, as soon as the container is restarted, all your changes will be lost.
Helm charts, like the Bitnami postgresql-ha chart, expose several ways to refine / modify the default installation:
You could build a custom docker image starting from the one used by default, adding the libraries and whatever else you need. This way the container will already be set up the way you want as soon as it starts
You could add an additional initContainer to perform operations such as preparing files for the main container on emptyDir volumes, which can then be mounted at the expected path
You could inject an entrypoint script which does what you want at start, before calling the main entrypoint
Check the README, as it lists all the possibilities offered by the chart (such as how to override the image with your custom one, and more)

What is the right way to provision nodes with static content in Amazon EKS?

I have an application that loads a .conf file and some additional files on startup. Now I want to run this app in Amazon EKS. What is the best way to inject these files into a pod in Kubernetes? I tried copying them into a directory on the node and mounting that directory in the pod via hostpath. That works but doesn't feel the right way to do it. Does EKS have any autoprovision tool for this?
If it's a fixed config file for your app, you can even bake it into the Docker image, i.e. COPY the file in your Dockerfile
If it needs to be configurable during deployment (e.g. it's environment-specific), then indeed, as mentioned by @anmolagrawal above, a ConfigMap is the right way:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
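For example, a .conf file can be packaged into a ConfigMap directly from the file (names are placeholders):

```shell
# One key per file; the key is the file name
kubectl create configmap app-conf --from-file=app.conf
# Or pick up every file in a directory at once
kubectl create configmap app-conf --from-file=./config/
```

The ConfigMap can then be mounted as a volume in the Pod spec so the files appear under a directory of your choosing.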
If you can modify your app to rely on env vars or command-line arguments, it will make your life a lot simpler, you can just pass those values in the Pod spec, no need for ConfigMap.
But you definitely shouldn't be managing yourself any app-specific content on the Kubernetes nodes.

Is there any way I can edit file in the container and restart it?

Is there any way I can exec into the container, then edit some code (e.g. add some logging, edit some configuration file, etc.) and restart the container to see what happens?
I tried to search for this but found nothing helpful.
The point is, I want to do a quick debug, not to do a full cluster deployment.
Some programs (e.g. nginx) support configuration reload without restarting their process; with these you can just kubectl exec, change the config, and send a signal to the master process (e.g. kubectl exec <nginx_pod> -- kill -HUP 1). It is a feature of the software though, so many will not support it.
Containers are immutable by design, so they restart with a clean state each time. That said, with no simple way of doing this, there are hackish ways to achieve it.
One I can think of involves modifying the image on the node, which will then restart the container. If you can ssh into the node and access docker directly, you can identify the container with the modified file and commit these changes with docker commit under the same tag. At that point your local container with that tag has your changes baked in, so if you restart it (not reschedule it, as it could start on a different node), it will come up with your changes (assuming you do not use pullPolicy: Always).
Again, not the way it's meant to be used, but achievable.
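Spelled out, the hack above looks like this; node name, pod name and image tag are placeholders:

```shell
# On the node that runs the pod:
ssh node-1
# Find the container belonging to the pod
docker ps | grep my-pod
# Bake the container's current filesystem into the image, reusing the tag
docker commit <container-id> my-app:mytag
# A restart of the container on this node now uses the modified image,
# provided imagePullPolicy is not Always
```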
Any changes to the local container file system will be lost if you restart the pod. You would need to work out whether the application stack you are using can perform an internal restart without actually exiting.
What language/application stack are you using?
You should at least consider a hostPath volume, to share local files on your host with your Kubernetes instance, so you can do that kind of test.
After that, it is up to the application running within your pod to detect the file change and restart if needed (i.e., this is not specific to Kubernetes at all)
You could put any configuration in a ConfigMap and then just apply that, obviously assuming whatever reads the ConfigMap would re-read it.
I have faced the same issue in my container as well.
I did the steps below in the Kubernetes container, and it worked.
Logged into the pod, e.g.:
kubectl exec --stdin --tty nginx-6799fc88d8-mlvqx -- /bin/bash
Once logged in to the application pod, ran the commands below:
# apt-get update
# apt-get install vim
Now I am able to use the vim editor in the Kubernetes container.