How to share code base between Kubernetes pods?

I have 2 pods inside a single Kubernetes deployment YAML:
one for the code base and php-fpm together
one for nginx
How can I share the code base folder with nginx?
I don't expect any answer that uses an init command to copy the folder from pod to pod.
EDIT
I also tried to split the frontend service (nginx) and the backend service (fpm and the code), but because the application itself requires complex nginx rewrite rules, that won't work for my case.
BUT according to this gist -> https://gist.github.com/matthewpalmer/741dc7a4c418318f85f2fa8da7de2ea1
it seems it isn't possible to do it without COPY, but copying is super slow if you have a large code base.
I want to do the same kind of thing as a docker-compose volume.

Based on the information you've given, you have the following options:
a. Build a new image using the Nginx image as your base image and copy all your source code into that image. Then reference that image in the Kubernetes Deployment.
OR
b. Add your source code to a ConfigMap and mount that in as a volume.
OR
c. Use an initContainer (which you've already said you don't want to do).
All of these from my perspective seem wrong. I think it might be better to revisit why you're doing this and look at whether there are other options.
EDIT (More context now):
You don't need your code added to the Nginx container.
You just need to have the host resolvable. This can be achieved by adding a Service that points at your PHP pods, with the same name as you've defined in the upstream of your nginx configuration.
Look at this article: https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend
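For illustration, here is a minimal sketch of such a Service (the name php-backend, the app label, and port 9000 are assumptions, not from the question; the Service name just has to match the host used in your nginx upstream, e.g. fastcgi_pass php-backend:9000;):

apiVersion: v1
kind: Service
metadata:
  name: php-backend        # assumed name; must match the nginx upstream host
spec:
  selector:
    app: php-backend       # assumed label on the php-fpm Deployment's Pods
  ports:
    - protocol: TCP
      port: 9000           # default php-fpm port
      targetPort: 9000

nginx then talks to the backend over the network instead of reading the code from a shared volume; only the fpm container needs the code base.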

Related

Is it a bad idea to include Docker-Compose alongside Kubernetes, Minikube and Skaffold in your team's dev environment?

Ideally, we would only want to stick to Minikube and Skaffold.
But there are many cases in which we would like to enable 2-way syncing of volumes so that changes in a specific container directory are reflected on a directory of the host machine.
We currently use kubectl to copy directories and files manually from the pod onto a local directory, but we would like to automate this step.
Docker-Compose makes it very easy to set this up by defining a rw volume to a service:
services:
  myService:
    image: some/image
    volumes:
      - /some-host/path:/some-container/path:rw
So whenever we need to reflect changes into our local environment, we would stop skaffold, start docker-compose, and make the changes necessary on the container so that they are automatically reflected locally.
The issue is that if we want to make a change to one of the services in the system we now have to reflect these changes on our k8s deployments, as well as our docker-compose file. These would include reflecting changes to secrets, config maps, etc... and we might end up with a really complicated project.
Is this a bad idea? Is there a better approach?
You can have a look into Tilt or Telepresence. We are currently working on changing our local development environment from docker-compose to a microk8s based approach and are looking into those two tools, as we are also facing the issue of shared volumes, which are not supported out of the box in microk8s.
Just an idea, we will have to see ourselves what solution works best for us :)

Kubernetes configMap or persistent volume?

What is the best approach to passing multiple configuration files into a POD?
Assume that we have a legacy application that we have to dockerize and run in a Kubernetes environment. This application requires more than 100 configuration files to be passed. What is the best solution to do that? Create hostPath volume and mount it to some directory containing config files on the host machine? Or maybe config maps allow passing everything as a single compressed file, and then extracting it in the pod volume?
Maybe Helm somehow allows iterating over a directory and automatically creating one big ConfigMap that will act as a directory?
Any suggestions are welcome.
Create hostPath volume and mount it to some directory containing config files on the host machine
This should be avoided.
Accessing hostPaths may not always be allowed. Kubernetes may use PodSecurityPolicies (soon to be replaced by OPA/Gatekeeper or whatever admission controller you want...), and OpenShift has similar SecurityContextConstraints objects, allowing you to define policies for which user can do what. As a general rule: accessing hostPaths would be forbidden.
Besides, hostPath volumes are local to one of your nodes. You won't be able to schedule your Pod somewhere else if there's an outage. Either you've set a nodeSelector restricting its deployment to a single node, and your application will be down for as long as your node is. Or there's no placement rule, and your application may restart without its configuration.
Now you could say: "if I mount my volume from an NFS share of some sort, ...". Which is true. But then, you would probably be better using a PersistentVolumeClaim.
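If you go that route, a minimal PersistentVolumeClaim sketch could look like this (assuming your cluster has a default StorageClass; the name and size are made up):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-config          # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # arbitrary; ~100 small config files need far less

The claim can then be mounted into the Pod like any other volume, via volumes[].persistentVolumeClaim.claimName.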
Create automatically one big configMap that will act as a directory
This could be an option. Although, as noted by #larsks in the comments to your post: beware that ConfigMaps are limited in size (around 1 MiB, since they are stored in etcd), and frequently editing/updating large objects can grow your etcd database.
If you really have ~100 files, ConfigMaps may not be the best choice here.
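If you do go the Helm route anyway, folding a directory of files into a single ConfigMap is usually done with Files.Glob; a sketch, assuming the files live under config/ inside the chart:

# templates/configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
{{ (.Files.Glob "config/*").AsConfig | indent 2 }}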
What next?
There's no one good answer, not knowing exactly what we're talking about.
If you want to allow editing those configurations without restarting containers, it would make sense to use some PersistentVolumeClaim.
If that's not needed, ConfigMaps could be helpful, if you can somewhat limit their volume and stick with non-critical data, while Secrets could be used for storing passwords or any sensitive configuration snippets.
Some emptyDir could also be used, assuming you can figure out a way to automate provisioning of those configurations during container startup (e.g. a git clone in some initContainer, and/or some shell script contextualizing your configuration based on some environment variables).
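As a rough illustration of that emptyDir/initContainer idea (the repository URL, image names and paths below are all hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: legacy-app                  # hypothetical
spec:
  volumes:
    - name: config
      emptyDir: {}
  initContainers:
    - name: fetch-config
      image: alpine/git             # any image containing git would do
      args: ["clone", "--depth=1", "https://example.com/config-repo.git", "/config"]
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: registry.example.com/legacy-app:1.0   # hypothetical
      volumeMounts:
        - name: config
          mountPath: /etc/legacy-app               # wherever the app expects its files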
If there are files that are not expected to change over time, or whose lifecycle is closely related to that of the application version shipping in your container image: I would consider adding them to my Dockerfile. Maybe even add some startup script -- something you could easily call from an initContainer, generating whichever configuration you couldn't ship in the image.
Depending on what you're dealing with, you could combine PVC, emptyDirs, ConfigMaps, Secrets, git stored configurations, scripts, ...

What is the root password of postgresql-ha/helm?

I installed PostgreSQL in AWS EKS through Helm: https://bitnami.com/stack/postgresql-ha/helm
I need to perform some tasks in the deployment with root rights, but
su -
requires a password that I don't know and don't know where to find, and accessing the desired folders, such as /opt/bitnami/postgresql/, gives
Error: Permission denied
How do I get the necessary rights, or what is the password?
Image attached: bitnami root error
I need [...] to place the .so libraries I need for postgresql in [...] /opt/bitnami/postgresql/lib
I'd consider this "extending" rather than "configuring" PostgreSQL; it's not a task you can do with a Helm chart alone. On a standalone server it's not something you could configure with only a text editor, for example, and while the Bitnami PostgreSQL-HA chart has a pretty wide swath of configuration options, none of them allow providing extra binary libraries.
The first step to doing this is to create a custom Docker image that includes the shared library. That can start FROM the Bitnami PostgreSQL image this chart uses:
ARG postgresql_tag=11.12.0-debian-10-r44
FROM bitnami/postgresql:${postgresql_tag}
# assumes the shared library is in the same directory as
# the Dockerfile
COPY whatever.so /opt/bitnami/postgresql/lib
# or RUN curl ..., or RUN apt-get, or ...
#
# You do not need EXPOSE, ENTRYPOINT, CMD, etc.
# These come from the base image
Build this image and push it to a Docker registry, the same way you do for your application code. (In a purely local context you might be able to docker build the image in minikube's context.)
When you deploy the chart, it has options to override the image it runs, so you can point it at your own custom image. Your Helm values could look like:
postgresqlImage:
  registry: registry.example.com:5000
  repository: infra/postgresql
  tag: 11.12.0-debian-10-r44
  # `docker run registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44`
and then you can provide this file via the helm install -f option when you deploy the chart.
You should almost never try to manually configure a Kubernetes pod by logging into it with kubectl exec. It is extremely routine to delete pods, and in many cases Kubernetes does this automatically (if the image tag in a Deployment or StatefulSet changes; if a HorizontalPodAutoscaler scales down; if a Node is taken offline); in these cases your manual changes will be lost. If there are multiple replicas of a pod (with an HA database setup there almost certainly will be) you also need to make identical changes in every replica.
Like they told you in the comments, you are using the wrong approach to the problem. Executing inside a container to make manual changes is (most of the time) useless, since Pods (and the containers that are part of such Pods) are ephemeral entities, which will be lost whenever the Pod restarts.
Unless the path you are trying to interact with is backed by a persistent volume, as soon as the container is restarted, all your changes will be lost.
Helm charts, like the bitnami postgresql-ha chart, expose several ways to refine/modify the default installation:
You could build a custom Docker image starting from the one used by default, adding the libraries and whatever else you need. This way the container will already be "ready" the way you want as soon as it starts.
You could add an additional init container to perform operations such as preparing files for the main container on emptyDir volumes, which can then be mounted at the expected path.
You could inject an entrypoint script that does what you want at startup, before calling the main entrypoint.
Check the chart's README, as it lists all the possibilities it offers (such as how to override the image with your custom one, and more).

What is the right way to provision nodes with static content in Amazon EKS?

I have an application that loads a .conf file and some additional files on startup. Now I want to run this app in Amazon EKS. What is the best way to inject these files into a pod in Kubernetes? I tried copying them into a directory on the node and mounting that directory in the pod via hostPath. That works, but it doesn't feel like the right way to do it. Does EKS have any auto-provisioning tool for this?
If it's a fixed config file for your app, you can even bake it into the Docker image, i.e. COPY the file in your Dockerfile.
If it needs to be configurable during deployment (e.g. it's environment-specific), then indeed, as mentioned by #anmolagrawal above, a ConfigMap is the right way:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
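As a rough sketch of that approach (the ConfigMap name, file name, image and mount path are all assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-conf                            # hypothetical
data:
  app.conf: |
    # the contents of your .conf file go here
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical
      volumeMounts:
        - name: conf
          mountPath: /etc/app               # wherever the app reads its .conf from
  volumes:
    - name: conf
      configMap:
        name: app-conf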
If you can modify your app to rely on env vars or command-line arguments, it will make your life a lot simpler, you can just pass those values in the Pod spec, no need for ConfigMap.
But you definitely shouldn't be managing yourself any app-specific content on the Kubernetes nodes.

How to update kubernetes deployment without updating image

Background.
We are using k8s 1.7. We use deployment.yml to maintain/update the k8s cluster state. In deployment.yml, the pod's image is set to ${some_image}:latest. Once the deployment is created, the pod's image is updated to ${some_image}:${build_num} whenever code is merged into master.
What happens now is, say we need to modify the resource limits in deployment.yml and re-apply it: the deployment's image gets updated to ${some_image}:latest as well. We want to keep the image as it is in the cluster state, without maintaining the actual tag in deployment.yml. We know that the replicas can be omitted in the file, in which case it takes the value from the cluster state by default.
Questions:
On 1.7, the spec.template.spec.containers[0].image is required.
Is it possible to apply deployment.yml without updating the image to ${some_image}:latest as well (an argument like --ignore-image-change, or a specific field in deployment.yml)? If so, how?
Also, I see the image is optional in the 1.10 documentation.
Is that true? If so, since which version?
--- Updates ---
CI builds and deploys a new image on every merge into master. At deploy time, CI runs the command kubectl set image deployment/app container=${some_image}:${build_num}, where ${build_num} is the build number of the pipeline.
To apply deployment.yml, we run kubectl apply -f deployment.yml
However, in the deployment.yml file we specify the latest tag of the image, because it is impossible to keep this field up to date.
Using the “:latest” tag is against best practices in Kubernetes deployments for a number of reasons - rollback and versioning being some of them. To properly resolve this you should maybe rethink your CI/CD pipeline approach. We use the CI pipeline or CI job version to tag images, for example.
Is it possible to update the deployment without updating the image to the one specified in the file? If so, how?
To update a pod without changing the image you have some options, each with some constraints, and they all require some ops gymnastics and introduce additional points of failure, since this goes against the recommended approach.
k8s can pull the image from your remote registry (you must keep track of hashes, since your latest is out of your direct control - potential issues here). You can check the hash currently in use in the local Docker image cache of the node the pod is running on.
k8s can use the image already present on the node (you must ensure that “:latest” refers to the same image in the local cache of every node the pods could run on - potential issues here). Once there, you can play with the container's imagePullPolicy: when the CI tool deploys, it applies the yaml (in contrast to create) with the pull policy set to Always, immediately followed by an apply setting the pull policy to Never (also a potential issue here), restricting pulls to the image already present locally (as mentioned, potential issues here as well).
Here is an excerpt from documentation about this approach: By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
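For illustration, a container spec fragment relying on a pre-pulled image might look like this (the names are placeholders, not from the question):

spec:
  containers:
    - name: app
      image: some_image:latest        # resolved from the node's local image cache
      imagePullPolicy: Never          # never contacts the registry; IfNotPresent would pull only when the image is missing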
More about how k8s handles images and why the latest tag can bite back is given here: https://kubernetes.io/docs/concepts/containers/images/
In case you don't want to deal with complex syntax in deployment.yaml in CI, you have the option to use a template processor, for example mustache. It would change the CI process a little bit:
update the image version in the template config (env1.yaml)
generate deployment.yaml from the template deployment.mustache and env1.yaml:
$ mustache env1.yaml deployment.mustache > deployment.yaml
apply the configuration to the cluster:
$ kubectl apply -f deployment.yaml
The main benefits:
env1.yaml always contains the latest master build image, so you are creating the deployment object using the correct image.
env1.yaml is easy to update or generate at the CI step.
deployment.mustache stays immutable, and you are sure that all that could possibly change in the final deployment.yaml is an image version.
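A minimal sketch of what the two files might look like (the registry, names and values are assumptions, not taken from the question):

# env1.yaml - updated by CI on every master build
image: registry.example.com/app:1234

# deployment.mustache - stays immutable
apiVersion: apps/v1                   # adjust to whatever your cluster version supports
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: "{{image}}"          # the only templated field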
There are many other template rendering solutions in case mustache doesn't fit well in your CI.
Like Const above, I highly recommend against using :latest in any Docker image, and instead use CI/CD to solve the version problem.
We have the same issue on the Jenkins X project where we have many git repositories and as we change things like libraries or base docker images we need to change lots of versions in pom.xml, package.json, Dockerfiles, helm charts etc.
We use a simple CLI tool called UpdateBot which automates the generation of Pull Requests on all downstream repositories. We tend to think of this as Continuous Delivery for libraries and base images ;). E.g. here are the current Pull Requests that UpdateBot has generated on the Jenkins X organisation repositories.
Then here's how we update Dockerfiles / helm charts as we release, say, new base images:
https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29
Are you aware of the repo.example.com/some-image@sha256:... syntax for pulling images from a Docker registry? It is almost exactly designed to solve the problem you are describing.
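For example, pinning the container by digest in the Deployment spec might look like this (the repository and digest are placeholders):

spec:
  containers:
    - name: app
      # the digest identifies the exact image contents, independent of any tag
      image: repo.example.com/some-image@sha256:<digest-of-the-desired-build>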
updated from a comment:
You're solving the wrong problem; the file is only used to load content into the cluster -- from that moment forward, the authoritative copy of the metadata is in the cluster. The kubectl patch command can be a surgical way of changing some content without resorting to sed (or worse), but one should not try to maintain cluster state outside the cluster.