Kubernetes container grabbing variables from another container

UPDATE: Apologies for perhaps causing controversy, but it turns out there was another cronjob running that was also calling a function that grabbed those API keys from the DB. I was not sure until I separated the part where it reads them from the environment variables ;_;.
So basically this whole post is wrong and one container was not grabbing env variables from another container. I am so ashamed I wanted to delete this question, but I'm not sure if that is a good idea or not?
A Kubernetes pod running two instances of basically the same NodeJS application seems to be taking environment variables from the other container. I logged the variable and it logged the correct one, but when the app makes a request it seems to show two different results.
These variables are taken from two different secrets.
I have checked inside each container that they do indeed have different env variables, but for some reason, when NodeJS makes these requests out to a third-party API, it grabs both of the variables.
Yes, they do have the same name.
In the image below you can see some logs. These entries show the Authorization header for an HTTP request, and this header is taken from an environment variable. Technically it should always stay the same, but for some reason it grabs the other one as well.
Here is the pod in YAML:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: <REDACTED>/32
    cni.projectcalico.org/podIPs: <REDACTED>/32
    kubectl.kubernetes.io/restartedAt: '2021-01-20T15:29:12Z'
  labels:
    app: mimercado-api
    pod-template-hash: 77fb65575
  name: mimercado-deployment-77fb65575-tpbsp
  namespace: default
spec:
  containers:
  - envFrom:
    - secretRef:
        name: secrets-mimercado-a
    image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
    imagePullPolicy: Always
    name: mimercado-a
    ports:
    - containerPort: 8080
    volumeMounts:
    - mountPath: /srv/mindi-mimercado/logfiles
      name: mindi-mimercado-a-logdir
  - envFrom:
    - secretRef:
        name: secrets-mimercado-b
    image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
    imagePullPolicy: Always
    name: mimercado-b
    ports:
    - containerPort: 8085
    volumeMounts:
    - mountPath: /srv/mindi-mimercado/logfiles
      name: mindi-mimercado-b-logdir
  imagePullSecrets:
  - name: regcred
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  serviceAccountName: default
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - hostPath:
      path: /microk8s-files/logs/mindi-mimercado/mindi-mimercado-a/82aae456ee6b637cfefe50c323c2c5b98d2c88f2
      type: DirectoryOrCreate
    name: mindi-mimercado-a-logdir
  - hostPath:
      path: /microk8s-files/logs/mindi-mimercado/mindi-mimercado-b/82aae456ee6b637cfefe50c323c2c5b98d2c88f2
      type: DirectoryOrCreate
    name: mindi-mimercado-b-logdir

There are still a lot of unknowns regarding your overall config, but if it can help, here are the potential issues that I see.
The fact that your requests return each secret in such a consistent manner leads me to believe that your pod configuration might be fine, but something else is routing your requests to both containers. This is easy to verify: display the logs of both containers simultaneously by running the following commands in two different terminals:
kubectl logs -f mimercado-deployment-77fb65575-tpbsp -c mimercado-a
kubectl logs -f mimercado-deployment-77fb65575-tpbsp -c mimercado-b
Send some requests like you did in your screenshot. If your requests appear to be distributed to both containers, it means that something is misconfigured in your service or ingress.
You might have old resources still around with slightly different configurations, or your service label selector might be matching more than just your pod. Check that only this pod, only one service, and only one ingress are present. Also check that you don't have other deployments/pods/services with labels that overlap with your pod's labels.
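To narrow down the routing side, it can also help to double-check that each Service selects exactly the labels you expect and targets a single container port. Here is a minimal sketch; the Service name and port numbers are assumptions, only the selector label is taken from your pod spec:
apiVersion: v1
kind: Service
metadata:
  name: mimercado-a-svc        # hypothetical name
spec:
  selector:
    app: mimercado-api         # matches the pod labels shown above
  ports:
  - port: 80
    targetPort: 8080           # route traffic only to the mimercado-a container port
A selector that also matches other pods, or an ingress pointing at the wrong Service, could produce exactly the kind of alternating responses you are seeing.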
You are using envFrom, which loads all the entries from your secret into your environment. Check that you don't have both entries in one of your secrets. You can also switch to the env form to be safe:
env:
- name: MY_SECRET
  valueFrom:
    secretKeyRef:
      name: secrets-mimercado-a
      key: my-secret-key
This is probably not even possible, but... I don't see any config that changes the port on which your app listens. containerPort only tells Kubernetes which port your container is using, not which port your Node app should bind to. It shouldn't be possible for both containers to bind to the same port of the pod, but if you are running a deployment rather than a single pod, some pods of your deployment might have different containers bound to a specific port.
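If you want to be explicit about which port each Node app binds to, one option is to pass the port in through an environment variable. This is only a sketch; it assumes your application reads a variable such as PORT (e.g. process.env.PORT) at startup, which is not shown in the question:
- name: mimercado-b
  image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
  env:
  - name: PORT        # assumed variable name; the app must actually read it
    value: "8085"
  ports:
  - containerPort: 8085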

Related

Copy file inside Kubernetes pod from another container

I need to copy a file inside my pod at creation time. I don't want to use ConfigMaps or Secrets. I am trying to create a volumeMount and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
  - name: init-myservice
    image: bitnami/kubectl
    command: ['kubectl','cp','./test.json','init-myservice:./data']
    volumeMounts:
    - name: my-storage
      mountPath: data
  - name: init-myservices
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: data
  volumes:
  - name: my-storage
    emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: you need to think of it as two different machines. Normally your local machine is the one where the file exists, and kubectl cp copies it to another machine, the pod. What you are trying to do here is run that copy from inside the pod itself, and that won't work.
What you can do instead is create your own Docker image for the init container and copy the file you want to store into it before building the image. At runtime you can then copy that file into the shared volume where you want it.
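If it helps, here is a minimal sketch of that idea. It assumes you build an image (called my-registry/file-holder here, which is hypothetical) that already contains test.json at /static/test.json; the init container copies the file into the shared volume, and the main container then sees it under /data:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  initContainers:
  - name: copy-file
    image: my-registry/file-holder          # hypothetical image that ships /static/test.json
    command: ['sh', '-c', 'cp /static/test.json /data/']
    volumeMounts:
    - name: my-storage
      mountPath: /data
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: /data
  volumes:
  - name: my-storage
    emptyDir: {}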
I do agree with the answer provided by H.R. Emon; it explains why you can't just run kubectl cp inside of the container. I do also think there are some resources that could be added to show you how you can tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://info.cern.ch
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a pod that has an initContainer
You can also modify above example to your specific needs.
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod you will need to have the required permissions to access the Kubernetes API. You can do that by using a serviceAccount with appropriate permissions (a minimal RBAC sketch follows the links below). More can be found at these links:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
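For illustration, a minimal RBAC sketch could look like the following. All names are hypothetical, and the rules only cover roughly what kubectl cp needs (it works through pod exec); adjust them to your setup:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: file-copier            # hypothetical name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec               # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]     # kubectl cp is implemented on top of pod exec
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: file-copier-pod-exec   # hypothetical name
  namespace: default
subjects:
- kind: ServiceAccount
  name: file-copier
  namespace: default
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io
The Pod would then reference the account with serviceAccountName: file-copier.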
Your bitnami/kubectl container runs into CrashLoopBackOff errors because you're passing it a single command that runs to completion. After that, the container reports status Completed and gets restarted, which results in the aforementioned CrashLoopBackOff. To avoid that you would need to use an initContainer.
You can read more about what is happening in your setup by following this answer (connected with the previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
I also consider it important to include the reason why Secrets and ConfigMaps cannot be used in this particular setup.

Using a variable within a path in Kubernetes

I have a simple StatefulSet with two containers. I just want to share a path between them via an emptyDir volume:
volumes:
- name: shared-folder
  emptyDir: {}
The first container is a busybox:
- image: busybox
  name: test
  command:
  - sleep
  - "3600"
  volumeMounts:
  - mountPath: /cache
    name: shared-folder
The second container creates a file on /cache/<POD_NAME>. I want to mount both paths within the emptyDir volume to be able to share files between containers.
volumeMounts:
- name: shared-folder
  mountPath: /cache/$(HOSTNAME)
Problem: the second container doesn't resolve /cache/$(HOSTNAME), so instead of mounting /cache/pod-0 it mounts /cache/$(HOSTNAME). I have also tried getting the POD_NAME and setting it as an env variable, but it doesn't resolve it either.
Does anybody know if it is possible to use a path like this (with env variables) in the mountPath attribute?
To use mountPath with an env variable you can use subPathExpr, which supports expanded environment variables (k8s v1.17+).
In your case it would look like the following:
containers:
- env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: /cache
    name: shared-folder
    subPathExpr: $(MY_POD_NAME)
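For completeness, here is a minimal self-contained Pod that shows the whole picture. The busybox image and the shared-folder volume come from the question; the pod name, the writer command and the second container are only illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                 # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    command: ['sh', '-c', 'touch /cache/data.txt && sleep 3600']   # assumed writer behaviour
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: shared-folder
      mountPath: /cache              # this container only sees its own subdirectory
      subPathExpr: $(MY_POD_NAME)
  - name: reader
    image: busybox
    command: ['sleep', '3600']
    volumeMounts:
    - name: shared-folder
      mountPath: /cache              # this container sees /cache/subpath-demo/data.txt
  volumes:
  - name: shared-folder
    emptyDir: {}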
I tested this, and using just Kubernetes (k8s < 1.16) with env variables it isn't possible to achieve what you want: basically, the variable only becomes available after the pod is deployed, and you're referencing it before that happens.
You can use Helm to define your mountPath and StatefulSet with the same value in the values.yaml file, then use that same value for both the mountPath field and the StatefulSet name. You can read more about this here.
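To make that a bit more concrete, here is a rough sketch of the Helm idea; the value name and chart layout are placeholders, and since values are rendered once at install time this gives you one fixed path rather than a true per-pod path:
# values.yaml (hypothetical)
statefulsetName: my-app

# templates/statefulset.yaml (excerpt, hypothetical)
metadata:
  name: {{ .Values.statefulsetName }}
spec:
  template:
    spec:
      containers:
      - name: test
        volumeMounts:
        - name: shared-folder
          mountPath: /cache/{{ .Values.statefulsetName }}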
Edit:
Follow Matt's answer if you are using k8s 1.17 or higher.
The problem is that YAML configuration files are POSTed to Kubernetes exactly as they are written. This means that you need to create a templated YAML file, in which you can replace the referenced environment variables with the values bound to them.
As this is a known "quirk" of Kubernetes, there already exist tools to circumvent this problem. Helm is one of those tools and is very pleasant to use.

Volume shared between two containers "is busy or locked"

I have a deployment that runs two containers. One of the containers attempts to build (during deployment) a javascript bundle that the other container, nginx, tries to serve.
I want to use a shared volume to place the javascript bundle after it's built.
So far, I have the following deployment file (with irrelevant pieces removed):
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
To the best of my ability, I have followed these guides:
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
One other thing to point out is that I'm trying to run this locally at the moment using minikube.
EDIT: The Dockerfile I used to build this image is:
FROM node:alpine
WORKDIR /var/app
COPY . .
RUN npm install
RUN npm install -g @vue/cli@latest
CMD ["npm", "run", "build"]
I realize that I do not need to build this when I actually run the image, but my next goal is to insert pod instance information as environment variables, and with JavaScript I unfortunately can only build once that information is available to me.
Problem
The logs from the personal-site container reveal:
- Building for production...
ERROR Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
I'm not sure why the build is trying to remove /dist, but I also have a feeling that this is irrelevant. I could be wrong?
I thought that maybe this could be related to the lifecycle of containers/volumes, but the docs suggest that "An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node".
Question
What are some reasons that a volume might not be available to me after the containers are already running? Given that you probably have much more experience than I do with Kubernetes, what would you look into next?
The best way is to customize your image's entrypoint as follows:
Once you finish building the /var/app/dist folder, copy (or move) it to another, empty path (e.g. /opt/dist):
cp -r /var/app/dist/* /opt/dist
PAY ATTENTION: this step must be done in the ENTRYPOINT script, not in a RUN layer.
Now use /opt/dist instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /opt/dist   # <--- make it consistent with the image's entrypoint
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
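If rebuilding the image is inconvenient, roughly the same ordering can also be sketched with a command override on the personal-site container. The image name and the npm script come from the question; the inline shell and the /opt/dist target are assumptions:
containers:
- name: personal-site
  image: wheresmycookie/personal-site:3.1
  # build into the image's own /var/app/dist, then copy the output into the
  # shared emptyDir mounted at /opt/dist so nginx can serve it
  command: ["sh", "-c", "npm run build && cp -r /var/app/dist/* /opt/dist/"]
  volumeMounts:
  - name: build-volume
    mountPath: /opt/dist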
Good luck!
If it's not clear how to customize the entrypoint, share your image's entrypoint with us and we will implement it.

How to run a binary using a Kubernetes ConfigMap

I have used ConfigMaps with files, but I am experimenting with portable services like supervisord and other internal tools.
We have a Go binary that can be run in any image. What I am trying to do is run this binary using a ConfigMap.
Example:
We have an internal tool written in Go (less than 7 MB in size) that can be stored in a ConfigMap, and we want to mount that ConfigMap inside a Kubernetes pod and run the binary inside the pod.
Question: does anyone use it? Is it a good approach? What is the best practice?
I don't believe you can put 7 MB of content in a ConfigMap; the data stored in a ConfigMap cannot exceed 1 MiB. See here for an example. What you're trying to do sounds like a very unusual practice. The standard practice for running binaries in Pods in Kubernetes is to build a container image that includes the binary and configure the image or the Pod to run that binary.
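For example, a Pod that runs a binary baked into its own image can be as simple as the following sketch (the image name and binary path are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: my-tool                                  # hypothetical name
spec:
  containers:
  - name: my-tool
    image: registry.example.com/my-tool:1.0.0    # image built with the Go binary inside
    command: ["/usr/local/bin/my-tool"]          # hypothetical path of the binary in the image
  restartPolicy: Never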
I too faced a similar issue while storing an elastic.jks keystore binary file in a k8s pod.
AFAIK there are two options:
1. Make use of a ConfigMap to store binary data (a sketch of this follows the YAML example below). Check this out.
2. Store your binary file remotely, for example in an S3 bucket, and pull it before the main container runs by using the initContainers concept:
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - name: myapp-container
    image: alpine:3.1
    command: ['sh', '-c', 'if [ -f /jks/elastic.jks ]; then sleep 99999; fi']
    volumeMounts:
    - name: jksdata
      mountPath: /jks
  initContainers:
  - name: init-container
    image: atlassian/pipelines-awscli
    command: ["/bin/sh","-c"]
    args: ['aws s3 sync s3://my-artifacts/$CLUSTER /jks/']
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: jksdata
      mountPath: /jks
    env:
    - name: CLUSTER
      value: dev-elastic
  volumes:
  - name: jksdata
    emptyDir: {}
  restartPolicy: Always
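And for the first option, a rough sketch of storing a small binary in a ConfigMap and mounting it as an executable file could look like this. The names are hypothetical, the base64 payload is elided, and the ConfigMap size limit of roughly 1 MiB still applies, so this will not work for a 7 MB binary:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-tool-bin              # hypothetical name
binaryData:
  my-tool: "<base64-encoded payload; total ConfigMap data must stay under ~1 MiB>"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-tool-runner           # hypothetical name
spec:
  containers:
  - name: runner
    image: alpine:3.1
    command: ['sh', '-c', '/opt/tool/my-tool']
    volumeMounts:
    - name: toolbin
      mountPath: /opt/tool
  volumes:
  - name: toolbin
    configMap:
      name: my-tool-bin
      defaultMode: 0755          # mount the file as executable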
As @amit-kumar-gupta mentioned, there is a ConfigMap size constraint.
I recommend the second way.
Hope this helps.

Do pods share the filesystem, similar to how they share the same network namespace?

I have created a pod with two containers. I know that different containers in a pod share the same network namespace (i.e., the same IP and port space) and can also share a storage volume between them, for example via configmaps. My question is: do the containers also share the same filesystem? For instance, in my case I have one container 'C1' that generates a dynamic file every 10 min at /var/targets.yml, and I want the other container 'C2' to read this file and perform its own independent action.
Is there a way to do this, maybe some workaround via configmaps? Or do I have to access these files via networking, since each container has its own IP (but this may not be a good idea when it comes to pod restarts)? Any suggestions or references, please?
You can use an emptyDir for this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: generating-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: gcr.io/google_containers/test-webserver
    name: consuming-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
But be aware that the data only lives as long as the Pod: it survives container restarts, but it is deleted permanently when the Pod is removed from its node.