Use other deployment IP in YAML deployment configuration - kubernetes

I'm doing a prototype where one service depends on the availability of another. Scenario:
Service A is assumed to be already available in a local network. It was either deployed by K8S or manually (or it could even be a managed service provided by AWS etc.).
Service B depends on environment variable SERVICE_A_IP and won't start without it. It's treated as a black box and can't be modified.
I want to pass Service A's IP to Service B through a K8S YAML configuration file. The perfect syntax for this occasion would be something like:
...
env:
- name: SERVICE_A_IP
  valueFrom:
    k8sDeployment:
      name: service_a
      key: deploymentIP
...
During the prototyping stage, Service A is another K8S deployment, but it might not be so in a production environment. Thus I need to decouple from the SERVICE_A_SERVICE_IP variable that will be available to Service B (given it's deployed after Service A). I don't want to use DNS discovery either, as it would require container modification, which is far from a perfect solution.
If I were to do it manually with kubectl (or with a shell script), it would be like the following:
$ kubectl run service_a --image=service_a:latest --port=8080
$ kubectl expose deployment service_a
$ SERVICE_A_IP="$(kubectl describe service service_a | \
    grep IP: | \
    cut -f2 -d ':' | \
    xargs)"
$ kubectl run service_b --image=service_b:latest --port=8080 \
    --env="SERVICE_A_IP=${SERVICE_A_IP}"
It works, though I want to do the same using a YAML configuration, without injecting SERVICE_A_IP into the configuration file with shell (basically modifying the file).
Is there any way to do so? Please take the above setting as set in stone.
UPDATE
Not the best way, but still:
$ kubectl create -f service_a.yml
deployment "service_a" created
service "service_a" created
$ SERVICE_A_IP="$(kubectl describe service service_a | \
    grep IP: | \
    cut -f2 -d ':' | \
    xargs)"
$ kubectl create configmap service_a_meta \
    --from-literal="SERVICE_A_IP=${SERVICE_A_IP}"
And then in service_b.yml:
...
env:
- name: SERVICE_A_IP
  valueFrom:
    configMapKeyRef:
      name: service_a_meta
      key: SERVICE_A_IP
...
That will work, but it still involves some shell and generally feels way too hacky.
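As an aside, the describe/grep/cut pipeline can be replaced with a single jsonpath query against the Service object, which is less brittle than scraping kubectl's human-readable output (this only cleans up the shell step; it is not the YAML-only solution asked for):
$ SERVICE_A_IP="$(kubectl get service service_a \
    -o jsonpath='{.spec.clusterIP}')"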

You can attach handlers to lifecycle events to update your environment variables on start.
Here is an example:
apiVersion: v1
kind: Pod
metadata:
  name: appB
spec:
  containers:
  - name: appB
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "export SERVICE_A_IP=$(host <SERVICE_A>.<SERVICE_A_NAMESPACE>.svc.cluster.local)"]
Kubernetes will run the postStart script each time a pod with your appB container starts, right in the appB container, before the main application is executed.
But, because of this caveat in the documentation:
PostStart
This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
You need to add some sleep for your main app before the real start, just to be sure that the hook finishes before the application starts.
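Note also that an export in a postStart hook runs in its own shell process, so it cannot actually change the environment of the already-started main process. A variant that avoids this pitfall, without rebuilding the image, is to override the pod's command so the IP is resolved first and the real entrypoint is then exec'd. A sketch, assuming Service A is reachable as a cluster Service named service-a and that the image's entrypoint is /app/service_b (both placeholders to adjust):
spec:
  containers:
  - name: appB
    image: service_b:latest
    # Resolve the IP at startup, then hand control to the real entrypoint
    command: ["/bin/sh", "-c"]
    args:
    - export SERVICE_A_IP=$(getent hosts service-a | awk '{print $1}');
      exec /app/service_b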

Related

Odd Kubernetes behaviour in AWS EKS cluster

In an EKS cluster (v1.22.10-eks-84b4fe6) that I manage I've spotted a behavior that I had never seen before (or that I missed completely...): in a namespace with an application created by a public helm chart running in it, if I create a separate, new, unrelated pod (a simple empty busybox with a sleep command in it), it'll automatically get some environment variables, always starting with the name of the namespace and referring to the available services related to the helm chart/deployment already in it. I'm not sure I understand this behavior; I've tested this in several other namespaces with helm charts deployed as well, and I get the same results (each time with different env vars, obviously).
An example in a namespace with this chart installed -> https://github.com/bitnami/charts/tree/master/bitnami/keycloak
testpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: keycloak-18
spec:
  containers:
  - image: busybox
    name: testpod
    command: ["/bin/sh", "-c"]
    args: ["sleep 3600"]
When in the pod:
/ # env
KEYCLOAK_18_METRICS_PORT_8080_TCP_PROTO=tcp
KUBERNETES_PORT=tcp://10.100.0.1:443
KUBERNETES_SERVICE_PORT=443
KEYCLOAK_18_METRICS_SERVICE_PORT=8080
KEYCLOAK_18_METRICS_PORT=tcp://10.100.104.11:8080
KEYCLOAK_18_PORT_80_TCP_ADDR=10.100.71.5
HOSTNAME=testpod
SHLVL=2
KEYCLOAK_18_PORT_80_TCP_PORT=80
HOME=/root
KEYCLOAK_18_PORT_80_TCP_PROTO=tcp
KEYCLOAK_18_METRICS_PORT_8080_TCP=tcp://10.100.104.11:8080
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_ADDR=10.100.155.185
KEYCLOAK_18_POSTGRESQL_SERVICE_HOST=10.100.155.185
KEYCLOAK_18_PORT_80_TCP=tcp://10.100.71.5:80
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_PORT=5432
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_PROTO=tcp
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KEYCLOAK_18_POSTGRESQL_PORT=tcp://10.100.155.185:5432
KEYCLOAK_18_POSTGRESQL_SERVICE_PORT=5432
KEYCLOAK_18_SERVICE_PORT_HTTP=80
KEYCLOAK_18_POSTGRESQL_SERVICE_PORT_TCP_POSTGRESQL=5432
KUBERNETES_PORT_443_TCP_PROTO=tcp
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP=tcp://10.100.155.185:5432
KEYCLOAK_18_METRICS_SERVICE_PORT_HTTP=8080
KEYCLOAK_18_SERVICE_HOST=10.100.71.5
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
KUBERNETES_SERVICE_HOST=10.100.0.1
PWD=/
KEYCLOAK_18_METRICS_PORT_8080_TCP_ADDR=10.100.104.11
KEYCLOAK_18_METRICS_SERVICE_HOST=10.100.104.11
KEYCLOAK_18_SERVICE_PORT=80
KEYCLOAK_18_PORT=tcp://10.100.71.5:80
KEYCLOAK_18_METRICS_PORT_8080_TCP_PORT=8080
I've looked a bit into this and I've seen this doc https://kubernetes.io/docs/concepts/containers/container-environment/, but it lists fewer variables than I can see myself.
I may be behind on some Kubernetes features; does anyone have a clue?
Thanks!
What you are seeing is expected. As stated in the official documentation:
When a Pod is run on a Node, the kubelet adds a set of environment
variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and
{SVCNAME}_SERVICE_PORT variables, where the Service name is
upper-cased and dashes are converted to underscores.
This behavior is not EKS-specific.
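If you don't want these variables injected into a pod, you can opt out per pod with the enableServiceLinks field of the pod spec (the KUBERNETES_* variables for the API server are still set); applied to the testpod above, for instance:
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: keycloak-18
spec:
  # Suppress the {SVCNAME}_SERVICE_HOST / {SVCNAME}_PORT_... variables
  enableServiceLinks: false
  containers:
  - image: busybox
    name: testpod
    command: ["/bin/sh", "-c"]
    args: ["sleep 3600"]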

Making use of ansible's dynamic kubernetes inventory in a playbook?

I'm trying to execute a few simple commands on a kubernetes pod in Azure. I've successfully done so with the localhost + pod-as-module-parameter syntax:
---
- hosts: localhost
  connection: kubectl
  collections:
    - kubernetes.core
  gather_facts: false
  tasks:
    - name: Get pod
      k8s_info:
        kind: Pod
        namespace: my-namespace
      register: pod_list
    - name: Run command
      k8s_exec:
        pod: "{{ pod_list.resources[0].metadata.name }}"
        namespace: my-namespace
        command: "/bin/bash -c 'echo Hello world'"
However, I want to avoid the repetition of specifying pod and namespace for every kubernetes.core module call, as well as parsing the namespace explicitly in every playbook.
So I got the kubernetes dynamic inventory plugin to work, and can see the desired pod in a group label_app_some-predictable-name, as confirmed by the output of ansible-inventory.
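For reference, the inventory file for that plugin is a small YAML document; a minimal sketch (the namespace filter is an assumption for this setup, and the file name must end in .k8s.yml for the plugin to be auto-detected):
# inventory.k8s.yml
plugin: kubernetes.core.k8s
connections:
  - namespaces:
      - my-namespace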
What I don't get is whether at this point I should be able to run the regular command module (I couldn't get that to work at all), or whether I need to keep using k8s_exec, which still requires pod and namespace to be specified explicitly (albeit now I can refer to the guest facts populated by the inventory plugin), on top of now requiring delegate_to: localhost:
---
- name: Execute command
  hosts: label_app_some-predictable-name
  connection: kubectl
  gather_facts: false
  collections:
    - kubernetes.core
  tasks:
    - name: Execute command via kubectl
      delegate_to: localhost
      k8s_exec:
        command: "/bin/sh -c 'echo Hello world'"
        pod: "{{ ansible_kubectl_pod }}"
        namespace: "{{ ansible_kubectl_namespace }}"
What am I missing? Is there a playbook example that makes use of the kubernetes dynamic inventory?

Kubernetes: How to update a live busybox container's 'command'

I have the following manifest that created the running pod named 'test'
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: blue
spec:
  containers:
  - name: funskies
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello World'"]
I want to update the pod to use the modified command below:
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: blue
spec:
  restartPolicy: Never
  containers:
  - name: funskies
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello World' > /home/my_user/logging.txt"]
What I tried
kubectl edit pod test
What resulted
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# pods "test" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`...
Other things I tried:
Updated the manifest and then ran apply - same issue
kubectl apply -f test.yaml
Question: What is the proper way to update a running pod?
You can't modify most properties of a Pod. Typically you don't want to directly create Pods; use a higher-level controller like a Deployment.
The Kubernetes documentation for a PodSpec notes (emphasis mine):
containers: List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.
In all cases, no matter what, a container runs a single command, and if you want to change what that command is, you need to delete and recreate the container. In Kubernetes this always means deleting and recreating the containing Pod. Usually you shouldn't use bare Pods, but if you do, you can create a new Pod with the new command and delete the old one. Deleting Pods is extremely routine and all kinds of ordinary things cause it to happen (updating Deployments, a HorizontalPodAutoscaler scaling down, ...).
If you have a Deployment instead of a bare Pod, you can freely change the template: for the Pods it creates. This includes changing their command:. This will result in the Deployment creating a new Pod with the new command, and once it's running, deleting the old Pod.
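To illustrate, a minimal sketch of such a Deployment (names carried over from the question; the Deployment boilerplate and the trailing sleep, which keeps the short-lived command from crash-looping, are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: funskies
        image: busybox
        # Change this command and re-run `kubectl apply -f`: the Deployment
        # rolls out a new Pod instead of rejecting an in-place edit
        command: ["/bin/sh", "-c", "echo 'Hello World' > /tmp/logging.txt; sleep 3600"]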
The sorts of very-short-lived single-command containers you show in the question aren't necessarily well-suited to running in Kubernetes. If the Pod isn't going to stay running and serve requests, a Job could be a better match; but a Job believes it will only be run once, and if you change the pod spec for a completed Job I don't think it will launch a new Pod. You'd need to create a new Job for this case.
I am not sure what the whole requirement is, but you can exec into the pod and update the details:
$ kubectl exec <pod-name> -it -n <namespace> -- <command to execute>
like,
$ kubectl exec pod/hello-world-xxxx-xx -it -- /bin/bash
If the tty supports a shell, use "/bin/sh" to update the content or command.
Editing the running pod will not retain the changes in the manifest file, so in that case you have to run a new pod with the changes.

How to create an environment variable in kubernetes container

I am trying to pass an environment variable to a kubernetes container.
What have I done so far?
Create a deployment
kubectl create deployment foo --image=foo:v1
Create a NODEPORT service and expose the port
kubectl expose deployment/foo --type=NodePort --port=9000
See the pods
kubectl get pods
dump the configurations (so as to add the environment variable)
kubectl get deployments -o yaml > dev/deployment.yaml
kubectl get svc -o yaml > dev/services.yaml
kubectl get pods -o yaml > dev/pods.yaml
Add env variable to the pods
env:
- name: FOO_KEY
  value: "Hellooooo"
Delete the svc,pods,deployments
kubectl delete -f dev/ --recursive
Apply the configuration
kubectl apply -f dev/ --recursive
Verify env parameters
kubectl describe pods
Something weird
If I manually changed the meta information of the pod YAML and hard-coded the name of the pod, it gets the env variable. However, this time two pods come up: one with the hard-coded name and the other with a hash appended to it. For example, if the name I hardcoded was "foo", two pods, namely foo and foo-12314faf (example), would appear in "kubectl get pods". Can you explain why?
Question
Why does the verification step not show the environment variable?
As the issue was resolved in the comment section:
If you want to set env on pods, I would suggest using the set subcommand.
kubectl set env --help will provide more detail, such as how to list env vars and create new ones.
Examples:
# Update deployment 'registry' with a new environment variable
kubectl set env deployment/registry STORAGE_DIR=/local
# List the environment variables defined on a deployments 'sample-build'
kubectl set env deployment/sample-build --list
A Deployment enables declarative updates for Pods and ReplicaSets. Pods are not typically launched directly on a cluster. Instead, pods are usually managed by a ReplicaSet, which in turn is managed by a Deployment.
The following thread discusses what-is-the-difference-between-a-pod-and-a-deployment
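A quick way to see that ownership chain (and why the pod names carry a hash suffix, as noticed in the question) is to list all three resource kinds together; the app=foo label is assumed here, since kubectl create deployment labels its pods with the deployment name:
$ kubectl get deployment,replicaset,pod -l app=foo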
You can add any number of env vars to your deployment file:
spec:
  containers:
  - name: auth
    image: lord/auth
    env:
    - name: MONGO_URI
      value: "mongodb://auth-mongo-srv:27017/auth"
process.env.MONGO_URI
Or you can create a secret first, then use the newly created secret in any number of deployment files to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
spec:
  containers:
  - name: auth
    image: lord/auth
    env:
    - name: MONGO_URI
      value: "mongodb://auth-mongo-srv:27017/auth"
    - name: JWT_KEY
      valueFrom:
        secretKeyRef:
          name: jwt-secret
          key: JWT_KEY
process.env.MONGO_URI
process.env.JWT_KEY

Recommended way to persistently change kube-env variables

We are using elasticsearch/kibana instead of gcp for logging (based on what is described here).
To have fluentd-elasticsearch pods launched, we've set LOGGING_DESTINATION=elasticsearch and ENABLE_NODE_LOGGING="true" in the "Compute Instance Template" -> "Custom metadata" -> "kube-env".
While this works fine when done manually, it gets overwritten with every gcloud container clusters upgrade, as a new Instance Template with defaults (LOGGING_DESTINATION=gcp ...) is created.
My question is: How do I persist this kind of configuration for GKE/GCE?
I thought about adding a k8s-user-startup-script but that's also defined in the Instance Template and therefore is overwritten by gcloud container clusters upgrade.
I've also tried to add a k8s-user-startup-script to the project metadata but that is not taken into account.
//EDIT
Current workaround (without recreating Instance Template and Instances) for manually switching back to elasticsearch is:
for node in $(kubectl get nodes -o name | cut -f2 -d/); do
  gcloud compute ssh $node \
    --command="sudo cp -a /srv/salt/fluentd-es/fluentd-es.yaml /etc/kubernetes/manifests/; sudo rm /etc/kubernetes/manifests/fluentd-gcp.yaml";
done
kubelet will pick that up, kill fluentd-gcp and start fluentd-es.
//EDIT #2
Now running a "startup-script" DaemonSet for this:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: startup-script
  namespace: kube-system
  labels:
    app: startup-script
spec:
  template:
    metadata:
      labels:
        app: startup-script
    spec:
      hostPID: true
      containers:
      - name: startup-script
        image: gcr.io/google-containers/startup-script:v1
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #! /bin/bash
            set -o errexit
            set -o pipefail
            set -o nounset
            # Replace Google-Cloud-Logging with EFK
            if [[ ! -f /etc/kubernetes/manifests/fluentd-es.yaml ]]; then
              if [[ -f /home/kubernetes/kube-manifests/kubernetes/fluentd-es.yaml ]]; then
                # GCI images
                cp -a /home/kubernetes/kube-manifests/kubernetes/fluentd-es.yaml /etc/kubernetes/manifests/
              elif [[ -f /srv/salt/fluentd-es/fluentd-es.yaml ]]; then
                # Debian based GKE images
                cp -a /srv/salt/fluentd-es/fluentd-es.yaml /etc/kubernetes/manifests/
              fi
              test -f /etc/kubernetes/manifests/fluentd-es.yaml && rm /etc/kubernetes/manifests/fluentd-gcp.yaml
            fi
There isn't a fully supported way to reconfigure the kube-env in GKE. As you've found, you can hack the instance template, but this isn't guaranteed to work across upgrades.
An alternative is to create your cluster without gcp logging enabled and then create a DaemonSet that places a fluentd-elasticsearch pod on each of your nodes. Using this technique you don't need to write a (brittle) startup script or rely on the fact that the built-in startup script happens to work when setting LOGGING_DESTINATION=elasticsearch (which may break across upgrades even if it wasn't getting overwritten).
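A minimal sketch of such a DaemonSet, in the same API version as the startup-script one above (the image tag is an assumption; match it to the fluentd-elasticsearch addon shipped with your cluster version):
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        # Assumed image/tag -- pick the one matching your cluster
        image: gcr.io/google_containers/fluentd-elasticsearch:1.20
        volumeMounts:
        # fluentd needs read access to the node's container logs
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers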