How to change a pod name - kubernetes

I'm very new to k8s and the related stuff, so this may be a stupid question: How to change the pod name?
I am aware that the pod name seems to be set in the Helm chart; in my values.yaml I have this:
...
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application
        svcPort: 80
        path: /*
...
Since the application is running in both the prod and staging environments, and the pod name is just something like application-695496ec7d-94ct9, I can't tell which pod is for prod or staging, and I can't tell whether a request came from prod or not. So I changed it to:
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application-staging
        svcPort: 80
        path: /*
I deployed it to staging and the pod was updated/recreated automatically, but the pod name still remains the same. I'm confused about that and don't know what is missing. I'm not sure if it is related to fullnameOverride, but that is empty, so it should be fine.

...the pod name still remains the same
The code snippet in your question is most likely the Helm values for the Ingress. In that case it is not related to the Deployment or the Pod.
Look into the Helm template that defines the Deployment spec for the pod, search for the name field and see which Helm value is assigned to it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox  # <-- change this and the pod name changes along with it; the Helm syntax surrounding this field tells you how the name is constructed/assigned
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["ash", "-c", "sleep 3600"]
Save the spec and apply it, then check with kubectl get pods --selector app=busybox. You should see one pod whose name starts with busybox. Now open the file, change the name to custom, re-apply and list again, and you will see two pods with different name prefixes. Clean up with kubectl delete deployment busybox custom.
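For reference, the whole sequence looks roughly like this (a sketch; the file name busybox.yaml is assumed):
kubectl apply -f busybox.yaml
kubectl get pods --selector app=busybox    # one pod: busybox-<replicaset-hash>-<pod-hash>
# edit metadata.name from "busybox" to "custom", then:
kubectl apply -f busybox.yaml
kubectl get pods --selector app=busybox    # two pods: busybox-... and custom-...
kubectl delete deployment busybox custom   # removes both Deployments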
This example shows how the name of the Deployment is used for the pod(s) underneath it. You can paste the part of your Helm template surrounding the name field into your question for further examination if you like.
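For reference, charts scaffolded with helm create typically construct that name in templates/_helpers.tpl and reference it from templates/deployment.yaml, roughly like this (an illustrative sketch; the helper name application.fullname is assumed and your chart may differ):
{{/* templates/_helpers.tpl */}}
{{- define "application.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

# templates/deployment.yaml
metadata:
  name: {{ include "application.fullname" . }}
With fullnameOverride empty, the pod name is therefore usually derived from the release and chart name rather than from the Ingress values; setting fullnameOverride (or using a distinct release name per environment) is what changes it.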

Related

GKE automating deploy of multiple deployments/services with different images

I'm currently looking at GKE and some of the tutorials on google cloud. I was following this one here https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app (source code https://github.com/GoogleCloudPlatform/gke-photoalbum-example)
This example has 3 deployments and one service. The example tutorial has you deploy everything via the command line which is fine and all works. I then started to look into how you could automate deployments via cloud build and discovered this:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments
These docs say you can create a build configuration for a trigger (such as a push to a particular repo) and it will trigger the build. The sample YAML they show for this is as follows:
# deploy container image to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
  - run
  - --filename=kubernetes-resource-file
  - --image=gcr.io/project-id/image:tag
  - --location=${_CLOUDSDK_COMPUTE_ZONE}
  - --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}
I understand how the location and cluster parameters can be passed in and these docs also say the following about the resource file (filename parameter) and image parameter:
kubernetes-resource-file is the file path of your Kubernetes configuration file or the directory path containing your Kubernetes resource files.
image is the desired name of the container image, usually the application name.
Relating this back to the demo application repo where all the services are in one repo, I believe I could supply a folder path to the filename parameter such as the config folder from the repo https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config
But the trouble here is that those resource files themselves have an image property in them, so I don't know how that relates to the image property of the Cloud Build trigger YAML. I also don't know how you could have multiple image properties in the trigger YAML, where each deployment would have its own container image.
I'm new to GKE and Kubernetes in general, so I'm wondering if I'm misinterpreting what the kubernetes-resource-file should be in this instance.
But is it possible to automate deploying multiple deployments/services in this fashion when they're all bundled into one repo? Or has Google just oversimplified things for this tutorial, the reality being that most services would live in their own repo so they can be built/tested/deployed separately?
Either way, how would the image property relate to the fact that an image is already defined in the deployment YAML? e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: photoalbum-app
  name: photoalbum-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: photoalbum-app
  template:
    metadata:
      labels:
        name: photoalbum-app
    spec:
      containers:
      - name: photoalbum-app
        image: gcr.io/[PROJECT_ID]/photoalbum-app#[DIGEST]
        tty: true
        ports:
        - containerPort: 8080
        env:
        - name: PROJECT_ID
          value: "[PROJECT_ID]"
The command that you use is fine for testing the deployment of a single image. But when you work with Kubernetes (K8s), and its managed version on GCP (GKE), you usually never do this.
You use YAML files to describe your Deployments, Services, and all the other K8s objects that you want. When you deploy, you can do something like this:
kubectl apply -f <file.yaml>
If you have several files, you can use a wildcard if you want:
kubectl apply -f config/*.yaml
If you prefer to use only one file, you can separate the objects with ---:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec: ...
...
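Putting that together with Cloud Build, a build step that applies the whole config/ directory could look roughly like this (a sketch under assumptions: the Dockerfile path ./app is not from the tutorial, and the manifests are assumed to already reference the image you push; the docker and kubectl builders and the CLOUDSDK_* variables are standard):
steps:
# build and push one of the application images (path ./app is an assumption)
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/photoalbum-app:$SHORT_SHA", "./app"]
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/photoalbum-app:$SHORT_SHA"]
# apply every manifest in the config/ directory to the cluster
- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "config/"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}"
  - "CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}"
With this approach there is no single image parameter at all: each manifest keeps its own image field, and the build just applies them as-is.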

Google Stackdriver - how can I use my Kubernetes YAML labels for Stackdriver Log Query?

When using Google Stackdriver I can use the log query to find the exact log statements I am looking for.
This might look like this:
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm"
resource.labels.container_name="container"
However, as you can see in the query argument resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm", I am looking for a pod with the ID 7f6cf95b6c-nkkbm. Because of this I can't reuse this Stackdriver view with this exact query once I deploy a new revision of my-app: the pod gets a new ID and the one in the current query becomes invalid or unfindable.
Now I don't want to look up the new ID every time I want the current view of my my-app logs. So I tried to add a special label stackdriver: my-app to my Kubernetes YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      labels:
        stackdriver: my-app  # <<<
Looking at my newly deployed Pod, I can confirm that the label stackdriver: my-app does exist.
Now I want to add this new label to use as a query argument:
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm"
resource.labels.container_name="container"
resource.labels.stackdriver=my-app <<< the kubernetes label
As you can guess this did not work otherwise I'd have no reason to write this question ;)
Any idea how the thing I am about to do can be achieved?
Any idea how the thing I am about to do can be achieved?
Yes! In fact, I've prepared an example to show you the whole process :)
Let's assume:
You have a GKE cluster named: gke-label
You have a Cloud Operations for GKE enabled (logging)
You have a Deployment named nginx with the following label:
stackdriver: look_here_for_me
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
      stackdriver: look_here_for_me
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
        stackdriver: look_here_for_me
    spec:
      containers:
      - name: nginx
        image: nginx
You can apply this definition and send some traffic from another pod so that logs are generated. I did it with:
$ kubectl run -it --rm --image=ubuntu ubuntu -- /bin/bash
$ apt update && apt install -y curl
$ curl NGINX_POD_IP_ADDRESS/NONEXISTING # <-- this path is only for better visibility
After that you can go to:
GCP Cloud Console (Web UI) -> Logging (I used the new version)
With the following query:
resource.type="k8s_container"
resource.labels.cluster_name="gke-label"
-->labels."k8s-pod/stackdriver"="look_here_for_me"
You should be able to see the container logs as well as their labels.
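The key point is the labels."k8s-pod/<label-name>" form: Kubernetes pod labels are surfaced in Cloud Logging under labels, not under resource.labels. The same filter also works from the command line, for example (a sketch; the cluster and label names are the ones assumed above):
gcloud logging read '
  resource.type="k8s_container"
  resource.labels.cluster_name="gke-label"
  labels."k8s-pod/stackdriver"="look_here_for_me"' --limit=10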

kubernetes - exposing container info as environment variables

I'm trying to expose some of the container info as environment variables, reading the values from the pod's spec.template.spec.containers[0].name, but it does not seem to work. What would be the API spec for referencing the container fields inside the deployment template? The deployment template is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 8000
        resources: {}
        env:
        - name: MY_CONTAINER_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.template.spec.containers[0].name
The Downward API enables you to expose the pod’s own metadata to the processes
running inside that pod.
Currently, it allows you to pass the following information to your containers:
The pod’s name
The pod’s IP address
The namespace the pod belongs to
The name of the node the pod is running on
The name of the service account the pod is running under
The CPU and memory requests for each container
The CPU and memory limits for each container
The pod’s labels
The pod’s annotations
And that's it. As you can see, the container fields you are after (such as the container name or port) are not part of this list.
In general, the metadata available through the Downward API is fairly limited. If you need more, you’ll need to obtain it from the Kubernetes API server directly which you can do either by using client libraries or by using an ambassador container.
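For comparison, here is a minimal sketch of what the Downward API does support as environment variables (standard fieldPath values; the variable names are just examples):
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName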
Two things: first, the container name is fixed -- it's defined by the PodSpec template -- are you perhaps thinking of the docker container's name (which will be a long generated name composed of the namespace, container name, pod UID, and restart count)? Because the docker container's name will for sure not be present in .spec.containers[0].name
Second, while I agree with David that I doubt kubernetes will let you run arbitrary fieldPath: selectors, if you're open to being flexible with your command: you can actually use the Pod's own ServiceAccount to query the kubernetes API at launch time to retrieve all of the Pod's info, including its status: structure which likely has a ton of the information you're after.
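As a sketch of that second approach, the container can read its own Pod object from the API server at startup using the mounted ServiceAccount credentials (this assumes RBAC allows the ServiceAccount to get pods; $HOSTNAME defaults to the pod name):
# run inside the container, e.g. from an entrypoint script
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://kubernetes.default.svc/api/v1/namespaces/${NAMESPACE}/pods/${HOSTNAME}"
The response includes the pod's full spec and status, from which the container names and ports can be extracted.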

Invalid spec when I run pod.yaml

When I run my Pod I get: Pod "cas-de" is invalid: spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
However, I searched on the Kubernetes website and I didn't find anything wrong (I really don't understand where my mistake is).
Is it better to set volumeMounts in a Pod or in a Deployment?
apiVersion: v1
kind: Pod
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  containers:
  - name: ds-mg-cas
    image: "docker-all.xxx.net/library/ds-mg-cas:latest"
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
    - containerPort: 6402
    env:
    - name: JAVA_APP_CONFIGS
      value: "/apps/ds-cas/configs"
    - name: JAVA_EXTRA_PARAMS
      value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
    volumeMounts:
    - name: ds-cas-config
      mountPath: "/apps/ds-cas/context"
  volumes:
  - name: ds-cas-config
    hostPath:
      path: "/apps/ds-cas/context"
The YAML template itself is valid. What likely happened is that some of the forbidden fields were changed and then kubectl apply .... was executed against the already-running Pod.
This looks more like a development scenario. The solution is to delete the existing pod with the kubectl delete pod cas-de command and then run kubectl apply -f file.yaml or kubectl create -f file.yaml.
There are several fields on objects that you simply aren't allowed to change after the object has initially been created. As a specific example, the reference documentation for Containers notes that volumeMounts "cannot be updated". If you hit one of these cases, you need to delete and recreate the object (possibly creating the new one first with a different name).
Is it better to set volumeMounts in a Pod or in a Deployment?
Never use bare Pods; always prefer using one of the Controllers that manages Pods, most often a Deployment.
Changing to a Deployment will actually solve this problem because updating a Deployment's pod spec will go through the sequence of creating a new Pod, waiting for it to become available, and then deleting the old one for you. It never tries to update a Pod in place.
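As a sketch, wrapping the same pod spec in a Deployment looks roughly like this (the app: cas-de label is an assumption; keep the full container definition from the Pod above in the template):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cas-de
  template:
    metadata:
      labels:
        app: cas-de
    spec:
      containers:
      - name: ds-mg-cas
        image: "docker-all.xxx.net/library/ds-mg-cas:latest"
        # ...imagePullPolicy, ports, env and volumeMounts exactly as in the Pod above...
        volumeMounts:
        - name: ds-cas-config
          mountPath: "/apps/ds-cas/context"
      volumes:
      - name: ds-cas-config
        hostPath:
          path: "/apps/ds-cas/context"
With this in place, changing volumeMounts (or any other forbidden field) triggers a rolling replacement of the pod instead of a rejected in-place update.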

Adding pod nodeSelector after creation

Using OpenShift 3.1/K8 1.1 and given a pod that has already been created with/without a nodeSelector.
I.e.
apiVersion: v1
kind: Pod
metadata:
  generateName: blah-
  labels:
    name: blah
spec:
  containers:
  - image: some/image
    name: blah-image
    ports:
    - containerPort: 8080
  nodeSelector:  # can you add this after this pod has been created?
    region: infra
Is it possible to change or add a nodeSelector, similar to the way you add/modify labels?
You can change it in the associated ReplicationController (if any) but not in the definition of a running Pod. If you edit the RC as suggested the Pod itself must be recreated in order to start on the selected node(s).
In OpenShift if you are using a deployment config (the predecessor to Kube's Deployment object) you can edit your DC and add them. On the cli it's:
oc edit dc/NAME
That will trigger a rolling update that creates a new RC and scales down the old, unlabeled pods.
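The same edit can also be made non-interactively with a patch (a sketch; the DC name blah is assumed from the pod above):
oc patch dc/blah -p '{"spec":{"template":{"spec":{"nodeSelector":{"region":"infra"}}}}}'
With plain Kubernetes Deployments the equivalent is kubectl patch deployment/blah -p '...' with the same patch body.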