I would like to update the image version for my running Kubernetes pod.
My current config is:
spec:
  containers:
    - name: ADMIN_USER
      valueFrom:
        secretKeyRef:
          key: jenkins-admin-user
          name: jenkins
      image: jenkins/jenkins:latest
I would like to update it to
spec:
  containers:
    - name: ADMIN_USER
      valueFrom:
        secretKeyRef:
          key: jenkins-admin-user
          name: jenkins
      image: jenkins/jenkins:2.247
I have tried to run an apply, as I understood from reading the documentation (kubectl apply -f jenkins.yaml --namespace=infrastructure), but nothing changed and my pod was not restarted automatically.
Can someone advise how to do this?
You can use replace
kubectl replace -f jenkins.yaml --namespace=infrastructure
Most likely jenkins/jenkins:latest currently points to the same image as jenkins/jenkins:2.247, and because of that no update occurred.
Tip: try not to use the latest tag; always set a specific tag.
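For reference, and this assumes a setup not shown in the question: if the Pod is actually managed by a Deployment, a pinned tag can also be rolled out without editing the file. The deployment and container names below are placeholders to adjust to your manifest:
# roll the container to the pinned tag and watch the rollout
kubectl set image deployment/jenkins jenkins=jenkins/jenkins:2.247 --namespace=infrastructure
kubectl rollout status deployment/jenkins --namespace=infrastructure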
I'm quite new to Helm. Currently, I create an environment variable so that when I deploy my pod, I can see the pod name in the environment variables list. This can be done like so in the template file:
containers:
  - name: my_container
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
Is it possible to do something similar in the values.yaml file (maybe in an extraEnv field?) and then use this value in the .tpl? Other configurations, such as ConfigMap names, depend on it in order to be unique between pods, and I want to easily retrieve the value like so:
volumes:
  - name: my_vol
    configMap:
      name: {{ .Values.pathto.extraEnv.podname }}
Thanks in advance!
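For context, here is a minimal sketch of the mechanics involved (not from the original thread, and with a hypothetical values layout): values.yaml is rendered once, before any pod exists, so a key such as extraEnv.podname can only carry a static string that the template copies in; the runtime pod name itself is only reachable through the downward API as in the snippet above.
# values.yaml (hypothetical key, resolved at chart render time)
extraEnv:
  podname: my-shared-config
# templates/deployment.yaml (the template just copies the static value in)
volumes:
  - name: my_vol
    configMap:
      name: {{ .Values.extraEnv.podname }}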
I have created a Kubernetes cluster on DigitalOcean and deployed k6 as a Job on it.
apiVersion: batch/v1
kind: Job
metadata:
  name: benchmark
spec:
  template:
    spec:
      containers:
        - name: benchmark
          image: loadimpact/k6:0.29.0
          command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=./test.json", "/etc/k6-config/script.js"]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/k6-config
      restartPolicy: Never
      volumes:
        - name: config-volume
          configMap:
            name: k6-config
This is what my k6-job.yaml file looks like. After deploying it to the Kubernetes cluster, I checked the pod's logs and they show a permission denied error:
level=error msg="open ./test.json: permission denied"
How can I solve this issue?
The k6 Docker image runs as an unprivileged user, but unfortunately the default working directory is set to /, so it has no permission to write there.
To work around this, consider changing the JSON output path to /home/k6/test.json, i.e.:
command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=/home/k6/test.json", "/etc/k6-config/script.js"]
I'm one of the maintainers on the team, so I will propose a change to the Dockerfile to set the WORKDIR to /home/k6, which should make the default behavior a bit more intuitive.
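An alternative sketch, based on the same constraint described above (it assumes /home/k6 is writable for the k6 user, which the proposed WORKDIR change implies), is to keep the relative output path and move the container's working directory instead:
containers:
  - name: benchmark
    image: loadimpact/k6:0.29.0
    # run from the k6 user's home directory so ./test.json resolves to a writable path
    workingDir: /home/k6
    command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=./test.json", "/etc/k6-config/script.js"]
    volumeMounts:
      - name: config-volume
        mountPath: /etc/k6-config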
My problem is the following:
I need to execute the "envsubst" command from inside a pod; I'm using Kubernetes.
Currently I run the command manually by accessing the pod and executing it there, but I would like to do it automatically from my configuration file, which is a .yml file.
I've found some references on the web and tried some examples, but the result was always that the pod didn't start correctly, failing with a CrashLoopBackOff error.
I would like to execute the following command:
envsubst < /usr/share/nginx/html/env_token.js > /usr/share/nginx/html/env.js
Here is the content of my .yml file (not all of it, just the most relevant part):
spec:
  containers:
    - name: example 1
      image: imagename/docker_console:${deploy.version}
      env:
        - name: PIPPO_ID
          valueFrom:
            secretKeyRef:
              name: pippo-${deploy.env}-secret
              key: accessKey
        - name: PIPPO
          valueFrom:
            secretKeyRef:
              name: pippo-${deploy.env}-secret
              key: secretAccessKey
        - name: ENV
          value: ${deploy.env}
        - name: CREATION_TIMESTAMP
          value: ${deploy.creation_timestamp}
        - name: TEST
          value: ${consoleenv}
      command: ["/bin/sh"]
      args: ["envsubst", "/usr/share/nginx/html/assets/env_token.js /usr/share/nginx/html/assets/env.js"]
Should the final two lines, "command" and "args", be written this way? I've already tried putting "envsubst" in the command, but it didn't work. I've also tried using commas in the args line to separate each parameter, with the same error.
Do you have any suggestions that you know work for sure?
Thanks
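For what it's worth, a common pattern for this kind of startup substitution (a sketch that assumes the image normally starts nginx as its entrypoint) is to wrap both the envsubst call and the original server start in a single sh -c string, because command/args replace the image's entrypoint entirely:
      command: ["/bin/sh", "-c"]
      args:
        - "envsubst < /usr/share/nginx/html/assets/env_token.js > /usr/share/nginx/html/assets/env.js && exec nginx -g 'daemon off;'"
With this form, envsubst reads env_token.js, writes env.js, and only then hands control to the main process.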
I'm trying to set up GCR with Kubernetes, and I'm getting Error: ErrImagePull:
Failed to pull image "eu.gcr.io/xxx/nodejs": rpc error: code = Unknown desc = Error response from daemon: pull access denied for eu.gcr.io/xxx/nodejs, repository does not exist or may require 'docker login'
This is despite having set up the secret correctly for the service account and having added imagePullSecrets to the deployment spec.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: nodejs
  name: nodejs
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: nodejs
    spec:
      containers:
        - env:
            - name: MONGO_DB
              valueFrom:
                configMapKeyRef:
                  key: MONGO_DB
                  name: nodejs-env
            - name: MONGO_HOSTNAME
              value: db
            - name: MONGO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: MONGO_PASSWORD
            - name: MONGO_PORT
              valueFrom:
                configMapKeyRef:
                  key: MONGO_PORT
                  name: nodejs-env
            - name: MONGO_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: MONGO_USERNAME
          image: "eu.gcr.io/xxx/nodejs"
          name: nodejs
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources: {}
      imagePullSecrets:
        - name: gcr-json-key
      initContainers:
        - name: init-db
          image: busybox
          command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
      restartPolicy: Always
status: {}
I used this to add the secret, and it said it was created:
kubectl create secret docker-registry gcr-json-key --docker-server=eu.gcr.io --docker-username=_json_key --docker-password="$(cat mycreds.json)" --docker-email=mygcpemail#gmail.com
How can I debug this? Any ideas are welcome!
It looks like the issue is caused by a lack of permissions on the related service account,
XXXXXXXXXXX-compute#XXXXXX.gserviceaccount.com, which is missing the Editor role.
Also, since we want to restrict the scope and assign permissions only for pushing and pulling images from Google Kubernetes Engine, this account will need the appropriate storage viewer/admin permission, which can be assigned by following the instructions in this article [1].
Additionally, to set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option to specify the "storage-rw" scope [2].
[1] https://cloud.google.com/container-registry/docs/access-control
[2] https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine
If the VM instance for pushing or pulling images and the Container Registry storage bucket are in the same Google Cloud Platform project, the Compute Engine default service account is configured with appropriate permissions to push or pull images.
If the VM instance is in a different project or if the instance uses a different service account, you must configure access to the storage bucket used by the repository.
By default, a Compute Engine VM has the read-only access scope configured for storage buckets. To push private Docker images, your instance must have read-write storage access scope configured as described in Access scopes.
Please see [1] for further reference.
Please refer to the table below, from [2]:
Action             Permission                                   Role                         Role Title
Pull (Read Only)   storage.objects.get, storage.objects.list    roles/storage.objectViewer   Storage Object Viewer
Also, please share the exact error code if you run into trouble with any of these steps.
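As an illustration of the role and scope mentioned above (the project ID, service account address and cluster name are placeholders to replace with your own values):
# grant the pulling service account read access to the registry's storage objects
gcloud projects add-iam-policy-binding MY_PROJECT --member=serviceAccount:SERVICE_ACCOUNT_EMAIL --role=roles/storage.objectViewer
# create the GKE cluster with the read-write storage scope
gcloud container clusters create my-cluster --scopes=storage-rw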
I'm attempting to inject a ReplicationController's randomly generated pod ID extension (i.e. multiverse-{replicaID}) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer not to add that special case to the script running inside the container, for compatibility reasons.
If a pod is named multiverse-nffj1, INSTANCE_ID should equal nffj1. I've scoured the docs and found nothing.
apiVersion: v1
kind: ReplicationController
metadata:
  name: multiverse
spec:
  replicas: 3
  template:
    spec:
      containers:
        - env:
            - name: INSTANCE_ID
              value: $(replicaID)
I've tried adding a command into the controller's template configuration to create the environment variable from the hostname, but couldn't figure out how to make that environment variable available to the running script.
Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this work without editing the script inside the container?
There is an answer provided by Anton Kostenko about inserting DB credentials into container environment variables, and it can be applied to your case as well. It all comes down to the content of the initContainers spec.
You can use an init container to get the hash from the container's hostname and put it into a file on a shared volume that is mounted into the main container.
In this example the init container puts the Pod name into the INSTANCE_ID environment variable, but you can modify it according to your needs:
Create the init.yaml file with the content:
apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
    - name: init-test
      image: ubuntu
      args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
      volumeMounts:
        - name: config-data
          mountPath: /data
  initContainers:
    - name: init-init
      image: busybox
      command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
      volumeMounts:
        - name: config-data
          mountPath: /data
  volumes:
    - name: config-data
      emptyDir: {}
Create the pod using the following command:
kubectl create -f init.yaml
Check that Pod initialization is done and that the Pod is Running:
kubectl get pod init-test
Check the logs to see the results of this example configuration:
$ kubectl logs init-test
init-test
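As a side note (not part of the answer above), the full Pod name is also available directly through the downward API; extracting just the replica suffix would still need a small shell step, for example:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
Inside the container, INSTANCE_ID=${POD_NAME#multiverse-} would then strip the controller prefix and leave just the random suffix.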