I am currently learning Kubernetes, and I am hitting a bit of a wall.
I am trying to pass environment variables from my YAML file definition
to my container, but the variables do not seem to be present afterwards.
kubectl exec <pod name> -- printenv gives me the list of environment
variables, but the ones I defined in my YAML file are not present.
I defined the environment variables in my deployment as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-boot
  labels:
    app: hello-world-boot
spec:
  selector:
    matchLabels:
      app: hello-world-boot
  template:
    metadata:
      labels:
        app: hello-world-boot
    containers:
      - name: hello-world-boot
        image: lightmaze/hello-world-spring:latest
        env:
          - name: HELLO
            value: "Hello there"
          - name: WORLD
            value: "to the entire world"
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
          - containerPort: 8080
    selector:
      app: hello-world-boot
Hopefully someone can see where I failed in the YAML :)
If I correct the errors in your Deployment configuration so that it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-boot
  labels:
    app: hello-world-boot
spec:
  selector:
    matchLabels:
      app: hello-world-boot
  template:
    metadata:
      labels:
        app: hello-world-boot
    spec:
      containers:
        - name: hello-world-boot
          image: lightmaze/hello-world-spring:latest
          env:
            - name: HELLO
              value: "Hello there"
            - name: WORLD
              value: "to the entire world"
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
And deploy it into my local minikube instance:
$ kubectl apply -f pod.yml
Then it seems to work as you intended:
$ kubectl exec -it hello-world-boot-7568c4d7b5-ltbbr -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin
HOSTNAME=hello-world-boot-7568c4d7b5-ltbbr
TERM=xterm
HELLO=Hello there
WORLD=to the entire world
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
LANG=C.UTF-8
JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
JAVA_VERSION=8u212
JAVA_ALPINE_VERSION=8.212.04-r0
HOME=/home/spring
If you look at the above output, you can see both the HELLO and WORLD environment variables you defined in your Deployment.
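If you only want to check a single variable rather than the whole environment, printenv also accepts the variable name as an argument, for example:
$ kubectl exec hello-world-boot-7568c4d7b5-ltbbr -- printenv HELLO
Hello there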
Related
The pod is in Running state, but logging into the container and running capsh --print gives the error:
sh: capsh: not found
Running the same image with --cap-add SYS_ADMIN or --privileged as a Docker container gives the desired output.
What changes to the deployment, or what extra permissions, are needed for it to work inside a Kubernetes container?
Deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sample-deployment
  namespace: sample
  labels:
    app: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
        - name: sample
          image: alpine:3.17
          command:
            - sh
            - -c
            - while true; do echo Hello World; sleep 10; done
          env:
            - name: NFS_EXPORT_0
              value: /var/opt/backup
            - name: NFS_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - name: backup
              mountPath: /var/opt/backup
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: sample-pvc
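For comparison with the Docker flags mentioned above: this securityContext already adds SYS_ADMIN, and the closest Kubernetes counterpart of docker run --privileged is a privileged container security context, roughly as sketched below. Note that sh: capsh: not found means capsh itself is not installed in alpine:3.17, so it has to be added to the image regardless of permissions.
      containers:
        - name: sample
          image: alpine:3.17
          securityContext:
            privileged: true   # counterpart of `docker run --privileged`; prefer specific capabilities where possible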
I'm a beginner with Kubernetes and YAML.
I've been trying to deploy a ReplicaSet with YAML.
This is the file for the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  label:
    app: myapp
spec:
  selector:
    matchlabels:
      env: production
      name: nginx
    replicas: 3
    template:
      metadata:
        name: nginx
        labels:
          env: production
          name: nginx
      spec:
        containers:
          - name: nginx
            image: nginx
            resources:
              limits:
                memory: "128Mi"
                cpu: "500m"
            ports:
              - containerPort: 80
And this is the Pod file:
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
env: production
name: nginx
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
However, when I execute the kubectl create -f replicaset.yml command, I get the following error:
The ReplicaSet "myapp-replicaset" is invalid:
spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string(nil), MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: empty selector is invalid for deployment
spec.template.spec.containers: Required value
Your replicaset.yaml indentation seems to be wrong, and there are some typos.
replicas and template should be nested inside the spec level. Also, check the marked and corrected typos in labels and matchLabels.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels: # labels
    app: myapp
spec:
  selector:
    matchLabels: # matchLabels
      env: production
      name: nginx
  replicas: 3
  template:
    metadata:
      name: nginx
      labels:
        env: production
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
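Once the corrected manifest applies cleanly, you can check that the ReplicaSet and its three pods came up, for example:
$ kubectl get replicaset myapp-replicaset
$ kubectl get pods -l env=production,name=nginx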
I want to use the Prometheus image to deploy a container as part of the local deployment. Usually one has to run the container with a bind-mounted volume to get the configuration file (prometheus.yml) into the container:
docker run \
    -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus
How can I achieve this with Kubernetes when using Skaffold?
Your Kubernetes configuration will look something like this. You can specify the port number and the volume mounts; the important sections for mounting are volumeMounts and volumes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-prom
spec:
  selector:
    matchLabels:
      app: my-prom
  template:
    metadata:
      labels:
        app: my-prom
    spec:
      containers:
        - name: my-prom
          image: prom/prometheus:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prom-config
              mountPath: /etc/prometheus/prometheus.yml
      volumes:
        - name: prom-config
          hostPath:
            path: /path/to/prometheus.yml
---
apiVersion: v1
kind: Service
metadata:
  name: my-prom
spec:
  selector:
    app: my-prom
  ports:
    - port: 9090
      targetPort: 9090
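For a quick local test you can reach Prometheus through the Service with a port-forward and then open http://localhost:9090 in a browser, for example:
$ kubectl port-forward service/my-prom 9090:9090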
Save the Kubernetes config files in a folder (k8s/ in the example below) and add the following config to skaffold.yaml, with the path to the Kubernetes config files:
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
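For context, a minimal skaffold.yaml around that deploy section could look like the sketch below (the apiVersion line depends on your Skaffold release, so treat it as an assumption); skaffold dev then applies the manifests and redeploys them on change:
apiVersion: skaffold/v2beta29  # adjust to your Skaffold version
kind: Config
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
$ skaffold dev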
I have a problem with Kubernetes on my local machine. I want to create a pod with a database, so I prepared a deployment file with a service.
apiVersion: v1
kind: Service
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
  selector:
    app: bid-service-db
    tier: database
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  selector:
    matchLabels:
      app: bid-service-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bid-service-db
        tier: database
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: mydb
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: postgres
          image: postgres:13
          imagePullPolicy: Never
          name: bid-service-db
          ports:
            - containerPort: 5432
              name: bid-service-db
          resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: postgres-persistance-storage
          persistentVolumeClaim:
            claimName: bid-service-db-volume
status: {}
I am applying this file with k apply -f bid-db-deployment.yaml. k get all shows that only the service was created; the pod did not start. What can I do in this case? How can I troubleshoot it?
If you didn't get any errors on the apply, you can get the failure reason with:
kubectl describe deployment/DEPLOYMENT_NAME
Also, you can take only the Deployment part, put it in a separate YAML file, and see if you get errors.
Since the cluster worked for you after a restart, a good idea next time would be to check the logs of the kube-apiserver and kube-controller-manager pods with:
kubectl logs -n kube-system <kube-api/controller_pod_name>
To get the list of your deployments in all namespaces you can use:
kubectl get deployments -A
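Applied to the manifest above, that could look like the following (the resource names are taken from your YAML):
$ kubectl describe deployment bid-service-db
$ kubectl get replicaset -l app=bid-service-db
$ kubectl get events --sort-by=.lastTimestamp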
My Kubernetes StorageClass volume doesn't retain existing data when the pod is deleted and deployed back with my PostgreSQL database. When I delete the pod, the new pod is created but the database is empty.
I have followed variations of the different versions of the tutorials (https://kubernetes.io/docs/concepts/storage/persistent-volumes/) but nothing seems to work.
I am pasting all the YAML files because the problem might be in the combination.
storage-google.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spingular-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 7Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east4-a
jhipsterpress-postgresql.yml
apiVersion: v1
kind: Secret
metadata:
  name: jhipsterpress-postgresql
  namespace: default
  labels:
    app: jhipsterpress-postgresql
type: Opaque
data:
  postgres-password: NjY0NXJxd24=
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jhipsterpress-postgresql
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: spingular-pvc
      containers:
        - name: postgres
          image: postgres:10.4
          env:
            - name: POSTGRES_USER
              value: jhipsterpress
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jhipsterpress-postgresql
                  key: postgres-password
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/
---
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  selector:
    app: jhipsterpress-postgresql
  ports:
    - name: postgresqlport
      port: 5432
  type: LoadBalancer
jhipsterpress-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jhipsterpress
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jhipsterpress
      version: "v1"
  template:
    metadata:
      labels:
        app: jhipsterpress
        version: "v1"
    spec:
      initContainers:
        - name: init-ds
          image: busybox:latest
          command:
            - '/bin/sh'
            - '-c'
            - |
              while true
              do
                rt=$(nc -z -w 1 jhipsterpress-postgresql 5432)
                if [ $? -eq 0 ]; then
                  echo "DB is UP"
                  break
                fi
                echo "DB is not yet reachable;sleep for 10s before retry"
                sleep 10
              done
      containers:
        - name: jhipsterpress-app
          image: galore/jhipsterpress
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://jhipsterpress-postgresql.default.svc.cluster.local:5432/jhipsterpress
            - name: SPRING_DATASOURCE_USERNAME
              value: jhipsterpress
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jhipsterpress-postgresql
                  key: postgres-password
            - name: JAVA_OPTS
              value: " -Xmx256m -Xms256m"
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 120
jhipsterpress-service.yml
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress
  namespace: default
  labels:
    app: jhipsterpress
spec:
  selector:
    app: jhipsterpress
  type: LoadBalancer
  ports:
    - name: http
      port: 8080
When I included a Retain Policy I was getting this error:
#cloudshell:~ (academic-veld-230622)$ kubectl apply -f storage-google.yaml
error: error validating "storage-google.yaml": error validating data:
ValidationError(PersistentVolumeClaim.spec): unknown field "persistentVolumeReclaimPolicy" in io.k8s.api.core.v1.PersistentVolumeClaimSpec; if you choose to ignore these errors, turn validation off with --validate=false
Please, if you know of a complete example with a public image that works (with PostgreSQL; I can make it work with Mongo), I would really appreciate it.
Thanks to all.
Note that for this to work, your PVC needs to dynamically provision a PV that satisfies its requirements; then there will be a permanent binding between the PVC and the PV, and every time your workload uses the PVC it will use the same PV. This is specifically indicated by this excerpt:
If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC
If in your case the Google Persistent Disk is being provisioned by the PVC, and you can verify that on GCP it's the same PV used every time, then it's probably an issue with the pod startup process where it's removing all the data. (Is there any reason why you are using /var/lib/postgresql/ vs /var/lib/postgresql?)
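As a hedged illustration of what that can look like: the official postgres image keeps its data in /var/lib/postgresql/data by default, so a common pattern is to mount the PVC there and point PGDATA at a subdirectory of the mount (the paths below are only an example, not taken from your manifests):
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata  # subdirectory avoids clashes with lost+found on a freshly formatted disk
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data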
Also, persistentVolumeReclaimPolicy: Retain applies to a PV, not a PVC. For dynamically provisioned PVs the default value is Delete. In your case it wouldn't apply anyway, because your dynamically provisioned volume should stay bound to your PVC. In other words, you are not reclaiming the volume.
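If you do want dynamically provisioned PVs to be retained, the place to set that is the StorageClass; a sketch, assuming you keep the gce-pd provisioner (the class name here is hypothetical):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-retain   # hypothetical name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
reclaimPolicy: Retain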
Having said all that, the recommended way to deploy a database is a StatefulSet, similar to this mysql example, using a volumeClaimTemplate.
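A minimal sketch of that approach for PostgreSQL, reusing your existing Secret and StorageClass but with otherwise hypothetical names, might look like this; the volumeClaimTemplate gives each replica its own PVC that survives pod deletion:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres   # assumes a matching (headless) Service named postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              value: jhipsterpress
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jhipsterpress-postgresql
                  key: postgres-password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard
        resources:
          requests:
            storage: 7Gi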