Unable to create POD in Kubernetes: getting error

I am learning Kubernetes and creating a Pod with a ConfigMap. I created a YAML file to create the Pod, but I am getting the error below:
error: error parsing yuvi.yaml: error converting YAML to JSON: YAML: line 13: mapping values are not allowed in this context
My configmap name: yuviconfigmap
YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: yuvipod1
spec:
  containers:
  - name: yuvicontain
    image: nginx
    volumeMounts:
    - name: yuvivolume
      mountPath: /etc/voulme
  volumes:
  - name: yuvivolume
      configMap:
        name: yuviconfigmap

The YAML is not structured the way Kubernetes expects. The version below should work.
apiVersion: v1
kind: Pod
metadata:
  name: yuvipod1
spec:
  containers:
  - name: yuvicontain
    image: nginx
    volumeMounts:
    - name: yuvivolume
      mountPath: /etc/voulme
  volumes:
  - name: yuvivolume
    configMap:
      name: yuviconfigmap
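The Pod above assumes a ConfigMap called yuviconfigmap already exists in the same namespace. A minimal sketch of one (the key and value here are placeholders, not taken from the question) could look like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: yuviconfigmap
data:
  example.key: "example value"

Create it first, for instance with kubectl apply -f configmap.yaml, then apply the Pod.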

Related

Kubernetes Deployment: Is there a way to copy a file from a mount directory to the root directory / after the deployment manifest is applied

The problem is that the mount path cannot be /, but I need to move the demo.txt file into / once the container is created.
I have this sample deployment.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo-configfile
data:
  myfile: |
    This my demo file's text info
    This is just dummy text
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      name: demo-configmaps-test
  template:
    metadata:
      labels:
        name: demo-configmaps-test
    spec:
      containers:
      - name: demo-container
        image: alpine
        imagePullPolicy: Always
        command: ['sh', '-c', 'sleep 36000']
        volumeMounts:
        - name: demo-files
          mountPath: /demo/files
      volumes:
      - name: demo-files
        configMap:
          name: demo-configfile
          items:
          - key: myfile
            path: demo.txt
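No answer is quoted here, but one possible sketch, assuming the container's command can be changed and that / is writable for the container's user, is to copy the mounted file into / when the container starts (only the cp step is new; everything else is from the manifest above):

      containers:
      - name: demo-container
        image: alpine
        imagePullPolicy: Always
        # copy the ConfigMap-backed file to / at startup, then keep the container running
        command: ['sh', '-c', 'cp /demo/files/demo.txt / && sleep 36000']
        volumeMounts:
        - name: demo-files
          mountPath: /demo/files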

Replacing properties file in container using configmaps in kubernetes

I am trying to replace a properties file in a container using a ConfigMap and volumeMount in the deployment.yaml file.
Below is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
      - name: agent-2
        image: agent:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
          name: "applictaion-conf"
          subPath: "application.properties"
      volumes:
      - name: applictaion-conf
        configMap:
          name: dddeagent-configproperties
          items:
          - key: "application.properties"
            path: "application.properties"
Below is a snippet from the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=10000
After deployment, the complete folder structure is mounted instead of the single file, so all the existing files in the folder are removed.
Version: 1.21.13
I checked this configuration and there are a few misspellings. You are referring to the ConfigMap "dddeagent-configproperties", but you have defined a ConfigMap object named "agent-configp".
configMap:
  name: dddeagent-configproperties
Should be:
configMap:
  name: agent-configp
Besides that, there are a few indentation errors, so I will paste the fixed files at the end of the answer.
To the point of your question: your approach is correct, and when I tested it in my setup everything worked properly without any issues. I created a sample Pod that mounts the ConfigMap the same way you are doing it (into a directory that already contains other files). The ConfigMap was mounted as a file, as it should be, and the other files were still available in the directory.
Mounts:
/app/upload/test-folder/file-1 from application-conf (rw,path="application.properties")
Your approach is the same as described here.
Please double check that, on a Pod without the mounted ConfigMap, the directory /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf really exists and the other files are there. As your image is not publicly available, I checked with the tomcat image, and the /usr/local/tomcat/webapps/ directory is empty. Note that even if this directory is empty, Kubernetes will create the agent/WEB-INF/classes/conf directories and the application.properties file there when you mount the file.
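As a quick check (command sketch; recent kubectl versions accept deploy/<name>, otherwise substitute a Pod name), you can list the directory and print the mounted file from inside a running container:

kubectl exec deploy/deployment-properties -- ls /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf
kubectl exec deploy/deployment-properties -- cat /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties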
Fixed Deployment and ConfigMap files with correct indentation and without misspellings:
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
      - name: agent-2
        image: agent:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
          name: "application-conf"
          subPath: "application.properties"
      volumes:
      - name: application-conf
        configMap:
          name: agent-configp
          items:
          - key: "application.properties"
            path: "application.properties"
Config file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=10000
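Once the names match, both fixed files can be applied and the rollout watched (sketch; the file names here are assumptions):

kubectl apply -f configmap.yaml -f deployment.yaml
kubectl rollout status deployment/deployment-properties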

Pod does not see secrets

A pod created in the same (default) namespace as its secret does not see the values from it.
The secret's file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: backend-secret
data:
  SECRET_KEY: <base64 of value>
  DEBUG: <base64 of value>
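For context, the <base64 of value> placeholders are normally produced by base64-encoding the plain value, for example (hypothetical value shown):

echo -n 'some-value' | base64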
After creating this secret via kubectl create -f backend-secret.yaml, I'm launching a pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: backend
    name: backend
    ports:
    - containerPort: 8000
  imagePullSecrets:
  - name: dockerhub-credentials
  volumes:
  - name: secret
    secret:
      secretName: backend-secret
But the pod crashes when trying to read this environment variable via Python's os.environ['DEBUG'] line.
How can I make it work?
If you mount a secret as a volume, it will be mounted in the defined directory, where each key name becomes a file name.
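For completeness, a minimal sketch of that volume approach (the /etc/secrets mount path is only an example, not from the question); each key in backend-secret would show up as a file, e.g. /etc/secrets/DEBUG:

spec:
  containers:
  - image: backend
    name: backend
    volumeMounts:
    - name: secret
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret
    secret:
      secretName: backend-secret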
If you want to access the secret through environment variables in your pod, then you need to reference the secret in an environment variable like the following.
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: backend
    name: backend
    ports:
    - containerPort: 8000
    env:
    - name: DEBUG
      valueFrom:
        secretKeyRef:
          name: backend-secret
          key: DEBUG
    - name: SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: backend-secret
          key: SECRET_KEY
  imagePullSecrets:
  - name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Finally, I've used these lines at Deployment.spec.template.spec.containers:
containers:
- name: backend
  image: zuber93/wts_backend
  imagePullPolicy: Always
  envFrom:
  - secretRef:
      name: backend-secret
  ports:
  - containerPort: 8000
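A quick way to confirm the variables are actually set (command sketch, assuming the Pod is named backend):

kubectl exec backend -- env | grep -E 'DEBUG|SECRET_KEY'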

PersistentVolumeClaim unknown in kubernetes

I'm trying to deploy a container, but unfortunately I get an error when I execute kubectl apply -f *.yaml.
The error is:
error validating data: ValidationError(Pod.spec.containers[1]):
unknown field "persistentVolumeClaim" in io.k8s.api.core.v1.Container;
I don't understand why I get the error, because I wrote claimName: under persistentVolumeClaim: in my Pod.yaml config :(
Pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
  - name: karaf
    image: xxx/karaf:ids-1.1.0
    volumeMounts:
    - name: karaf-conf-storage
      mountPath: /apps/karaf/etc
  - name: karaf-conf-storage
    persistentVolumeClaim:
      claimName: karaf-conf-claim
PersistentVolumeClaimKaraf.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: karaf
  namespace: poc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: karaf
    spec:
      containers:
      - name: karaf
        image: "xxx/karaf:ids-1.1.0"
        imagePullPolicy: Always
        ports:
        - containerPort: 6443
        - containerPort: 6100
        - containerPort: 6101
        resources:
        volumeMounts:
        - mountPath: /apps/karaf/etc
          name: karaf-conf
      volumes:
      - name: karaf-conf
        persistentVolumeClaim:
          claimName: karaf-conf
The reason you're seeing that error is that you're specifying a persistentVolumeClaim under your Pod spec's container specifications. As you can see from the auto-generated docs here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core
persistentVolumeClaims aren't supported at this level/API object, which is what's giving the error you're seeing.
You should modify the pod.yml to specify this as a volume instead.
e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
  - name: karaf
    image: xxx/karaf:ids-1.1.0
    volumeMounts:
    - name: karaf-conf-storage
      mountPath: /apps/karaf/etc
  volumes:
  - name: karaf-conf-storage
    persistentVolumeClaim:
      claimName: karaf-conf-claim
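If in doubt about where a field belongs, kubectl explain can print the schema for a given path (sketch; requires access to a cluster):

kubectl explain pod.spec.containers
kubectl explain pod.spec.volumes.persistentVolumeClaim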
According to the Kubernetes documentation, persistentVolumeClaim is part of the .spec.volumes level of a Pod object, not the .spec.containers level.
The correct pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  volumes:
  - name: efgkaraf-conf-storage
    persistentVolumeClaim:
      claimName: efgkaraf-conf-claim
  containers:
  - name: karaf
    image: docker-all.attanea.net/library/efgkaraf:ids-1.1.0
    volumeMounts:
    - name: efgkaraf-conf-storage
      mountPath: /apps/karaf/etc
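Independently of the structure fix, it is worth confirming that the claim exists and is bound before the Pod uses it (command sketch, using the names from the manifests above):

kubectl get pvc karaf-conf-claim
kubectl describe pod karafpod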

Kubernetes unknown field "volumes"

I am trying to deploy a simple nginx in Kubernetes using host volumes. I use the following YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
    volumes:
    - name: hostvol
      hostPath:
        path: /home/docker/vol
When I deploy it with kubectl create -f webserver.yaml, it throws the following error:
error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false
I believe you have the wrong indentation. The volumes key should be at the same level as containers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
Look at this wordpress example from the documentation to see how it's done.
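A way to catch this kind of validation error without creating anything is a client-side dry run (sketch; --dry-run=client assumes a reasonably recent kubectl, older versions use plain --dry-run):

kubectl apply --dry-run=client -f webserver.yaml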