Kubernetes unknown field "volumes"

I am trying to deploy a simple nginx server in Kubernetes using a hostPath volume. I use the following YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
    volumes:
    - name: hostvol
      hostPath:
        path: /home/docker/vol
When I deploy it with kubectl create -f webserver.yaml, it throws the following error:
error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false

I believe you have the wrong indentation: your volumes key sits directly under template, where the validator only expects metadata and spec (hence the "unknown field" in PodTemplateSpec). The volumes key should be inside the pod spec, at the same level as containers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
Look at the WordPress example from the documentation to see how it's done.
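A quick way to catch this class of error before creating anything is a client-side dry run, which performs the same schema validation (a sketch; the --dry-run=client spelling needs a reasonably recent kubectl, older clients take a bare --dry-run):

kubectl create -f webserver.yaml --dry-run=client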

Related

How to use git-sync image as a sidecar in kubernetes that git pulls periodically

I am trying to use the git-sync image as a sidecar in Kubernetes that runs git pull periodically and mounts the cloned data to a shared volume.
Everything works fine when I configure it to sync one time, but I want it to run periodically, e.g. every 10 minutes. When I configure it to run periodically, pod initialization fails.
I read the documentation but couldn't find a proper answer. It would be nice if you could help me figure out what I am missing in my configuration.
Here is my configuration that is failing.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      initContainers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ## repo you want to clone
        - name: GIT_SYNC_BRANCH
          value: "master" ## repo branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ## path where you want to clone
        - name: GIT_SYNC_PERIOD
          value: "10"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      volumes:
      - name: www-data
        emptyDir: {}
Pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-helloworld
  name: nginx-helloworld
spec:
  containers:
  - image: nginx
    name: nginx-helloworld
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
You are using git-sync as an init container, which runs only during pod initialization (once in the pod's lifecycle).
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
So run git-sync as a regular (sidecar) container instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ## repo you want to clone
        - name: GIT_SYNC_BRANCH
          value: "master" ## repo branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ## path where you want to clone
        - name: GIT_SYNC_PERIOD
          value: "20"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      volumes:
      - name: www-data
        emptyDir: {}
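If you also need the first clone to finish before nginx starts serving, one option (a sketch, not part of the original answer) is to add back an init container that does a one-time sync with GIT_SYNC_ONE_TIME=true, while keeping the periodic git-sync sidecar above; both mount the same www-data volume:

      initContainers:
      - name: git-sync-init
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git"
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello"
        - name: GIT_SYNC_ONE_TIME
          value: "true" # exit after the first clone so pod startup can proceed
        securityContext:
          runAsUser: 0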

Kubernetes Deployment pod name setting

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
      - name: server
        image: test_ml_server:2.3
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: hostpath-vol-testserver
          mountPath: /app/test/api
        # env:
        # - name: POD_NAME
        #   valueFrom:
        #     fieldRef:
        #       fieldPath: template.metadata.name
      - name: testdb
        image: test_db:1.4
        ports:
        - name: testdb
          containerPort: 1433
        volumeMounts:
        - name: hostpath-vol-testdb
          mountPath: /var/opt/mssql/data
        # env:
        # - name: POD_NAME
        #   valueFrom:
        #     fieldRef:
        #       fieldPath: template.metadata.name
      volumes:
      - name: hostpath-vol-testserver
        hostPath:
          path: /usr/testhostpath/testserver
      - name: hostpath-vol-testdb
        hostPath:
          path: /usr/testhostpath/testdb
I want to set the name of the pod, because my services communicate internally based on the pod name; but when a pod is created, the name cannot be used as-is because a generated suffix is appended to it.
How can I set the pod name?
It's better if you use a StatefulSet instead of a Deployment. A StatefulSet's pods are named <statefulsetName>-0, <statefulsetName>-1, and so on, and you will need a headless ClusterIP service (clusterIP: None) to which the pods are bound. See the docs for more details.
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  labels:
    app: test
spec:
  ports:
  - port: 8080
    name: web
  clusterIP: None
  selector:
    app: test
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-statefulset
  labels:
    app: test
spec:
  replicas: 1
  serviceName: test-svc
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
      - name: server
        image: test_ml_server:2.3
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: hostpath-vol-testserver
          mountPath: /app/test/api
      - name: testdb
        image: test_db:1.4
        ports:
        - name: testdb
          containerPort: 1433
        volumeMounts:
        - name: hostpath-vol-testdb
          mountPath: /var/opt/mssql/data
      volumes:
      - name: hostpath-vol-testserver
        hostPath:
          path: /usr/testhostpath/testserver
      - name: hostpath-vol-testdb
        hostPath:
          path: /usr/testhostpath/testdb
Here, the pod name will be test-statefulset-0. (Note that resource names must be lowercase, so test-StatefulSet would be rejected.)
If you are using kind: Deployment, this won't be possible; ideally, in this scenario, you can use kind: StatefulSet. Instead of pod-to-pod communication by IP, you can use a Kubernetes service for communication. A StatefulSet still manages its pod names in sequence:
statefulsetname-0
statefulsetname-1
statefulsetname-2
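Because test-svc is headless (clusterIP: None), each StatefulSet pod also gets a stable DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, so your containers can target a specific replica by name instead of by IP. For example (assuming the default namespace, which is not stated in the question):

test-statefulset-0.test-svc.default.svc.cluster.local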
You can't.
It is a property of the pods of a Deployment that they do not have a stable identity associated with them.
You could have a look at a StatefulSet instead of a Deployment if you want the pods to have an identity.
From the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
So, if you have a StatefulSet object named myapp with two replicas, the pods will be named myapp-0 and myapp-1.

Kubernetes - Install APOC Library to Neo4j

I am attempting to install the APOC library on a Neo4j instance running within a Kubernetes cluster. The APOC plugin is installed, and I can see "apoc" entries available after running CALL dbms.procedures(), but all are labeled worksOnSystem / false. How can I enable them?
Current YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neo4j-db
  namespace: prod
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: neo4j-db
  template:
    metadata:
      labels:
        app: neo4j-db
    spec:
      containers:
      - image: neo4j
        name: neo4j
        env:
        - name: NEO4J_dbms_security_procedures_unrestricted
          value: "apoc.*"
        - name: NEO4JLABS_PLUGINS
          value: '["apoc"]'
        ports:
        - containerPort: 7474
          name: http
        - containerPort: 7687
          name: bolt
        - containerPort: 7473
          name: https
        volumeMounts:
        - name: neo4j-data
          mountPath: /data
        - name: neo4j-plugins
          mountPath: /plugins
      volumes:
      - name: neo4j-data
        persistentVolumeClaim:
          claimName: neo4j-prod-gid-pvc
      - name: neo4j-plugins
        persistentVolumeClaim:
          claimName: neo4j-prod-plugin-gid-pvc

Kubernetes volumeMount folder and file permissions?

I am trying to mount config files from a hostPath into a Kubernetes container. This works using minikube and a VirtualBox shared folder, but I am unable to make it work on Linux.
I am making use of AWS EKS and the following architecture: https://aws.amazon.com/quickstart/architecture/amazon-eks/. I think my problem is that the files need to live on each of the EKS node instances.
Below is the Deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      volumes:
      - name: core-config
        hostPath:
          path: /var/config/openhim-core
      containers:
      - name: openhim-core
        image: jembi/openhim-core:5.rc
        ports:
        - containerPort: 8080
        - containerPort: 5000
        - containerPort: 5001
        volumeMounts:
        - name: core-config
          mountPath: /usr/src/app/config
        env:
        - name: NODE_ENV
          value: development
After much pain I found that I was trying to place the configuration on the Linux bastion host where I have access to kubectl, but this configuration would in fact have to exist on each of the EC2 instances in every availability zone.
The solution for me was to use an initContainer that downloads the config into an emptyDir volume instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      containers:
      - name: openhim-core
        image: jembi/openhim-core:5
        ports:
        - containerPort: 8080
        - containerPort: 5000
        - containerPort: 5001
        volumeMounts:
        - name: core-config
          mountPath: /usr/src/app/config
        env:
        - name: NODE_ENV
          value: development
      initContainers:
      - name: install
        image: busybox
        command:
        - wget
        - "-O"
        - "/usr/src/app/config/development.json"
        - https://s3.eu-central-1.amazonaws.com/../development.json
        volumeMounts:
        - name: core-config
          mountPath: "/usr/src/app/config"
      volumes:
      - name: core-config
        emptyDir: {}

Pod IP to container environment variable

I have an Angular app and some Node containers for the backend. In my deployment file, how can I get the backend container's address so my front end can connect to it?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: container_imaer_backend
        env:
        - name: IP_BACKEND
          value: here_i_need_my_container_ip_pod
        ports:
        - containerPort: 80
          protocol: TCP
I would recommend using the DNS name instead of the IP; there's more info here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Basically, a Service is reachable at http://<metadata-name>.<namespace>.svc.cluster.local, so a backend Service named backend in the default namespace would be reachable at http://backend.default.svc.cluster.local.
It's better this way because a pod's IP address can change.
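For that name to resolve, the backend pods need a Service in front of them. A minimal sketch (the Service name backend and the label app: backend are assumptions, since the backend manifests aren't shown in the question):

apiVersion: v1
kind: Service
metadata:
  name: backend            # resolvable as backend.default.svc.cluster.local
spec:
  selector:
    app: backend           # must match the labels on the backend pods
  ports:
  - port: 80
    targetPort: 80

The frontend's IP_BACKEND environment variable can then point at http://backend.default.svc.cluster.local instead of a pod IP.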
You could use pod field values for environment variables (see https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). That way you can set the pod IP in an environment variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
      volumes:
      - name: data
        emptyDir: {}
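The downward API resolves status.podIP when the container starts, so it is available as an ordinary environment variable inside the container. A quick check (a sketch; newer kubectl accepts the deploy/mysql shorthand, otherwise substitute the full pod name):

kubectl exec deploy/mysql -- sh -c 'echo $POD_IP'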