I'm trying to mount config files from a hostPath into a Kubernetes container. This works using minikube and a VirtualBox shared folder, but I am unable to make it work on Linux.
I am making use of AWS EKS and the following architecture: https://aws.amazon.com/quickstart/architecture/amazon-eks/. I think my problem is that the files need to live on each of the EKS node instances.
Below is the Deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      volumes:
      - name: core-config
        hostPath:
          path: /var/config/openhim-core
      containers:
      - name: openhim-core
        image: jembi/openhim-core:5.rc
        ports:
        - containerPort: 8080
        - containerPort: 5000
        - containerPort: 5001
        volumeMounts:
        - name: core-config
          mountPath: /usr/src/app/config
        env:
        - name: NODE_ENV
          value: development
After much pain, I found that I had been placing the configuration on the Linux bastion host where I have access to kubectl, whereas it actually has to be present on each of the EC2 instances in every availability zone.
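In other words, a hostPath volume only works if the file already exists at that path on whichever node the pod lands on, which for this quick start would mean something like the following on every worker node (a sketch; development.json is a placeholder file name):
# run on each EKS worker node (hypothetical file name)
sudo mkdir -p /var/config/openhim-core
sudo cp development.json /var/config/openhim-core/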
The solution for me was to make use of an initContainer instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      containers:
      - name: openhim-core
        image: jembi/openhim-core:5
        ports:
        - containerPort: 8080
        - containerPort: 5000
        - containerPort: 5001
        volumeMounts:
        - name: core-config
          mountPath: /usr/src/app/config
        env:
        - name: NODE_ENV
          value: development
      initContainers:
      - name: install
        image: busybox
        command:
        - wget
        - "-O"
        - "/usr/src/app/config/development.json"
        - https://s3.eu-central-1.amazonaws.com/../development.json
        volumeMounts:
        - name: core-config
          mountPath: "/usr/src/app/config"
      volumes:
      - name: core-config
        emptyDir: {}
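To verify that the init container actually populated the shared volume, something like the following should show the downloaded file (a sketch; the exact file name depends on what the wget URL points to):
kubectl rollout status deployment/openhim-core-deployment
kubectl exec deploy/openhim-core-deployment -c openhim-core -- ls -l /usr/src/app/config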
I am trying to use the git-sync image as a sidecar in Kubernetes that runs git pull periodically and mounts the cloned data to a shared volume.
Everything works fine when I configure it for a one-time sync, but I want it to run periodically, say every 10 minutes. When I configure it to run periodically, pod initialization fails.
I have read the documentation but couldn't find a proper answer. It would be nice if you could help me figure out what I am missing in my configuration.
Here is my configuration that is failing.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      initContainers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
        - name: GIT_SYNC_BRANCH
          value: "master" ##repo-branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ##path-where-you-want-to-clone
        - name: GIT_SYNC_PERIOD
          value: "10"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      volumes:
      - name: www-data
        emptyDir: {}
Pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-helloworld
  name: nginx-helloworld
spec:
  containers:
  - image: nginx
    name: nginx-helloworld
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
You are using git-sync as an initContainer, which runs only during pod initialization (once in the pod lifecycle). Since GIT_SYNC_ONE_TIME is "false", git-sync never exits, so the init phase never completes and the pod fails to start.
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
init-containers
So run git-sync as a regular container instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
        - name: GIT_SYNC_BRANCH
          value: "master" ##repo-branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ##path-where-you-want-to-clone
        - name: GIT_SYNC_PERIOD
          value: "20"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      volumes:
      - name: www-data
        emptyDir: {}
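If the sidecar is syncing correctly, the clone should appear in the shared volume and be visible to nginx after the first sync period. A rough check (a sketch; the deployment and path names come from the manifest above):
kubectl exec deploy/nginx-deployment -c nginx-helloworld -- ls /usr/share/nginx/html/hello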
I got this error when deploying a Kubernetes Deployment. I tried impersonating the root user via the security context, but it didn't help. Any guess how to solve it? Unfortunately, I don't have any other ideas or a workaround to avoid this permission issue.
The error I get is:
30: line 1: /scripts/wrapper.sh: Permission denied
stream closed
The deployment is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler-grok-exporter
  labels:
    app: cluster-autoscaler-grok-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler-grok-exporter
      sidecar: cluster-autoscaler-grok-exporter-sidecar
  template:
    metadata:
      labels:
        app: cluster-autoscaler-grok-exporter
        sidecar: cluster-autoscaler-grok-exporter-sidecar
    spec:
      securityContext:
        runAsUser: 1001
        fsGroup: 2000
      serviceAccountName: flux
      imagePullSecrets:
      - name: id-docker
      containers:
      - name: get-data
        # 3.5.0 - helm v3.5.0, kubectl v1.20.2, alpine 3.12
        image: dtzar/helm-kubectl:3.5.0
        command: ["sh", "-c", "/scripts/wrapper.sh"]
        args:
        - cluster-autoscaler
        - "90"
        # - cluster-autoscaler
        - "30"
        - /scripts/get_data.sh
        - /logs/data.log
        volumeMounts:
        - name: logs
          mountPath: /logs/
        - name: scripts-volume-get-data
          mountPath: /scripts/get_data.sh
          subPath: get_data.sh
        - name: scripts-wrapper
          mountPath: /scripts/wrapper.sh
          subPath: wrapper.sh
      - name: export-data
        image: ippendigital/grok-exporter:1.0.0.RC3
        imagePullPolicy: Always
        ports:
        - containerPort: 9148
          protocol: TCP
        volumeMounts:
        - name: grok-config-volume
          mountPath: /grok/config.yml
          subPath: config.yml
        - name: logs
          mountPath: /logs
      volumes:
      - name: grok-config-volume
        configMap:
          name: grok-exporter-config
      - name: scripts-volume-get-data
        configMap:
          name: get-data-script
          defaultMode: 0777
          defaultMode: 0700
      - name: scripts-wrapper
        configMap:
          name: wrapper-config
          defaultMode: 0777
          defaultMode: 0700
      - name: logs
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: cluster-autoscaler-grok-exporter-sidecar
  labels:
    sidecar: cluster-autoscaler-grok-exporter-sidecar
spec:
  type: ClusterIP
  ports:
  - name: metrics
    protocol: TCP
    targetPort: 9144
    port: 9148
  selector:
    sidecar: cluster-autoscaler-grok-exporter-sidecar
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: cluster-autoscaler-grok-exporter
    app.kubernetes.io/part-of: grok-exporter
  name: cluster-autoscaler-grok-exporter
spec:
  endpoints:
  - port: metrics
  selector:
    matchLabels:
      sidecar: cluster-autoscaler-grok-exporter-sidecar
From what I can see, your script does not have execute permissions: defaultMode: 0700 makes the mounted script readable and executable only by its owner (root), while your pod runs as user 1001.
Remove this line from your config map:
defaultMode: 0700
Keep only:
defaultMode: 0777
Also, there is a missing leading / in your script path:
- /bin/sh scripts/get_data.sh
So change it to:
- /bin/sh /scripts/get_data.sh
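With that change, the relevant volumes section would look like this (a sketch of the fix described above):
volumes:
- name: scripts-volume-get-data
  configMap:
    name: get-data-script
    defaultMode: 0777
- name: scripts-wrapper
  configMap:
    name: wrapper-config
    defaultMode: 0777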
I'm learning how to use Kubernetes by using minikube and creating a Drupal site.
I was able to minikube service my Drupal site and get as far as the "Set up database" page, but no further. It keeps telling me I need to enter the correct info. I checked my MySQL pod and was able to exec into it and reach MySQL in my database.
I'm not sure what I'm missing. Is the MySQL pod not connected to my Drupal pod?
Here's my drupal-mysql.yaml file:
---
apiVersion: v1
kind: Service
metadata:
  name: drupal-mysql-service
spec:
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    app: drupal
  type: ClusterIP
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: drupal-mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: seto_password
        - name: MYSQL_DATABASE
          value: drupal_databases
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        volumeMounts:
        - name: vol-drupal
          mountPath: /var/lib/mysql
          subPath: 'mysql'
      volumes:
      - name: vol-drupal
        persistentVolumeClaim:
          claimName: drupal-seto-mysql
Here's my drupal.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
  labels:
    app: drupal
spec:
  selector:
    matchLabels:
      app: drupal
      tier: frontend
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: drupal
        tier: frontend
    spec:
      selector:
      initContainers:
      - name: init-sites-volume
        image: drupal:8.9.11
        command: ['/bin/bash', '-c']
        args:
          ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
        volumeMounts:
        - mountPath: /data
          name: vol-drupal
      containers:
      - image: drupal:8.9.11
        name: drupal
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/modules
          name: vol-drupal
          subPath: modules
        - mountPath: /var/www/html/profiles
          name: vol-drupal
          subPath: profiles
        - mountPath: /var/www/html/sites
          name: vol-drupal
          subPath: sites
        - mountPath: /var/www/html/themes
          name: vol-drupal
          subPath: themes
      volumes:
      - name: vol-drupal
        persistentVolumeClaim:
          claimName: drupal-seto
And Here's my drupal-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: drupal-service
  labels:
    app: drupal
spec:
  type: NodePort
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: drupal
You need to connect between containers using the IP address of the containers, or perhaps of the pods. But what is the point of keeping the database in a container? Instead, install MySQL on the host machine. I have done it; check out this guide:
https://www.youtube.com/watch?v=7Y4RJrk-cFw
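For reference, if you do keep MySQL in the cluster, the Drupal pod would normally reach it through the Service's DNS name rather than a pod IP. A hedged sketch of the database settings (note: this assumes the Service selector is corrected to match the MySQL pod labels, i.e. app: mysql rather than app: drupal as in the manifest above):
# Values for Drupal's "Set up database" page (sketch):
#   Host:     drupal-mysql-service   (the Service name; resolvable in-cluster)
#   Database: drupal_databases
#   Username: root
#   Password: seto_password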
I am attempting to install the APOC library into a Neo4j instance running within a Kubernetes cluster. The APOC plugin is installed, and I can see "apoc" entries available after running CALL dbms.procedures(), but they are all labeled as worksOnSystem / false. How can I enable them?
Current YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neo4j-db
  namespace: prod
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: neo4j-db
  template:
    metadata:
      labels:
        app: neo4j-db
    spec:
      containers:
      - image: neo4j
        name: neo4j
        env:
        - name: NEO4J_dbms_security_procedures_unrestricted
          value: "apoc.*"
        - name: NEO4JLABS_PLUGINS
          value: '["apoc"]'
        ports:
        - containerPort: 7474
          name: http
        - containerPort: 7687
          name: bolt
        - containerPort: 7473
          name: https
        volumeMounts:
        - name: neo4j-data
          mountPath: /data
        - name: neo4j-plugins
          mountPath: /plugins
      volumes:
      - name: neo4j-data
        persistentVolumeClaim:
          claimName: neo4j-prod-gid-pvc
      - name: neo4j-plugins
        persistentVolumeClaim:
          claimName: neo4j-prod-plugin-gid-pvc
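Once the pod restarts with these settings, one way to check that the APOC procedures load is to call an APOC function through cypher-shell, which ships in the official neo4j image (a sketch; the credentials are placeholders):
kubectl exec -n prod deploy/neo4j-db -c neo4j -- \
  cypher-shell -u neo4j -p <password> "RETURN apoc.version();"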
I am trying to deploy a simple nginx in Kubernetes using a hostPath volume. I use the following YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
    volumes:
    - name: hostvol
      hostPath:
        path: /home/docker/vol
When I deploy it with kubectl create -f webserver.yaml, it throws the following error:
error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false
I believe you have the wrong indentation. The volumes key should be at the same level as containers; in your manifest it sits directly under template, which is why validation reports an unknown field in PodTemplateSpec.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
Look at this WordPress example from the documentation to see how it's done.
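As an aside, indentation mistakes like this can be caught before anything is created by validating client-side (a sketch; --dry-run=client requires a reasonably recent kubectl):
kubectl apply --dry-run=client -f webserver.yaml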