I have a folder in my project that contains one properties file and one JAR file (a DB driver).
I need to copy both of these files to the /usr/local/tomcat/lib directory in my pod. I am not sure how to achieve this in a Kubernetes YAML file. Below is my YAML, where I am trying to achieve this using a ConfigMap, but pod creation fails with the error "configmap references non-existent config key: app.properties".
The target /usr/local/tomcat/lib already contains other JAR files, so I am using a ConfigMap to avoid overriding the entire directory and to only add the two files specific to my application.
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcatdeployment
labels:
app: tomcat
spec:
replicas: 1
selector:
matchLabels:
app: tomcat
template:
metadata:
labels:
app: tomcat
spec:
containers:
- name: tomcat
image: tomcat:latest
imagePullPolicy: IfNotPresent
volumeMounts:
- name: appvolume
mountPath: /usr/local/data
- name: config
mountPath: /usr/local/tomcat/lib
subPath: ./configuration
ports:
- name: http
containerPort: 8080
protocol: TCP
volumes:
- name: appvolume
- name: config
configMap:
name: config-map
items:
- key: app.properties
path: app.properties
---
apiVersion: v1
kind: ConfigMap
metadata:
name: config-map
data:
key: app.properties
Current directory structure:
.
├── configuration
│ ├── app.properties
│ └── mysql-connector-java-5.1.21.jar
├── deployment.yaml
└── service.yaml
Please share your valuable feedback on how to achieve this.
Regards.
Please try this:
kubectl create configmap config-map --from-file=app.properties --from-file=mysql-connector-java-5.1.21.jar
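Note that the JAR is a binary file, so kubectl stores it under the ConfigMap's binaryData field rather than data; also keep in mind that ConfigMaps are capped at roughly 1 MiB, so this only works for reasonably small driver JARs. You can inspect what was created with:
kubectl get configmap config-map -o yaml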
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcatdeployment
labels:
app: tomcat
spec:
replicas: 1
selector:
matchLabels:
app: tomcat
template:
metadata:
labels:
app: tomcat
spec:
containers:
- name: tomcat
image: tomcat:latest
imagePullPolicy: IfNotPresent
volumeMounts:
- name: config
mountPath: /usr/local/tomcat/lib/conf
ports:
- name: http
containerPort: 8080
protocol: TCP
volumes:
- name: config
configMap:
name: config-map
Or, mounting the two files individually via subPath:
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcatdeployment
labels:
app: tomcat
spec:
replicas: 1
selector:
matchLabels:
app: tomcat
template:
metadata:
labels:
app: tomcat
spec:
containers:
- name: tomcat3
image: tomcat:latest
imagePullPolicy: IfNotPresent
volumeMounts:
- name: config
mountPath: /usr/local/tomcat/lib/app.properties
subPath: app.properties
- name: config
mountPath: /usr/local/tomcat/lib/mysql-connector-java-5.1.21.jar
subPath: mysql-connector-java-5.1.21.jar
ports:
- name: http
containerPort: 8080
protocol: TCP
volumes:
- name: config
configMap:
name: config-map
items:
- key: app.properties
path: app.properties
- key: mysql-connector-java-5.1.21.jar
path: mysql-connector-java-5.1.21.jar
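One caveat with this second variant: files mounted via subPath are not refreshed when the ConfigMap is updated; the pod has to be restarted to pick up new contents. The first variant, which mounts the whole ConfigMap into its own subdirectory, does receive updates.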
It's normal to get this error. In your volume declaration you referenced key: app.properties, but in the ConfigMap the data entry is key: app.properties; that is, the key is literally key and its value is the string app.properties. So in the volume declaration you must change:
volumes:
- name: appvolume
- name: config
configMap:
name: config-map
items:
- key: app.properties
path: app.properties
to:
volumes:
- name: appvolume
- name: config
configMap:
name: config-map
items:
- key: key
path: app.properties
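Alternatively, the cleaner fix is to rename the ConfigMap's data entry so that the key really is app.properties. A minimal sketch, with placeholder property values rather than your real file contents:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  app.properties: |
    # placeholder values; replace with the contents of your app.properties
    db.driver=com.mysql.jdbc.Driver
With that in place, the original volume declaration (key: app.properties) works unchanged.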
For more, you can refer to add-configmap-data-to-a-volume in the Kubernetes documentation.
My file directory looks like this:
.
├── deployment.yaml
├── config.yaml
└── import
    └── realm.json
This is the deployment.yaml file that I used based on the suggestion from Harsh Manvar:
apiVersion: v1
kind: Service
metadata:
name: keycloak
labels:
app: keycloak
spec:
selector:
app: keycloak
type: NodePort
ports:
- port: 8080
targetPort: http
protocol: TCP
name: http
nodePort: 32488
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:17.0.1
args:
- "start-dev"
- "--import-realm"
env:
- name: KEYCLOAK_ADMIN
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: KEYCLOAK_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
- name: KC_PROXY
value: "edge"
volumeMounts:
- name: keycloak-volume
mountPath: "/import/realm.json"
name: "keycloak-volume"
readOnly: true
subPath: "realm.json"
ports:
- name: http
containerPort: 8080
readinessProbe:
httpGet:
path: /realms/master
port: 8080
initialDelaySeconds: 120
volumes:
- name: keycloak-volume
configMap:
name: keycloak-configmap
And my config.yaml looks like this (where {json_content} is where I paste the content of the realm JSON file to be imported):
apiVersion: v1
data:
realm.json: |
{json_content}
kind: ConfigMap
metadata:
name: keycloak-configmap
But when I accessed the Keycloak dashboard's web GUI, the imported realm did not show up.
Try it once with:
- mountPath: "/import/realm.json"
name: "keycloak-volume"
readOnly: true
subPath: "realm.json"
On older versions (the WildFly-based ones, I think) importing the Keycloak realm using environment variables was supported, but that has since been removed: https://github.com/keycloak/keycloak/issues/10216
Also, it's supported in version 18, while you are using 17.
Still, with 17 you can give it a try by passing an argument in the deployment config (see the official import doc):
args:
- "start-dev"
- "--import-realm"
Also, if you check the thread, some are suggesting the use of the variable KEYCLOAK_REALM_IMPORT.
I also came across this blog, which points to the legacy option for importing the realm; do check it out: http://www.mastertheboss.com/keycloak/keycloak-with-docker/
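For what it's worth, on the Quarkus-based distribution --import-realm looks for realm files under /opt/keycloak/data/import, not /import. A hedged sketch of the volume mount, assuming that directory applies to your version (check the official import doc):
volumeMounts:
- name: keycloak-volume
  mountPath: /opt/keycloak/data/import/realm.json
  subPath: realm.json
  readOnly: true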
I am attempting to install the APOC library on a Neo4j instance running within a Kubernetes cluster. The APOC plugin is installed, and I can see "apoc" entries available after running CALL dbms.procedures(), all labeled with worksOnSystem / false. How can I enable them?
Current YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: neo4j-db
namespace: prod
spec:
replicas: 1
revisionHistoryLimit: 1
selector:
matchLabels:
app: neo4j-db
template:
metadata:
labels:
app: neo4j-db
spec:
containers:
- image: neo4j
name: neo4j
env:
- name: NEO4J_dbms_security_procedures_unrestricted
value: "apoc.*"
- name: NEO4JLABS_PLUGINS
value: '["apoc"]'
ports:
- containerPort: 7474
name: http
- containerPort: 7687
name: bolt
- containerPort: 7473
name: https
volumeMounts:
- name: neo4j-data
mountPath: /data
- name: neo4j-plugins
mountPath: /plugins
volumes:
- name: neo4j-data
persistentVolumeClaim:
claimName: neo4j-prod-gid-pvc
- name: neo4j-plugins
persistentVolumeClaim:
claimName: neo4j-prod-plugin-gid-pvc
I don't see an option to mount a ConfigMap as a volume in a StatefulSet. As per https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulset-v1-apps, only a PVC can be associated with a StatefulSet, but a PVC does not have an option for ConfigMaps.
Here is a minimal example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: example
spec:
selector:
matchLabels:
app: example
serviceName: example
template:
metadata:
labels:
app: example
spec:
containers:
- name: example
image: nginx:stable-alpine
volumeMounts:
- mountPath: /config
name: example-config
volumes:
- name: example-config
configMap:
name: example-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
name: example-configmap
data:
a: "1"
b: "2"
In the container, you can find the files a and b under /config, with the contents 1 and 2, respectively.
Some explanation:
You do not need a PVC to mount the ConfigMap as a volume in your pods. PersistentVolumeClaims are persistent drives that you can read from and write to; a typical use case is a database, such as Postgres.
ConfigMaps, on the other hand, are read-only key-value structures stored inside Kubernetes (in its etcd store) that hold the configuration for your application. Their values can be exposed as environment variables or mounted as files, either individually or all together.
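For example, the same ConfigMap's values could also be consumed as environment variables instead of files; a minimal sketch (the variable name A_VALUE is just illustrative):
containers:
- name: example
  image: nginx:stable-alpine
  env:
  - name: A_VALUE
    valueFrom:
      configMapKeyRef:
        name: example-configmap
        key: a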
I have done it this way:
apiVersion: v1
kind: ConfigMap
metadata:
name: rabbitmq-configmap
namespace: default
data:
enabled_plugins: |
[rabbitmq_management,rabbitmq_shovel,rabbitmq_shovel_management].
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
labels:
component: rabbitmq
spec:
serviceName: "rabbitmq"
replicas: 1
selector:
matchLabels:
component: rabbitmq
template:
metadata:
labels:
component: rabbitmq
spec:
initContainers:
- name: "rabbitmq-config"
image: busybox:1.32.0
volumeMounts:
- name: rabbitmq-config
mountPath: /tmp/rabbitmq
- name: rabbitmq-config-rw
mountPath: /etc/rabbitmq
command:
- sh
- -c
- cp /tmp/rabbitmq/enabled_plugins /etc/rabbitmq/enabled_plugins
volumes:
- name: rabbitmq-config
configMap:
name: rabbitmq-configmap
optional: false
items:
- key: enabled_plugins
path: "enabled_plugins"
- name: rabbitmq-config-rw
emptyDir: {}
containers:
- name: rabbitmq
image: rabbitmq:3.8.5-management
env:
- name: RABBITMQ_DEFAULT_USER
value: "username"
- name: RABBITMQ_DEFAULT_PASS
value: "password"
- name: RABBITMQ_DEFAULT_VHOST
value: "vhost"
ports:
- containerPort: 15672
name: ui
- containerPort: 5672
name: api
volumeMounts:
- name: rabbitmq-data-pvc
  mountPath: /var/lib/rabbitmq/mnesia
- name: rabbitmq-config-rw
  mountPath: /etc/rabbitmq
volumeClaimTemplates:
- metadata:
name: rabbitmq-data-pvc
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
spec:
selector:
component: rabbitmq
ports:
- protocol: TCP
port: 15672
targetPort: 15672
name: ui
- protocol: TCP
port: 5672
targetPort: 5672
name: api
type: ClusterIP
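The initContainer here exists because ConfigMap mounts are read-only: the ConfigMap is mounted at /tmp/rabbitmq, the init container copies its contents into the writable emptyDir, and the rabbitmq container then reads its configuration from that emptyDir mounted at /etc/rabbitmq.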
I am trying to mount config files from a hostPath into a Kubernetes container. This works using minikube and a VirtualBox shared folder, but I am unable to make it work on Linux.
I am making use of AWS EKS and the following architecture: https://aws.amazon.com/quickstart/architecture/amazon-eks/. I think my problem is that the files need to live on each of the EKS node instances.
Below is the Deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
name: openhim-core-deployment
spec:
replicas: 1
selector:
matchLabels:
component: openhim-core
template:
metadata:
labels:
component: openhim-core
spec:
volumes:
- name: core-config
hostPath:
path: /var/config/openhim-core
containers:
- name: openhim-core
image: jembi/openhim-core:5.rc
ports:
- containerPort: 8080
- containerPort: 5000
- containerPort: 5001
volumeMounts:
- name: core-config
mountPath: /usr/src/app/config
env:
- name: NODE_ENV
value: development
After much pain, I found that I had been placing the configuration on the Linux bastion host where I have access to kubectl, when in fact the configuration would have to be on each of the EC2 instances in every availability zone.
The solution for me was to make use of an initContainer:
apiVersion: apps/v1
kind: Deployment
metadata:
name: openhim-core-deployment
spec:
replicas: 1
selector:
matchLabels:
component: openhim-core
template:
metadata:
labels:
component: openhim-core
spec:
containers:
- name: openhim-core
image: jembi/openhim-core:5
ports:
- containerPort: 8080
- containerPort: 5000
- containerPort: 5001
volumeMounts:
- name: core-config
mountPath: /usr/src/app/config
env:
- name: NODE_ENV
value: development
initContainers:
- name: install
image: busybox
command:
- wget
- "-O"
- "/usr/src/app/config/development.json"
- https://s3.eu-central-1.amazonaws.com/../development.json
volumeMounts:
- name: core-config
mountPath: "/usr/src/app/config"
volumes:
- name: core-config
emptyDir: {}
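With this approach nothing has to be pre-placed on the EKS worker nodes: the init container downloads development.json into the emptyDir volume each time the pod starts, and the main container reads it from /usr/src/app/config.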
I am trying to deploy a simple nginx in Kubernetes using a host volume. I use the following YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: webserver
spec:
replicas: 1
template:
metadata:
labels:
app: webserver
spec:
containers:
- name: webserver
image: nginx:alpine
ports:
- containerPort: 80
volumeMounts:
- name: hostvol
mountPath: /usr/share/nginx/html
volumes:
- name: hostvol
hostPath:
path: /home/docker/vol
When I deploy it with kubectl create -f webserver.yaml, it throws the following error:
error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false
I believe you have the wrong indentation. The volumes key should be at the same level as containers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: webserver
spec:
replicas: 1
template:
metadata:
labels:
app: webserver
spec:
containers:
- name: webserver
image: nginx:alpine
ports:
- containerPort: 80
volumeMounts:
- name: hostvol
mountPath: /usr/share/nginx/html
volumes:
- name: hostvol
hostPath:
path: /home/docker/vol
Look at the WordPress example from the documentation to see how it's done.
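One further note: the extensions/v1beta1 API for Deployments has since been removed; on current clusters the same manifest goes under apps/v1, which additionally requires a selector. A sketch of the equivalent:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol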