Kubernetes pod definition file issue

I can run the Docker container for WSO2 EI with the following command.
docker run -it -p 8280:8280 -p 8243:8243 -p 9443:9443 -v wso2ei:/home/wso2carbon --name integrator wso2/wso2ei-integrator
I'm trying to create the pod definition file for the same container. I don't know how to do port mapping and volume mapping in a pod definition file. The following is the file I have created so far. How can I complete the rest?
apiVersion: v1
kind: Pod
metadata:
  name: ei-pod
  labels:
    type: ei
    version: 6.6.0
spec:
  containers:
    - name: integrator
      image: wso2/wso2ei-integrator

Here is YAML content which might work:
apiVersion: v1
kind: Pod
metadata:
  name: ei-pod
  labels:
    type: ei
    version: 6.6.0
spec:
  containers:
    - name: integrator
      image: wso2/wso2ei-integrator
      ports:                           # the ports published with -p in the docker run command
        - containerPort: 8280
        - containerPort: 8243
        - containerPort: 9443
      volumeMounts:
        - mountPath: /home/wso2carbon  # container path used in the -v flag
          name: wso2ei
  volumes:
    - name: wso2ei
      hostPath:
        # directory location on host
        path: /home/wso2carbon
While the above YAML is just a basic example, it is not recommended for production use, for two reasons:
1. Use a Deployment, StatefulSet, or DaemonSet instead of a bare Pod.
2. A hostPath volume is not shareable between nodes, so use an external volume such as NFS or block storage and mount it into the pod. Also look at dynamic volume provisioning using a StorageClass (see the sketch below).
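For the dynamic provisioning option, a PersistentVolumeClaim can request storage from a StorageClass and then replace the hostPath volume in the pod. The class name and size below are assumptions, not values from the question:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wso2ei-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # assumed class name; use one that exists in your cluster
  resources:
    requests:
      storage: 5Gi             # assumed size

In the pod spec, the volume would then reference the claim instead of a hostPath:

volumes:
  - name: wso2ei
    persistentVolumeClaim:
      claimName: wso2ei-pvc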

Related

Apache Flink Kubernetes operator - how to override the Docker entrypoint?

I tried to override the container entrypoint of a Flink application in a Dockerfile, but it looks like the Apache Flink Kubernetes operator ignores it.
The Dockerfile is the following:
FROM flink:1.14.2-scala_2.12-java11
ENV FLINK_HOME=/opt/flink
COPY custom-docker-entrypoint.sh /
RUN chmod a+x /custom-docker-entrypoint.sh
COPY --chown=flink:flink --from=build /target/*.jar /opt/flink/flink-web-upload/
ENTRYPOINT ["/custom-docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
The definition of the FlinkDeployment uses the new image:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  image: "flink-example:0.1.0"
  #...
In the description of the pod
kubectl describe pod flink-example
I see the following output:
Containers:
  flink-main-container:
    Command:
      /docker-entrypoint.sh
I also tried to define the custom-docker-entrypoint.sh in the main container's command:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  flinkVersion: v1_14
  image: "flink-example:0.1.0"
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-template
    spec:
      containers:
        - name: flink-main-container
          # command: [ 'sh','-c','/custom-docker-entrypoint.sh' ]
          command: [ "/custom-docker-entrypoint.sh" ]
Thank you.
You can override it via:
flinkConfiguration:
  kubernetes.entry.path: "/custom-docker-entrypoint.sh"
The Operator (by default) uses Flink's native Kubernetes integration. See: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#kubernetes-entry-path
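For context, a sketch of where that setting would sit in the FlinkDeployment from the question (reusing the name and image above; the rest of the spec is elided):

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  flinkVersion: v1_14
  image: "flink-example:0.1.0"
  flinkConfiguration:
    # point Flink's native Kubernetes integration at the custom entrypoint
    kubernetes.entry.path: "/custom-docker-entrypoint.sh"
  #...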

How to mount one file from a secret to a container?

I'm trying to mount a secret as a file
apiVersion: v1
data:
  credentials.conf: >-
    dGl0bGU6IHRoaYWwpCg==
kind: Secret
metadata:
  name: address-finder-secret
type: Opaque
kind: DeploymentConfig
apiVersion: v1
metadata:
  name: app-sample
spec:
  replicas: 1
  selector:
    app: app-sample
  template:
    metadata:
      labels:
        app: app-sample
    spec:
      volumes:
        - name: app-sample-vol
          configMap:
            name: app-sample-config
        - name: secret
          secret:
            secretName: address-finder-secret
      containers:
        - name: app-sample
          volumeMounts:
            - mountPath: /config
              name: app-sample-vol
            - mountPath: ./secret/credentials.conf
              name: secret
              readOnly: true
              subPath: credentials.conf
I need to add the credentials.conf file to a directory where there are already other files. I'm trying to use subPath, but I get 'Error: failed to create subPath directory for volumeMount "secret" of container "app-sample"'
If I remove the subPath, I will lose all other files in the directory.
Where did I go wrong?
Hello, hope you are enjoying your Kubernetes journey!
It would have been better if you had given your image name so I could try it out; instead, I decided to create a custom image.
I created a simple file named file1.txt and copied it into the image. Here is my Dockerfile:
FROM nginx
COPY file1.txt /secret/
I built it simply with:
❯ docker build -t test-so-mount-file .
I just checked that my file was there before going further:
❯ docker run -it test-so-mount-file bash
root@1c9cebc4884c:/# ls
bin   boot  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  secret  srv  sys  tmp  usr  var
root@1c9cebc4884c:/# cd secret/
root@1c9cebc4884c:/secret# ls
file1.txt
root@1c9cebc4884c:/secret#
Perfect, now let's deploy it on Kubernetes.
For this test, since I'm using kind (Kubernetes in Docker), I just used this command to load my image into the cluster:
❯ kind load docker-image test-so-mount-file --name so-cluster-1
It seems that you are deploying on OpenShift, judging by the DeploymentConfig kind. However, once my image was added to my cluster, I modified your deployment to use it,
first without volumes, to check that file1.txt is in the container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-sample
  template:
    metadata:
      labels:
        app: app-sample
    spec:
      containers:
        - name: app-sample
          image: test-so-mount-file
          imagePullPolicy: Never
          # volumeMounts:
Yes, it is:
❯ k exec -it app-sample-7b96558fdf-hn4qt -- ls /secret
file1.txt
Before going further: when I tried to deploy your secret, I got this:
Error from server (BadRequest): error when creating "manifest.yaml": Secret in version "v1" cannot be handled as a Secret: illegal base64 data at input byte 20
This is caused by your base64 string, which actually contains illegal base64 data; here it is:
❯ base64 -d <<< "dGl0bGU6IHRoaYWwpCg=="
title: thi���(base64: invalid input
No problem, I used another string encoded in base64:
❯ base64 <<< test
dGVzdAo=
and added it to the secret. Since I want this data to end up in a file, I replaced the '>-' with a '|-' (see "What is the difference between '>-' and '|-' in yaml?"); however, it works with or without that change.
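For reference, a small self-contained illustration of the difference between the two block scalar styles (not taken from the manifests above): '|-' keeps the newlines inside the block, while '>-' folds them into spaces; the trailing '-' strips the final newline in both cases.

literal: |-
  line one
  line two
# value of "literal" is "line one\nline two"
folded: >-
  line one
  line two
# value of "folded" is "line one line two"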
Now, let's add the secret to our deployment. I replaced "./secret/credentials.conf" with "/secret/credentials.conf" (it works with or without, but I prefer to remove the "."). Since I don't have your configmap data, I commented that part out. Here is the deployment manifest of my file manifest.yaml:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: address-finder-secret
data:
  credentials.conf: |-
    dGVzdAo=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-sample
  template:
    metadata:
      labels:
        app: app-sample
    spec:
      containers:
        - name: app-sample
          image: test-so-mount-file
          imagePullPolicy: Never
          volumeMounts:
            # - mountPath: /config
            #   name: app-sample-vol
            - mountPath: /secret/credentials.conf
              name: secret
              readOnly: true
              subPath: credentials.conf
      volumes:
        # - name: app-sample-vol
        #   configMap:
        #     name: app-sample-config
        - name: secret
          secret:
            secretName: address-finder-secret
Let's deploy this:
❯ kaf manifest.yaml
secret/address-finder-secret created
deployment.apps/app-sample created
❯ k get pod
NAME READY STATUS RESTARTS AGE
app-sample-c45ff9d58-j92ct 1/1 Running 0 31s
❯ k exec -it app-sample-c45ff9d58-j92ct -- ls /secret
credentials.conf file1.txt
❯ k exec -it app-sample-c45ff9d58-j92ct -- cat /secret/credentials.conf
test
It worked perfectly. Since I haven't modified much from your manifest, I think the problem comes from the DeploymentConfig. I suggest you use a Deployment instead of a DeploymentConfig; that way it should work (I hope), and if someday you decide to migrate from OpenShift to another Kubernetes cluster, your manifest will remain compatible.
bguess

Passing values from initContainers to container spec

I have a kubernetes deployment with the below spec that gets installed via helm 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: gatekeeper
          image: my-gatekeeper-image:some-sha
          args:
            - --listen=0.0.0.0:80
            - --client-id=gk-client
            - --discovery-url={{ .Values.discoveryUrl }}
I need to pass the discoveryUrl value as a helm value, which is the public IP address of the nginx-ingress pod that I deploy via a different helm chart. I install the above deployment like below:
helm3 install my-nginx-ingress-chart
INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
helm3 install my-gatekeeper-chart --set discovery_url=${INGRESS_IP}
This works fine. However, instead of these two helm3 install commands, I now want a single helm3 install where both the nginx-ingress and the gatekeeper deployment are created.
I understand that in the initContainer of my-gatekeeper-image we can get the nginx-ingress IP address, but I am not able to understand how to set that as an environment variable or pass it to the container spec.
There are some Stack Overflow questions that mention that we can create a persistent volume or secret to achieve this, but I am not sure how that would work if we have to delete them. I do not want to create any extra objects and maintain their lifecycle.
It is not possible to do this without mounting a volume. However, the volume does not have to be backed by a block storage device; it can be backed by an in-memory store (an emptyDir with medium: Memory), so there is no extra lifecycle management. The way to achieve that is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper
data:
  gatekeeper.sh: |-
    #!/usr/bin/env bash
    set -e
    INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
    # Do other validations/cleanup
    echo $INGRESS_IP > /opt/gkconf/discovery_url;
    exit 0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
  labels:
    app: gatekeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gatekeeper
  template:
    metadata:
      name: gatekeeper
      labels:
        app: gatekeeper
    spec:
      initContainers:
        - name: gkinit
          command: [ "/opt/gk-init.sh" ]
          image: 'bitnami/kubectl:1.12'
          volumeMounts:
            - mountPath: /opt/gkconf
              name: gkconf
            - mountPath: /opt/gk-init.sh
              name: gatekeeper
              subPath: gatekeeper.sh
              readOnly: false
      containers:
        - name: gatekeeper
          image: my-gatekeeper-image:some-sha
          # ENTRYPOINT of above image should read the
          # file /opt/gkconf/discovery_url and then launch
          # the actual gatekeeper binary
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
          volumeMounts:
            - mountPath: /opt/gkconf
              name: gkconf
      volumes:
        - name: gkconf
          emptyDir:
            medium: Memory
        - name: gatekeeper
          configMap:
            name: gatekeeper
            defaultMode: 0555
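The comment above leaves the gatekeeper image's ENTRYPOINT unspecified. A rough sketch of such a wrapper script is shown below; the binary path /usr/local/bin/gatekeeper is an assumption, and the flags simply mirror the args from the question:

#!/usr/bin/env bash
set -e
# Read the IP written by the init container into the shared in-memory volume
DISCOVERY_URL=$(cat /opt/gkconf/discovery_url)
# Launch the actual gatekeeper binary with the discovered address
exec /usr/local/bin/gatekeeper \
  --listen=0.0.0.0:80 \
  --client-id=gk-client \
  --discovery-url="${DISCOVERY_URL}"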
Using init containers is indeed a valid solution but you need to be aware that by doing so you are adding complexity to your deployment.
This is because you would also need to create a ServiceAccount with permissions to read Service objects from inside the init container. Then, once you have the IP, you cannot just set an environment variable for the gatekeeper container without recreating the pod, so you would need to save the IP, e.g. to a shared file, and read it from there when starting gatekeeper. A sketch of such RBAC objects is shown below.
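This sketch assumes the default namespace and placeholder object names; the pod would then run with serviceAccountName: gatekeeper-init so the kubectl call in the init container is allowed to read Services:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gatekeeper-init
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gatekeeper-init-service-reader
  namespace: default
subjects:
  - kind: ServiceAccount
    name: gatekeeper-init
    namespace: default
roleRef:
  kind: Role
  name: service-reader
  apiGroup: rbac.authorization.k8s.io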
Alternatively, you can reserve an IP address if your cloud provider supports this feature, and use this static IP when deploying the nginx service:
apiVersion: v1
kind: Service
[...]
spec:
  type: LoadBalancer
  loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
Let me know if you have any questions or if something needs clarification.

Write to Secret file in pod

I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then I create a volume mount in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
        - name: secret-volume
          secret:
            secretName: string-data-secret
      containers:
        - name: k8s-webapp-test
          image: dockerstore/k8s-webapp-test:1.0.4
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret-volume
              mountPath: "/secrets"
              readOnly: false
So, after the deployment, I have 2 pods with the volume mounted at C:\secrets (I do use Windows nodes). When I try to edit config.yaml, located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I marked the volume mount with readOnly: false, I cannot write to the file. How can I modify it?
As you can see here, this is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which mounts the secret and then copies it to the desired location, where you might be able to modify it (a sketch is shown after the postStart example below).
As an alternative to the init container, you might also use a container lifecycle hook, i.e. a PostStart hook, which executes immediately after a container is created.
lifecycle:
  postStart:
    exec:
      command:
        - "/bin/sh"
        - "-c"
        - >
          cp -r /secrets ~/secrets;
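For the init container variant mentioned earlier, a rough sketch could mount the secret read-only, copy it into a writable emptyDir, and mount that emptyDir into the application container. The volume names, the busybox image, and the /writable-secrets path are made up for illustration; on Windows nodes you would need a Windows-based image and copy command instead of busybox:

spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: mysecret
    - name: writable-secrets       # hypothetical writable copy of the secret
      emptyDir: {}
  initContainers:
    - name: copy-secret
      image: busybox               # placeholder; use a Windows image on Windows nodes
      command: ["sh", "-c", "cp /secrets-ro/* /writable-secrets/"]
      volumeMounts:
        - name: secret-volume
          mountPath: /secrets-ro
          readOnly: true
        - name: writable-secrets
          mountPath: /writable-secrets
  containers:
    - name: k8s-webapp-test
      image: dockerstore/k8s-webapp-test:1.0.4
      volumeMounts:
        - name: writable-secrets
          mountPath: /secrets      # the app now sees a writable copy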
You can create secrets from within a Pod but it seems you need to utilize the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
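As a very rough sketch of that approach (assuming the pod's service account has been granted RBAC permission to create Secrets; the secret name and contents here are placeholders), a container can call the API with its mounted service account token:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -X POST "https://kubernetes.default.svc/api/v1/namespaces/${NS}/secrets" \
  -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"generated-secret"},"stringData":{"config.yaml":"apiUrl: https://my.api.com/api/v1"}}'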

Can we create Multiple databases in Same Persistent Volume in kubernetes ?

I have an Azure Kubernetes machine with an agent node of 2 cores and 1 GB of memory. Two of my services are running on this machine, each with its own Postgres (Deployment, Service, PV, PVC).
I want to host my third service on the same machine too.
So when I tried to create the Postgres Deployment (this one too has its own Service, PV, PVC), the pod got stuck in status=ContainerCreating.
After some digging I got to know that my VM only supports data disks.
So I thought, why not use the PVC of an earlier deployment in the current service, like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: third-postgres
  labels:
    name: third-postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: third-postgres
    spec:
      containers:
        - name: third-postgres
          image: postgres
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_USER
              value: third-user
            - name: POSTGRES_PASSWORD
              value: <password>
            - name: POSTGRES_DB
              value: third_service_db
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: third-postgresdata
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: third-postgresdata
          persistentVolumeClaim:
            claimName: <second-postgres-data>
Now this Deployment was running successfully, but it didn't create the new database third_service_db.
Maybe because the second PVC already exists, so it skips the database creation part?
So, is there any way I can use the same PVC for all my services, with that one PVC holding multiple databases, so that when I run kubectl create -f <path-to-third-postgres.yaml> it takes the database configuration from the env variables and creates the DB in the same PVC?
You have to create one PVC per Deployment. Once a PVC has been claimed, it must be released before it can be used again.
In the case of AzureDisk, the auto-created volumes can only be mounted by a single node (ReadWriteOnce access mode) so there's one more constraint: each of your Deployments can have at most 1 replica.
Yes, you can create as many databases as you want on the same PersistentVolume. You have to change the path value to store each database in a different directory. See the example below.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ...
  namespace: ...
  labels:
    type: ...
spec:
  storageClassName: ...
  capacity:
    storage: ...
  accessModes:
    - ...
  hostPath:
    path: "/mnt/data/DIFFERENT_PATH_FOR_EACH_DATABASE"