error when creating "deployment.yaml", Deployment in version "v1" cannot be handled as a Deployment - kubernetes

I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on AWS. Creating the deployment keeps raising errors that I can't decode yet. This is just a test deployment in preparation for migrating my company's web apps to Kubernetes.
I tried editing the content of the deployment to look like conventional examples I've found. I can't even get this simple example to work. You may find the deployment.yaml content below.
apiVersion: v1
kind: Service
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  ports:
    - port: 80
  selector:
    app: ghost
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      containers:
        - image: ghost:4-alpine
          name: ghost
          env:
            - name: database_client
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: client
            - name: database_connection_host
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: host
            - name: database_connection_user
              valueFrom:
                secretKeyRef:tha
            - name: database_connection_password
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: ghostdcp
            - name: database_connection_database
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: ghostdcd
          ports:
            - containerPort: 2368
              name: ghost
          volumeMounts:
            - name: ghost-persistent-storage
              mountPath: /var/lib/ghost
      volumes:
        - name: ghost-persistent-storage
          persistentVolumeClaim:
            claimName: efs-ghost
I ran this command from the folder containing the file:
kubectl create -f deployment-ghost.yaml --validate=false
service/ghost created
Error from server (BadRequest): error when creating "deployment-ghost.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.ValueFrom: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|lueFrom":"secretKeyR|..., bigger context ...|},{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},{"name":"database_connection_pa|...
I couldn't find any information on this from my search, and I can't get the deployment created. Please, can anyone explain what is wrong?

{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},
Your spec has an error: for this env variable, valueFrom is set to the plain string "secretKeyRef:tha" instead of a secretKeyRef mapping with name and key fields. It should look like this:
...
- name: database_connection_user   # <-- The error message points to this env variable
  valueFrom:
    secretKeyRef:
      name: <secret name, e.g. eks-keys>
      key: <key in the secret>
...
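Once that env entry is fixed, it can also help to confirm that the referenced secret exists and contains the expected keys; a quick check (assuming the secret is named eks-keys, as in your manifest) that lists key names and sizes without printing the values:
kubectl describe secret eks-keys
It is also worth dropping --validate=false so that kubectl reports schema problems like this one before the manifest reaches the API server.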

Related

How to apply the imported realm configuration file of keycloak when deploying on k8s

My file directory looks like below:
deployment.yaml
config.yaml
import/
  realm.json
This is the deployment.yaml file that I used based on the suggestion from Harsh Manvar:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  selector:
    app: keycloak
  type: NodePort
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
      nodePort: 32488
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:17.0.1
          args:
            - "start-dev"
            - "--import-realm"
          env:
            - name: KEYCLOAK_ADMIN
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: KEYCLOAK_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
            - name: KC_PROXY
              value: "edge"
          volumeMounts:
            - name: keycloak-volume
              mountPath: "/import/realm.json"
              readOnly: true
              subPath: "realm.json"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /realms/master
              port: 8080
            initialDelaySeconds: 120
      volumes:
        - name: keycloak-volume
          configMap:
            name: keycloak-configmap
And my config.yaml looks like this ({json_content} is where I copy-pasted the content of the exported realm JSON file):
apiVersion: v1
data:
  realm.json: |
    {json_content}
kind: ConfigMap
metadata:
  name: keycloak-configmap
But when I accessed the Keycloak dashboard's web GUI, the imported realm did not show up.
Try with this once:
- mountPath: "/import/realm.json"
  name: "keycloak-volume"
  readOnly: true
  subPath: "realm.json"
On older versions (the WildFly-based distributions, I think) importing the Keycloak realm via environment variables was supported, but it is not anymore: https://github.com/keycloak/keycloak/issues/10216
Also, it's supported in version 18, while you are using 17.
Still, with 17 you can give it a try by passing an argument in the deployment config (see the official import doc):
args:
  - "start-dev"
  - "--import-realm"
Also, if you check the thread, some are suggesting to use the variable KEYCLOAK_REALM_IMPORT.
I also came across this blog, which points to a legacy option for importing the realm; do check it out: http://www.mastertheboss.com/keycloak/keycloak-with-docker/
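As a side note (not from the original answer), a possibly less error-prone way to populate the ConfigMap than pasting the JSON into config.yaml is to create it straight from the exported file; the paths below assume the directory layout shown in the question:
kubectl create configmap keycloak-configmap --from-file=realm.json=./import/realm.json
# inspect the result to confirm the realm.json key made it in:
kubectl get configmap keycloak-configmap -o yaml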

Cannot connect to my MiniKube external service ip/port?

I have a mongo yaml and web-app(NodeJS) yaml set up like this:
mongo-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
mongo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  # blueprint for pods, creates pods with mongo:5.0 image
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongodb
          image: mongo:5.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-user
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
---
# kind: service
# name: any
# selector: select pods to forward the requests to
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: mongo
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 27017
and the webapp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  # blueprint for pods, creates pods with the k8s-demo-app image
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nanajanashia/k8s-demo-app:v1.0
          ports:
            - containerPort: 3000
          env:
            - name: USER_NAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-user
            - name: USER_PWD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
            - name: DB_URL
              valueFrom:
                configMapKeyRef:
                  name: mongo-config
                  key: mongo-url
---
# kind: service
# name: any
# selector: select pods to forward the requests to
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  # default ClusterIP
  # nodeport = external service
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 30100
I ran the command for each file:
kubectl apply -f
I checked the status of the webapp, which returned:
app listening on port 3000!
I got the IP address with:
minikube ip
and the port was 30100.
Why can't I access this web app? I get a "site can't be reached" error.
If you are on Mac, check your minikube driver. I had to stop and delete minikube, then restart it while specifying the hyperkit driver, like so:
minikube stop
minikube delete
minikube start --vm-driver=hyperkit
The information listed here is pretty useful too.
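Another option worth trying (not from the original answer): minikube can print a reachable URL for a NodePort service itself, which sidesteps driver-specific routing issues such as the Docker driver on macOS, where the minikube ip address is not directly reachable from the host:
minikube service webapp-service --url
# prints something like http://127.0.0.1:XXXXX (or http://<minikube-ip>:30100);
# with the Docker driver it keeps a tunnel open while the command runs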

kubernetes Deployment PodName setting

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
        - name: server
          image: test_ml_server:2.3
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: hostpath-vol-testserver
              mountPath: /app/test/api
          # env:
          #   - name: POD_NAME
          #     valueFrom:
          #       fieldRef:
          #         fieldPath: template.metadata.name
        - name: testdb
          image: test_db:1.4
          ports:
            - name: testdb
              containerPort: 1433
          volumeMounts:
            - name: hostpath-vol-testdb
              mountPath: /var/opt/mssql/data
          # env:
          #   - name: POD_NAME
          #     valueFrom:
          #       fieldRef:
          #         fieldPath: template.metadata.name
      volumes:
        - name: hostpath-vol-testserver
          hostPath:
            path: /usr/testhostpath/testserver
        - name: hostpath-vol-testdb
          hostPath:
            path: /usr/testhostpath/testdb
I want to set the name of the pod because the services communicate internally based on the pod name, but when a pod is created by the Deployment the name cannot be used, because a generated suffix is appended to the end.
How can I set the pod name?
It's better if you use a StatefulSet instead of a Deployment. A StatefulSet's pod names will be like <statefulsetName>-0, <statefulsetName>-1, ... You will also need a headless service (clusterIP: None) with which you can bind your pods. See the doc for more details. Ref
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  labels:
    app: test
spec:
  ports:
    - port: 8080
      name: web
  clusterIP: None
  selector:
    app: test
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-statefulset
  labels:
    app: test
spec:
  replicas: 1
  serviceName: test-svc
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
        - name: server
          image: test_ml_server:2.3
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: hostpath-vol-testserver
              mountPath: /app/test/api
        - name: testdb
          image: test_db:1.4
          ports:
            - name: testdb
              containerPort: 1433
          volumeMounts:
            - name: hostpath-vol-testdb
              mountPath: /var/opt/mssql/data
      volumes:
        - name: hostpath-vol-testserver
          hostPath:
            path: /usr/testhostpath/testserver
        - name: hostpath-vol-testdb
          hostPath:
            path: /usr/testhostpath/testdb
Here, the pod name will be test-statefulset-0 (note that object names must be lowercase RFC 1123 names).
If you are using kind: Deployment it won't be possible; ideally, in this scenario you can use kind: StatefulSet.
Instead of pod-to-pod communication, you can use a Kubernetes Service for communication.
Still, a StatefulSet manages the pod names in sequence:
statefulsetname-0
statefulsetname-1
statefulsetname-2
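As an illustration (not in the original answers): combined with the headless service above, each StatefulSet pod also gets a stable DNS name, which is usually what you want for addressing a specific pod. A quick check, assuming the default namespace and the names used above:
# pattern: <pod-name>.<headless-service-name>.<namespace>.svc.cluster.local
kubectl run -it --rm dns-check --image=busybox:1.36 --restart=Never -- \
  nslookup test-statefulset-0.test-svc.default.svc.cluster.local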
You can't.
It is a property of the pods of a Deployment that they do not have a stable identity associated with them.
You could have a look at a StatefulSet instead of a Deployment if you want the pods to have such an identity.
From the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.
So, if you have a Statefulset object named myapp with two replicas, the pods will be named as myapp-0 and myapp-1.

Connecting to GKE POD running Postgres with client Postico 2

I want to connect to a Postgres instance that it is in a pod in GKE.
I think a way to achieve this can be with kubectl port forwarding.
Locally I have Docker Desktop, and when I apply the YAML files I am able to connect to the database. The YAMLs I am using in GKE are almost identical.
secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: staging
  name: postgres-secrets
type: Opaque
data:
  MYAPPAPI_DATABASE_NAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_USERNAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_PASSWORD: XXXENCODEDXXX
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: staging
  name: db-data-pv
  labels:
    type: local
spec:
  storageClassName: generic
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/var/lib/postgresql/data"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: staging
  name: db-data-pvc
spec:
  storageClassName: generic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgres-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-db
  template:
    metadata:
      labels:
        app: postgres-db
    spec:
      containers:
        - name: postgres-db
          image: postgres:12.4
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-db
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_USERNAME
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_NAME
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_PASSWORD
      volumes:
        - name: postgres-db
          persistentVolumeClaim:
            claimName: db-data-pvc
svc.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgresdb-service
spec:
  type: ClusterIP
  selector:
    app: postgres-db
  ports:
    - port: 5432
and it seems that everything is working.
Then I execute kubectl port-forward postgres-db-podname 5433:5432 -n staging, and when I try to connect it throws:
FATAL: role "myappuserdb" does not exist
UPDATE 1
This is from GKE YAML
spec:
  containers:
    - env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_NAME
              name: postgres-secrets
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_USERNAME
              name: postgres-secrets
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_PASSWORD
              name: postgres-secrets
UPDATE 2
I will explain what happened and how I solved it.
The first time I applied the files (kubectl apply -f k8s/), the POSTGRES_USER environment variable in the deployment referenced the wrong secret key, MYAPPAPI_DATABASE_NAME, when it should have referenced MYAPPAPI_DATABASE_USERNAME.
After that first time, every time I ran kubectl delete -f k8s/ the resources were deleted. However, when I created the resources again, the data created in the previous step was not cleaned up.
I deleted the cluster, created a new one, and everything worked. I need to check if there is a way to clean the data in a Kubernetes volume.
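A note on cleaning the data (my own observation, not part of the original update): the PersistentVolume above is hostPath-backed, so deleting the PVC/PV objects does not wipe the PostgreSQL data directory on the node, which is why the stale role survived the delete/apply cycle. A rough cleanup sketch, assuming you can reach the node that hosted the pod:
kubectl delete -f k8s/
# then, on the node that ran the postgres pod:
sudo rm -rf /var/lib/postgresql/data/*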
In your deployment's env spec you have assigned the wrong value for POSTGRES_USER: you assigned POSTGRES_USER = MYAPPAPI_DATABASE_NAME, but I think it should be POSTGRES_USER = MYAPPAPI_DATABASE_USERNAME.
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: MYAPPAPI_DATABASE_NAME # <-- this is the value that needs to change
Please try this one:
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: MYAPPAPI_DATABASE_USERNAME
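To double-check which username the secret actually carries before redeploying, you can decode it directly (a quick verification, not part of the original answer):
kubectl get secret postgres-secrets -n staging \
  -o jsonpath='{.data.MYAPPAPI_DATABASE_USERNAME}' | base64 --decode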

How can I get the deployment name from within my container?

I'm trying to set the deployment name as an environment variable using the downward API, but my container keeps crashing without any logging. I'm using busybox to print the environment variables. I've had success using a Pod, but no luck with a Deployment. This is my YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-d
  name: test-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-d
  template:
    metadata:
      labels:
        app: test-d
    spec:
      command:
        - sh
        - "-c"
        - "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
      containers:
        -
          image: busybox
          name: test-d-container
      env:
        -
          name: MY_DEPLOY_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        -
          name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        -
          name: MY_CLUSTER_NAME
          value: production
What am I missing?
Update:
It is clear that my indentation was messed up, thank you for pointing that out, but the main part of my question is still unanswered: how do I get the deployment name from within my container?
You are using the wrong indentation and structure for a Deployment object.
Both the command key and the env key belong under the container entry.
This is the right format:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-d
  name: test-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-d
  template:
    metadata:
      labels:
        app: test-d
    spec:
      containers:
        - image: busybox
          name: test-d-container
          command:
            - sh
            - "-c"
            - "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
          env:
            - name: MY_DEPLOY_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_CLUSTER_NAME
              value: production
Remember that you can validate your Kubernetes manifests using this online validator, or locally using kubeval.
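For example (a quick sketch; assumes kubeval is installed and the manifest is saved as deployment.yaml):
kubeval deployment.yaml
# or rely on kubectl's client-side validation / dry run:
kubectl apply --dry-run=client -f deployment.yaml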
Referring to the main part of the question: you can get the name of the object that created the Pod, but most likely that will be the ReplicaSet, not the Deployment.
The Pod name is normally generated by Kubernetes and you don't know it beforehand; that's why there is a mechanism to get it. But that is not the case for Deployments: you know the name of a Deployment when creating it. I don't think there is a mechanism to get the Deployment name dynamically.
Typically, labels are used in the PodSpec of the Deployment object to add such metadata.
You could also try to parse it, since the pod name (which you do have) always has this format: <deployment-name>-<pod-template-hash>-<random-suffix>.
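A rough sketch of that parsing approach (my own illustration, not from the answer): inside the container the pod name is available as $HOSTNAME, so stripping the last two dash-separated segments leaves the Deployment name:
# e.g. HOSTNAME=test-deploy-5d6b7c9f4d-abcde  ->  test-deploy
DEPLOYMENT_NAME=$(echo "$HOSTNAME" | rev | cut -d- -f3- | rev)
echo "$DEPLOYMENT_NAME"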
I don't see any direct way to get the deployment name from within the container. The workaround that I'm using is with the help of pod labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
spec:
  template:
    metadata:
      labels:
        app: app1-deployment
    spec:
      containers:
        - name: container1
          image: nginx
          env:
            - name: DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app']
          ...
...
I'm using app as the label name, but deployment-name could also be a better naming convention for this.
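A quick way to confirm the workaround once the manifest above is applied (my own check; assumes kubectl can reach the cluster):
kubectl exec deploy/app1-deployment -- printenv DEPLOYMENT_NAME
# expected output: app1-deployment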