Kubernetes: Pod is not created after applying deployment

I have a problem with Kubernetes on my local machine. I want to create a pod with a database, so I prepared a deployment file with a service.
apiVersion: v1
kind: Service
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
  selector:
    app: bid-service-db
    tier: database
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  selector:
    matchLabels:
      app: bid-service-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bid-service-db
        tier: database
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: mydb
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: postgres
          image: postgres:13
          imagePullPolicy: Never
          name: bid-service-db
          ports:
            - containerPort: 5432
              name: bid-service-db
          resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: postgres-persistance-storage
          persistentVolumeClaim:
            claimName: bid-service-db-volume
status: {}
I am applying this file with k apply -f bid-db-deployment.yaml. k get all shows that only the Service was created; no Pod was started. What can I do in this case? How do I troubleshoot it?

If you didn't get any errors on the apply, you can get the failure reason with:
kubectl describe deployment/DEPLOYMENT_NAME
Also, you can take only the Deployment part, put it in a separate YAML file, and see if you get errors.
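If the apply succeeds but no Pod shows up, the Deployment's ReplicaSet events and the cluster events usually name the blocker (for example a missing PersistentVolumeClaim, or an image that is not present locally while imagePullPolicy is Never). A minimal checklist, assuming the names from the manifest above:
kubectl get deployment bid-service-db                        # does it exist, and is READY 0/1?
kubectl describe deployment bid-service-db                   # conditions and events on the Deployment
kubectl get replicaset -l app=bid-service-db                 # did the Deployment create a ReplicaSet?
kubectl describe replicaset -l app=bid-service-db            # its events often say why no Pod was created
kubectl get events --sort-by=.metadata.creationTimestamp     # cluster-wide events, newest last
kubectl get pvc bid-service-db-volume                        # the Pod spec references this claim; it needs to exist and be Bound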

Since restarting the cluster worked for you, a good idea next time is to check the logs of the kube-apiserver and kube-controller-manager pods with:
kubectl logs -n kube-system <kube-apiserver/controller-manager_pod_name>
To list your Deployments in all namespaces you can use the command:
kubectl get deployments -A

Related

mongodb microservice k8 persistent volume claim not persisting data

I have several microservices, each one with its own mongodb deployment. I would like to start with getting my auth service working with a persistent volume. I have watched courses where postgresql is used and read a lot in the kubernetes docs but am having trouble getting this to work for mongodb.
When I run skaffold dev the PVC is created with no errors. kubectl shows the PVC is in Bound status, and running describe on the PVC shows my mongo deployment as the user.
However, when I visit my client service in the browser, I can sign up, log out, and sign in again with no problem. But if I restart skaffold, so that it deletes and recreates the containers, my data is gone and I have to sign up again.
Here are my files:
auth-mongo-depl.yaml
# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
auth-depl.yaml
# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: isimmons33/ticketing-auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-ip-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-ip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
api/users portion of my ingress-srv.yaml
- path: /api/users/?(.*)
  pathType: Prefix
  backend:
    service:
      name: auth-ip-srv
      port:
        number: 3000
My client fires off a POST request to /api/users/auth, with which I can successfully sign up or sign in as long as I don't restart skaffold.
I even used kubectl to get a shell into my mongo deployment and queried to see that the new user account is there, as it should be. But of course it is gone after restarting skaffold.
I am on Windows 10 but am running everything through WSL2 (Ubuntu)
Thanks for any help
It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, it is not guaranteed that the new pod will be attached to the same PV, hence you lose the data.
Have a look at this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
The solution as pointed out by raghu_manne was to use StatefulSets. But because the link posted is extremely old, here is the full solution that worked for me.
Also, here is a YouTube video I just found that explains StatefulSets and volumeClaimTemplates quite well.
How to run MongoDB with StatefulSets in Kubernetes
auth-mongo-depl.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  serviceName: auth-mongo
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
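To double-check that the data now survives a restart, one quick test (a sketch, assuming the StatefulSet above is deployed and gets the default pod/PVC names):
kubectl get pvc                                  # volumeClaimTemplates should have created auth-mongo-data-auth-mongo-depl-0
kubectl delete pod auth-mongo-depl-0             # the StatefulSet recreates the pod and reattaches the same claim
kubectl exec -it auth-mongo-depl-0 -- mongosh    # or `mongo` on older images; the data under /data/db should still be there
Claims created from volumeClaimTemplates are made by the StatefulSet controller rather than by skaffold's manifests, and deleting the StatefulSet does not delete them by default, which is why the data can outlive a skaffold restart.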

Cannot Connect Kubernetes Secrets to Kubernetes Deployment (Values Are Empty)

I have a Golang microservice application with the following Kubernetes manifest configuration...
apiVersion: v1 # Service for accessing store application (this) from Ingress...
kind: Service
metadata:
  name: store-internal-service
  namespace: store-namespace
spec:
  type: ClusterIP
  selector:
    app: store-internal-service
  ports:
    - name: http
      port: 8000
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-application-service
  namespace: store-namespace
  labels:
    app: store-application-service
spec:
  selector:
    matchLabels:
      app: store-internal-service
  template:
    metadata:
      labels:
        app: store-internal-service
    spec:
      containers:
        - name: store-application
          image: <image>
          envFrom:
            - secretRef:
                name: project-secret-store
          ports:
            - containerPort: 8000
              protocol: TCP
          imagePullPolicy: Always
          env:
            - name: APPLICATION_PORT
              value: "8000"
            - name: APPLICATION_HOST
              value: "localhost"
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Secret
metadata:
  name: project-secret-store
  namespace: store-namespace
type: Opaque
stringData:
  # Prometheus Server Credentials...
  PROMETHEUS_HOST: "prometheus-internal-service"
  PROMETHEUS_PORT: "9090"
  # POSTGRESQL CONFIGURATION.
  DATABASE_HOST: "postgres-internal-service"
  DATABASE_PORT: "5432"
  DATABASE_USER: "postgres_user"
  DATABASE_PASSWORD: "postgres_password"
  DATABASE_NAME: "store_db"
Also, for test purposes, I've specified the following variables in order to receive the values from the Secret in my application:
var (
    POSTGRES_USER     = os.Getenv("DATABASE_USER")
    POSTGRES_PASSWORD = os.Getenv("DATABASE_PASSWORD")
    POSTGRES_DATABASE = os.Getenv("DATABASE_NAME")
    POSTGRES_HOST     = os.Getenv("DATABASE_HOST")
    POSTGRES_PORT     = os.Getenv("DATABASE_PORT")
)
The problem is that when I run my application and, after some time, check its logs using kubectl logs <my-application-pod-name> --namespace=store-namespace, it turns out that all of these Go variables are empty, despite the fact that they have all been declared in the Secret...
There are probably other issues that could cause this problem, but if there are any errors in the configuration to point out, please share your thoughts about it :)
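A few checks that usually narrow this kind of problem down (a sketch, using the names from the manifests above; <my-application-pod-name> stands for the pod as in the question):
kubectl get secret project-secret-store -n store-namespace -o yaml                  # does the Secret exist and contain the expected keys?
kubectl describe pod <my-application-pod-name> -n store-namespace                   # look for CreateContainerConfigError or events about a missing Secret
kubectl exec <my-application-pod-name> -n store-namespace -- env | grep DATABASE_   # are the variables injected at all? (needs an env binary in the image)
kubectl rollout restart deployment/store-application-service -n store-namespace     # envFrom is resolved at container start, so pods created before the Secret existed must be recreated
If env shows the variables inside the container, package-level os.Getenv calls will see them too, since the kubelet sets them before the process starts; the more common culprit is that the Secret was applied (or changed) after the pods were already running.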

How to access mysql pod in another pod(busybox)?

I was given the task of connecting a MySQL pod with any other working pod (preferably busybox), but was not able to do it. Is there a way to do this? I looked in many places, but the explanations were a bit complicated, as I am new to Kubernetes.
MySQL YAML config for Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
You can use the service name to connect to MySQL from the busybox container:
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
The above command will start one busybox container.
Run kubectl get pods to check the status of both pods.
In the busybox container you will be able to run the command to connect to MySQL:
mysql -h <MySQL service name> -u <Username> -p<password>
Ref doc: MySQL: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
Busybox: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
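Note that the stock busybox image does not ship a mysql client binary, so from busybox you can only test name resolution and reachability with the applets it does include (nslookup and telnet are typically present); for an actual SQL session it may be easier to run a throwaway pod from the mysql image itself. A sketch, using the Service name and password from the manifest above:
kubectl run -it --rm bb --image=busybox --restart=Never -- sh
# inside the busybox shell:
nslookup mysql        # the headless Service name should resolve
telnet mysql 3306     # any response/banner means the port is reachable (Ctrl+C to exit)
# or connect with a real client from the mysql image:
kubectl run -it --rm mysql-client --image=mysql:5.6 --restart=Never -- mysql -h mysql -ppassword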

unable to mount a specific directory from couchdb pod kubernetes

Hi, I am trying to mount a directory in the pod where CouchDB is running. The directory is /opt/couchdb/data, and for mounting it in Kubernetes I am using this config for the deployment.
apiVersion: v1
kind: Service
metadata:
  name: couchdb0-peer0org1
spec:
  ports:
    - port: 5984
      targetPort: 5984
  type: NodePort
  selector:
    app: couchdb0-peer0org1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchdb0-peer0org1
spec:
  selector:
    matchLabels:
      app: couchdb0-peer0org1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: couchdb0-peer0org1
    spec:
      containers:
        - image: hyperledger/fabric-couchdb
          imagePullPolicy: IfNotPresent
          name: couchdb0
          env:
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: admin
          ports:
            - containerPort: 5984
              name: couchdb0
          volumeMounts:
            - name: datacouchdbpeer0org1
              mountPath: /opt/couchdb/data
              subPath: couchdb0
      volumes:
        - name: datacouchdbpeer0org1
          persistentVolumeClaim:
            claimName: worker1-incoming-volumeclaim
After applying these deployments I always get this result for the pods:
couchdb0-peer0org1-b89b984cf-7gjfq 0/1 CrashLoopBackOff 1 9s
couchdb0-peer0org2-86f558f6bb-jzrwf 0/1 CrashLoopBackOff 1 9s
The strange thing is that if I change the mounted directory from /opt/couchdb/data to /var/lib/couchdb, then it works fine. But the issue is that I have to store the data for the CouchDB database in a stateful manner.
Edit your /etc/exports with the following content:
"path/exported/directory *(rw,sync,no_subtree_check,no_root_squash)"
and then restart the NFS server:
sudo /etc/init.d/nfs-kernel-server restart
When no_root_squash is used, remote root users are able to change any file on the shared file system. This is a quick solution, but it has some security concerns.
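If worker1-incoming-volumeclaim is supposed to be backed by that NFS export, a matching PersistentVolume/PersistentVolumeClaim pair could look roughly like this (a sketch; the PV name, server address, path and size are placeholders, not values from the question):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: worker1-incoming-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.10               # placeholder NFS server address
    path: /path/exported/directory  # must match the directory in /etc/exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: worker1-incoming-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Whether CouchDB can actually write to /opt/couchdb/data then also depends on the ownership and permissions of the exported directory, which is where no_root_squash comes in.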

Gitlab CI - K8s - Deployment

Just going through this guide on GitLab and K8s (gitlab-k8s-cd), but my build keeps failing on this part:
- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<my_username> --docker-password=$REGISTRY_PASSWD --docker-email=<my_email>
Although I am not entirely sure what password is needed for --docker-password, I have created an API token in GitLab for my user and I am using that in the secure variables.
This is the error:
$ gcloud container clusters get-credentials deployment
Fetching cluster endpoint and auth data.
kubeconfig entry generated for deployment.
$ kubectl delete secret registry.gitlab.com
Error from server: secrets "registry.gitlab.com" not found
ERROR: Build failed: exit code 1
Any help would be much appreciated thanks.
EDIT
Since the initial post: removing the initial kubectl delete secret and re-building made it work, so it was failing on the delete when there was no previous secret to delete.
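One way to make that step idempotent, so the job does not fail when the secret does not exist yet (a sketch using standard kubectl flags):
# either: ignore a missing secret instead of failing
kubectl delete secret registry.gitlab.com --ignore-not-found
# or: render the secret and apply it, which works for both create and update
kubectl create secret docker-registry registry.gitlab.com \
  --docker-server=https://registry.gitlab.com \
  --docker-username=<my_username> \
  --docker-password=$REGISTRY_PASSWD \
  --docker-email=<my_email> \
  --dry-run=client -o yaml | kubectl apply -f -
# (--dry-run=client is the spelling on recent kubectl; older releases use plain --dry-run)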
Second Edit
I'm having problems with my deployment.yml for K8s; could anyone shed any light on why I am getting this error:
error validating "deployment.yml": error validating data: field spec.template.spec.containers[0].ports[0]: expected object of type map[string]interface{},
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
        - name: deployment
          image: registry.gitlab.com/<username>/<app>
          imagePullPolicy: Always
          ports:
            - "80:8080"
          env:
            - name: PORT
              value: "8080"
      imagePullSecrets:
        - name: registry.gitlab.com
And this error:
error validating "deployment.yml": error validating data: found invalid field imagePullSecrets for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
        - name: <app>
          image: registry.gitlab.com/<project>/<app>
          imagePullPolicy: Always
          ports:
            - "80:8080"
          env:
            - name: PORT
              value: "8080"
      imagePullSecrets:
        - name: registry.gitlab.com
Latest YAML
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
    - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
        - image: registry.gitlab.com/<project>/<app>
          imagePullPolicy: Always
          name: <app_name>
          env:
            - name: PORT
              value: "8080"
          imagePullSecrets:
            - name: registry.gitlab.com
          ports:
            - containerPort: 8080
              hostPort: 80
Regarding your first error:
Ports are defined differently in Kubernetes than in Docker or Docker Compose. This is what the port specification should look like:
ports:
  - containerPort: 8080
    hostPort: 80
See the reference for more information.
Regarding your second error:
According to the reference on PodSpecs, the imagePullSecrets property is correctly placed in your example. However, from reading the error message, it seems that you actually included the imagePullSecrets property in the ContainerSpec, not the PodSpec.
The YAML in your question seems to be correct, in this case. Make sure that your actual manifest matches the example from your question and that you did not accidentally indent the imagePullSecrets property more than necessary.
This is the working YAML file for K8s:
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
    - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
        - image: registry.gitlab.com/<project>/<app>:latest
          imagePullPolicy: Always
          name: <app_name>
          env:
            - name: PORT
              value: "8080"
          ports:
            - containerPort: 8080
              hostPort: 80
      imagePullSecrets:
        - name: registry.gitlab.com
This is the working gitlab-ci file as well:
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
stages:
  - package
  - deploy
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/<project>/<app> .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/<project>/<app>
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone <zone>
    - gcloud config set project <project>
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials <container-name>
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<username> --docker-password=$REGISTRY_PASSWD --docker-email=<user-email>
    - kubectl apply -f deployment.yml
Just need to work out how to alter the script to allow for rolling back.
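For rolling back, kubectl's built-in rollout commands are usually enough; a sketch of the kind of steps that could be appended to the deploy script (same <app_name> placeholder as above):
kubectl rollout status deployment/<app_name>    # waits for the new pods; fails if the Deployment reports it cannot make progress
kubectl rollout undo deployment/<app_name>      # reverts to the previous revision if the deploy went bad
kubectl rollout history deployment/<app_name>   # lists revisions; add --revision=N for the details of one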
