Mounting a SharedVolumeClaim spawns two Pods - kubernetes

I want to share files between two containers in Kubernetes. Therefore I created a SharedVolumeClaim, which I plan to mount in both containers. But for a start, I tried to mount it in one container only. Whenever I deploy this, two Pods get created.
$ kubectl get pods | grep frontend
dev-frontend-84ca5d6dd6-8n8lf 0/1 ContainerCreating 0 18h
dev-frontend-b4959fb97-f9mgw 1/1 Running 0 18h
The first Pod is stuck in ContainerCreating because it cannot access the volume (Volume is already attached by pod fresh-namespace/dev-frontend-b4959fb97-f9mgw).
But when I remove the code that mounts the volume into the container and deploy again, only one Pod is created. This also happens if I start with a completely new namespace.
$ kubectl get pods | grep frontend
dev-frontend-587bc7f359-7ozsx 1/1 Running 0 5m
Why does the mount spawn a second Pod when there should only be one?
Here are the relevant parts of the deployment files:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-file-pvc
  labels:
    app: frontend
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-frontend
  labels:
    app: frontend
    environment: dev
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: frontend
      environment: dev
  template:
    metadata:
      labels:
        app: frontend
        environment: dev
    spec:
      volumes:
        - name: shared-files
          persistentVolumeClaim:
            claimName: shared-file-pvc
      containers:
        - name: frontend
          image: some/registry/frontend:version
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 100Mi
              cpu: 250m
            limits:
              memory: 300Mi
              cpu: 750m
          volumeMounts:
            - name: shared-files              # works if I remove
              mountPath: /data/shared-files   # these two lines
---
Can anybody tell me what I am missing here?
Thanks in advance!

If you check the exact names of your deployed Pods, you can see that they are managed by two different ReplicaSets (dev-frontend-84ca5d6dd6 and dev-frontend-b4959fb97):
$ kubectl get pods | grep frontend
dev-frontend-84ca5d6dd6-8n8lf 0/1 ContainerCreating 0 18h
dev-frontend-b4959fb97-f9mgw 1/1 Running 0 18h
Your Deployment uses RollingUpdate as the default deployment strategy. A rolling update waits for new Pods to become Ready BEFORE scaling down the previous Pods.
I assume that first you deployed dev-frontend-b4959fb97-f9mgw (old version of the app) and then you deployed dev-frontend-84ca5d6dd6-8n8lf (new version of the same app).
The newly created Pod (dev-frontend-84ca5d6dd6-8n8lf) is not able to attach the volume (the old Pod still uses it), so it will never reach the Ready state.
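You can confirm this from the events of the stuck Pod, for example:
$ kubectl describe pod dev-frontend-84ca5d6dd6-8n8lf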
Check how many dev-frontend-* ReplicaSets you have in this particular namespace and which one is currently managed by your Deployment:
$ kubectl get rs | grep "dev-frontend"
$ kubectl describe deployment dev-frontend | grep NewReplicaSet
These two commands should give you an idea of which one is new and which one is old.
I am not sure which StorageClass you are using, but you can check it using:
$ kubectl describe pvc shared-file-pvc | grep "StorageClass"
Additionally, check whether the created PV really supports the RWX access mode:
$ kubectl describe pv <PV_name> | grep "Access Modes"
The main concept is that a PersistentVolume and a PersistentVolumeClaim are mapped one-to-one. This is described in the Persistent Volumes - Binding documentation.
However, if your StorageClass is configured correctly, it is possible to have two or more containers using the same PersistentVolumeClaim that is attached to the same storage on the back end.
The most common examples are NFS and CephFS.
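For illustration, a minimal sketch of an NFS-backed PersistentVolume that supports ReadWriteMany (the server address and export path are placeholders, not taken from your environment):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-file-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany          # RWX is supported by NFS
  nfs:
    server: nfs.example.com  # placeholder NFS server
    path: /exports/shared    # placeholder export path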

Related

microk8s-hostpath does not create PV for a claim

I am trying to use the MicroK8s storage addon, but my PVC and Pod are stuck in Pending and I don't know what is wrong. I am also using the "registry" addon, which uses the storage, and that one works without a problem.
FYI:
I have already restarted MicroK8s multiple times and even completely deleted and reinstalled it, but the problem remained.
Yaml files:
# =================== pvc.yaml
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wws-registry-claim
    spec:
      volumeName: registry-pvc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: microk8s-hostpath
# =================== deployment.yaml (just spec section)
spec:
  serviceName: registry
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: registry
  template:
    metadata:
      labels:
        io.kompose.service: registry
    spec:
      containers:
        - image: {{ .Values.image }}
          name: registry-master
          ports:
            - containerPort: 28015
            - containerPort: 29015
            - containerPort: 8080
          resources:
            requests:
              cpu: {{ .Values.request_cpu }}
              memory: {{ .Values.request_memory }}
            limits:
              cpu: {{ .Values.limit_cpu }}
              memory: {{ .Values.limit_memory }}
          volumeMounts:
            - mountPath: /data
              name: rdb-local-data
          env:
            - name: RUN_ENV
              value: 'kubernetes'
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
      volumes:
        - name: rdb-local-data
          persistentVolumeClaim:
            claimName: wws-registry-claim
Cluster info:
$ kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
container-registry registry-claim Bound pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca 20Gi RWX microk8s-hostpath 56m
default wws-registry-claim Pending registry-pvc 0 microk8s-hostpath 23m
$ kubectl get pv -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca 20Gi RWX Delete Bound container-registry/registry-claim microk8s-hostpath 56m
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-9b8997588-vk5vt 1/1 Running 0 57m
hostpath-provisioner-7b9cb5cdb4-wxcp6 1/1 Running 0 57m
metrics-server-v0.2.1-598c8978c-74krr 2/2 Running 0 57m
tiller-deploy-77855d9dcf-4cvsv 1/1 Running 0 46m
$ kubectl -n kube-system logs hostpath-provisioner-7b9cb5cdb4-wxcp6
I0322 12:31:31.231110 1 controller.go:293] Starting provisioner controller 87fc12df-8b0a-11eb-b910-ee8a00c41384!
I0322 12:31:31.231963 1 controller.go:893] scheduleOperation[lock-provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.235618 1 leaderelection.go:154] attempting to acquire leader lease...
I0322 12:31:31.237785 1 leaderelection.go:176] successfully acquired lease to provision for pvc container-registry/registry-claim
I0322 12:31:31.237841 1 controller.go:893] scheduleOperation[provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.239011 1 hostpath-provisioner.go:86] creating backing directory: /var/snap/microk8s/common/default-storage/container-registry-registry-claim-pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca
I0322 12:31:31.239102 1 controller.go:627] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" created
I0322 12:31:31.244798 1 controller.go:644] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" saved
I0322 12:31:31.244813 1 controller.go:680] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" provisioned for claim "container-registry/registry-claim"
I0322 12:31:33.243345 1 leaderelection.go:196] stopped trying to renew lease to provision for pvc container-registry/registry-claim, task succeeded
$ kubectl get sc
NAME PROVISIONER AGE
microk8s-hostpath microk8s.io/hostpath 169m
$ kubectl get sc -o yaml
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"microk8s-hostpath"},"provisioner":"microk8s.io/hostpath"}
    creationTimestamp: "2021-03-22T12:31:25Z"
    name: microk8s-hostpath
    resourceVersion: "2845"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/microk8s-hostpath
    uid: e94b5653-e261-4e1f-b646-e272e0c8c493
  provisioner: microk8s.io/hostpath
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Microk8s inspect:
$ microk8s.inspect
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-flanneld is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
WARNING: Docker is installed.
Add the following lines to /etc/docker/daemon.json:
{
"insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker
Building the report tarball
Report tarball is at /var/snap/microk8s/1671/inspection-report-20210322_143034.tar.gz
I found the problem. Since the hostpath-provisioner takes care of creating the PV, we should not set volumeName in the PVC YAML file. When I removed that field, the provisioner was able to create a PV and bind my PVC to it, and now my pod has started.
Now my PVC is:
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wws-registry-claim
    spec:
      # volumeName: registry-pvc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: microk8s-hostpath
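To confirm the claim is now bound and a PV was provisioned for it, you can check (using the claim name from the manifest above):
$ kubectl get pvc wws-registry-claim
$ kubectl get pv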

assign memory resources to a running pod?

I would like to know how I can assign memory resources to a running pod.
I tried kubectl get po foo-7d7dbb4fcd-82xfr -o yaml > pod.yaml
but when I run the command kubectl apply -f pod.yaml I get:
The Pod "foo-7d7dbb4fcd-82xfr" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
Thanks in advance for your help.
A Pod is the minimal Kubernetes resource, and it does not support the kind of edit you want to make.
I suggest you use a Deployment to run your Pod, since it is a "Pod manager" that gives you a lot of additional features, such as self-healing, liveness/readiness probes, etc.
You can define the resources in your deployment file like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: mendhak/http-https-echo
          resources:
            limits:
              cpu: 15m
              memory: 100Mi
            requests:
              cpu: 15m
              memory: 100Mi
          ports:
            - name: http
              containerPort: 80
As @KoopaKiller mentioned, you can't update the spec.containers[*].resources field; this is mentioned in the Container object spec:
Compute Resources required by this container. Cannot be updated.
Instead, you can deploy your Pods using a Deployment object. In that case, if you change the resource configuration of your Pods, the Deployment controller will roll out updated versions of them.
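For example, assuming the echo Deployment shown above, you could change the limits with kubectl set resources (or kubectl edit) and let the Deployment roll out new Pods:
$ kubectl set resources deployment echo -c=echo --limits=cpu=30m,memory=200Mi
$ kubectl rollout status deployment/echo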

How to mount a persistent volume on a Deployment/Pod using PersistentVolumeClaim?

I am trying to mount a persistent volume on pods (via a deployment).
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - image: ...
          volumeMounts:
            - mountPath: /app/folder
              name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
However, the pod stays in "ContainerCreating" status and the events show the following error message.
Unable to mount volumes for pod "podname": timeout expired waiting for volumes to attach or mount for pod "namespace"/"podname". list of unmounted volumes=[volume]. list of unattached volumes=[volume]
I verified that the persistent volume claim is ok and bound to a persistent volume.
What am I missing here?
When you create a PVC without specifying a PV or a StorageClass in a GKE cluster, it falls back to the default option:
StorageClass: standard
Provisioner: kubernetes.io/gce-pd
Type: pd-standard
Please take a look at the official documentation: Cloud.google.com: Kubernetes engine persistent volumes
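You can check which StorageClass your cluster uses by default, for example:
$ kubectl get storageclass
$ kubectl describe storageclass standard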
There are many circumstances that could produce the error message you encountered.
Since it's unknown how many replicas are in your Deployment, how many nodes you have, and how the Pods were scheduled on those nodes, I tried to reproduce your issue and encountered the same error with the following steps (the GKE cluster was freshly created to rule out any other dependencies that might affect the behavior).
Steps:
Create a PVC
Create a Deployment with replicas > 1
Check the state of pods
Additional links
Create a PVC
Below is an example YAML definition of a PVC, the same as yours:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
After applying the above definition, please check that it was created successfully. You can do that with the commands below:
$ kubectl get pvc volume-claim
$ kubectl get pv
$ kubectl describe pvc volume-claim
$ kubectl get pvc volume-claim -o yaml
Create a Deployment with replicas > 1
Below is an example YAML definition of a Deployment with volumeMounts and replicas > 1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 10 # amount of pods must be > 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu
          command:
            - sleep
            - "infinity"
          volumeMounts:
            - mountPath: /app/folder
              name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume-claim
Apply it and wait for a while.
Check the state of pods
You can check the state of the Pods with the command below:
$ kubectl get pods -o wide
Output of above command:
NAME READY STATUS RESTARTS AGE IP NODE
ubuntu-deployment-2q64z 0/1 ContainerCreating 0 4m27s <none> gke-node-1
ubuntu-deployment-4tjp2 1/1 Running 0 4m27s 10.56.1.14 gke-node-2
ubuntu-deployment-5tn8x 0/1 ContainerCreating 0 4m27s <none> gke-node-1
ubuntu-deployment-5tn9m 0/1 ContainerCreating 0 4m27s <none> gke-node-3
ubuntu-deployment-6vkwf 0/1 ContainerCreating 0 4m27s <none> gke-node-1
ubuntu-deployment-9p45q 1/1 Running 0 4m27s 10.56.1.12 gke-node-2
ubuntu-deployment-lfh7g 0/1 ContainerCreating 0 4m27s <none> gke-node-3
ubuntu-deployment-qxwmq 1/1 Running 0 4m27s 10.56.1.13 gke-node-2
ubuntu-deployment-r7k2k 0/1 ContainerCreating 0 4m27s <none> gke-node-3
ubuntu-deployment-rnr72 0/1 ContainerCreating 0 4m27s <none> gke-node-3
Take a look at the above output:
3 pods are in Running state
7 pods are in ContainerCreating state
All of the Running pods are located on the same gke-node-2
You can get more detailed information on why the Pods are in the ContainerCreating state with:
$ kubectl describe pod NAME_OF_POD_WITH_CC_STATE
The Events section in the output of the above command shows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned default/ubuntu-deployment-2q64z to gke-node-1
Warning FailedAttachVolume 14m attachdetach-controller Multi-Attach error for volume "pvc-7d756147-6434-11ea-a666-42010a9c0058" Volume is already used by pod(s) ubuntu-deployment-qxwmq, ubuntu-deployment-9p45q, ubuntu-deployment-4tjp2
Warning FailedMount 92s (x6 over 12m) kubelet, gke-node-1 Unable to mount volumes for pod "ubuntu-deployment-2q64z_default(9dc28e95-6434-11ea-a666-42010a9c0058)": timeout expired waiting for volumes to attach or mount for pod "default"/"ubuntu-deployment-2q64z". list of unmounted volumes=[volume]. list of unattached volumes=[volume default-token-dnvnj]
The Pod cannot get past the ContainerCreating state because mounting the volume failed: the volume is already used by other Pods on a different node.
ReadWriteOnce: The Volume can be mounted as read-write by a single node.
Additional links
Please take a look at: Cloud.google.com: Access modes of persistent volumes.
There is a detailed answer on the topic of access modes: Stackoverflow.com: Why can you set multiple accessmodes on a persistent volume
As it's unknown what you are trying to achieve, please take a look at the comparison between Deployments and StatefulSets: Cloud.google.com: Persistent Volume: Deployments vs statefulsets.
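If the intention is for every replica to get its own disk rather than share one, a StatefulSet with volumeClaimTemplates is the usual pattern. A minimal sketch under that assumption (names are illustrative):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ubuntu-set            # illustrative name
spec:
  serviceName: ubuntu         # headless Service name (must exist)
  replicas: 3
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu
          command: ["sleep", "infinity"]
          volumeMounts:
            - mountPath: /app/folder
              name: volume
  volumeClaimTemplates:       # one ReadWriteOnce PVC is created per replica
    - metadata:
        name: volume
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Gi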
If you are doing this on a cloud provider, the StorageClass object will create the respective volume for your PersistentVolumeClaim.
If you are trying to do this locally on minikube or in a self-managed Kubernetes cluster, you need to create a StorageClass that will provision the volumes for you, or create the PersistentVolume manually, like in this example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The hostPath field mounts this path from the node the Pod is scheduled on.
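A claim that binds to this PersistentVolume then has to request the same storageClassName; a minimal sketch matching the example above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  storageClassName: manual   # must match the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi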

No way to apply changes to a Deployment bound to a ReadWriteOnce PV in Google Kubernetes Engine?

As a GCE persistent disk does not support ReadWriteMany, I have no way to apply a change to the Deployment without getting stuck in ContainerCreating with FailedAttachVolume.
So here's my setting:
1. PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
2. Service
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  selector:
    app: mysql
3. Deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql/mysql-server
          name: mysql
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /mysql-data
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
These are all fine for creating the PVC, Service and Deployment. The Pod and container started successfully and worked as expected.
However when I tried to apply change by:
kubectl apply -f mysql_deployment.yaml
Firstly, the Pods got stuck: the existing one did not terminate and the new one kept creating forever.
NAME READY STATUS RESTARTS AGE
mysql-nowhash 1/1 Running 0 2d
mysql-newhash 0/2 ContainerCreating 0 15m
Secondly, from the Google Cloud console, on the Pod that was trying to start, I got two crucial error events:
1 of 2 FailedAttachVolume
Multi-Attach error for volume "pvc-<hash>" Volume is already exclusively attached to one node and can't be attached to another FailedAttachVolume
2 of 2 FailedMount
Unable to mount volumes for pod "<pod name and hash>": timeout expired waiting for volumes to attach/mount for pod "default"/"<pod name and hash>". list of unattached/unmounted volumes=[mysql-persistent-storage]
What I immediately think of is the ReadWriteOnce access mode of the GCE PV, because Kubernetes creates the new Pod before terminating the existing one. So under ReadWriteOnce it can never create a new Pod that claims the existing PVC...
Any idea or should I use some other way to perform deployment updates?
I appreciate any contribution and suggestion =)
Remark: my current workaround is to create an interim NFS pod to make it behave like a ReadWriteMany PVC. This works, but it sounds silly... requiring additional storage I/O overhead just to facilitate a deployment update?.. =P
The reason is that with the default strategy type RollingUpdate, Kubernetes waits for the new container to become Ready before shutting down the old one. You can change this behaviour by setting the strategy type to Recreate.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
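A minimal sketch of what that looks like on the mysql Deployment from the question (only the strategy field is added):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  strategy:
    type: Recreate   # old Pod is terminated before the new one is created,
                     # so the ReadWriteOnce disk can be detached and re-attached
  selector:
    matchLabels:
      app: mysql
  template:
    # ...the Pod template from the original Deployment, unchanged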

How to update a container only?

I am working locally with minikube, and every time I make a change to the code, I delete the service (and the deployment) and create a new one.
This operation generates a new IP for each container, so I also need to update my frontend, and I also have to re-insert data into my db container, since I lose all the data every time I delete the service.
It's way too much wasted time to work efficiently.
I would like to know if there is a way to update a container without generating new IPs and without deleting the pod (because I don't want to delete my db container every time I update the backend code)?
It's easy to update an existing Deployment with a new image, without any need to delete it.
Imagine we have a YAML file with the Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
To run this deployment, run the following command:
$ kubectl create -f nginx-deployment.yaml --record
(--record - appends the current command to the annotations of the created or updated resource. This is useful for future reviews, such as investigating which commands were executed in each Deployment revision, and for making a rollback.)
To see the Deployment rollout status, run
$ kubectl rollout status deployment/nginx-deployment
To update the nginx image version, just run the command:
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Or you can edit existing Deployment with the command:
$ kubectl edit deployment/nginx-deployment
To see the status of the Deployment update process, run the command:
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
or
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 36s
Each time you update the Deployment, it updates the Pods by creating a new ReplicaSet, scaling it up to 3 replicas, and scaling the old ReplicaSet down to 0. If you update the Deployment again while the previous update is still in progress, it starts creating a new ReplicaSet immediately, without waiting for the previous update to complete.
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-1180356465 3 3 3 4s
nginx-deployment-2538420311 0 0 0 56s
If you made a typo while editing the Deployment (for example, nginx:1.91), you can roll it back to the previous good version.
First, check the revisions of this deployment:
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create -f nginx-deployment.yaml --record
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.91
Because we recorded the command while creating this Deployment using --record, we can easily see the changes we made in each revision.
To see the details of each revision, run:
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: <none>
No volumes.
Now you can roll back to the previous version using the command:
$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
Or you can roll back to a specific revision:
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back
For more information, please read the part of the Kubernetes documentation related to Deployments.
First of all, in your front-end, use DNS names instead of IP addresses to reach your backend. This will save you from rebuilding your front-end app every time you deploy your backend.
That being said, there is no need to delete your service just to deploy a new version of your backend. In fact, you just need to update your deployment, making it refer to the new docker image you have built using the latest code for your backend.
Finally, as far as I understand, you have both your application and your database inside the same Pod. This is not good practice; you should separate them so that deploying a new version of your code does not cause downtime for your database.
As a sidenote, not sure if this is the case, but if you are using minikube as a development environment you're probably doing it wrong. You should use docker alone with volume binding, but that's out of scope of your question.
Use kops and create a production-like cluster in AWS on the free tier.
In order to fix this, make sure you use a load balancer for your frontends. Create a service for your db container exposing its port so your frontends can reach it, and reference that service in the manifest for your frontends so it stays static. Service discovery will take care of the IP address and your containers will automatically connect to the ports. You can also set up persistent storage for your DBs. When you update your frontend code, use the following to update your containers so nothing else will change.
kubectl set image deployment/helloworld-deployment basicnodeapp=buildmystartup/basicnodeapp:2
Here is how I would do a stateful app in production on AWS, using WordPress as an example.
###############################################################################
#
# Creating a stateful app with persistent storage and front end containers
#
###############################################################################
* Here is how you create a stateful app using volumes and persistent storage for production.
* To start off we can automate the storage volume creation for our mysql server with a storage object and persistent volume claim like so:
$ cat storage.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1b
$ cat pv-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-storage
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
* Let's go ahead and create these so they are ready for our deployment of MySQL
$ kubectl create -f storage.yml
storageclass "standard" created
$ kubectl create -f pv-claim.yml
persistentvolumeclaim "db-storage" created
* Let's also create our secrets file, which will be needed for MySQL and WordPress
$ cat wordpress-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: wordpress-secrets
type: Opaque
data:
  db-password: cGFzc3dvcmQ=
  # random sha1 strings - change all these lines
  authkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OA==
  loggedinkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OQ==
  secureauthkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MQ==
  noncekey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MA==
  authsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mg==
  secureauthsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mw==
  loggedinsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NA==
  noncesalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NQ==
$ kubectl create -f wordpress-secrets.yml
* Take note of the names we assigned. We will need these for the mysql deployment
* We created the storage in us-east-1b, so let's set a node label for our node in that AZ so our deployment is scheduled onto that node and can attach our volume.
$ kubectl label nodes ip-172-20-48-74.ec2.internal storage=mysql
node "ip-172-20-48-74.ec2.internal" labeled
* Here is our mysql pod definition. Notice at the bottom we use a nodeSelector
* We will need to use that same one for our deployment so it can reach us-east-1b
$ cat wordpress-db.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-db
spec:
  replicas: 1
  selector:
    app: wordpress-db
  template:
    metadata:
      name: wordpress-db
      labels:
        app: wordpress-db
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          args:
            - "--ignore-db-dir=lost+found"
          ports:
            - name: mysql-port
              containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: db-password
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: mysql-storage
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: db-storage
      nodeSelector:
        storage: mysql
* Before we go on to the deployment, let's expose a service on port 3306 so WordPress can connect.
$ cat wordpress-db-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-db
spec:
  ports:
    - port: 3306
      protocol: TCP
  selector:
    app: wordpress-db
  type: NodePort
$ kubectl create -f wordpress-db-service.yml
service "wordpress-db" created
* Now let's work on the deployment. We are going to use EFS to store all our pictures and blog posts, so let's create that in us-east-1b as well
* So first let's create our EFS NFS share
$ aws efs create-file-system --creation-token 1
{
    "NumberOfMountTargets": 0,
    "SizeInBytes": {
        "Value": 0
    },
    "CreationTime": 1501863105.0,
    "OwnerId": "812532545097",
    "FileSystemId": "fs-55ed701c",
    "LifeCycleState": "creating",
    "CreationToken": "1",
    "PerformanceMode": "generalPurpose"
}
$ aws efs create-mount-target --file-system-id fs-55ed701c --subnet-id subnet-7405f010 --security-groups sg-ffafb98e
{
    "OwnerId": "812532545097",
    "MountTargetId": "fsmt-a2f492eb",
    "IpAddress": "172.20.53.4",
    "LifeCycleState": "creating",
    "NetworkInterfaceId": "eni-cac952dd",
    "FileSystemId": "fs-55ed701c",
    "SubnetId": "subnet-7405f010"
}
* Before we launch the deployment, let's make sure our MySQL server is up and connected to the volume we created
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
db-storage Bound pvc-82c889c3-7929-11e7-8ae1-02fa50f1a61c 8Gi RWO standard 51m
* OK, the Bound status means our container is connected to the volume.
* Now let's launch the WordPress frontend with two replicas.
$ cat wordpress-web.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:4-php7.0
          # uncomment to fix perm issue, see also https://github.com/kubernetes/kubernetes/issues/2630
          # command: ['bash', '-c', 'chown', 'www-data:www-data', '/var/www/html/wp-content/upload', '&&', 'apache2', '-DFOREGROUND']
          ports:
            - name: http-port
              containerPort: 80
          env:
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: db-password
            - name: WORDPRESS_AUTH_KEY
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: authkey
            - name: WORDPRESS_LOGGED_IN_KEY
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: loggedinkey
            - name: WORDPRESS_SECURE_AUTH_KEY
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: secureauthkey
            - name: WORDPRESS_NONCE_KEY
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: noncekey
            - name: WORDPRESS_AUTH_SALT
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: authsalt
            - name: WORDPRESS_SECURE_AUTH_SALT
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: secureauthsalt
            - name: WORDPRESS_LOGGED_IN_SALT
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: loggedinsalt
            - name: WORDPRESS_NONCE_SALT
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: noncesalt
            - name: WORDPRESS_DB_HOST
              value: wordpress-db
          volumeMounts:
            - mountPath: /var/www/html/wp-content/uploads
              name: uploads
      volumes:
        - name: uploads
          nfs:
            server: us-east-1b.fs-55ed701c.efs.us-east-1.amazonaws.com
            path: /
* Notice we put together a string for the NFS share.
* AZ.fs-id.Region.amazonaws.com
* Now let's create our deployment.
$ kubectl create -f wordpress-web.yml
$ cat wordpress-web-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
    - port: 80
      targetPort: http-port
      protocol: TCP
  selector:
    app: wordpress
  type: LoadBalancer
* And now the load balancer for our two nodes
$ kubectl create -f wordpress-web-service.yml
* Now let's find our ELB and create a Route 53 DNS name for it.
$ kubectl get services
$ kubectl describe service wordpress
Name: wordpress
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=wordpress
Type: LoadBalancer
IP: 100.70.74.90
LoadBalancer Ingress: acf99336a792b11e78ae102fa50f1a61-516654231.us-east-1.elb.amazonaws.com
Port: <unset> 80/TCP
NodePort: <unset> 30601/TCP
Endpoints: 100.124.209.16:80,100.94.7.215:80
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
38m 38m 1 service-controller Normal CreatingLoadBalancer Creating load balancer
38m 38m 1 service-controller Normal CreatedLoadBalancer Created load balancer
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
wordpress-deployment 2 2 2 2 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sysdig-agent-4sxv2 1/1 Running 0 3d
sysdig-agent-nb2wk 1/1 Running 0 3d
sysdig-agent-z42zj 1/1 Running 0 3d
wordpress-db-79z87 1/1 Running 0 54m
wordpress-deployment-2971992143-c8gg4 0/1 ContainerCreating 0 1m
wordpress-deployment-2971992143-h36v1 1/1 Running 0 1m
I think you actually need to solve 2 issues:
Do not restart the service. Restart only your pod. Then the service won't change its IP address.
The database and your application don't need to be two containers in the same pod; they should be two separate pods. Then you need another service to expose the database to your application.
So the final solution should be like this:
database pod - runs once, never restarted during development.
database service - created once, never restarted.
application pod - this is the only thing you will reload when the application code changes. It needs to access the database, so in your application you write literally "database-service:3306" or something like this (see the sketch below). "database-service" here is the name of the service you created in (2).
application service - created once, never restarted. You access the application from outside of minikube by using IP address of this service.
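For completeness, a minimal sketch of what the database service in (2) could look like; the names and port are illustrative and should match your database pod's labels and the port your application expects:
apiVersion: v1
kind: Service
metadata:
  name: database-service      # the DNS name your application uses
spec:
  selector:
    app: database             # must match the labels on the database pod
  ports:
    - protocol: TCP
      port: 3306              # port your application connects to
      targetPort: 3306        # port the database container listens on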