How to update a container only? - kubernetes

I am working locally with minikube, and every time I make a change to the code, I delete the service (and the deployment) and create a new one.
This operation generates a new IP for each container, so I also need to update my frontend, and also to insert new data into my db container, since I lose all my data every time I delete the service.
It's way too much wasted time to work efficiently.
I would like to know if there is a way to update a container without generating new IPs, and without deleting the pod (because I don't want to delete my db container every time I update the backend code).

It's easy to update an existing Deployment with a new image, without needing to delete it.
Imagine we have a YAML file with the Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
To run this deployment, run the following command:
$ kubectl create -f nginx-deployment.yaml --record
(--record - appends the current command to the annotations of the created or updated resource. This is useful for future reviews, such as investigating which commands were executed in each Deployment revision, and for making a rollback.)
To see the Deployment rollout status, run
$ kubectl rollout status deployment/nginx-deployment
To update nginx image version, just run the command:
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Or you can edit existing Deployment with the command:
$ kubectl edit deployment/nginx-deployment
To see the status of the Deployment update process, run the command:
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
or
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           36s
Each time you update the Deployment, it updates the Pods by creating a new ReplicaSet, scaling it up to 3 replicas, and scaling the old ReplicaSet down to 0. If you update the Deployment again while the previous update is still in progress, it starts creating a new ReplicaSet immediately, without waiting for the previous update to complete.
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1180356465   3         3         3       4s
nginx-deployment-2538420311   0         0         0       56s
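If you want finer control over how the old ReplicaSet is drained, you can tune the rolling update strategy in the Deployment's spec. A minimal sketch, added under spec of the same nginx Deployment (the maxSurge/maxUnavailable values here are just illustrative):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra Pod above the desired count during the update
      maxUnavailable: 0  # never take a Pod down before its replacement is Ready
With these values the update proceeds one Pod at a time and the Deployment never drops below 3 available replicas.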
If you made a typo while editing the Deployment (for example, nginx:1.91), you can roll it back to the previous good version.
First, check the revisions of this deployment:
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create -f nginx-deployment.yaml --record
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.91
Because we recorded the command while creating this Deployment using --record, we can easily see the changes we made in each revision.
To see the details of each revision, run:
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: <none>
No volumes.
Now you can roll back to the previous version using the command:
$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
Or you can roll back to a specific revision:
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back
For more information, please read the part of the Kubernetes documentation related to Deployments.
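Note that besides kubectl set image and kubectl edit, you can also change the image tag directly in the YAML file and re-apply it; a sketch, assuming the same nginx-deployment.yaml as above (if the Deployment was created with kubectl create, the first apply may warn about a missing last-applied-configuration annotation, but it still rolls out the new image):
# after changing image: nginx:1.7.9 to image: nginx:1.9.1 in the file
$ kubectl apply -f nginx-deployment.yaml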

First of all, in your front-end, use DNS names instead of IP addresses to reach your backend. This will save you from rebuilding your front-end app every time you deploy your backend.
That being said, there is no need to delete your service just to deploy a new version of your backend. In fact, you just need to update your deployment, making it refer to the new docker image you have built using the latest code for your backend.
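For example, if your backend is exposed through a Service (here called backend, a made-up name for illustration), the frontend can reach it by that name rather than by Pod IP; a minimal sketch:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
From any Pod in the same namespace the backend is then reachable as http://backend:8080 (or fully qualified, http://backend.default.svc.cluster.local:8080), no matter how often the backing Pods are replaced.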
Finally, as far as I understand, you have both your application and your database inside the same Pod. This is not a good practice; you should separate them, so that deploying a new version of your code does not cause downtime for your database.
As a side note, not sure if this is the case, but if you are using minikube as a development environment you're probably doing it wrong. You should use Docker alone with volume binding, but that's out of scope of your question.
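If you do go the plain Docker route for local development, volume binding looks roughly like this (image name, port and paths are hypothetical):
$ docker run --rm -p 8080:8080 -v "$(pwd)":/app my-backend:dev
Changes to the code in the current directory are then visible inside the container immediately, without rebuilding the image.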

Use kops and create a production-like cluster in AWS on the free tier.
In order to fix this, make sure you use a load balancer for your frontends. Create a service for your db container exposing its port so your frontends can reach it, and reference that service name in the manifest for your frontends so it stays static. Service discovery will take care of the IP address, and your containers will automatically connect to the ports. You can also set up persistent storage for your DBs. When you update your frontend code, use the following to update your containers so nothing else will change:
kubectl set image deployment/helloworld-deployment basicnodeapp=buildmystartup/basicnodeapp:2
Here is how I would do a stateful app in production on AWS, using WordPress as an example.
###############################################################################
#
# Creating a stateful app with persistent storage and front end containers
#
###############################################################################
* Here is how you create a stateful app using volumes and persistent storage for production.
* To start off, we can automate the storage volume creation for our mysql server with a StorageClass object and a persistent volume claim like so:
$ cat storage.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1b
$ cat pv-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-storage
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
* Let's go ahead and create these so they are ready for our deployment of mysql
$ kubectl create -f storage.yml
storageclass "standard" created
$ kubectl create -f pv-claim.yml
persistentvolumeclaim "db-storage" created
* Let's also create our secrets file that will be needed for mysql and wordpress
$ cat wordpress-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: wordpress-secrets
type: Opaque
data:
  db-password: cGFzc3dvcmQ=
  # random sha1 strings - change all these lines
  authkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OA==
  loggedinkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OQ==
  secureauthkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MQ==
  noncekey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MA==
  authsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mg==
  secureauthsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mw==
  loggedinsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NA==
  noncesalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NQ==
$ kubectl create -f wordpress-secrets.yml
* Take note of the names we assigned. We will need these for the mysql deployment.
* We created the storage in us-east-1b, so let's set a node label on our node in that AZ so our deployment is scheduled onto that node and can attach our volume.
$ kubectl label nodes ip-172-20-48-74.ec2.internal storage=mysql
node "ip-172-20-48-74.ec2.internal" labeled
* Here is our mysql pod definition. Notice at the bottom we use a nodeSelector
* We will need to use that same one for our deployment so it can reach us-east-1b
$ cat wordpress-db.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-db
spec:
  replicas: 1
  selector:
    app: wordpress-db
  template:
    metadata:
      name: wordpress-db
      labels:
        app: wordpress-db
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        args:
          - "--ignore-db-dir=lost+found"
        ports:
          - name: mysql-port
            containerPort: 3306
        env:
          - name: MYSQL_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: db-password
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: mysql-storage
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: db-storage
      nodeSelector:
        storage: mysql
* Before we go on to the deployment, let's expose a service on port 3306 so wordpress can connect.
$ cat wordpress-db-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-db
spec:
  ports:
    - port: 3306
      protocol: TCP
  selector:
    app: wordpress-db
  type: NodePort
$ kubectl create -f wordpress-db-service.yml
service "wordpress-db" created
* Now let's work on the deployment. We are going to use EFS to store all our pictures and blog posts, so let's create that in us-east-1b also.
* So first let's create our EFS NFS share
$ aws efs create-file-system --creation-token 1
{
    "NumberOfMountTargets": 0,
    "SizeInBytes": {
        "Value": 0
    },
    "CreationTime": 1501863105.0,
    "OwnerId": "812532545097",
    "FileSystemId": "fs-55ed701c",
    "LifeCycleState": "creating",
    "CreationToken": "1",
    "PerformanceMode": "generalPurpose"
}
$ aws efs create-mount-target --file-system-id fs-55ed701c --subnet-id subnet-7405f010 --security-groups sg-ffafb98e
{
    "OwnerId": "812532545097",
    "MountTargetId": "fsmt-a2f492eb",
    "IpAddress": "172.20.53.4",
    "LifeCycleState": "creating",
    "NetworkInterfaceId": "eni-cac952dd",
    "FileSystemId": "fs-55ed701c",
    "SubnetId": "subnet-7405f010"
}
* Before we launch the deployment, let's make sure our mysql server is up and connected to the volume we created
$ kubectl get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
db-storage   Bound     pvc-82c889c3-7929-11e7-8ae1-02fa50f1a61c   8Gi        RWO           standard       51m
* OK, the Bound status means our container is connected to the volume.
* Now let's launch the wordpress frontend with two replicas.
$ cat wordpress-web.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4-php7.0
        # uncomment to fix perm issue, see also https://github.com/kubernetes/kubernetes/issues/2630
        # command: ['bash', '-c', 'chown www-data:www-data /var/www/html/wp-content/uploads && apache2 -DFOREGROUND']
        ports:
        - name: http-port
          containerPort: 80
        env:
          - name: WORDPRESS_DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: db-password
          - name: WORDPRESS_AUTH_KEY
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: authkey
          - name: WORDPRESS_LOGGED_IN_KEY
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: loggedinkey
          - name: WORDPRESS_SECURE_AUTH_KEY
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: secureauthkey
          - name: WORDPRESS_NONCE_KEY
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: noncekey
          - name: WORDPRESS_AUTH_SALT
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: authsalt
          - name: WORDPRESS_SECURE_AUTH_SALT
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: secureauthsalt
          - name: WORDPRESS_LOGGED_IN_SALT
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: loggedinsalt
          - name: WORDPRESS_NONCE_SALT
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: noncesalt
          - name: WORDPRESS_DB_HOST
            value: wordpress-db
        volumeMounts:
        - mountPath: /var/www/html/wp-content/uploads
          name: uploads
      volumes:
      - name: uploads
        nfs:
          server: us-east-1b.fs-55ed701c.efs.us-east-1.amazonaws.com
          path: /
* Notice we put together a string for the NFS share.
* AZ.fs-id.Region.amazonaws.com
* Now let's create our deployment.
$ kubectl create -f wordpress-web.yml
$ cat wordpress-web-service.yml
apiVersion: v1
kind: Service
metadata:
name: wordpress
spec:
ports:
- port: 80
targetPort: http-port
protocol: TCP
selector:
app: wordpress
type: LoadBalancer
* And now the load balancer for our two nodes
$ kubectl create -f wordpress-web-service.yml
* Now let's find our ELB and create a Route53 DNS name for it.
$ kubectl get services
$ kubectl describe service wordpress
Name:                   wordpress
Namespace:              default
Labels:                 <none>
Annotations:            <none>
Selector:               app=wordpress
Type:                   LoadBalancer
IP:                     100.70.74.90
LoadBalancer Ingress:   acf99336a792b11e78ae102fa50f1a61-516654231.us-east-1.elb.amazonaws.com
Port:                   <unset> 80/TCP
NodePort:               <unset> 30601/TCP
Endpoints:              100.124.209.16:80,100.94.7.215:80
Session Affinity:       None
Events:
  FirstSeen   LastSeen   Count   From                 SubObjectPath   Type     Reason                 Message
  ---------   --------   -----   ----                 -------------   ------   ------                 -------
  38m         38m        1       service-controller                   Normal   CreatingLoadBalancer   Creating load balancer
  38m         38m        1       service-controller                   Normal   CreatedLoadBalancer    Created load balancer
$ kubectl get deployments
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
wordpress-deployment   2         2         2            2           2m
$ kubectl get pods
NAME                                    READY     STATUS              RESTARTS   AGE
sysdig-agent-4sxv2                      1/1       Running             0          3d
sysdig-agent-nb2wk                      1/1       Running             0          3d
sysdig-agent-z42zj                      1/1       Running             0          3d
wordpress-db-79z87                      1/1       Running             0          54m
wordpress-deployment-2971992143-c8gg4   0/1       ContainerCreating   0          1m
wordpress-deployment-2971992143-h36v1   1/1       Running             0          1m
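* As a rough sketch of the Route53 step mentioned above, you can point a record at the ELB hostname from the service output with the AWS CLI (the hosted zone ID and domain below are placeholders):
$ cat route53-change.json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "blog.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "acf99336a792b11e78ae102fa50f1a61-516654231.us-east-1.elb.amazonaws.com" }
        ]
      }
    }
  ]
}
$ aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch file://route53-change.json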

I think you actually need to solve 2 issues:
1. Do not restart the service. Restart only your pod. Then the service won't change its IP address.
2. The database and your application don't need to be 2 containers in the same pod. They should be 2 separate pods. Then you need another service to expose the database to your application.
So the final solution should look like this:
1. database pod - runs once, never restarted during development.
2. database service - created once, never restarted (a minimal sketch is shown after this list).
3. application pod - this is the only thing you reload when the application code changes. It needs to access the database, so you write literally "database-service:3306" or something like this in your application. "database-service" here is the name of the service you created in (2).
4. application service - created once, never restarted. You access the application from outside of minikube by using the IP address of this service.
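A rough sketch of point (2), assuming the database is MySQL listening on 3306 and its pod is labelled app: database (both names are only illustrative):
apiVersion: v1
kind: Service
metadata:
  name: database-service
spec:
  selector:
    app: database
  ports:
  - port: 3306
    targetPort: 3306
Your application then always connects to database-service:3306, no matter how often the database pod's IP changes.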

Related

Kubernetes doesn't recognise local docker image

I have the following deployment, and I use the image flaskapp1:latest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp
  labels:
    app: flaskapp
spec:
  selector:
    matchLabels:
      app: flaskapp
  replicas: 1
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
      - name: flaskapp
        image: flaskapp1:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: flaskapp
spec:
  ports:
  - name: http
    port: 9090
    targetPort: 8080
  selector:
    app: flaskapp
Because the Kubernetes cluster that I have created has only 2 nodes (a master node and a worker node), the pod is created on the worker node, where I have locally built the docker image.
More specifically, if I run
sudo docker images
I have the following output:
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
flaskapp1    latest    4004bc4ea926   34 minutes ago   932MB
For some reason, when I apply the deployment above, the status is ErrImagePull. Is there anything wrong in my yaml file?
When I run kubectl get pods -o wide I have the following output:
NAME                        READY   STATUS             RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
flaskapp-55dbdfd6cf-952v8   0/1     ImagePullBackOff   0          7m44s   192.168.69.220   knode2   <none>           <none>
Also
Glad that it works now, but I would like to add some information for others that might encounter this problem.
In general, use a real registry to provide the images. This can be hosted on Kubernetes as well, but it must be accessible from outside, since the nodes will access it directly to retrieve images.
You should provide TLS-secured access to the registry, since container runtimes will not allow access to external hosts without a certificate or special configuration.
If you want to experiment with images and don't want to use a public registry or run your own you might be interested in an ephemeral registry: https://ttl.sh/
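For example, pushing to ttl.sh needs no account; you tag the image with a unique name and a TTL and push it (the uuidgen-based name below is just a placeholder):
$ IMAGE_NAME=ttl.sh/$(uuidgen | tr 'A-Z' 'a-z'):1h   # image names must be lowercase
$ docker tag flaskapp1:latest $IMAGE_NAME
$ docker push $IMAGE_NAME
Then reference that pushed name in the Deployment's image field instead of flaskapp1:latest.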

Kubernetes Service unreachable

I have created a Kubernetes cluster on 2 Raspberry Pis (Model 3 and 3B+) to use as a Kubernetes playground.
I have deployed a postgresql database and a Spring Boot app (called meal-planer) to play around with.
The meal-planer should read and write data from and to the postgresql database.
However, the app can't reach the database.
Here is the deployment-descriptor of the postgresql:
kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
    name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: postgres
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
Here is the deployment-descriptor of the meal-planer:
kind: Service
apiVersion: v1
metadata:
  name: meal-planner
  namespace: home
  labels:
    app: meal-planner
spec:
  type: ClusterIP
  selector:
    app: meal-planner
  ports:
  - port: 8080
    name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: meal-planner
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: meal-planner
  template:
    metadata:
      labels:
        app: meal-planner
    spec:
      containers:
        - name: meal-planner
          image: 08021986/meal-planner:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
The meal-planer image is an arm32v7 image running a jar file.
Inside the cluster, the meal-planer uses the connection string jdbc:postgresql://postgres:5432/home to connect to the DB.
I am absolutely sure that the DB credentials are correct, since I can access the DB when I port-forward the service.
When deploying both applications, I can kubectl exec -it <<podname>> -n home -- bin/sh into it. If I call wget -O- postgres or wget -O- postgres.home from there, I always get Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
I don't know why the network is unreachable, and I don't know what I can do about it.
First of all, don't use Deployment workloads for applications that need to persist state. This could get you into some trouble and even data loss.
For that purpose, you should use a StatefulSet:
StatefulSet is the workload API object used to manage stateful applications.
Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Also, for databases the storage should be as close to the engine as possible (due to latency), most preferably a hostPath storageClass with ReadWriteOnce.
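For completeness, a rough sketch of such node-local storage: a hostPath PersistentVolume bound to the postgres-pv-claim referenced in your Deployment (the path and size are illustrative, and hostPath is only reasonable for single-node or test clusters like this one):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/postgres
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  namespace: home
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi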
Now, regarding your issue: my guess is that it's either a problem with how you connect to the DB in your application, or the remote connection is refused by the rules in pg_hba.conf.
Here is a minimal working example that'll help you get started:
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-so-test
  namespace: test
  labels:
    app: postgres-so-test
spec:
  selector:
    app: postgres-so-test
  ports:
  - port: 5432
    targetPort: 5432
    name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  namespace: test
  name: postgres-so-test
spec:
  replicas: 1
  serviceName: postgres-so-test
  selector:
    matchLabels:
      app: postgres-so-test
  template:
    metadata:
      labels:
        app: postgres-so-test
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              value: johndoe
            - name: POSTGRES_PASSWORD
              value: thisisntthepasswordyourelokingfor
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
Now let's test this. NOTE: I'll also create a deployment from the Postgres image, just to have a pod in this namespace that has the pg_isready binary, in order to test the connection to the created DB.
pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME                             READY   STATUS    RESTARTS   AGE
postgres-so-test-0               1/1     Running   0          19s
test-container-d77d75d78-cgjhc   1/1     Running   0          12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME                                 READY   STATUS    RESTARTS   AGE
pod/postgres-so-test-0               1/1     Running   0          26s
pod/test-container-d77d75d78-cgjhc   1/1     Running   0          19s

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/postgres-so-test   ClusterIP   10.43.242.51   <none>        5432/TCP   30s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-container   1/1     1            1           19s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/test-container-d77d75d78   1         1         1       19s

NAME                                READY   AGE
statefulset.apps/postgres-so-test   1/1     27s

pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections
If you still have trouble connecting to the DB, please attach the following:
kubectl describe pod <<postgres_pod_name>>
kubectl logs <<postgres_pod_name>> (ideally after you've tried to connect to it)
kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf
Also research the topic of K8s operators. They are useful for deploying more complex production-ready application stacks (e.g. a database with master + replicas + LB).

How to set fixed pods names in kubernetes

I want to maintain a different configuration for each pod, so I am planning to fetch properties from Spring Cloud Config based on the pod name.
Ex:
Properties in cloud
PodName1.property1 = "xxx"
PodName2.property1 = "yyy"
The property value will be different for each pod. I am planning to fetch properties from the cloud based on the container name: Environment.get("current pod name" + "propertyName").
So I want to set a fixed hostname/pod name.
If the above is not possible, is there any alternative?
You can use StatefulSets if you want fixed pod names for your application.
e.g.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web # this will be used as prefix in pod name
spec:
  serviceName: "nginx"
  replicas: 2 # specify number of pods that should be running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
This template will create 2 nginx pods in the default namespace with the following names:
kubectl get pods -l app=nginx
NAME      READY     STATUS    RESTARTS   AGE
web-0     1/1       Running   0          1m
web-1     1/1       Running   0          1m
A basic example can be found here.
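Once the pod names are stable, each pod can discover its own name through the Downward API and use it as the lookup key for its properties; a minimal sketch of the extra container env entry (the variable name POD_NAME is arbitrary):
# add under the container spec of the StatefulSet above
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
The application can then read POD_NAME (web-0, web-1, ...) and request the matching properties from Spring Cloud Config.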

Kubernetes statefulset ends in completed state

I'm running a k8s cluster on Google GKE where I have StatefulSets running Redis and ElasticSearch.
Every now and then the pods end up in a Completed state, so they aren't running anymore, and my services that depend on them fail.
These pods will also never restart by themselves; a simple kubectl delete pod x will resolve the problem, but I want my pods to heal by themselves.
I'm running the latest available version, 1.6.4, and I have no clue why they aren't picked up and restarted like any other regular pod. Maybe I'm missing something obvious.
edit: I've also noticed the pod gets a termination signal and shuts down properly, so I'm wondering where that is coming from. I'm not manually shutting it down, and I experience the same with ElasticSearch.
This is my statefulset resource declaration:
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2-alpine
        ports:
        - name: redis-server
          containerPort: 6379
        volumeMounts:
        - name: redis-storage
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-storage
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Check the version of Docker you run, and whether the Docker daemon was restarted during that time.
If the Docker daemon was restarted, all the containers would be terminated (unless you use the new "live restore" feature in 1.12). In some Docker versions, Docker may incorrectly report "exit code 0" for all containers terminated in this situation. See https://github.com/docker/docker/issues/31262 for more details.
source: https://stackoverflow.com/a/43051371/5331893
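To check this on a node, something along these lines should be enough (the exact output format varies by Docker version):
$ docker version --format '{{.Server.Version}}'
$ docker info 2>/dev/null | grep -i 'live restore'
$ systemctl status docker | grep -i 'active'   # shows whether/when the daemon was restarted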
I am using the same configuration as you, but removing the annotation in the volumeClaimTemplates since I am trying this on minikube:
$ cat sc.yaml
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2-alpine
        ports:
        - name: redis-server
          containerPort: 6379
        volumeMounts:
        - name: redis-storage
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Now I'm trying to simulate the case where redis fails, so I exec into the pod and kill the redis server process:
$ k exec -it redis-0 sh
/data # kill 1
/data # $
Immediately after the process dies, I can see that the STATUS has changed to Completed:
$ k get pods
NAME      READY     STATUS      RESTARTS   AGE
redis-0   0/1       Completed   1          38s
It took some time for redis to get back up and running:
$ k get pods
NAME      READY     STATUS    RESTARTS   AGE
redis-0   1/1       Running   2          52s
But soon after that I could see it starting the pod again. Can you see the events triggered when this happened? For example, was there a problem when re-attaching the volume to the pod?
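To see those events, you can run, for example:
$ kubectl describe pod redis-0    # the Events section at the bottom shows kills, restarts and volume attach problems
$ kubectl get events --sort-by=.metadata.creationTimestamp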

no object named "pod" is registered

What does this mean?
-bash-4.2# kubectl create -f ./pod.yaml
Error: unable to recognize "./pod.yaml": no object named "pod" is registered
Here is pod.yaml; capitalizing or not capitalizing 'pod' makes no difference, and it validates as proper YAML.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
Can you please run kubectl version and report the results? I expect that either your apiserver or your kubectl version is outdated, and thus doesn't know about the v1 API.
For what it's worth, that pod spec works for me with both kubectl and my apiserver at version 1.0.3.
I was able to create the pod:
master $ kubectl create -f pod.yaml
pod/nginx created
master $ kubectl get po
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          1m
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
The most likely issue is with the Kubernetes installation on your server. Can you check the health status of the k8s components? I don't see any issue with the pod manifest; it should work.
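For example, a quick health check of the control plane and nodes (commands that fit the kubectl versions discussed here; exact output varies by version):
$ kubectl get componentstatuses
$ kubectl get nodes
$ kubectl get pods --all-namespaces   # on newer clusters, check the kube-system pods instead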