I'm following this tutorial and making changes as necessary to set up a self-hosted instance of the Ghost blog. I'm new to Kubernetes and am hosting this locally on some Raspberry Pis. I applied all the deployments, services, MySQL, secrets, PVCs, etc., and added ghost to /etc/hosts. When I visit ghost/ in the browser I get a 404 error, even though I'm targeting the service. Here are my YAMLs:
MySQL PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: longhorn
    app: example
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Ghost PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-pv-claim
  labels:
    type: longhorn
    app: ghost
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
MySQL Password Secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password: <base_64_encoded_pwd>
Ghost MySQL Service and Deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  ports:
    - port: 3306
  selector:
    app: ghost
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: mysql
    spec:
      containers:
        - image: arm64v8/mysql:latest
          imagePullPolicy: Always
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: MYSQL_USER
              value: ghost
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-vol
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-vol
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Ghost Blog Deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-svc
  labels:
    app: ghost
    tier: frontend
spec:
  selector:
    app: ghost
    tier: frontend
  ports:
    - protocol: TCP
      port: 2368
      targetPort: 2368
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      # securityContext:
      #   runAsUser: 1000
      #   runAsGroup: 50
      containers:
        - name: blog
          image: ghost
          imagePullPolicy: Always
          ports:
            - containerPort: 2368
          env:
            # - name: url
            #   value: https://www.myblog.com
            - name: database__client
              value: mysql
            - name: database__connection__host
              value: ghost-mysql
            - name: database__connection__user
              value: root
            - name: database__connection__password
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: database__connection__database
              value: ghost
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: ghost-vol
      volumes:
        - name: ghost-vol
          persistentVolumeClaim:
            claimName: ghost-pv-claim
Traefik Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ghost-ingress
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`ghost`)
      kind: Rule
      services:
        - name: ghost-svc
          port: 80
I also added ghost to /etc/hosts on my Mac.
Not sure what I'm doing wrong, but I imagine it's certs/ingress related. Any ideas?
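One thing worth double-checking before looking at certs: the IngressRoute above sends traffic to port 80 of ghost-svc, while the Service only exposes 2368. A minimal sketch with the route's port matched to the Service, assuming that mismatch is what produces the 404:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ghost-ingress
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`ghost`)
      kind: Rule
      services:
        - name: ghost-svc
          # must match a port actually defined on ghost-svc (2368 above)
          port: 2368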
I am using the ELK stack (elasticsearch, logstash, kibana) for log processing and analysis in a Kubernetes environment. To capture logs I am using filebeat.
The ServiceAccount, ClusterRole, and ClusterRoleBinding for Elasticsearch are in the YAML below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch
  labels:
    k8s-app: elasticsearch
rules:
  - apiGroups:
      - ""
    resources:
      - "services"
      - "namespaces"
      - "endpoints"
    verbs:
      - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch
  labels:
    k8s-app: elasticsearch
subjects:
  - kind: ServiceAccount
    name: elasticsearch
    namespace: kube-system
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch
  apiGroup: ""
elasticsearch service yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  ports:
    - port: 9200
      protocol: TCP
      targetPort: db
  selector:
    k8s-app: elasticsearch
  externalIPs:
    - 10.10.0.82
Elasticsearch StatefulSet yaml below:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      serviceAccountName: elasticsearch
      containers:
        - image: elasticsearch:6.8.4
          name: elasticsearch
          resources:
            limits:
              cpu: 1000m
              memory: "2Gi"
            requests:
              cpu: 100m
              memory: "1Gi"
          ports:
            - containerPort: 9200
              name: db
              protocol: TCP
            - containerPort: 9300
              name: transport
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /data
          env:
            - name: "NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      initContainers:
        - image: alpine:3.6
          command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
          name: elasticsearch-init
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          k8s-app: elasticsearch
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
pv & pvc0 yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elklogs-pv0
  namespace: kube-system
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 10.10.0.131
    path: /opt/data/vol/0
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-elasticsearch-0
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
pv_pvc1.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elklogs-pv1
  namespace: kube-system
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 10.10.0.131
    path: /opt/data/vol/1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-elasticsearch-1
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
logstash_svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: logstash-service
  namespace: kube-system
spec:
  selector:
    app: logstash
  ports:
    - protocol: TCP
      port: 5044
      targetPort: 5044
  externalIPs:
    - 10.10.0.82
logstash_config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: kube-system
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
      geoip {
        source => "clientip"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://10.10.0.82:9200"]
      }
    }
logstash deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:6.3.0
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: logstash-pipeline-volume
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf
filebeat.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    output.logstash:
      hosts: ["http://10.10.0.82:5044"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
        - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:6.8.4
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: prospectors
              mountPath: /usr/share/filebeat/prospectors.d
              readOnly: true
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: prospectors
          configMap:
            defaultMode: 0600
            name: filebeat-prospectors
        - name: data
          emptyDir: {}
kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
        - name: kibana-logging
          image: docker.elastic.co/kibana/kibana-oss:6.8.4
          env:
            - name: ELASTICSEARCH_URL
              value: http://10.10.0.82:9200
          ports:
            - containerPort: 5601
              name: ui
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
    - port: 5601
      protocol: TCP
      targetPort: ui
      nodePort: 32010
  selector:
    k8s-app: kibana-logging
kubectl get svc -n kube-system
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
elasticsearch      ClusterIP   10.43.50.63    10.10.0.82    9200/TCP                 31m
kibana-logging     NodePort    10.43.58.127   10.10.0.82    5601:32010/TCP           4m4s
kube-dns           ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   23d
logstash-service   ClusterIP   10.43.130.36   10.10.0.82    5044/TCP                 30m
filebeat pod logs:
2020-11-04T16:42:22.857Z INFO log/harvester.go:255 Harvester started for file: /var/lib/docker/containers/011d24d334bba573ffbb466b0f3f70ae5ddc986f233e683076eaae7394801203/011d24d334bba573ffbb466b0f3f70ae5ddc986f233e683076eaae7394801203-json.log
2020-11-04T16:42:22.983Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://logstash-service:9600))
2020-11-04T16:42:52.412Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":270,"time":{"ms":271}},"total":{"ticks":740,"time":{"ms":745},"value":740},"user":{"ticks":470,"time":{"ms":474}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":97},"info":{"ephemeral_id":"6584086a-eff4-46b5-9be0-93892dad9d97","uptime":{"ms":30191}},"memstats":{"gc_next":36421840,"memory_alloc":32140904,"memory_total":55133048,"rss":65593344}},"filebeat":{"events":{"active":4214,"added":4219,"done":5},"harvester":{"open_files":89,"running":88,"started":88}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"type":"logstash"},"pipeline":{"clients":2,"events":{"active":4117,"filtered":88,"published":4116,"total":4205}}},"registrar":{"states":{"current":5,"update":5},"writes":{"success":6,"total":6}},"system":{"cpu":{"cores":8},"load":{"1":1.9,"15":0.61,"5":0.9,"norm":{"1":0.2375,"15":0.0763,"5":0.1125}}}}}}
2020-11-04T16:42:54.289Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash-service:5044)): dial tcp 10.43.145.162:5044: i/o timeout
2020-11-04T16:42:54.289Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://logstash-service:5044)) with 1 reconnect attempt(s)
logstash pod logs:
[WARN ] 2020-11-04 15:45:04.648 [Ruby-0-Thread-4: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:232] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
From what I understand of your architecture, you are using Filebeat >> Logstash >> Elasticsearch >> Kibana.
So, in filebeat.yml you have selected Logstash as the output, but you have given the wrong port for the Logstash output.
It should be:
output.logstash:
  hosts: ['http://195.134.187.25:5044']
As you can see in logstash_config.yaml, you have configured 5044 as the Beats input port, so make that change in the output.logstash section of filebeat.yml.
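To make the relationship concrete, here is a hedged sketch of how the two ends are meant to line up, using the in-cluster DNS name of the Logstash Service from the manifests above rather than an external IP (which host you point at is your choice; the essential part is that the port matches the beats input):
# filebeat.yml output (sketch) - port must match the Logstash beats input
output.logstash:
  hosts: ["logstash-service.kube-system.svc.cluster.local:5044"]

# logstash.conf input (already defined in logstash-configmap above)
input {
  beats {
    port => 5044
  }
}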
I am using gcePersistentDisk as a volume in Kubernetes. Currently all my VMs and disks are in the same availability zone and Google Cloud account, but the gcePersistentDisk still does not mount. Below are my deployment files.
Storage Class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-sc
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain # Retain storage even if we delete PVC
parameters:
  type: pd-ssd
Volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: ssd-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: nfs-disk
    fsType: ext4
Service and Containers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-service
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 1
  template:
    metadata:
      labels:
        app: apache
    spec:
      volumes:
        - name: service-pv-storage
          gcePersistentDisk:
            pdName: nfs-disk
            fsType: ext4
      containers:
        - name: apache
          image: mobingi/ubuntu-apache2-php7:7.2
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/var/www/html"
              name: service-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30751
  type: NodePort
When I use this deployment, it gives me the following error in the pod description.
Note: I am not using Google Kubernetes Engine; I have set up my own custom cluster.
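In case it is relevant: the in-tree gcePersistentDisk volume type and the kubernetes.io/gce-pd provisioner only work when the cluster components run with the GCE cloud provider enabled, which a hand-built cluster normally does not have. A hedged sketch of how that could be switched on with kubeadm (whether this applies to your setup, and the gce.conf path, are assumptions):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: gce
controllerManager:
  extraArgs:
    cloud-provider: gce
    cloud-config: /etc/kubernetes/gce.conf  # assumed path to the GCE cloud config
# the kubelet on each node would also need --cloud-provider=gce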
I am new to k8s. I am following the official tutorial on setting up Nginx pods in k8s using minikube, mounting a volume, and serving index.html.
When I mount the volume and go to the home page, I receive this error:
directory index of "/usr/share/nginx/html/" is forbidden.
If I don't mount anything, I receive a "Welcome to Nginx" page.
This is the content of that folder before mounting; after mounting it is empty:
root@00c1:/usr/share/nginx/html# ls -l
total 8
-rw-r--r-- 1 root root 494 Jul 23 11:45 50x.html
-rw-r--r-- 1 root root 612 Jul 23 11:45 index.html
Why is mounted folder inside pod empty after mounting?
This is my setup
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/my_username/test/html"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Mi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-hello-rc
spec:
  replicas: 2
  selector:
    app: hello-nginx-tester
  template:
    metadata:
      labels:
        app: hello-nginx-tester
    spec:
      securityContext:
        fsGroup: 1000
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-tester
  labels:
    app: hello-nginx-tester
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30500
      protocol: TCP
  selector:
    app: hello-nginx-tester
Any info would be appreciated. Thanks.
I've checked your configuration on my running k8s environment. After some adjustments the following manifest works smoothly for me:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/my_username/test/html"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Mi
  volumeName: task-pv-volume
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-hello-rc
spec:
  replicas: 2
  selector:
    app: hello-nginx-tester
  template:
    metadata:
      labels:
        app: hello-nginx-tester
    spec:
      securityContext:
        fsGroup: 1000
      volumes:
        - name: task-pv-volume
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-volume
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-tester
  labels:
    app: hello-nginx-tester
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30500
      protocol: TCP
  selector:
    app: hello-nginx-tester
The group owner of the directory "/usr/share/nginx/html" will be 1000 because you have set fsGroup in the securityContext, which is why you are unable to access the directory. If you remove the securityContext section, the mounted volume will be owned by root, and you won't get access issues.
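For example, the pod template without that block would be (a minimal sketch of just the changed part; everything else stays as in the manifest above):
spec:
  # securityContext with fsGroup removed, so the mounted volume keeps root ownership
  volumes:
    - name: task-pv-volume
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-volume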
Cluster:
1 master
2 workers
I am deploying a StatefulSet that uses local volumes via PVs (kubernetes.io/no-provisioner storageClass) with 3 replicas.
I created 2 PVs, one for each worker node.
Expectation: the pods will be scheduled on both workers and share the same volume.
Result: all 3 stateful pods are created on a single worker node.
YAML:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-1
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-2
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol2
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node2
---
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
    - name: test-headless
      port: 8000
  clusterIP: None
  selector:
    app: test
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    app: test
spec:
  ports:
    - name: test
      port: 8000
      protocol: TCP
      nodePort: 30063
  type: NodePort
  selector:
    app: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-stateful
spec:
  selector:
    matchLabels:
      app: test
  serviceName: stateful-service
  replicas: 6
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: container-1
          image: <Image-name>
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8000
          volumeMounts:
            - name: localvolume
              mountPath: /tmp/
      volumes:
        - name: localvolume
          persistentVolumeClaim:
            claimName: example-local-claim
This happens because Kubernetes doesn't care about distribution by itself; the mechanism it provides for controlling placement is pod (anti-)affinity.
For distributing the pods across all workers, you may use pod anti-affinity.
Furthermore, you can use soft affinity (the differences I explain here); it isn't strict and still allows all of your pods to be scheduled (a sketch of the soft form follows below). With the strict form, the StatefulSet will look like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - my-app
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
        - name: app-name
          image: k8s.gcr.io/super-app:0.8
          ports:
            - containerPort: 21
              name: web
This StatefulSet will try to spawn each pod on a different worker; with the required rule above, if there are not enough workers the extra pods will stay Pending. With the soft form, they would instead be scheduled onto nodes that already run a pod.
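A minimal sketch of the soft variant mentioned above (same assumed app: my-app label); the only change is the affinity block, which prefers spreading the pods across nodes but never blocks scheduling:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - my-app
          topologyKey: kubernetes.io/hostname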