Say I have the following k8s config file with two identical Deployments that differ only by name.
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: hello-world
  name: hello-world-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world-2
        image: rancher/hello-world:v0.1.2
        resources:
          limits:
            cpu: "1"
            memory: 100M
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: hello-world
  name: hello-world-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world-2
        image: rancher/hello-world:v0.1.2
        resources:
          limits:
            cpu: "1"
            memory: 100M
As I understand it, k8s correlates ReplicaSets and Pods by their labels. With the above configuration I therefore expected some kind of problem, or that k8s would forbid the configuration outright.
However, it turns out everything is fine. Apart from the labels, is there something else k8s uses to correlate ReplicaSets and Pods?
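For context, a Pod is tied to a specific ReplicaSet not only by the label selector but also by the ownerReferences entry the controller writes into the Pod's metadata. A quick way to see this is kubectl get pod <pod-name> -o yaml; a sketch of the relevant part (the ReplicaSet name suffix is generated, shown here only for illustration):
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: hello-world-deployment-1-7d4b9c8f6d  # generated suffix, illustrative
    uid: ...                                   # unique ID of that ReplicaSet
    controller: true
    blockOwnerDeletion: true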
Steps I have done:
I have two namespaces, one with Istio injection enabled and the other without it.
Now deploy a simple nginx server in both namespaces using this YAML:
apiVersion: v1
kind: Service
metadata:
  name: software-upgrader
  labels:
    app: software-upgrader
    service: software-upgrader
spec:
  ports:
  - name: http
    port: 25301
  selector:
    app: software-upgrader
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: software-upgrader
spec:
  selector:
    matchLabels:
      app: software-upgrader
      version: v1
  template:
    metadata:
      labels:
        app: software-upgrader
        version: v1
    spec:
      containers:
      - image: gcr.io/mesh7-public-images/scalability/nginx
        imagePullPolicy: IfNotPresent
        name: software-upgrader
        resources:
          limits:
            cpu: 20m
            memory: 32Mi
          requests:
            cpu: 20m
            memory: 32Mi
Now deploy HTTPS servers in both namespaces following these steps: Steps to deploy HTTPS server
Now curl it from another pod in both namespaces.
The Pod without Istio injected gets 200 OK, while the Istio-injected Pod gets:
curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0
command terminated with exit code 56
Pardon my ignorance: do I have to create a ServiceEntry or VirtualService for HTTPS between Pods in the same namespace to work when Istio is injected?
You have to add the protocol to the Service port definition:
apiVersion: v1
kind: Service
metadata:
  name: test-https-server
  labels:
    app: test-https-server
    service: test-https-server
spec:
  ports:
  - name: test-https
    port: 25302
    appProtocol: https
  selector:
    app: test-https-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-https-server
spec:
  selector:
    matchLabels:
      app: test-https-server
  template:
    metadata:
      labels:
        app: test-https-server
    spec:
      containers:
      - image: gcr.io/mesh7-public-images/scalability/nginx
        command: ["bash", "-c", "python3 ThreadedHTTPSServer.py 25302"]
        imagePullPolicy: Always
        name: test-https-server
        resources:
          limits:
            cpu: 20m
            memory: 32Mi
          requests:
            cpu: 20m
            memory: 32Mi
Here is the relevant part of a working example:
ports:
- name: http
  port: 25302
  appProtocol: https # should specify the protocol
Istio appProtocol configuration doc
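As a side note, naming the port with a protocol prefix should also work, since Istio can infer the protocol from the port name (<protocol>[-<suffix>]) when appProtocol is not set; a minimal sketch of the same ports section using that convention:
ports:
- name: https-test   # https- prefix lets Istio pick the protocol by name
  port: 25302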
I have the following manifests:
The app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-demo-deployment
  labels:
    app: hpa-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hpa-nginx
  template:
    metadata:
      labels:
        app: hpa-nginx
    spec:
      containers:
      - name: hpa-nginx
        image: stacksimplify/kubenginx:1.0.0
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "500Mi"
            cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-demo-service-nginx
  labels:
    app: hpa-nginx
spec:
  type: LoadBalancer
  selector:
    app: hpa-nginx
  ports:
  - port: 80
    targetPort: 80
and its HPA:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo-declarative
spec:
  maxReplicas: 10 # define max replica count
  minReplicas: 1  # define min replica count
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-demo-deployment
  targetCPUUtilizationPercentage: 20 # target CPU utilization
Notice that in the HPA, the target CPU is set to 20%.
My question: 20% of which value does the HPA use? Is it requests.cpu (i.e. 100m), limits.cpu (i.e. 200m), or something else?
Thank you!
It's based on resources.requests.cpu.
For per-pod resource metrics (like CPU), the controller fetches the metrics from the resource metrics API for each Pod targeted by the HorizontalPodAutoscaler. Then, if a target utilization value is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each Pod
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-a-horizontalpodautoscaler-work
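So in the manifests above, the 20% is measured against requests.cpu = 100m (limits.cpu is not used here), i.e. a per-pod target of roughly 20m. As a rough worked example with illustrative numbers, using the formula from that page:
desiredReplicas = ceil[currentReplicas * (currentUtilization / targetUtilization)]
                = ceil[1 * (60% / 20%)]
                = 3   # if the single pod were averaging ~60m CPU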
I am trying to do a fairly simple red/green setup using Minikube, where I want one pod running a red container, one pod running a green container, and a service to hit each. To do this, my k8s file looks like this...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Users/jackiegleason/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-red
spec:
  type: LoadBalancer
  ports:
  - port: 3000
  selector:
    app: express-app-red
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-green
spec:
  type: LoadBalancer
  ports:
  - port: 3000
  selector:
    app: express-app-green
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-red
  labels:
    app: express-app-red
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-red
        tier: app
    spec:
      volumes:
      - name: express-app-storage
        persistentVolumeClaim:
          claimName: main-volume-claim
      containers:
      - name: express-app-container
        image: partyk1d24/hello-express:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/external"
          name: express-app-storage
        ports:
        - containerPort: 3000
          protocol: TCP
          name: express-endpnt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-green
  labels:
    app: express-app-green
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-green
        tier: app
    spec:
      volumes:
      - name: express-app-storage
        persistentVolumeClaim:
          claimName: main-volume-claim
      containers:
      - name: express-app-container
        image: partyk1d24/hello-express:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: DEPLOY_TYPE
          value: "Green"
        volumeMounts:
        - mountPath: "/var/external"
          name: express-app-storage
        ports:
        - containerPort: 3000
          protocol: TCP
          name: express-endpnt
But when I run it I get...
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
It worked fine without the second deployment, so what am I missing?
Thank you!
You cannot use the same PV with accessMode: ReadWriteOnce multiple times.
To do this you would need to use a volume that supports the ReadWriteMany access mode.
Check out the k8s documentation for the list of volume plugins that support this feature.
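For illustration, a claim that both Deployments could share might look like the sketch below; the storageClassName is a placeholder and assumes your cluster has a provisioner that supports ReadWriteMany (e.g. NFS or CephFS):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: nfs-client   # placeholder, depends on your cluster
  accessModes:
  - ReadWriteMany                # instead of ReadWriteOnce
  resources:
    requests:
      storage: 1Gi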
Additionally, as David already mentioned, it's much better to log to STDOUT.
You can also look at solutions like Fluent Bit/Fluentd or the ELK stack.
I'm deploying an API on Kubernetes, but the Pods are not being created and I'm getting this error:
unable to ensure pod container exists: failed to create container for [kubepods burstable pod63951ed1-a42f-44c9-9f85-fbb4c06a3e83] : mkdir /sys/fs/cgroup/memory/kubepods/burstable/pod63951ed1-a42f-44c9-9f85-fbb4c06a3e83: cannot allocate memory
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  labels:
    app: abc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vaccndev
  template:
    metadata:
      labels:
        name: abc
        app: abc
    spec:
      hostname: abc
      containers:
      - name: abc
        image: docker.repo1.xyz.com/va_nonuser/server:Dev
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 1
            memory: 1Gi
I've recently started learning Kubernetes and have run into a problem. I'm trying to deploy two pods that run the same Docker image; to do that I've set replicas: 2 in the deployment.yaml. I've also defined a Service of type LoadBalancer with external traffic policy set to Cluster. I confirmed that there are two endpoints by running kubectl describe service, and session persistence is set to None. When I send requests repeatedly to the service port, all of them are routed to only one of the pods; the other one just sits there. How can I make this more efficient? Any insights as to what might be going wrong? Here's the deployment.yaml file.
EDIT: The solutions mentioned on Only 1 pod handles all requests in Kubernetes cluster didn't work for me.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-nameprodmstb
  labels:
    app: project-nameprodmstb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: project-nameprodmstb
  template:
    metadata:
      labels:
        app: project-nameprodmstb
    spec:
      containers:
      - name: project-nameprodmstb
        image: <some_image>
        imagePullPolicy: Always
        resources:
          requests:
            cpu: "1024m"
            memory: "4096Mi"
      imagePullSecrets:
      - name: "gcr-json-key"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: project-nameprodmstb
  name: project-nameprodmstb
  namespace: development
spec:
  ports:
  - name: project-nameprodmstb
    port: 8006
    protocol: TCP
    targetPort: 8006
  selector:
    app: project-nameprodmstb
  type: LoadBalancer