I've recently started learning Kubernetes and have run into a problem. I'm trying to deploy 2 pods that run the same Docker image; to do that I've set replicas: 2 in the deployment.yaml. I've also defined a Service of type LoadBalancer with externalTrafficPolicy set to Cluster. I confirmed that there are 2 endpoints by running kubectl describe service, and session affinity is set to None. When I send requests repeatedly to the service port, all of them are routed to only one of the pods; the other one just sits there. How can I make this more efficient? Or any insights as to what might be going wrong? Here's the deployment.yaml file.
EDIT: The solutions mentioned in "Only 1 pod handles all requests in Kubernetes cluster" didn't work for me.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-nameprodmstb
  labels:
    app: project-nameprodmstb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: project-nameprodmstb
  template:
    metadata:
      labels:
        app: project-nameprodmstb
    spec:
      containers:
      - name: project-nameprodmstb
        image: <some_image>
        imagePullPolicy: Always
        resources:
          requests:
            cpu: "1024m"
            memory: "4096Mi"
      imagePullSecrets:
      - name: "gcr-json-key"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: project-nameprodmstb
  name: project-nameprodmstb
  namespace: development
spec:
  ports:
  - name: project-nameprodmstb
    port: 8006
    protocol: TCP
    targetPort: 8006
  selector:
    app: project-nameprodmstb
  type: LoadBalancer
Related
I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one needs to call the second one sometimes.
However, the first deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
  - targetPort: 8080
    port: 8080
    nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
      - name: composite-container
        image: 192.168.49.2:2376/composite-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so that the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
  - targetPort: 8080
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You could connect to the other pod directly by its IP without a Service, but that is not recommended, since a pod's IP can change whenever the pod is recreated.
With the Service above in place, you can connect to the product-app pods from composite-app using the hostname product-service.
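As a quick sanity check (assuming both workloads run in the default namespace and that curl, or a similar tool, is available in the composite-app image), the DNS name can be exercised from inside the pod:

# Open a shell inside the composite-app pod
kubectl exec -it deploy/composite-deploy -- sh
# The short name resolves within the same namespace...
curl http://product-service:8080/product/123
# ...and the fully qualified name works from any namespace
curl http://product-service.default.svc.cluster.local:8080/product/123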
I am trying to deploy a sample application on the Kubernetes cluster but I am facing the error:
error parsing hostname.yml: error converting YAML to JSON: YAML: line 21: did not find expected key
Below is my hostname.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hostname
        version: v1
    spec:
      containers:
      - image: akslearning.azurecr.io/hostname:v1
        imagePullPolicy: Always
        name: hostname
        resources: {}
      restartPolicy: Always
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hostname
  name: hostname
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: hostname
  sessionAffinity: None
  type: LoadBalancer
Kindly suggest what to do
There are a few formatting issues here. First, if you define more than one resource (a Deployment and a Service) in a single file, you must separate them with ---. Second, YAML is strict about whitespace: you cannot use tabs for indentation, only spaces.
Please find the corrected YAML below:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname-v1
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: hostname
  template:
    metadata:
      labels:
        app: hostname
        version: v1
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: hostname
        resources: {}
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hostname
  name: hostname
spec:
  type: LoadBalancer
  selector:
    app: hostname
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
EOF
deployment.apps/hostname-v1 created
service/hostname created
However, I would also add .metadata.labels to your Deployment YAML, as mentioned in the Kubernetes documentation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname-v1
  labels: #This line
    app: hostname #And this line
spec:
  ...
If you check this YAML in an online YAML-to-JSON converter, such as codebeautify, it parses without errors.
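You can also validate the manifest locally without creating anything in the cluster; assuming a reasonably recent kubectl, a client-side dry run surfaces the same parsing errors:

# Parses and validates hostname.yml without touching the cluster
kubectl apply --dry-run=client -f hostname.yml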
I have a Kubernetes cluster with 1 control-plane node and 1 worker; the worker runs 3 pods. The pods and a Service of type NodePort are on the same node. I was expecting the Service to load balance requests between the pods, but it looks like all requests are always forwarded to only one pod.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30002
  selector:
    app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web-app
        image: webimage
        ports:
        - containerPort: 80
        imagePullPolicy: Never
        resources:
          limits:
            cpu: "0.5"
          requests:
            cpu: "0.5"
This is expected behavior if your requests use a persistent TCP connection: kube-proxy balances new connections across pods, not individual requests on an existing connection. Try adding a Connection: close header to your HTTP requests.
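A minimal way to verify this, assuming the app returns something pod-specific (such as its hostname) in the response; <node-ip> is a placeholder for the worker node's address and 30002 is the nodePort from the Service above:

# Each curl invocation opens a new TCP connection, so requests should be
# spread across the pods backing web-svc
for i in $(seq 1 10); do
  curl -s -H "Connection: close" http://<node-ip>:30002/
done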
I have a Flask app running on a remote Kubernetes cluster, and when I access it from inside the cluster it works. However, when I try to access it from the outside nothing happens.
I'm using kind to create the cluster. Locally I can access the Flask app via the node's IP address.
I don't know how to access the service from the outside. Do I need to do something else to be able to access the app?
apiVersion: v1
kind: Service
metadata:
  name: iweblens-svc
  labels:
    app: flaskapp
spec:
  type: NodePort
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
  selector:
    app: flaskapp
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    nodefs.available: "0%"
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  version: v1beta2
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-hostname
nodes:
- role: control-plane
- role: worker
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp
  labels:
    app: flaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskapp
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
      - name: flaskapp
        image: myimage
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
        resources:
          limits:
            cpu: "0.5"
          requests:
            cpu: "0.5"
Create a NodePort or LoadBalancer service (the latter works only on supported cloud providers) to expose the deployment outside the cluster.
Here is a guide on how to use a NodePort service.
To be able to access an app via a NodePort service, the node IP needs to be reachable (i.e., it should be on the same network) from the system you are accessing it from.
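A short sketch of how to find the address and port to hit, assuming kubectl access to the cluster; the node port value is whatever was allocated for iweblens-svc:

# List node addresses (INTERNAL-IP / EXTERNAL-IP columns)
kubectl get nodes -o wide
# Show the port that was allocated for the NodePort service
kubectl get svc iweblens-svc
# Then, from a machine that can reach that node IP:
curl http://<node-ip>:<node-port>/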
I have existing applications built with Apache Camel and ActiveMQ. As part of the migration to Kubernetes, we are moving the same services developed with Apache Camel to Kubernetes. I need to deploy ActiveMQ such that I do not lose the data in case one of the pods dies.
What I am doing now is running a Deployment with replicas set to 2. This starts 2 pods, and with a Service in front I can serve any request while at least 1 pod is up. However, if one pod dies, I do not want to lose the data. I want to implement something like a shared file system between the pods. My environment is in AWS, so I can use EBS. Can you suggest how to achieve that?
Below is my deployment and service YAML.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: smp-activemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
      - name: smp-activemq
        image: dasdebde/activemq:5.15.9
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 61616
        resources:
          limits:
            memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: smp-activemq
spec:
  type: NodePort
  selector:
    app: smp-activemq
  ports:
  - nodePort: 32191
    port: 61616
    targetPort: 61616
StatefulSets are valuable for applications that require stable, persistent storage. Deleting and/or scaling a StatefulSet down will not delete the volumes associated with it; this is done to ensure data safety. The volumeClaimTemplates section in the YAML provides stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner.
In your case, the StatefulSet definition will look similar to this:
apiVersion: v1
kind: Service
metadata:
  name: smp-activemq
  labels:
    app: smp-activemq
spec:
  type: NodePort
  selector:
    app: smp-activemq
  ports:
  - nodePort: 32191
    port: 61616
    name: smp-activemq
    targetPort: 61616
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smp-activemq
spec:
  selector:
    matchLabels:
      app: smp-activemq
  serviceName: smp-activemq
  replicas: 1
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
      - name: smp-activemq
        image: dasdebde/activemq:5.15.9
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 61616
          name: smp-activemq
        volumeMounts:
        - name: www
          mountPath: <mount-path>
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "<storageclass-name>"
      resources:
        requests:
          storage: 1Gi
What you need to define is your StorageClass name and mountPath. I hope it helps.
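After applying this, a quick check that the claim was created and bound; PVCs generated from a volumeClaimTemplate are named <template-name>-<statefulset-name>-<ordinal>, so the one above would be www-smp-activemq-0:

# The claim generated from the template above should show up as Bound
kubectl get pvc www-smp-activemq-0
# Deleting or scaling down the StatefulSet leaves this claim (and its data) in place
kubectl get pvc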
In high-level terms, what you want is a StatefulSet instead of a Deployment for your ActiveMQ. You are correct that you want a "shared file system" -- in Kubernetes this is expressed as a PersistentVolume, which is made available to the pods in your StatefulSet using a volume mount.
These are the things you need to look up.
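If it helps, kubectl's built-in API documentation covers each of these fields directly:

# Field-level documentation straight from the API server
kubectl explain statefulset.spec.volumeClaimTemplates
kubectl explain pod.spec.containers.volumeMounts
kubectl explain persistentvolumeclaim.spec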