I'm currently creating a Kubernetes cluster for a production environment.
In my cluster, I have two node pools; let's call them api-pool and web-pool.
In my api-pool, I have 2 nodes with 4 CPUs and 15 GB of RAM each.
I'm trying to deploy 8 replicas of my API in my api-pool; each replica should have 1 CPU and 3.5Gi of RAM.
My api.deployment.yaml looks something like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-dev
spec:
  replicas: 8
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: api-docker
          image: //MY_IMAGE
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: api-dev-env
            - secretRef:
                name: api-dev-secret
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "1"
              memory: "3.5Gi"
But my problem is that Kubernetes is deploying the pods on nodes in my web-pool as well as in my api-pool, while I want those pods to be deployed only in my api-pool.
I tried labelling the nodes of the api-pool and using a selector that matches those labels, but it doesn't work, and I'm not sure it's supposed to work that way.
How can I tell Kubernetes to deploy those 8 replicas only in my api-pool?
You can use a nodeSelector, which is the simplest recommended form of node selection constraint.
Label the nodes of the api-pool with pool=api:
kubectl label nodes nodename pool=api
Then add a nodeSelector to the pod spec:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-dev
spec:
  replicas: 8
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: api-docker
          image: //MY_IMAGE
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: api-dev-env
            - secretRef:
                name: api-dev-secret
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "1"
              memory: "3.5Gi"
      nodeSelector:
        pool: api
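Once the updated Deployment is applied, a quick way to confirm the constraint (assuming the nodes were labelled as above) is to check which node each pod landed on:

kubectl get pods -l app=my-api -o wide

The NODE column should now only list nodes from the api-pool.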
For more advanced use cases you can use node affinity.
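For example, a minimal node affinity sketch reusing the same pool=api label (this is the standard requiredDuringSchedulingIgnoredDuringExecution form from the Kubernetes docs, not something specific to your cluster) would go under the pod spec instead of nodeSelector:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: pool
              operator: In
              values:
                - api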
I'm a beginner with Kubernetes and YAML.
I've been trying to deploy a ReplicaSet with YAML.
This is the file for the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  label:
    app: myapp
spec:
  selector:
    matchlabels:
      env: production
      name: nginx
    replicas: 3
    template:
      metadata:
        name: nginx
        labels:
          env: production
          name: nginx
      spec:
        containers:
          - name: nginx
            image: nginx
            resources:
              limits:
                memory: "128Mi"
                cpu: "500m"
            ports:
              - containerPort: 80
And this is the Pod file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: production
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 80
However, when I execute the kubectl create -f replicaset.yml command, I get the following error:
The ReplicaSet "myapp-replicaset" is invalid:
spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string(nil), MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: empty selector is invalid for deployment
spec.template.spec.containers: Required value
Your replicaset.yml indentation seems to be wrong, and it has some typos.
replicas and template should be at the spec level. Also, check the marked and corrected typos in labels and matchLabels.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels: # labels
    app: myapp
spec:
  selector:
    matchLabels: # matchLabels
      env: production
      name: nginx
  replicas: 3
  template:
    metadata:
      name: nginx
      labels:
        env: production
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
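After fixing the file, recreating the ReplicaSet should go through; assuming it is still saved as replicaset.yml, something like this should confirm that the three pods come up:

kubectl create -f replicaset.yml
kubectl get replicaset myapp-replicaset
kubectl get pods -l env=production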
I have existing applications built with Apache Camel and ActiveMQ. As part of the migration to Kubernetes, we are moving the same services developed with Apache Camel to Kubernetes. I need to deploy ActiveMQ so that I do not lose data if one of the Pods dies.
What I am doing now is running a Deployment with the ReplicaSet value set to 2. This starts 2 Pods, and with a Service in front, I can serve any request while at least 1 Pod is up. However, if one Pod dies, I do not want to lose the data. I want to implement something like a shared file system between the Pods. My environment is in AWS, so I can use EBS. Can you suggest how to achieve that?
Below is my deployment and service YAML.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: smp-activemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
        - name: smp-activemq
          image: dasdebde/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
          resources:
            limits:
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: smp-activemq
spec:
  type: NodePort
  selector:
    app: smp-activemq
  ports:
    - nodePort: 32191
      port: 61616
      targetPort: 61616
StatefulSets are valuable for applications that require stable, persistent storage. Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet; this is done to ensure data safety. The volumeClaimTemplates part of the YAML provides stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner.
In your case, the StatefulSet definition will look similar to this:
apiVersion: v1
kind: Service
metadata:
  name: smp-activemq
  labels:
    app: smp-activemq
spec:
  type: NodePort
  selector:
    app: smp-activemq
  ports:
    - nodePort: 32191
      port: 61616
      name: smp-activemq
      targetPort: 61616
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smp-activemq
spec:
  selector:
    matchLabels:
      app: smp-activemq
  serviceName: smp-activemq
  replicas: 1
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
        - name: smp-activemq
          image: dasdebde/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
              name: smp-activemq
          volumeMounts:
            - name: www
              mountPath: <mount-path>
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "<storageclass-name>"
        resources:
          requests:
            storage: 1Gi
What you need to define is your StorageClass name and mountPath. I hope this helps.
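Since you mentioned AWS and EBS, a minimal StorageClass sketch using the in-tree EBS provisioner could look like the following (the name ebs-gp2 is just an example; any StorageClass that already exists in your cluster works), and that name is what goes into storageClassName above:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

Note that EBS volumes only support ReadWriteOnce, so each replica gets its own volume from volumeClaimTemplates rather than a filesystem shared between Pods.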
In high-level terms, what you want is a StatefulSet instead of a Deployment for your ActiveMQ. You are correct that you want a "shared file system" -- in Kubernetes this is expressed as a PersistentVolume, which is made available to the pods in your StatefulSet using a volume mount.
These are the things you need to look up.
I have deployed 2 pods which need to talk to another pod (let's say Pod A).
Pod A requires the IP addresses of the services of the deployed pods, so I need to set those IP addresses in the config property file needed for Pod A.
The IP addresses are dynamic, i.e. if a pod crashes its IP changes, so I need to set them dynamically.
Currently I deploy the 2 pods, run
kubectl get ep
and then set those IP addresses in the config property file, build the Docker image, push it, and use that image for the deployment.
These are my deployment and svc files, in which the image djtijare/a2ipricing refers to the config file:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
spec:
  ports:
    - name: spring-boot-pricing
      port: 8084
      targetPort: 8084
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
        - name: spring-boot-demo-pricing
          image: djtijare/a2ipricing:v1
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - configMapRef:
          #       name: spring-boot-demo-config-map
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
          ports:
            - containerPort: 8084
      nodeSelector:
        disktype: ssd
So how do I set the IPs of those 2 pods dynamically in the config file, and then build and push the Docker image?
I think you should consider using headless Services.
Sometimes you don’t need or want load-balancing and a single service IP. In this case, you can create what are termed “headless” Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation. For example, you could implement a custom Operator to be built upon this API.
For such Services, a cluster IP is not allocated, kube-proxy does not handle these services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the service has selectors defined.
For your example, if you set the Service's spec.clusterIP to None, you can run nslookup -type=A spring-boot-demo-pricing, which will show you the IPs of the pods attached to this Service.
/ # nslookup -type=A spring-boot-demo-pricing
Server: 10.11.240.10
Address: 10.11.240.10:53
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.2.20
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.12
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.13
And here are the YAML files I've used:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  ports:
    - name: spring-boot-pricing
      port: 8084
      targetPort: 8084
  clusterIP: None
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spring-boot-demo-pricing
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
        - name: spring-boot-demo-pricing
          image: djtijare/a2ipricing:v1
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - configMapRef:
          #       name: spring-boot-demo-config-map
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
          ports:
            - containerPort: 8084
I was curious about how Kubernetes controls replication. My config YAML file specifies that I want three pods, each running an nginx server, for instance (from here: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#how-a-replicationcontroller-works):
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
How does Kubernetes know when to shut down pods and when to spin up more? For example, for high traffic loads, I'd like to spin up another pod, but I'm not sure how to configure that in the YAML file, so I was wondering whether Kubernetes has some behind-the-scenes magic that does that for you.
Kubernetes does no magic here: from your configuration alone, it simply does not know when to scale, nor does it change the number of replicas.
The concept you are looking for is called an autoscaler. It uses metrics from your cluster (which need to be enabled/installed as well) and can then decide whether Pods must be scaled up or down; in effect it changes the number of replicas in the Deployment or ReplicationController. (Please use a Deployment, not a ReplicationController; the latter does not support rolling updates of your applications.)
You can read more about the autoscaler here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
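As a quick alternative to writing the HorizontalPodAutoscaler manifest by hand, the walkthrough above also shows the imperative form; a sketch against a hypothetical deployment named nginx would be:

kubectl autoscale deployment nginx --cpu-percent=50 --min=3 --max=10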
You can use a HorizontalPodAutoscaler along with a Deployment, as below. This will autoscale your pods declaratively based on target CPU utilization.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: $DEPLOY_NAME
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: $DEPLOY_NAME
    spec:
      containers:
        - name: $DEPLOY_NAME
          image: $DEPLOY_IMAGE
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "0.2"
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 1024Mi
---
apiVersion: v1
kind: Service
metadata:
  name: $DEPLOY_NAME
spec:
  selector:
    app: $DEPLOY_NAME
  ports:
    - port: 8080
  type: ClusterIP
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: $DEPLOY_NAME
  namespace: $K8S_NAMESPACE
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: $DEPLOY_NAME
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 60
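Note that CPU-based autoscaling only works if the containers declare CPU requests (as they do above) and a metrics source such as metrics-server is installed in the cluster. Once applied, you can watch the autoscaler's decisions with:

kubectl get hpa $DEPLOY_NAME --watch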
I have a sample application running on a Kubernetes cluster: two microservices, one a MongoDB container and the other a Java Spring Boot container.
The Spring Boot container interacts with the MongoDB container through a Service and stores data in the MongoDB container.
The specs are provided below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
        -
          resources:
            limits:
              cpu: 0.5
          image: 11.168.xx.xx:5000/employee:latest
          imagePullPolicy: IfNotPresent
          name: wsemp
          ports:
            - containerPort: 8080
              name: wsemp
          command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
    - port: 8080
      nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
        - resources:
            limits:
              cpu: 1
          image: mongo
          imagePullPolicy: IfNotPresent
          name: mongodb
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  type: NodePort
  selector:
    name: mongodb
I would like to know how this communication can be accomplished in Spinnaker, since it creates its own labels and selectors.
Thanks,
This is how it needs to be done.
Each load balancer created for the application is the Service. So for the MongoDB application, after a load balancer is created with the NodePort settings, get the name of the Service, e.g. mongodb-dev. The server group for MongoDB also needs to be created.
Then, when creating the employee server group, you need to specify the command arguments one by one, each on a separate line, for that container, as mentioned here:
https://github.com/spinnaker/spinnaker/issues/2021#issuecomment-334885467
"java","-Dspring.data.mongodb.uri=mongodb://name-of-mongodb-service/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"
Now when the employee and MongoDB pods start, they are able to resolve the mapping and communicate properly.
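In manifest terms, the container command of the employee server group then ends up equivalent to something like this (using mongodb-dev as the example Service name from above):

command: ["java",
          "-Dspring.data.mongodb.uri=mongodb://mongodb-dev/microservices",
          "-Djava.security.egd=file:/dev/./urandom",
          "-jar", "/app.jar"]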