How can I set kubectl scale deployment in the deployment file? - kubernetes

After setting up my Kubernetes cluster on GCP I used the command kubectl scale deployment superappip --replicas=30 from the Google console to scale my deployments, but what should be added to my deployment file myip-service.yaml to do the same?

The following is an example of a Deployment; it creates a ReplicaSet to bring up three nginx Pods. The replicas field under spec is what controls how many Pods are run.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
You can read more here.
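To tie this back to the original question: a minimal sketch, assuming the Deployment defined in myip-service.yaml is the one named superappip, is to change the replica count in that file and re-apply it:

spec:
  replicas: 30   # same effect as: kubectl scale deployment superappip --replicas=30

kubectl apply -f myip-service.yaml

Kubernetes will then scale the Deployment up or down to match the declared count.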

Related

How to refer to a ReplicaSet in a Deployment?

I am new to Kubernetes and trying to create a deployment. So first I created a ReplicaSet in a file named rs.yml, shown below.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
and applied it using
kubectl apply -f rs.yml
Now, instead of rewriting all this in the deployment, I just want to refer to this 'rs.yml' file or service inside my deployment.yml file.
You can create a single YAML file and put both things (the workload and the Service) inside it.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
You can divide the objects using --- to make a single YAML file.
Also, another suggestion: don't use ReplicaSets directly; by default a Deployment creates the ReplicaSet in the background.
You can use kind: Deployment and check with kubectl get rs; the ReplicaSet will still be there. The Deployment creates it in the background and manages it.
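As a sketch of that suggestion (keeping the names and labels from the question's rs.yml), the same workload written as a Deployment would look like this; the Deployment then creates and manages the ReplicaSet for you:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx

After kubectl apply -f deployment.yml, kubectl get rs shows the ReplicaSet that the Deployment created in the background.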

Collect metrics from different pods in Prometheus

I want to collect metrics from a deployment (with multiple pods) in Kubernetes, and one of my metrics is the number of calls that my deployment received. My question is about Prometheus: how can I tell Prometheus to call all the pods that are part of the deployment and collect metrics from them? And what is the best practice to achieve this goal?
I would highly recommend using prometheus-operator to do all the heavy lifting of configuring Prometheus monitoring for your applications.
For example, having the Deployment and Service like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
You may configure a ServiceMonitor object, which will use the Service as a service-discovery endpoint to find all the pods of the Deployment. This assumes that your application exposes metrics on the HTTP path /metrics.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
This will make Prometheus scrape metrics for your application.
You may read more about ServiceMonitors here: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md
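Note that the Prometheus custom resource itself also has to be told which ServiceMonitors to pick up. A minimal sketch, assuming a Prometheus instance named prometheus and a prometheus service account set up as in the getting-started guide, could look like this; the serviceMonitorSelector matches the team: frontend label used above:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus   # assumed to exist, with RBAC to discover services, endpoints and pods
  serviceMonitorSelector:
    matchLabels:
      team: frontend               # picks up the example-app ServiceMonitor above
  resources:
    requests:
      memory: 400Mi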

Error deploying a pod in a Kubernetes cluster

I'm trying to deploy this YAML in my Kubernetes cluster onto one of my nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
But when I try to deploy it with the command below, I get this error message:
pi#k8s-master-rasp4:~ $ kubectl apply -f despliegue-nginx.yaml -l kubernetes.io/hostname=k8s-worker-1
error: no objects passed to apply
Does anyone know where the problem could be?
Thanks
You cannot use a label selector (-l) with kubectl apply to pick a node; it only filters the objects in the file by their labels, and since your Deployment doesn't carry that label, no objects are passed to apply.
Use nodeSelector to assign pods to specific nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-worker-1 # <-- updated here!
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
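With the nodeSelector in place, apply the file without any label selector (a sketch using the filename from the question):

kubectl apply -f despliegue-nginx.yaml

The scheduler then places all three replicas on the node whose kubernetes.io/hostname label is k8s-worker-1.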

Reuse Load Balancer for K8s Services

I have just set up my first K8s cluster in Oracle Cloud and have a website running in it.
Is there a way to use one LB instead of creating one for each K8s service?
Take a look at this code from the Oracle documentation.
Here we create an LB only for this service. I would like to create one LB for all my K8s Services so I only have to set up TLS in one place. So can I point to an existing LB in the Deployment file, or do I just create the service and then point the LB at the service?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
This isn't possible: OKE will always create a new load balancer for each new Service of type LoadBalancer.
Regards

What is the URL to this Kubernetes service/pod/docker thing

I need to hard-code the address of a CouchDB instance for another server in my Kubernetes cluster. I'm not super familiar with Kubernetes, but I know that the IP will change each time the cluster or the pod is rebuilt, so I can't use that.
What is the URL to this Kubernetes service / what should I hard-code into my server Docker image so it will always find the CouchDB server in the system? I think it will be in this format:
<service-name>.<namespace>.svc.cluster.local:<service-port>
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: orderer
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
If "wget 172.17.0.2:5984" works what should "172.17.0.2" be replaced with
The following is not correct
wget kino-couch-0.couch-service.default.svc.cluster.local:5984
wget kino-couch-0.kino-couch.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.svc.cluster.local:5984
For a StatefulSet you need to create a headless Service that is responsible for the network identity of the Pods, providing stable DNS entries. Notice clusterIP: None in the example below.
apiVersion: v1
kind: Service
metadata:
  name: couch-service
  labels:
    app: kino-couch
spec:
  ports:
  - port: 5984
  clusterIP: None
  selector:
    app: kino-couch
The StatefulSet needs to refer to the above Service in serviceName, so the StatefulSet YAML would look like below:
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: couch-service
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
Then, as a client, you can use couch-service.<namespace>.svc.cluster.local:5984 to connect to any of the CouchDB pods.
If you want to connect to a specific pod, use kino-couch-0.couch-service.<namespace>.svc.cluster.local:5984. This is typically needed when the CouchDB pods connect to each other to form a cluster.
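Putting this together for the question's example, and assuming everything runs in the default namespace, the hard-coded address would be one of:

wget couch-service.default.svc.cluster.local:5984                # any CouchDB pod behind the headless service
wget kino-couch-0.couch-service.default.svc.cluster.local:5984   # specifically the pod kino-couch-0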