How to refer to a ReplicaSet in a Deployment? - kubernetes

I am new to Kubernetes and trying to create a Deployment. So first I created a ReplicaSet in a file named rs.yml, shown below.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
and applied it using
kubectl apply -f rs.yml
Now, instead of rewriting all of this in the Deployment, I just want to refer to this 'rs.yml' file or service inside my deployment.yml file.

You can create a single YAML file and add both things, the workload and the Service, inside it.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
You can divide the documents using --- to make a single YAML file.
Also, another suggestion: don't use ReplicaSets directly; by default a Deployment creates the ReplicaSet in the background.
You can use kind: Deployment and check with kubectl get rs; the ReplicaSet will still be there. The Deployment creates it in the background and manages it.
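For example, the ReplicaSet above rewritten as a Deployment would look like this (a minimal sketch; essentially only kind changes, since a Deployment's spec wraps the same selector and Pod template):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
After kubectl apply -f deployment.yml (the file name is up to you), kubectl get rs will show the ReplicaSet that the Deployment created.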

Related

Load balancer not reachable after creating it as a service

I have deployed a simple app (NGINX) and a LoadBalancer service in Kubernetes.
I can see that the pods are running, as well as the service, but calling the LoadBalancer external IP gives a server error: site can't be reached. Any suggestions, please?
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
P.S. - Attached the outcome from the terminal.
If you are using Minikube to access the service, then you might need to run one extra command.
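For example (a sketch; it assumes the Service keeps the name nginx-service and that Minikube is on your PATH):
minikube service nginx-service --url
This prints a locally reachable URL for the Service, because Minikube does not provision a real external load balancer by default.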
But if this is on a cloud provider, then you have an error in your service file. Please ensure that you use two spaces per indentation level in the YAML file; your indentation is off because you have only used one space. You also made a mistake in the last line of the service.yaml file.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

How can I set kubectl scale deployment in the deployment file?

After setting up my Kubernetes cluster on GCP, I used the command kubectl scale deployment superappip --replicas=30 from the Google console to scale my deployments, but what should be added to my deployment file myip-service.yaml to do the same?
The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
You can read more here.
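Equivalently, the imperative kubectl scale command just updates spec.replicas on the live object. To get the same effect declaratively, set replicas: 30 in the manifest and re-apply (a sketch; the file name deployment.yaml is an assumption):
kubectl apply -f deployment.yaml
kubectl get deployments
The second command lets you verify the desired and available replica counts.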

What is the URL to this Kubernetes service/pod/docker thing

I need to hard-code the address of a CouchDB instance into another server in my Kubernetes cluster. I'm not super familiar with Kubernetes, but I know that the IP will change each time the cluster or the pod is rebuilt, so I can't use that.
What is the URL to this Kubernetes service, i.e. what should I hard-code into my server's Docker image so it will always find the CouchDB server in the system? I think it will be in this format:
<service-name>.<namespace>.svc.cluster.local:<service-port>
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: orderer
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
If "wget 172.17.0.2:5984" works what should "172.17.0.2" be replaced with
The following are not correct:
wget kino-couch-0.couch-service.default.svc.cluster.local:5984
wget kino-couch-0.kino-couch.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.svc.cluster.local:5984
For a StatefulSet you need to create a headless Service, which is responsible for the network identity of the Pods, providing stable DNS entries. Notice clusterIP: None in the example below.
apiVersion: v1
kind: Service
metadata:
  name: couch-service
  labels:
    app: kino-couch
spec:
  ports:
  - port: 5984
  clusterIP: None
  selector:
    app: kino-couch
The StatefulSet needs to refer to the above Service in serviceName. So the StatefulSet YAML would look like the one below:
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: couch-service
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
Then as a client you can access it using couch-service.<namespace>.svc.cluster.local:5984 to connect to any of the CouchDB pods.
If you want to connect to a specific pod, then use kino-couch-0.couch-service.<namespace>.svc.cluster.local:5984. This is typically needed for connecting the CouchDB pods to each other to form a cluster.
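To verify the DNS names from inside the cluster, one option is a throwaway busybox pod (a sketch; busybox:1.28 is chosen because nslookup is known to behave in that image):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup couch-service.default.svc.cluster.local
A successful lookup should return the IPs of the individual CouchDB pods, since the Service is headless.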

Kubernetes connect service and deployment

I am wondering what to specify in a separate deployment in order to have it access a DB deployment/service. Here is the DB deployment/service:
apiVersion: v1
kind: Service
metadata:
  name: oracle-db
  labels:
    app: oracle-db
spec:
  ports:
  - name: oracle-db
    port: 1521
    protocol: TCP
    targetPort: 1521
  selector:
    app: oracle-db
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oracle-db-depl
  labels:
    app: oracle-db
spec:
  selector:
    matchLabels:
      app: oracle-db
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracle-db
    spec:
      containers:
      - name: oracle-db
        image: oracledb:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1521
        env:
          ...
How exactly do I specify the connection in the separate deployment? Do I specify the oracle-db service name somewhere? So far I specify a containerPort in the container.
If the other app's Deployment is in the same namespace, you can refer to the Oracle Service by the name oracle-db. Here is an example of a WordPress application using Oracle.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: oracle-db
        ports:
        - containerPort: 80
          name: wordpress
As you can see, the Oracle Service is referred to by the name oracle-db in an environment variable.
If the Service is in a different namespace than the app's Deployment, then you can refer to it as oracle-db.<namespace>.svc.cluster.local.
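For example, if the Service lived in a namespace called db (a hypothetical name), the env entry in the WordPress example above would become:
env:
- name: WORDPRESS_DB_HOST
  value: oracle-db.db.svc.cluster.local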
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Services in Kubernetes are an "abstract way to expose an application running on a set of Pods as a network service." (k8s documentation)
You can access your Pod by the IP and port that Kubernetes has given to it, but that's not a good practice, as Pods can die and another one will be created (if controlled by a Deployment/ReplicaSet). When the new one is created, a new IP will be used, and everything in your app will start to fail.
To solve this you can expose your Pod using a Service (as you already have done), and use service-name:service-port assigned to the Service to access your Pod. In this case, even if the Pod dies and a new one is created, Kubernetes will keep forwarding the traffic to the right Pod.
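For example, with the Service above, a quick connectivity check from any pod in the same namespace could look like this (a sketch; it assumes nc is available in that pod's image):
nc -zv oracle-db 1521
If the Service selector matches a running Pod, this should report the port as open regardless of which Pod is currently behind the Service.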

How to create multiple instances of Mediawiki in a Kubernetes Cluster

I'm about to deploy multiple MediaWiki instances on my Kubernetes cluster.
In my case, the YAML deployment file for the DB (MySQL) works as it is supposed to, and the deployment file for MediaWiki deploys as many pods as expected, but I can't access them from outside of the cluster even if I create a Service for this case.
If I try to create one single MediaWiki pod and a service to access it from outside of the cluster, it works as it should. If I try to create a deployment file for MediaWiki equal to the one for MySQL, it does create the pods and the required service, but it's not accessible from the external IP assigned to it.
My deployment file for MediaWiki:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki-service
  labels:
    name: mediawiki-service
    app: mediawiki
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: mediawiki-pod
    app: mediawiki
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mediawiki
spec:
  replicas: 6
  selector:
    matchLabels:
      app: mediawiki
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mediawiki
    spec:
      containers:
      - image: mediawiki
        name: mediawiki
        ports:
        - containerPort: 80
          name: mediawiki
This is the pod-definition file:
apiVersion: v1
kind: Pod
metadata:
  name: mediawiki-pod
  labels:
    name: mediawiki-pod
    app: mediawiki
spec:
  containers:
  - name: mediawiki
    image: mediawiki
    ports:
    - containerPort: 80
This is the service-definition file:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki-service
  labels:
    name: mediawiki-service
    app: mediawiki
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: mediawiki-pod
The actual result should be that I can deploy multiple instances of MediaWiki on my cluster and access them from outside via the external IP.
If you look at kubectl describe service mediawiki-service in both scenarios, I expect you will see that in the single-pod case there is an Endpoints: list that includes a single IP address (the pod's, but that's an implementation detail), but in the deployment case it says <none>.
Your Service only matches pods that have both name and app labels:
apiVersion: v1
kind: Service
spec:
  selector:
    name: mediawiki-pod
    app: mediawiki
But the pods deployed by your deployment only have app labels:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: mediawiki
So at that specific point (the labels inside the template for the deployment; also adding them at the top level doesn't hurt, but this embedded point is what's important) you need to add the second label name: mediawiki-pod.
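With that fix applied, the Pod template section would look like this (only the labels block changes):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      labels:
        name: mediawiki-pod
        app: mediawiki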
If you want to deploy multiple instances of some piece of software on a Kubernetes cluster, it's a good idea to check whether there is a Helm chart for it.
In your case the answer is positive: there is a stable Helm chart for MediaWiki.
Creating multiple instances is as easy as creating multiple releases, for example:
helm install --name wiki1 stable/mediawiki
helm install --name wiki2 stable/mediawiki
helm install --name wiki3 stable/mediawiki
To use Helm you have to install it on your local machine and on the k8s cluster; following the quick start guide will be enough.
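Afterwards, each release gets its own set of Kubernetes objects; you can list the releases and find their services, for example:
helm list
kubectl get svc
Each wiki release should show up with its own service (typically of type LoadBalancer) and external IP.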