Kubernetes connect service and deployment

I am wondering what to specify in a separate deployment in order to have it access a DB deployment/service. Here is the DB deployment/service:
apiVersion: v1
kind: Service
metadata:
  name: oracle-db
  labels:
    app: oracle-db
spec:
  ports:
  - name: oracle-db
    port: 1521
    protocol: TCP
    targetPort: 1521
  selector:
    app: oracle-db
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oracle-db-depl
  labels:
    app: oracle-db
spec:
  selector:
    matchLabels:
      app: oracle-db
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracle-db
    spec:
      containers:
      - name: oracle-db
        image: oracledb:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1521
        env:
        ...
How exactly do I specify the connection in the separate deployment? Do I specify the oracle-db service name somewhere? So far I specify a containerPort in the container.

If the other app deployment is in the same namespace, you can refer to the Oracle service simply as oracle-db. Here is an example of a WordPress application using the oracle-db service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: oracle-db
        ports:
        - containerPort: 80
          name: wordpress
As you can see, the Oracle service is referred to as oracle-db via an environment variable.
If the Service is in a different namespace than the app deployment, you can refer to it as oracle-db.<namespace-name>.svc.cluster.local.
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
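As an illustrative sketch (not from the question), if the oracle-db Service lived in a hypothetical namespace called db, the env block of the WordPress example above would use the fully qualified name instead:
env:
- name: WORDPRESS_DB_HOST
  # <service>.<namespace>.svc.cluster.local form; "db" is a hypothetical namespace
  value: oracle-db.db.svc.cluster.local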

Services in Kubernetes are an "abstract way to expose an application running on a set of Pods as a network service." (k8s documentation)
You can access your Pod by the IP and port that Kubernetes has assigned to it, but that's not good practice: Pods can die and be replaced (if controlled by a Deployment/ReplicaSet), and the replacement gets a new IP, so everything in your app will start to fail.
To solve this, you can expose your Pod using a Service (as you already have done) and use the service-name:service-port assigned to the Service to reach your Pod. Even if the Pod dies and a new one is created, Kubernetes keeps forwarding the traffic to the right Pod.
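A minimal sketch of that pattern, assuming a hypothetical client Deployment (my-app, its image, and the DB_ADDRESS variable are placeholders, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical client app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest    # placeholder image
        env:
        # Stable address provided by the Service, independent of Pod IPs
        - name: DB_ADDRESS
          value: oracle-db:1521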

Related

How does the gateway connect to other services in Kubernetes?

I am attaching an image of my application flow. The Gateway and the other services are created using NestJS. The request for any API comes through the Gateway.
The Gateway Pod and the API Pods communicate using the TCP protocol.
After deployment, the Gateway is not able to discover any API Pods.
I am also attaching the YAML for both the Gateway and the Pods.
Please let me know what mistake I am making in the YAML files.
**APPLICATION DIAGRAM**
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
  - name: gateway-svc-container
    protocol: TCP
    port: 80
    targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000
The gateway should be able to reach the other services by their DNS names, for example pod1-srv.roushan.svc.cluster.local. If this does not work, you may need to look at the Kubernetes DNS setup.
I have not used AKS; it may use a different cluster domain than svc.cluster.local.
YAML Points
Ideally, you should keep distinct selectors across the deployments; you are currently using the same selector for both the Gateway and the application deployment.
A Service forwards traffic to Pods based on selectors and labels, so a request to one service may end up being routed to the other deployment's Pod (see the sketch below).
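For example, a minimal sketch giving pod1 its own label so that pod1-srv only matches pod1 Pods (app: pod1-api is a hypothetical label, not taken from the original YAML; the gateway objects would keep a separate label such as app: gateway):
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod1-api          # hypothetical label, distinct from the gateway's
  template:
    metadata:
      labels:
        app: pod1-api
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: pod1-api            # now only matches pod1 Pods
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000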
Networking
Your gateway Pods can reach an internal service by its service name alone, e.g. pod1-srv, if they are in the same namespace.
If the gateway and the application are in different namespaces, they have to call each other using http://<servicename>.<namespace>.svc.cluster.local.
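As a small illustration, the gateway Deployment could pass the in-cluster address of pod1 through an environment variable (POD1_API_URL and the app: gateway label are hypothetical, not from the original YAML):
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway            # distinct label, per the selector note above
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        env:
        # Same-namespace form; use http://pod1-srv.roushan.svc.cluster.local
        # from a different namespace.
        - name: POD1_API_URL  # hypothetical variable name
          value: http://pod1-srv
        ports:
        - containerPort: 1000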

Reuse Load Balancer for K8s Services

I have just set up my first K8s cluster in Oracle Cloud and have a website running in it.
Is there a way to use one LB instead of creating one for each K8s service?
Take a look at this code from the Oracle documentation:
Here a load balancer is created only for this service. I would like to use one LB for all my K8s Services so that I only have to set up TLS in one place. Can I point to an existing LB in the Deployment file, or do I just create the Service and then point the LB at it?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
This isn't possible: OKE will always create a new load balancer for each new exposed service.

What is the URL to this Kubernetes service/pod/docker thing

I need to hard-code the address of a CouchDB instance into another server in my Kubernetes cluster. I'm not super familiar with Kubernetes, but I know the IP will change each time the cluster or the Pod is rebuilt, so I can't use that.
What is the URL to this Kubernetes service, i.e. what should I hard-code into my server Docker image so that it will always find the CouchDB server in the system? I think it will be in this format:
<service-name>.<namespace>.svc.cluster.local:<service-port>
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kino-couch
labels:
app: kino-couch
spec:
serviceName: orderer
# Single instance of the Orderer Pod is needed
replicas: 1
selector:
matchLabels:
app: kino-couch
template:
metadata:
labels:
app: kino-couch
spec:
containers:
- name: kino-couch
ports:
- containerPort: 5984
# Image used
image: dpacchain/development:dpaccouch
If "wget 172.17.0.2:5984" works, what should "172.17.0.2" be replaced with? The following do not work:
wget kino-couch-0.couch-service.default.svc.cluster.local:5984
wget kino-couch-0.kino-couch.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.svc.cluster.local:5984
For a StatefulSet you need to create a headless Service that is responsible for the network identity of the Pods, providing stable DNS entries. Notice clusterIP: None in the example below.
apiVersion: v1
kind: Service
metadata:
  name: couch-service
  labels:
    app: kino-couch
spec:
  ports:
  - port: 5984
  clusterIP: None
  selector:
    app: kino-couch
The StatefulSet needs to refer to the above Service in serviceName, so the StatefulSet YAML would look like below:
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: couch-service
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
Then, as a client, you can use couch-service.<namespace>.svc.cluster.local:5984 to connect to any of the CouchDB Pods.
If you want to connect to a specific Pod, use kino-couch-0.couch-service.<namespace>.svc.cluster.local:5984. This is typically needed for the CouchDB Pods to connect to each other to form a cluster.
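For instance, assuming the client runs in the default namespace, it could consume the headless Service address through an environment variable (the client Deployment, its image, and COUCHDB_URL are placeholders, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kino-server                   # hypothetical client Deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kino-server
  template:
    metadata:
      labels:
        app: kino-server
    spec:
      containers:
      - name: kino-server
        image: kino-server:latest     # placeholder image
        env:
        # Reaches any CouchDB Pod behind the headless Service
        - name: COUCHDB_URL
          value: http://couch-service.default.svc.cluster.local:5984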

Kubernetes Communication between services

For learning purposes, I have two services in a cluster on Google Cloud:
An API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
        name: myapp-api
        ports:
        - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        ports:
        - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
  - port: 80
    targetPort: 3000
The frontend service is basically a Node.js app that only makes a REST call to the API service, like so: axios.get('http://myapp-api').
The issue is that the call fails and it's unable to find the requested endpoint. Is there any additional config I'm currently missing for the API service to be discovered?
P.S. Both services are running and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster. Something to watch out for with frontends is where the HTTP call takes place. It could be on the server, in Node, but given you're seeing this issue I'd suggest it's in the browser (if you can see the call in the browser console, then it is). If the frontend's call is made in the browser, it doesn't have access to the Kubernetes DNS, and the cluster-internal service name won't resolve. To handle this you could make the backend service a LoadBalancer and pass the external name into the frontend, as sketched below.
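A rough sketch of that suggestion, assuming the external address is injected into the frontend at deploy time (the EXTERNAL_API_URL variable and the IP address are placeholders, not from the question):
# Expose the API externally so the browser can reach it
apiVersion: v1
kind: Service
metadata:
  name: myapp-api
spec:
  type: LoadBalancer
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
# Pass the resulting external address to the frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappfront
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        env:
        - name: EXTERNAL_API_URL       # hypothetical variable the frontend reads
          value: http://203.0.113.10   # placeholder for the LoadBalancer's external IP
        ports:
        - containerPort: 3000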

How to expose an "election-based master and secondaries" service outside Kubernetes cluster?

I've been trying to implement a service inside Kubernetes where each Pod needs to be accessible from outside the Cluster.
The topology of my service is simple: 3 members, one of them acting as master at any time (election based); writes go to the primary; reads go to the secondaries. This is a MongoDB replica set, by the way.
They work with no issues inside the Kubernetes cluster, but from outside the only thing I have is a NodePort-type Service that load balances incoming connections across them. I need to access each one of them separately, depending on what I want to do from my client (write or read).
What kind of Kubernetes resource should I use to give individual access to each one of the members of my service?
In order to access every Pod from outside, you can create a separate Service for each Pod and use the NodePort type (see the sketch below).
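As a sketch of the "one Service per Pod" idea, assuming the members run as a StatefulSet (so Pods are named e.g. mongo-0, mongo-1, ...), a NodePort Service can select a single Pod via the statefulset.kubernetes.io/pod-name label that the StatefulSet controller adds; the service name, pod name, and nodePort value below are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: mongo-0-external          # hypothetical per-pod Service
spec:
  type: NodePort
  selector:
    # Label added automatically by the StatefulSet controller;
    # "mongo-0" is a hypothetical Pod name.
    statefulset.kubernetes.io/pod-name: mongo-0
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30001               # must be in the cluster's NodePort range (default 30000-32767)
    protocol: TCP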
Because a Service uses selectors to find its available backends, you can create just one Service for the master:
apiVersion: v1
kind: Service
metadata:
  name: my-master
  labels:
    run: my-master
spec:
  type: NodePort
  ports:
  - port: #your-external-port
    targetPort: #your-port-exposed-in-pod
    protocol: TCP
  selector:
    run: my-master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-master
spec:
  selector:
    matchLabels:
      run: my-master
  replicas: 1
  template:
    metadata:
      labels:
        run: my-master
    spec:
      containers:
      - name: mongomaster
        image: yourcoolimage:latest
        ports:
        - containerPort: #your-port-exposed-in-pod
Also, you can use one Service for all your read-only replicas and this service will balance requests between all of them.
apiVersion: v1
kind: Service
metadata:
  name: my-replicas
  labels:
    run: my-replicas
spec:
  type: LoadBalancer
  ports:
  - port: #your-external-port
    targetPort: #your-port-exposed-in-pod
    protocol: TCP
  selector:
    run: my-replicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-replicas
spec:
  selector:
    matchLabels:
      run: my-replicas
  replicas: 2
  template:
    metadata:
      labels:
        run: my-replicas
    spec:
      containers:
      - name: mongoreplica
        image: yourcoolimage:latest
        ports:
        - containerPort: #your-port-exposed-in-pod
I also suggest you do not expose the Pods outside of your network, for security reasons. It would be better to create strict firewall rules to restrict any unexpected connections.
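If the restriction is to be enforced inside the cluster rather than at the cloud firewall, one option (not mentioned above, shown only as a sketch, and requiring a network plugin that enforces NetworkPolicy) is a NetworkPolicy limiting which Pods may reach the MongoDB members; the label values here are hypothetical:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-mongo-access
spec:
  # Applies to all MongoDB member Pods (hypothetical label)
  podSelector:
    matchLabels:
      role: mongo
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Only Pods labeled as approved clients may connect
    - podSelector:
        matchLabels:
          role: mongo-client
    ports:
    - protocol: TCP
      port: 27017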