Setup
I have a federated set of Kubernetes clusters, each with its own master and workers.
In the federation, each cluster uses a different domain to access its image registry (e.g. myregistry-1, myregistry-2).
In other words, each cluster has its own registry.
Question
I don't want to change the registry domain for each cluster. Basically, I would like to create a common endpoint that maps to each cluster's inner registry, which is internal to that cluster.
Example: the deployment below is applied on all clusters.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: harbor.default:5000/nginx:1.14.2
          ports:
            - containerPort: 80
I tried to implement "Services without selectors": I created an Endpoints object and updated the deployment.yaml, but it didn't work.
harbor.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-service
spec:
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
harbor-endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: harbor-service
subsets:
  - addresses:
      - ip: <INTERNAL_IP_OF_REGISTRY>
    ports:
      - port: 5000
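For what it's worth, a minimal sketch of how the names would have to line up: with a selector-less Service, the Endpoints object must carry exactly the same name and namespace as the Service, and cluster DNS resolves the image host harbor.default only if the Service is named harbor in the default namespace:

apiVersion: v1
kind: Service
metadata:
  name: harbor        # must match the "harbor" part of harbor.default:5000
  namespace: default  # must match the ".default" part
spec:
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
---
apiVersion: v1
kind: Endpoints
metadata:
  name: harbor        # must be identical to the Service name
  namespace: default
subsets:
  - addresses:
      - ip: <INTERNAL_IP_OF_REGISTRY>
    ports:
      - port: 5000

Even with matching names, note that image pulls are performed by the container runtime on the node itself, which does not necessarily use the cluster's internal DNS, so a Service-based registry alias may still not be resolvable at pull time.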
Related
I am attaching an image of my application flow. The Gateway and the other services are built with NestJS, and every API request comes in through the Gateway.
The Gateway pod and the API pods communicate over TCP.
After deployment, the Gateway is not able to discover any API pods.
I am also attaching the YAML for both the Gateway and the Pods.
Please let me know what mistake I am making in the YAML files.
[Application diagram]
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
        - name: gateway-container
          image: nest-api-gateway:v8
          ports:
            - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
    - name: gateway-svc-container
      protocol: TCP
      port: 80
      targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
        - name: pod1-container
          image: nest-api-pod1:v2
          ports:
            - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
    - name: pod1-svc-container
      protocol: TCP
      port: 80
      targetPort: 4000
The gateway should be able to reach the services by their DNS names, for example pod1-srv.roushan.svc.cluster.local. If that does not work, you may need to look at the cluster's DNS setup.
I have not used AKS; it may use a different cluster domain than cluster.local.
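A quick way to verify DNS resolution from inside the cluster is a throwaway busybox pod (busybox:1.28 is the image commonly used for this in the Kubernetes DNS-debugging docs):

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never \
  -- nslookup pod1-srv.roushan.svc.cluster.local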
YAML Points
Ideally, you should use different selectors for the two deployments.
You are currently using the same selector (app: roushan-app) for both the Gateway deployment and the application deployment.
A Service forwards traffic to a deployment based on selectors and labels, so this can send requests meant for one service to the other's pods; see the sketch below.
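A minimal sketch of distinct selectors (the label values gateway-app and pod1-app are illustrative, not taken from the original manifests; the Deployments' selector.matchLabels and pod template labels would need the same change):

apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  type: LoadBalancer
  selector:
    app: gateway-app   # matches only gateway pods
  ports:
    - name: gateway-svc-container
      protocol: TCP
      port: 80
      targetPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: pod1-app      # matches only pod1 pods
  ports:
    - name: pod1-svc-container
      protocol: TCP
      port: 80
      targetPort: 4000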
Networking
Your gateway Pods can reach an internal service by its service name alone, e.g. pod1-srv, if both are in the same namespace.
If the gateway and the application are in different namespaces, they have to address each other as http://<servicename>.<namespace>.svc.cluster.local.
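For example, from inside a gateway pod (assuming both workloads live in the roushan namespace):

# Same namespace: the bare service name resolves.
curl http://pod1-srv/

# Across namespaces: use the fully qualified name.
curl http://pod1-srv.roushan.svc.cluster.local/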
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
        - name: identityold
          image: <image name from docker hub>
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort # use LoadBalancer as type here
  ports:
    - port: 80
      targetPort: 8081
      nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file, and I can't access the service from the browser.
Exposing a service in a minikube cluster is a little different than in a normal Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command to expose it properly.
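For example (assuming the manifests above were applied as shown):

# Open the service's URL in the default browser:
minikube service identityold-svc

# Or just print the reachable URL:
minikube service identityold-svc --url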
I need to route traffic (real-time audio/video) directly to a specific container in specific pods. The number of pods should scale horizontally via the replica count. My current solution is a StatefulSet with as many NodePort-type Services as there are pods.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: foobar
  name: foobar-app
spec:
  serviceName: foobar
  replicas: 2
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar
    spec:
      containers:
        - image: foobar:latest
          name: foobar
---
apiVersion: v1
kind: Service
metadata:
  name: foobar-service-0
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: foobar-app-0
  ports:
    - protocol: TCP
      nodePort: 30036
      port: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: foobar-service-1
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: foobar-app-1
  ports:
    - protocol: TCP
      nodePort: 30037
      port: 3000
Is this considered an acceptable solution or is there a better one for creating services for each pod?
As explained in the comments above, I found the solution provided here: a NodePort Service targeting the StatefulSet with externalTrafficPolicy: Local. This disables the cluster-wide load balancing between different nodes. The prerequisite is that only one pod of the StatefulSet may run per node, which can be achieved by setting pod anti-affinity.
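A minimal sketch of that combination (the anti-affinity block is one way to enforce the one-pod-per-node prerequisite; the port numbers reuse the example above):

apiVersion: v1
kind: Service
metadata:
  name: foobar-service
spec:
  type: NodePort
  externalTrafficPolicy: Local    # only deliver to pods on the receiving node
  selector:
    app: foobar                   # one Service now covers every pod
  ports:
    - protocol: TCP
      nodePort: 30036
      port: 3000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: foobar
  name: foobar-app
spec:
  serviceName: foobar
  replicas: 2
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar
    spec:
      affinity:
        podAntiAffinity:
          # Never schedule two foobar pods on the same node.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: foobar
              topologyKey: kubernetes.io/hostname
      containers:
        - image: foobar:latest
          name: foobar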
Given the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30001
      name: server
  selector:
    app: nginx
How would one configure the Service and Deployment here (or, if needed, an Ingress object) so that when a Pod takes more than n seconds to return an HTTP response, the Service tries the request on another nginx-deployment Pod?
Kubernetes Services are implemented with simple iptables rules: traffic is only NAT'ed to a destination Pod. There is no layer there where you could adjust timeouts or set a quality of service based on them.
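The Service layer itself cannot retry, but an Ingress controller can. A hedged sketch, assuming the ingress-nginx controller (annotation names per its documentation; the 5-second timeout and retry count are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # Give up on a backend after 5 s and retry the next one on timeout.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "5"
    nginx.ingress.kubernetes.io/proxy-next-upstream: "timeout"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "3"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80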
I've been trying to implement a service inside Kubernetes where each Pod needs to be accessible from outside the Cluster.
The topology of my service is simple: 3 members, one of them acting as master at any time (election based); writes go to the primary, reads go to the secondaries. This is a MongoDB replica set, by the way.
They work with no issues inside the Kubernetes cluster, but from outside all I have is a NodePort-type Service that load balances incoming connections to one of them. I need to access each one of them separately, depending on what I want to do from my client (write or read).
What kind of Kubernetes resource should I use to give individual access to each one of the members of my service?
In order to access every Pod from outside, you can create a separate Service for each Pod and use the NodePort type.
Because a Service uses selectors to find its available backends, you can create just one Service for the master:
apiVersion: v1
kind: Service
metadata:
  name: my-master
  labels:
    run: my-master
spec:
  type: NodePort
  ports:
    - port: #your-external-port
      targetPort: #your-port-exposed-in-pod
      protocol: TCP
  selector:
    run: my-master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-master
spec:
  selector:
    matchLabels:
      run: my-master
  replicas: 1
  template:
    metadata:
      labels:
        run: my-master
    spec:
      containers:
        - name: mongomaster
          image: yourcoolimage:latest
          ports:
            - containerPort: #your-port-exposed-in-pod
Also, you can use one Service for all your read-only replicas and this service will balance requests between all of them.
apiVersion: v1
kind: Service
metadata:
  name: my-replicas
  labels:
    run: my-replicas
spec:
  type: LoadBalancer
  ports:
    - port: #your-external-port
      targetPort: #your-port-exposed-in-pod
      protocol: TCP
  selector:
    run: my-replicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-replicas
spec:
  selector:
    matchLabels:
      run: my-replicas
  replicas: 2
  template:
    metadata:
      labels:
        run: my-replicas
    spec:
      containers:
        - name: mongoreplica
          image: yourcoolimage:latest
          ports:
            - containerPort: #your-port-exposed-in-pod
I also suggest that you do not expose Pods outside your network, for security reasons. It would be better to create strict firewall rules that restrict any unexpected connections.
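Within the cluster, the Kubernetes-native way to express such rules is a NetworkPolicy. A minimal sketch, assuming a CNI plugin that enforces NetworkPolicy; the trusted CIDR is illustrative and 27017 is the default MongoDB port:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-mongo-access
spec:
  podSelector:
    matchLabels:
      run: my-master
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/8   # hypothetical trusted client range
      ports:
        - protocol: TCP
          port: 27017          # default MongoDB port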