As part of a migration from a legacy service discovery framework to kube/CoreDNS, I would like to create a Service that auto-publishes Endpoints from a selector but also carries manually created endpoints.
Essentially I think I would like the following:
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
  annotations:
    transition: legacy
subsets:
  - addresses:
      - ip: 1.2.3.4
    ports:
      - port: 9376
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-discovery-backend
  labels:
    app: MyApp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
As the docs imply, though, setting things up explicitly like this results in only one Endpoints object being associated with the Service, and whether that ends up being the controller's auto-created one or my manually specified one seems to vary.
Is the most reasonable way to use CoreDNS for service discovery, covering both services that can self-publish and external services, to control the endpoints manually until we are 100% migrated, and only then switch over to a selector-based approach?
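(For context, that interim setup of manually controlling endpoints until the migration is done would look roughly like the sketch below: a Service without a selector, whose Endpoints object the endpoints controller will not touch. The address is the one from the question; pod IPs of already-migrated backends would also have to be maintained by hand until the switch-over.)

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 80
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service        # must match the Service name
subsets:
  - addresses:
      - ip: 1.2.3.4       # legacy backend, maintained manually
    ports:
      - port: 9376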
It can't be done with a single Kubernetes Service, but I see one workaround here.
I believe you can use an additional Service backed by an Nginx Pod (or Deployment), manually configured via a ConfigMap as a reverse proxy for two backends: the Kubernetes Service name (with a selector, for the new backend) and a manual endpoint (FQDN or IP, for the legacy backend), as in the sketch below.
ingress -> Service1 -> Nginx -> Service2 -> Deployments/Pods
                         |
                         +----> Endpoint -> Legacy Servers
It's up to you how to balance traffic between them.
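As a rough illustration of that workaround, here is a minimal sketch of the proxy configuration, assuming the selector-based Service for the new pods is the my-service from the question (in the default namespace), the legacy backend is 1.2.3.4:9376, and the ConfigMap and upstream names are made up for the example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: transition-proxy-config
data:
  default.conf: |
    # New backend: resolved through the selector-based Service via CoreDNS
    upstream new_backend {
      server my-service.default.svc.cluster.local:80;
    }
    # Legacy backend: fixed address outside the cluster
    upstream legacy_backend {
      server 1.2.3.4:9376;
    }
    server {
      listen 80;
      location / {
        # Route everything to the new backend for now; switch or split to
        # legacy_backend as the migration requires.
        proxy_pass http://new_backend;
      }
    }

Mount this ConfigMap into the Nginx Deployment (replacing /etc/nginx/conf.d/default.conf) and point Service1 at that Deployment; how you then split or shift traffic between new_backend and legacy_backend during the migration is the part that is up to you.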
Related
I have a single node. On this node, I have 2 applications, each with multiple instances (3 pods each).
App A wants to contact App B.
My issue is that App A always reaches the same pod of App B.
I would like the pods to alternate (round-robin load balancing, for example).
For example:
first request: AppPod3 responds
second request: AppPod1 responds
third request: AppPod2 responds
How can I do that?
Thank you so much for your help ...
You can see my config for App B below.
I have tried setting timeoutSeconds for sessionAffinity, but it's not working ...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: AppB
spec:
  selector:
    matchLabels:
      app: AppB
  replicas: 3
  template:
    metadata:
      labels:
        app: AppB
    spec:
      containers:
        - name: AppB-container
          image: image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: AppB
  labels:
    app: svc-AppB
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: AppB
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 1
The Service should use the backend pods in round-robin fashion by default. You don't need the sessionAffinity settings if the pods are stateless; with sessionAffinity: ClientIP you will keep being sent to the same pod based on the client's source IP.
Maybe you can add logging to the pods and observe when they are accessed. Subsequent calls to the Service should be redirected to the pods in round-robin fashion with minimal Service configuration.
Update: this is the deployment I am using. It balances the pods as expected; each curl request sent to the Service's ClusterIP:port lands on a different pod. My k8s installation is on-premise, v1.18.3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: baseDeployment
  labels:
    app: baseApp
  namespace: nm-app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: baseApp
  template:
    metadata:
      labels:
        app: baseApp
    spec:
      containers:
        - name: baseApp
          image: local-registry:5000/baseApp
          ports:
            - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: baseService
  namespace: nm-app1
spec:
  selector:
    app: baseApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
@user3683760 you will have to apply a load-balancing strategy in your deployment. Kubernetes has a bunch of options for balancing traffic between your services. I would suggest playing around with a few patterns and seeing what best suits your needs.
Stack:
Azure Kubernetes Service
NGINX Ingress Controller - https://github.com/kubernetes/ingress-nginx
AKS Loadbalancer
Docker containers
My goal is to create a K8s cluster that will allow me to use multiple pods, under a single IP, to create a microservice architecture. After working through tons of tutorials and documentation, I'm not having any luck reaching my end goal. I got to the point of being able to access a single deployment through the LoadBalancer, but introducing the Ingress has not been successful so far. The services are separated into their respective files for readability and ease of control.
Additionally, the Ingress Controller was added to my cluster as described in the installation instructions using: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
LoadBalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: x.x.x.x
  selector:
    app: ingress-service
    tier: backend
  ports:
    - name: "default"
      port: 80
      targetPort: 80
  type: LoadBalancer
IngressService.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /api
            backend:
              serviceName: api-service
              servicePort: 80
api-deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        tier: backend
        track: stable
    spec:
      containers:
        - name: api
          image: image:tag
          ports:
            - containerPort: 80
          imagePullPolicy: Always
      imagePullSecrets:
        - name: SECRET
The API in the image is exposed on port 80 correctly.
After applying each of the above yml services and deployments, I attempt a web request to one of the api resources via the LoadBalancer's IP and receive only a timeout on my requests.
Found my answer after hunting around enough. Basically, the problem was that the Ingress Controller's deploy.yaml already ships with a LoadBalancer Service of its own, as mentioned in the comments above. Traffic has to come in through that load balancer, and your Ingress resource has to be marked as part of the nginx ingress class so the controller picks it up; the Ingress then points to each of the Services attached to your pods. I also had to make a small modification to allow using a static IP with the provided load balancer.
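For reference, a rough sketch of the two pieces, assuming the ingress-nginx controller-v0.35.0 cloud manifest (its built-in LoadBalancer Service is named ingress-nginx-controller in the ingress-nginx namespace; double-check the names against your copy of deploy.yaml). The x.x.x.x static IP and the api-service backend come from the question:

# 1. In the controller's deploy.yaml, add the static IP to its built-in
#    LoadBalancer Service, leaving its selector and ports as shipped:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: x.x.x.x   # reserved Azure static IP, previously on "backend"
  # ...selector and ports exactly as in the original manifest...
---
# 2. Keep the Ingress marked with the nginx class; it then routes to the
#    Services in front of your pods (api-service here). The standalone
#    "backend" LoadBalancer Service is no longer needed.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /api
            backend:
              serviceName: api-service
              servicePort: 80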
For learning purposes, I have two services in a cluster on Google Cloud:
An API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
        - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
          name: myapp-api
          ports:
            - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
        - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
          name: myappfront-deployment
          ports:
            - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
    - port: 80
      targetPort: 3000
The front service is basically a Node.js app that only makes a REST call to the API service, like so: axios.get('http://myapp-api').
The issue is that the call is failing and it's unable to find the requested endpoint. Is there any additional config that I'm currently missing for the API service to be discovered?
P.S. Both services are running and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster.
Something to watch out for with frontends is where the HTTP call takes place. It could happen on the server in Node, but given that you're seeing this issue, I'd suggest it's happening in the browser (if you can see the call in the browser's network console, then it is). A call made from the browser doesn't have access to the Kubernetes DNS, so the cluster-internal service name won't resolve. To handle this you could make the backend Service a LoadBalancer as well and pass its external name into the frontend, as in the sketch below.
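A minimal sketch of that approach, assuming the frontend reads the API base URL from an environment variable (API_BASE_URL is a made-up name for the example; the rest mirrors the manifests in the question):

# Expose the API externally as well, so the browser can reach it.
apiVersion: v1
kind: Service
metadata:
  name: myapp-api
spec:
  type: LoadBalancer
  selector:
    app: myapp-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
# Pass the resulting external address into the frontend, for example as an
# environment variable it can hand to the browser code.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
        - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
          name: myappfront-deployment
          ports:
            - containerPort: 3000
          env:
            - name: API_BASE_URL                  # hypothetical variable name
              value: "http://<api-external-ip>"   # fill in once the LB IP is assigned

The axios call would then target API_BASE_URL instead of the cluster-internal http://myapp-api name.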
When creating an Ingress, no address is generated, and when viewed from the GKE dashboard it is always stuck in the "Creating ingress" status.
Describing the Ingress does not show any events, and I cannot see any clues on the GKE dashboard.
Has anyone had a similar issue or any suggestions on how to debug it?
My deployment.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mobile-gateway-ingress
spec:
  backend:
    serviceName: mobile-gateway-service
    servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mobile-gateway-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: mobile-gateway
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mobile-gateway-deployment
  labels:
    app: mobile-gateway
spec:
  selector:
    matchLabels:
      app: mobile-gateway
  replicas: 2
  template:
    metadata:
      labels:
        app: mobile-gateway
    spec:
      containers:
        - name: mobile-gateway
          image: eu.gcr.io/my-project/mobile-gateway:latest
          ports:
            - containerPort: 8080
Describing the ingress shows no events:
mobile-gateway ➤ kubectl describe ingress mobile-gateway-ingress
Name:             mobile-gateway-ingress
Namespace:        default
Address:
Default backend:  mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"mobile-gateway-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"mobile-gateway-service","servicePort":80}}}
Events:  <none>
With a simple LoadBalancer service, an IP address is given. The problem is only with the ingress resource.
The problem in this case was that I had not included the HttpLoadBalancing addon when creating the cluster!
My fault, but it would have been nice to have an event on the Ingress resource informing me of this mistake.
Strangely, when I created a new cluster to follow the tutorial at cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer, using the default addons including HttpLoadBalancing, I observed the same issue. Maybe I didn't wait long enough? Anyway, it is working now that I have included the addon.
To complete the accepted answer, it's worth noting that it's possible to activate the addon on an already existing cluster (from the Google console).
However, it will restart your cluster with downtime (in my case, it took several minutes on an almost empty cluster). Be sure that's acceptable in your case, and do tests.
I've been trying to implement a service inside Kubernetes where each Pod needs to be accessible from outside the Cluster.
The topology of my service is simple: 3 members, one of them acting as master at any time (election based); writes go to the primary, reads go to the secondaries. This is a MongoDB replica set, by the way.
They work with no issues inside the Kubernetes cluster, but from outside all I have is a NodePort-type Service that load-balances incoming connections across the members. I need to access each one of them separately, depending on what I want to do from my client (write or read).
What kind of Kubernetes resource should I use to give individual access to each one of the members of my service?
In order to access every pod from outside, you can create a separate NodePort-type Service for each pod (see the per-pod sketch after the examples below).
Because a Service uses selectors to find its backends, you can create just one Service for the master:
apiVersion: v1
kind: Service
metadata:
  name: my-master
  labels:
    run: my-master
spec:
  type: NodePort
  ports:
    - port: #your-external-port
      targetPort: #your-port-exposed-in-pod
      protocol: TCP
  selector:
    run: my-master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-master
spec:
  selector:
    matchLabels:
      run: my-master
  replicas: 1
  template:
    metadata:
      labels:
        run: my-master
    spec:
      containers:
        - name: mongomaster
          image: yourcoolimage:latest
          ports:
            - containerPort: #your-port-exposed-in-pod
Also, you can use one Service for all your read-only replicas and this service will balance requests between all of them.
apiVersion: v1
kind: Service
metadata:
  name: my-replicas
  labels:
    run: my-replicas
spec:
  type: LoadBalancer
  ports:
    - port: #your-external-port
      targetPort: #your-port-exposed-in-pod
      protocol: TCP
  selector:
    run: my-replicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-replicas
spec:
  selector:
    matchLabels:
      run: my-replicas
  replicas: 2
  template:
    metadata:
      labels:
        run: my-replicas
    spec:
      containers:
        - name: mongoreplica
          image: yourcoolimage:latest
          ports:
            - containerPort: #your-port-exposed-in-pod
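If you need to reach one specific secondary rather than the load-balanced pool, a rough sketch of the "separate Service for each pod" idea is below. It assumes the members run as a StatefulSet (the usual pattern for a MongoDB replica set), so each pod carries the statefulset.kubernetes.io/pod-name label; the mongo-2 pod name is just an example:

# One NodePort Service per replica-set member, selecting a single pod
# by the pod-name label that the StatefulSet controller adds automatically.
apiVersion: v1
kind: Service
metadata:
  name: mongo-2-external     # example name; create one such Service per member
spec:
  type: NodePort
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: mongo-2   # assumes a StatefulSet named "mongo"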
I also suggest you do not expose the Pods outside of your network, for security reasons. It would be better to create strict firewall rules to restrict any unexpected connections.