How to expose multiple ports using a LoadBalancer service in Kubernetes

I have created a cluster on Google Cloud Platform (Container Engine) and deployed a pod using the following YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 1
  template:
    metadata:
      name: pod-name
      labels:
        app: app-label
    spec:
      containers:
      - name: container-name
        image: gcr.io/project-id/image-name
        resources:
          requests:
            cpu: 1
        ports:
        - name: port80
          containerPort: 80
        - name: port443
          containerPort: 443
        - name: port6001
          containerPort: 6001
Then I want to create a service that enables the pod to listen on all these ports. I know that the following YAML file works to create a service that listens on one port:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: app-label
  type: LoadBalancer
However, when I want the pod to listen on multiple ports like this, it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - port: 80
    targetPort: 80
  - port: 443
    targetPort: 443
  - port: 6001
    targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
How can I make my pod listen on multiple ports?

You have two options:
1. You could have multiple services, one for each port. As you pointed out, each service will end up with a different IP address.
2. You could have a single service with multiple ports. In this particular case, you must give all the ports a name.
In your case, the service becomes:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: something
    port: 6001
    targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
This is necessary so that endpoints can be disambiguated.
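Once this is applied, a quick way to confirm that the cloud load balancer was provisioned and that all three named ports are exposed (using service-name from the manifest above) is:
# EXTERNAL-IP should move from <pending> to a real address once GCP provisions the load balancer
kubectl get service service-name
# the Port/TargetPort/Endpoints lines should list http, https and something (80, 443, 6001)
kubectl describe service service-name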

Related

Access pod from another pod with Kubernetes URL

I have two pods, each created with a deployment and a service. My problem is as follows: the pod "my-gateway" accesses the URL "http://127.0.0.1:3000/adm-contact", which should reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images were built from Dockerfiles with EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
      - name: my-adm-contact
        image: my-contact-adm
        imagePullPolicy: Never
        ports:
        - containerPort: 8879
          hostPort: 8879
          name: admcontact8879
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
  - name: 8879-my-adm-contact
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
      - name: my-gateway
        image: api-gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
          hostPort: 3000
          name: home
        #- containerPort: 8879
        #  hostPort: 8879
        #  name: adm
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
            path: /
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
  - name: 3000-my-gateway
    port: 3000
    protocol: TCP
    targetPort: 3000
  - name: 8879-my-gateway
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s cluster environment are you running this in? I ask because the Service type LoadBalancer is special: when such a Service is created, your cloud provider's controller spots it and provisions a load balancer for it. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, your services won't do anything.
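A quick way to tell whether a cloud load balancer was actually provisioned for your services is to look at the EXTERNAL-IP column; outside a supported cloud environment it typically stays <pending> (service names here are the ones from your manifests):
# for LoadBalancer services, EXTERNAL-IP stays <pending> when no cloud controller handles them
kubectl get service my-gateway my-adm-contact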
I had a quick look at your SO profile and - sorry if this is presumptuous, I don't mean to be - it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it, e.g. curl http://nginx-service.default.svc. In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached on its own name from pods in its namespace: so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Change the service name and namespace for your use case.
k8s DNS is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
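As a quick way to see this name resolution in action (using the foo/bar names from above; swap in your own service and namespace), you could run a throwaway pod and query the cluster DNS:
# spin up a temporary pod and resolve the service's fully-qualified name
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup foo.bar.svc.cluster.local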
I have taken the YAML you provided and assembled it below.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster, e.g. one created by kubectl run bb --image=busybox -it -- sh, would be able to run ping gateway-service and have the name resolve, but pinging gateway-service from your desktop will fail because they're not both seeing the same DNS.
The api-gateway container will be able to connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could re-try kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000 and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
  - image: my-contact-adm
    imagePullPolicy: Never
    name: my-adm-contact
    ports:
    - containerPort: 8879
      protocol: TCP
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
  - port: 8879
    protocol: TCP
    targetPort: 8879
    name: adm8879
  - port: 3000
    protocol: TCP
    targetPort: 3000
    name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
  - image: api-gateway
    imagePullPolicy: Never
    name: my-gateway
    ports:
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP
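To sanity-check the in-cluster connectivity once these manifests are applied, one option is to exec into the gateway pod and call the other service by name; this assumes the api-gateway image ships with wget (or curl), which may not be the case - otherwise the busybox approach above works too:
# from inside my-gateway, reach the adm-contact service through cluster DNS
kubectl exec -it my-gateway -- wget -qO- http://my-adm-contact-service:8879/adm-contact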

How to open a custom port in Kubernetes

I deploy RabbitMQ on a cluster, and so far it's running well on port 15672: http://test.website.com/
But I need to open some other ports (25672, 15672, 15674). I have defined them in YAML like this:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    name: rabbitmq
  ports:
  - port: 80
    name: http
    targetPort: 15672
    protocol: TCP
  - port: 443
    name: https
    targetPort: 15672
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  selector:
    matchLabels:
      app: rabbitmq
  strategy:
    type: RollingUpdate
  template:
    metadata:
      name: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:latest
        ports:
        - containerPort: 15672
          name: http
          protocol: TCP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq
spec:
  hosts:
  - "test.website.com"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: rabbitmq
How do I set this up in the YAML file to open some other ports?
Assuming that the Istio Gateway is serving TCP network connections, you might be able to combine both external ports in one Gateway configuration.
Here is an example:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: port1
      protocol: TCP
    hosts:
    - example.myhost.com
  - port:
      number: 443
      name: port2
      protocol: TCP
    hosts:
    - example.myhost.com
The hosts field here identifies the list of target addresses that are exposed by this Gateway.
In order to route network traffic to the underlying Pods, specify a VirtualService with a matching set of ports:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq-virtual-service
spec:
  hosts:
  - example.myhost.com
  gateways:
  - gateway
  tcp:
  - match:
    - port: 80
    route:
    - destination:
        host: app.example.svc.cluster.local
        port:
          number: 15672
  - match:
    - port: 443
    route:
    - destination:
        host: app.example.svc.cluster.local
        port:
          number: 15674
The above VirtualService defines the rules to route network traffic arriving on ports 80 and 443 for example.myhost.com (test.website.com in your case) to ports 15672 and 15674 of the destination service (app.example.svc.cluster.local here; point it at your rabbitmq service instead).
You can adjust these files to your needs to open some other ports.
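If it helps, a minimal way to apply and then list these Istio resources (assuming they are saved as gateway.yaml and virtualservice.yaml) is:
# apply the Gateway and VirtualService, then confirm the API server accepted them
kubectl apply -f gateway.yaml -f virtualservice.yaml
kubectl get gateway,virtualservice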
Take a look: virtualservice-for-a-service-which-exposes-multiple-ports.

Kubernetes load balancing RabbitMQ on DigitalOcean

I need to be able to expose my rabbitmq instance periodically to the outside world.
It's running on DigitalOcean in a Kubernetes 1.16 cluster with a bunch of other services. One of the services is a web server. The load balancer on that works just fine. When I try and use the same config (with different ports, obviously) for my rabbitmq, I can't get it to work.
The other services within the cluster can talk to the rabbitmq just fine. I can too, if I kubectl port-forward service/rabbitmq 5672 15672 15671 and access it locally.
If I try and access it on its public IP, the connection gets dropped instantly.
$ telnet 64.225.xx.xx 15672
Trying 64.225.xx.xx...
Connected to 64.225.xx.xx.
Escape character is '^]'.
Connection closed by foreign host.
The config in its entirety:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    db: rabbitmq
spec:
  ports:
  - port: 15671
    targetPort: 15671
    name: '15671'
  - port: 15672
    targetPort: 15672
    name: http
    protocol: TCP
  - port: 5672
    targetPort: 5672
    name: '5672'
  selector:
    db: rabbitmq
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-deployment
  labels:
    db: rabbitmq
spec:
  selector:
    matchLabels:
      db: rabbitmq
  replicas: 1
  template:
    metadata:
      labels:
        db: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        ports:
        - containerPort: 15671
        - containerPort: 15672
        - containerPort: 5672
        env:
        - name: GET_HOSTS_FROM
          value: dns
        - name: RABBITMQ_DEFAULT_USER
          value: "***"
        - name: RABBITMQ_DEFAULT_PASS
          value: "***"
        - name: RABBITMQ_DEFAULT_VHOST
          value: "/"
So for whatever reason (am I labeling these wrong?) I had success by making the external config its own service. In other words, this setup works:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    db: rabbitmq-svc
spec:
  ports:
  - port: 15671
    targetPort: 15671
    name: '15671'
  - port: 15672
    targetPort: 15672
    name: '15672'
    protocol: TCP
  - port: 5672
    targetPort: 5672
    name: '5672'
  selector:
    db: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external
  labels:
    svc: rabbitmq-external
spec:
  ports:
  - port: 15672
    targetPort: 15672
    name: 'http'
    protocol: TCP
  - port: 5672
    targetPort: 5672
    name: '5672'
    protocol: TCP
  selector:
    db: rabbitmq
  type: LoadBalancer
---
...
Not sure why though.
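Whatever the underlying cause, comparing the two services is a quick way to confirm that the external one got a public IP from DigitalOcean and that both actually selected the rabbitmq pod (service names as above):
# rabbitmq-external should show a real EXTERNAL-IP; the plain rabbitmq service stays cluster-internal
kubectl get service rabbitmq rabbitmq-external
# both should list the pod's IP:port pairs; an empty list means the selector doesn't match the pod labels
kubectl get endpoints rabbitmq rabbitmq-external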

Cannot reach BIND DNS in Kubernetes

I am trying to install a DNS server inside a local Kubernetes cluster using MicroK8s, but I cannot reach the DNS server.
Here is the deployment script:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: bind
  labels:
    app: bind
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      containers:
      - name: bind
        image: sameersbn/bind
        env:
        - name: ROOT_PASSWORD
          value: "toto"
        volumeMounts:
        - mountPath: /data
          name: data
        ports:
        - containerPort: 53
          protocol: UDP
        - containerPort: 53
          protocol: TCP
        - containerPort: 10000
      volumes:
      - name: data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: bind-dns
  labels:
    name: bind-dns
spec:
  type: ClusterIP
  ports:
  - name: dns
    port: 53
    targetPort: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    name: bind
The service is exposed with this IP:
bind-dns LoadBalancer 10.152.183.144 <pending> 53/UDP,53/TCP 11m
When I SSH into the bind pod, it works:
host www.google.com 0.0.0.0
Using domain server:
Name: 0.0.0.0
Address: 0.0.0.0#53
Aliases:
www.google.com has address 172.217.13.132
www.google.com has IPv6 address 2607:f8b0:4020:805::2004
But outside the container it does not:
host www.google.com 10.152.183.144
;; connection timed out; no servers could be reached
What is wrong? Why can't I reach the server?
The Service resource's spec.selector needs to match the pod's labels (spec.template.metadata.labels in the Deployment).
So I think you need to change the Service resource in the YAML file:
apiVersion: v1
kind: Service
metadata:
  name: bind-dns
  labels:
    name: bind-dns
spec:
  type: ClusterIP
  ports:
  - name: dns
    port: 53
    targetPort: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    app: bind # changed
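With the selector fixed, the service should pick up the bind pod as an endpoint; a quick check, followed by a re-test against the ClusterIP from the question, could look like this:
# ENDPOINTS should now show the bind pod's IP on port 53 instead of <none>
kubectl get endpoints bind-dns
# query the bind server through the service's ClusterIP (10.152.183.144 in the question)
host www.google.com 10.152.183.144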

Cannot expose port using Kubernetes service

My objective: to expose the port of a pod (running an Angular image) so that I can access it from the host machine's browser.
service.yml:
apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 4200
Pod's yml:
apiVersion: v1
kind: Pod
metadata:
  name: angular.frontend
  labels:
    app: MyApp
spec:
  containers:
  - name: angular-frontend-demo
    image: angular-frontend-image
    ports:
    - name: nodejs-port
      containerPort: 4200
The weird thing is that kubectl port-forward pod/angular.frontend 8000:4200 works. However, my objective is to achieve that in service.yml.
Use NodePort:
apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 4200
    nodePort: 30001
Then you can access the service on NodePort 30001 on any node of the cluster.
For example, if the machine name is node01, you can then do curl http://node01:30001
The service you've defined here is of type ClusterIP (since you haven't set a type in the spec). This means the service is only available and reachable within the cluster. You can use Ingress to make it accessible from outside the cluster; see, for example, this post showing how to do that for Minikube.
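As a rough sketch of that Ingress approach (assuming an NGINX ingress controller is installed and using a hypothetical hostname frontend.example.com; both are placeholders to replace with your own setup), it could look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-frontend-ingress
spec:
  ingressClassName: nginx          # assumes an ingress controller class named "nginx"
  rules:
  - host: frontend.example.com     # hypothetical hostname, replace with yours
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-frontend-service   # the ClusterIP service defined above
            port:
              number: 8000
Requests to http://frontend.example.com/ would then be routed by the ingress controller to port 8000 of my-frontend-service, which in turn forwards to the pod's port 4200.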