Kubernetes load balancing RabbitMQ on DigitalOcean

I need to be able to expose my RabbitMQ instance periodically to the outside world.
It's running on DigitalOcean in a Kubernetes 1.16 cluster alongside a bunch of other services. One of those services is a web server, and its load balancer works just fine. When I try to use the same config (with different ports, obviously) for my RabbitMQ, I can't get it to work.
The other services within the cluster can talk to RabbitMQ just fine. So can I, if I kubectl port-forward service/rabbitmq 5672 15672 15671 and access it locally.
If I try to access it on its public IP, the connection gets dropped instantly.
$ telnet 64.225.xx.xx 15672
Trying 64.225.xx.xx...
Connected to 64.225.xx.xx.
Escape character is '^]'.
Connection closed by foreign host.
The config in its entirety:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    db: rabbitmq
spec:
  ports:
    - port: 15671
      targetPort: 15671
      name: '15671'
    - port: 15672
      targetPort: 15672
      name: http
      protocol: TCP
    - port: 5672
      targetPort: 5672
      name: '5672'
  selector:
    db: rabbitmq
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-deployment
  labels:
    db: rabbitmq
spec:
  selector:
    matchLabels:
      db: rabbitmq
  replicas: 1
  template:
    metadata:
      labels:
        db: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management
          ports:
            - containerPort: 15671
            - containerPort: 15672
            - containerPort: 5672
          env:
            - name: GET_HOSTS_FROM
              value: dns
            - name: RABBITMQ_DEFAULT_USER
              value: "***"
            - name: RABBITMQ_DEFAULT_PASS
              value: "***"
            - name: RABBITMQ_DEFAULT_VHOST
              value: "/"

So, for whatever reason (am I labeling these wrong?), I had success splitting the external config into its own service. In other words, this setup works:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    db: rabbitmq-svc
spec:
  ports:
    - port: 15671
      targetPort: 15671
      name: '15671'
    - port: 15672
      targetPort: 15672
      name: '15672'
      protocol: TCP
    - port: 5672
      targetPort: 5672
      name: '5672'
  selector:
    db: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external
  labels:
    svc: rabbitmq-external
spec:
  ports:
    - port: 15672
      targetPort: 15672
      name: 'http'
      protocol: TCP
    - port: 5672
      targetPort: 5672
      name: '5672'
      protocol: TCP
  selector:
    db: rabbitmq
  type: LoadBalancer
---
...
Not sure why though.
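Aside: if a single LoadBalancer Service is preferred, one hedged thing to check on DigitalOcean is how the managed load balancer forwards and health-checks these ports, since the broker ports are plain TCP rather than HTTP. The digitalocean-cloud-controller-manager supports Service annotations for this; below is a minimal sketch, assuming those annotations, with the health-check port choice being my assumption and not part of the original setup:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    db: rabbitmq
  annotations:
    # forward all ports as plain TCP (AMQP is not HTTP)
    service.beta.kubernetes.io/do-loadbalancer-protocol: "tcp"
    # health-check against the management UI port (assumption)
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "15672"
spec:
  type: LoadBalancer
  selector:
    db: rabbitmq
  ports:
    - name: amqp
      port: 5672
      targetPort: 5672
    - name: http
      port: 15672
      targetPort: 15672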

Related

How to connect kubernetes deployment having multiple containers to multiple service ports of a single service?

I have a scenario like this:
I have a single deployment containing two containers with different ports, like:
template: {
  spec: {
    containers: [
      {
        name: container1,
        image: image1,
        command: [...],
        args: [...],
        imagePullPolicy: IfNotPresent,
        ports: [
          {
            name: port1,
            containerPort: 80,
          },
        ],
        .............
      },
      {
        name: container2,
        image: image1,
        command: [...],
        args: [...],
        imagePullPolicy: IfNotPresent,
        ports: [
          {
            name: port2,
            containerPort: 81,
          },
        ],
        ------------
      }
    ]
  }
}
And a service with multiple ports pointing to those containers, like:
spec: {
  type: ClusterIP,
  ports: [
    {
      port: 7000,
      targetPort: 80,
      protocol: 'TCP',
      name: port1,
    },
    {
      port: 7001,
      targetPort: 81,
      protocol: 'TCP',
      name: port2,
    }
  ]
}
The problem I am facing is that I can connect to the container listening on port 80 using the service name and port 7000, but I can't connect to the container listening on port 81 using the service name and port 7001. Did I miss anything here?
Also, note that both containers use the same image, with different command and args for the internal logic.
You can use two services, or one service with two exposed ports (a sketch of the single-service variant follows the two-service example below).
You can try two services, with the deployment like this:
spec:
  containers:
    - name: container1
      image: image1
      ports:
        - containerPort: 8080
    - name: container2
      image: image1
      ports:
        - containerPort: 8081
and the services:
kind: Service
apiVersion: v1
metadata:
  name: container1
  annotations:
    version: v1.0
spec:
  selector:
    component: <deployment>
  ports:
    - name: container1
      port: 8080
      targetPort: 8080
  type: ClusterIP
---
kind: Service
apiVersion: v1
metadata:
  name: container2
  annotations:
    version: v1.0
spec:
  selector:
    component: <deployment>
  ports:
    - name: container2
      port: 8080
      targetPort: 8081
  type: ClusterIP
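And here is a hedged sketch of the other option, a single Service with two named ports, assuming the pods carry the same component: <deployment> label used above (the service name here is made up for illustration):
kind: Service
apiVersion: v1
metadata:
  name: containers
spec:
  selector:
    component: <deployment>
  ports:
    - name: container1
      port: 8080
      targetPort: 8080
    - name: container2
      port: 8081
      targetPort: 8081
  type: ClusterIP
Note that when a Service defines more than one port, each port must be given a name.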
I am trying to reproduce this using kind and I can't. Here is my cluster config file:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        listenAddress: 127.0.0.1
      - containerPort: 30001
        hostPort: 30001
        listenAddress: 127.0.0.1
Then create the cluster: kind create cluster --config cluster.yaml
Then I have a sample deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-and-wiremock
spec:
  selector:
    matchLabels:
      app: nginx-and-wiremock
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-and-wiremock
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
        - name: wiremock
          image: wiremock/wiremock:2.32.0
          ports:
            - containerPort: 8080
I use the nginx and wiremock images, which expose different ports: 80 and 8080. Deploy it via kubectl apply -f nginx-wiremock-deployment.yaml.
I also deploy a service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-and-wiremock
  name: nginx-and-wiremock
spec:
  ports:
    - name: nginx
      nodePort: 30000
      targetPort: 80
      port: 80
    - name: wiremock
      nodePort: 30001
      targetPort: 8080
      port: 8080
  selector:
    app: nginx-and-wiremock
  type: NodePort
Once this is up and running:
curl localhost:30000
curl localhost:30001/__admin/mappings
Both of them respond just fine.

How to open custom port in Kubernetes

I deployed RabbitMQ on the cluster; so far it is running well on port 15672: http://test.website.com/
But I need to open some other ports (25672, 15672, 15674). I have defined them in YAML like this:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    name: rabbitmq
  ports:
    - port: 80
      name: http
      targetPort: 15672
      protocol: TCP
    - port: 443
      name: https
      targetPort: 15672
      protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  selector:
    matchLabels:
      app: rabbitmq
  strategy:
    type: RollingUpdate
  template:
    metadata:
      name: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:latest
          ports:
            - containerPort: 15672
              name: http
              protocol: TCP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq
spec:
  hosts:
    - "test.website.com"
  gateways:
    - gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            port:
              number: 80
            host: rabbitmq
How do I set up the YAML files to open the other ports?
Assuming that the Istio Gateway is serving TCP network connections, you might be able to combine the two external ports in one Gateway configuration.
Here is an example:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: port1
        protocol: TCP
      hosts:
        - example.myhost.com
    - port:
        number: 443
        name: port2
        protocol: TCP
      hosts:
        - example.myhost.com
The hosts field here identifies a list of target addresses that are exposed by this Gateway.
To route network traffic to the underlying Pods, specify a VirtualService with a matching set of ports:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq-virtual-service
spec:
  hosts:
    - example.myhost.com
  gateways:
    - gateway
  tcp:
    - match:
        - port: 80
      route:
        - destination:
            host: app.example.svc.cluster.local
            port:
              number: 15672
    - match:
        - port: 443
      route:
        - destination:
            host: app.example.svc.cluster.local
            port:
              number: 15674
The VirtualService above defines the rules to route network traffic arriving on ports 80 and 443 for test.website.com to the rabbitmq service's ports 15672 and 15674 respectively.
You can adjust these files to your needs to open other ports.
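For example, to also expose 25672 (one of the ports mentioned in the question), a hedged sketch would add a third server to the Gateway and a matching TCP route to the VirtualService; host and service names are placeholders, as above:
# appended under the Gateway's servers:
    - port:
        number: 25672
        name: port3
        protocol: TCP
      hosts:
        - example.myhost.com
# appended under the VirtualService's tcp:
    - match:
        - port: 25672
      route:
        - destination:
            host: app.example.svc.cluster.local
            port:
              number: 25672
Keep in mind that the istio-ingressgateway Service itself also has to expose any extra port you add to the Gateway.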
Take a look: virtualservice-for-a-service-which-exposes-multiple-ports.

Cannot reach bind dns in Kubernetes

I am trying to install a DNS server inside a local Kubernetes cluster using MicroK8s, but I cannot reach the DNS.
Here is the deployment script:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: bind
  labels:
    app: bind
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      containers:
        - name: bind
          image: sameersbn/bind
          env:
            - name: ROOT_PASSWORD
              value: "toto"
          volumeMounts:
            - mountPath: /data
              name: data
          ports:
            - containerPort: 53
              protocol: UDP
            - containerPort: 53
              protocol: TCP
            - containerPort: 10000
      volumes:
        - name: data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: bind-dns
  labels:
    name: bind-dns
spec:
  type: ClusterIP
  ports:
    - name: dns
      port: 53
      targetPort: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    name: bind
The service is exposed with an IP:
bind-dns LoadBalancer 10.152.183.144 <pending> 53/UDP,53/TCP 11m
When I ssh into the bind pod, it works:
host www.google.com 0.0.0.0
Using domain server:
Name: 0.0.0.0
Address: 0.0.0.0#53
Aliases:
www.google.com has address 172.217.13.132
www.google.com has IPv6 address 2607:f8b0:4020:805::2004
But outside the container it does not:
host www.google.com 10.152.183.144
;; connection timed out; no servers could be reached
What is wrong? Why can't I reach the server?
The Service resource's spec.selector needs to match the Pod's labels from spec.template.metadata.labels (here app: bind, not name: bind).
So I think you need to change the Service resource in the YAML file:
apiVersion: v1
kind: Service
metadata:
  name: bind-dns
  labels:
    name: bind-dns
spec:
  type: ClusterIP
  ports:
    - name: dns
      port: 53
      targetPort: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    app: bind # changed
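After applying it, a quick hedged check that the selector now matches is to look at the Service's endpoints (the manifest file name and the IPs here are placeholders):
kubectl apply -f bind-dns.yaml
kubectl get endpoints bind-dns
NAME       ENDPOINTS                         AGE
bind-dns   10.1.xx.xx:53,10.1.xx.xx:53       1m
If ENDPOINTS shows <none>, the selector still does not match any pod labels.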

Artemis HA with Kubernetes: UnknownHostException from other hosts

I have an Artemis/JMS service in Kubernetes that I want to deploy in a 2-node cluster.
Here is my connector config for Artemis (broker.xml):
<connectors>
  <connector name="jms-service-0">tcp://jms-service-0.jms-service.default.svc.cluster.local:61616</connector>
  <connector name="jms-service-1">tcp://jms-service-1.jms-service.default.svc.cluster.local:61616</connector>
</connectors>
But when deploying on Kubernetes 1.8 with this StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jms-service
  labels:
    app: jms-service
spec:
  serviceName: jms-service
  replicas: 2
  selector:
    matchLabels:
      app: jms-service
  template:
    metadata:
      labels:
        app: jms-service
    spec:
      containers:
        - name: jms-service
          image: kube-registry:5000/tk/jms-service:2.4
          ports:
            - containerPort: 8161
            - containerPort: 61616
            - containerPort: 5445
            - containerPort: 5672
            - containerPort: 1883
            - containerPort: 61613
          env:
            - name: ARTEMIS_USERNAME
              value: admin
            - name: ARTEMIS_PASSWORD
              value: admin
And this Service:
apiVersion: v1
kind: Service
metadata:
  name: jms-service
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - port: 8161
      nodePort: 30001
      name: webserver
    - port: 61616
      nodePort: 30002
      name: core
    - port: 5445
      nodePort: 30003
      name: hornetq
    - port: 5672
      nodePort: 30004
      name: amqp
    - port: 1883
      nodePort: 30005
      name: mqtt
    - port: 61613
      nodePort: 30006
      name: stomp
  selector:
    app: jms-service
  type: NodePort
Each pod doesn't see the other on startup.
For jms-service-0:
08:06:30,811 ERROR [org.apache.activemq.artemis.core.client] AMQ214016: Failed to create netty connection: java.net.UnknownHostException: jms-service-1.jms-service.default.svc.cluster.local
at java.net.InetAddress.getAllByName0(InetAddress.java:1280) [rt.ja
And for jms-service-1:
08:06:34,703 ERROR [org.apache.activemq.artemis.core.client] AMQ214016: Failed to create netty connection: java.net.UnknownHostException: jms-service-0.jms-service.default.svc.cluster.local
I think it's because the DNS records aren't visible until the pods are ready, but I'm not sure.
How can I solve this?
The service definition (in your case, jms-service) is incorrect. The service that binds the backend pods (from the StatefulSet) should be defined as a headless service.
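A minimal sketch of such a headless Service: clusterIP: None is what makes the per-pod DNS names like jms-service-0.jms-service.default.svc.cluster.local resolvable, and the tolerate-unready-endpoints annotation is carried over from the question so records exist before the pods report ready.
apiVersion: v1
kind: Service
metadata:
  name: jms-service
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None    # headless
  selector:
    app: jms-service
  ports:
    - port: 61616
      name: core
External access through NodePorts would then go through a second, separately named Service, since a headless Service cannot be of type NodePort.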

How to expose multiple ports using a load balancer service in Kubernetes

I have created a cluster using the Google Cloud Platform (Container Engine) and deployed a pod using the following YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 1
  template:
    metadata:
      name: pod-name
      labels:
        app: app-label
    spec:
      containers:
        - name: container-name
          image: gcr.io/project-id/image-name
          resources:
            requests:
              cpu: 1
          ports:
            - name: port80
              containerPort: 80
            - name: port443
              containerPort: 443
            - name: port6001
              containerPort: 6001
Then I want to create a service that enables the pod to listen on all these ports. I know that the following YAML file works to create a service that listens on one port:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: app-label
  type: LoadBalancer
However, when I want the pod to listen on multiple ports like this, it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
    - port: 80
      targetPort: 80
    - port: 443
      targetPort: 443
    - port: 6001
      targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
How can I make my pod listen to multiple ports?
You have two options:
You could have multiple services, one for each port. As you pointed out, each service will end up with a different IP address.
You could have a single service with multiple ports. In this particular case, you must give all ports a name.
In your case, the service becomes:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: something
      port: 6001
      targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
This is necessary so that endpoints can be disambiguated.
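Once applied, a hedged sanity check is that a single external IP appears with all three ports attached (the file name and the addresses below are placeholders, not real output):
kubectl apply -f service-name.yaml
kubectl get service service-name
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                                     AGE
service-name   LoadBalancer   10.3.xx.xx    35.xx.xx.xx    80:31xxx/TCP,443:31xxx/TCP,6001:31xxx/TCP   2m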