Cannot reach bind DNS in Kubernetes

I am trying to install a DNS server inside a local Kubernetes cluster using MicroK8s, but I cannot reach the DNS.
Here is the deployment script:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: bind
  labels:
    app: bind
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      containers:
        - name: bind
          image: sameersbn/bind
          env:
            - name: ROOT_PASSWORD
              value: "toto"
          volumeMounts:
            - mountPath: /data
              name: data
          ports:
            - containerPort: 53
              protocol: UDP
            - containerPort: 53
              protocol: TCP
            - containerPort: 10000
      volumes:
        - name: data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: bind-dns
  labels:
    name: bind-dns
spec:
  type: ClusterIP
  ports:
    - name: dns
      port: 53
      targetPort: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    name: bind
The service is exposed with this IP:
bind-dns LoadBalancer 10.152.183.144 <pending> 53/UDP,53/TCP 11m
When I ssh into the bind pod, it works:
host www.google.com 0.0.0.0
Using domain server:
Name: 0.0.0.0
Address: 0.0.0.0#53
Aliases:
www.google.com has address 172.217.13.132
www.google.com has IPv6 address 2607:f8b0:4020:805::2004
But outside the container it does not:
host www.google.com 10.152.183.144
;; connection timed out; no servers could be reached
What is wrong? Why can't I reach the server?

A Service's spec.selector must match the labels set in the pod template's metadata.labels. Your pods are labeled app: bind, but the Service selects name: bind, so it matches no pods.
You need to change the Service resource in the YAML file:
apiVersion: v1
kind: Service
metadata:
  name: bind-dns
  labels:
    name: bind-dns
spec:
  type: ClusterIP
  ports:
    - name: dns
      port: 53
      targetPort: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    app: bind # changed
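You can verify the fix took effect by checking that the Service now has endpoints, then querying it again (a quick sketch; 10.152.183.144 is the cluster IP from your output and may differ in your cluster):
kubectl get endpoints bind-dns        # should now list the bind pod's IP on port 53
host www.google.com 10.152.183.144    # should now resolve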

Related

Access pod from another pod with Kubernetes URL

I have two pods, each created with a Deployment and a Service. My problem is as follows: the pod "my-gateway" accesses the URL "http://127.0.0.1:3000/adm-contact", which should reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images built from the Dockerfiles have EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
        - name: my-adm-contact
          image: my-contact-adm
          imagePullPolicy: Never
          ports:
            - containerPort: 8879
              hostPort: 8879
              name: admcontact8879
          readinessProbe:
            httpGet:
              path: /adm-contact
              port: 8879
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
    - name: 8879-my-adm-contact
      port: 8879
      protocol: TCP
      targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
        - name: my-gateway
          image: api-gateway
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
              hostPort: 3000
              name: home
            #- containerPort: 8879
            #  hostPort: 8879
            #  name: adm
          readinessProbe:
            httpGet:
              path: /adm-contact
              port: 8879
              path: /
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
    - name: 3000-my-gateway
      port: 3000
      protocol: TCP
      targetPort: 3000
    - name: 8879-my-gateway
      port: 8879
      protocol: TCP
      targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s cluster environment are you running this in? I ask because a service.type of LoadBalancer is a special kind: when the Service is created, your cloud provider's controller will spot this and provision a load balancer for it. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, those Services will never get an external address.
I had a quick look at your SO profile and - sorry if this is presumptuous, I don't mean to be - it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a Service, k8s will create a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in the my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the my-adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the my-adm-contact pod, and create a service called adm-contact that has a selector for that pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it, e.g. curl http://nginx-service.default.svc. In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      imagePullPolicy: Always
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A Service can be reached by its own name from pods in its namespace: a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Swap in the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
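As a quick check of the naming scheme (a sketch assuming a service foo already exists in namespace bar), you can resolve both forms from a throwaway pod:
# from a pod in the same namespace bar, the short name works
kubectl run -n bar tmp --rm -it --image=busybox --restart=Never -- nslookup foo
# from a pod in any other namespace, use the fully qualified name
kubectl run tmp --rm -it --image=busybox --restart=Never -- nslookup foo.bar.svc.cluster.local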
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only works inside the cluster: CoreDNS (itself a k8s pod) is the component that notices when a service has been created and which IP(s) it's available at.
So another pod in the cluster, e.g. one created by kubectl run bb --image=busybox -it -- sh, would be able to run ping gateway-service, but pinging gateway-service from your desktop will fail because the two aren't looking at the same DNS.
The api-gateway container will be able to connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only while those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could re-try kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000, and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service as well, you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
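For instance, as background jobs in a single shell (a sketch; bring them back with fg or stop them with kill %1 %2 %3 when done):
kubectl port-forward svc/gateway-service 3000:3000 &
kubectl port-forward svc/my-adm-contact-service 8879:8879 &
kubectl port-forward svc/my-adm-contact-service 3001:3000 &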
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
    - image: my-contact-adm
      imagePullPolicy: Never
      name: my-adm-contact
      ports:
        - containerPort: 8879
          protocol: TCP
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
    - port: 8879
      protocol: TCP
      targetPort: 8879
      name: adm8879
    - port: 3000
      protocol: TCP
      targetPort: 3000
      name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
    - image: api-gateway
      imagePullPolicy: Never
      name: my-gateway
      ports:
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP
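Once these are applied, a quick way to test the in-cluster path is to exec into the gateway pod and hit the other service by name (a sketch; this assumes the api-gateway image ships a shell and wget or curl, and that /adm-contact is the path your app actually serves):
kubectl exec -it my-gateway -- sh
# then, inside the pod:
wget -qO- http://my-adm-contact-service:3000/adm-contact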

Kubernetes load balancing RabbitMQ on DigitalOcean

I need to be able to expose my rabbitmq instance periodically to the outside world.
It's running on DigitalOcean in a Kubernetes 1.16 cluster with a bunch of other services. One of those services is a web server, and the load balancer on it works just fine. When I try to use the same config (with different ports, obviously) for my RabbitMQ, I can't get it to work.
The other services within the cluster can talk to RabbitMQ just fine. So can I, if I kubectl port-forward service/rabbitmq 5672 15672 15671 and access it locally.
If I try and access it on its public IP, the connection gets dropped instantly.
$ telnet 64.225.xx.xx 15672
Trying 64.225.xx.xx...
Connected to 64.225.xx.xx.
Escape character is '^]'.
Connection closed by foreign host.
The config in its entirety:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    db: rabbitmq
spec:
  ports:
    - port: 15671
      targetPort: 15671
      name: '15671'
    - port: 15672
      targetPort: 15672
      name: http
      protocol: TCP
    - port: 5672
      targetPort: 5672
      name: '5672'
  selector:
    db: rabbitmq
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-deployment
  labels:
    db: rabbitmq
spec:
  selector:
    matchLabels:
      db: rabbitmq
  replicas: 1
  template:
    metadata:
      labels:
        db: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management
          ports:
            - containerPort: 15671
            - containerPort: 15672
            - containerPort: 5672
          env:
            - name: GET_HOSTS_FROM
              value: dns
            - name: RABBITMQ_DEFAULT_USER
              value: "***"
            - name: RABBITMQ_DEFAULT_PASS
              value: "***"
            - name: RABBITMQ_DEFAULT_VHOST
              value: "/"
So, for whatever reason (am I labeling these wrong?), I had success making the external config its own Service. In other words, this setup works:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    db: rabbitmq-svc
spec:
  ports:
    - port: 15671
      targetPort: 15671
      name: '15671'
    - port: 15672
      targetPort: 15672
      name: '15672'
      protocol: TCP
    - port: 5672
      targetPort: 5672
      name: '5672'
  selector:
    db: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external
  labels:
    svc: rabbitmq-external
spec:
  ports:
    - port: 15672
      targetPort: 15672
      name: 'http'
      protocol: TCP
    - port: 5672
      targetPort: 5672
      name: '5672'
      protocol: TCP
  selector:
    db: rabbitmq
  type: LoadBalancer
---
...
Not sure why though.
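One way to narrow it down is to compare what the two setups actually created (a sketch; both Services should list the same pod IP/ports as endpoints, but only the LoadBalancer one should get an external IP):
kubectl get svc rabbitmq rabbitmq-external        # compare TYPE and EXTERNAL-IP
kubectl get endpoints rabbitmq rabbitmq-external  # both should point at the same pod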

Artemis HA with Kubernetes: UnknownHostException from other hosts

I have an Artemis/JMS service in Kubernetes that I want to deploy as a 2-node cluster.
Here is my connector config for Artemis (broker.xml):
<connectors>
  <connector name="jms-service-0">tcp://jms-service-0.jms-service.default.svc.cluster.local:61616</connector>
  <connector name="jms-service-1">tcp://jms-service-1.jms-service.default.svc.cluster.local:61616</connector>
</connectors>
But when deploying in Kubernetes 1.8 with this StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jms-service
  labels:
    app: jms-service
spec:
  serviceName: jms-service
  replicas: 2
  selector:
    matchLabels:
      app: jms-service
  template:
    metadata:
      labels:
        app: jms-service
    spec:
      containers:
        - name: jms-service
          image: kube-registry:5000/tk/jms-service:2.4
          ports:
            - containerPort: 8161
            - containerPort: 61616
            - containerPort: 5445
            - containerPort: 5672
            - containerPort: 1883
            - containerPort: 61613
          env:
            - name: ARTEMIS_USERNAME
              value: admin
            - name: ARTEMIS_PASSWORD
              value: admin
And this Service:
apiVersion: v1
kind: Service
metadata:
  name: jms-service
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - port: 8161
      nodePort: 30001
      name: webserver
    - port: 61616
      nodePort: 30002
      name: core
    - port: 5445
      nodePort: 30003
      name: hornetq
    - port: 5672
      nodePort: 30004
      name: amqp
    - port: 1883
      nodePort: 30005
      name: mqtt
    - port: 61613
      nodePort: 30006
      name: stomp
  selector:
    app: jms-service
  type: NodePort
Each pod doesn't see the other on startup.
For jms-service-0:
08:06:30,811 ERROR [org.apache.activemq.artemis.core.client] AMQ214016: Failed to create netty connection: java.net.UnknownHostException: jms-service-1.jms-service.default.svc.cluster.local
at java.net.InetAddress.getAllByName0(InetAddress.java:1280) [rt.ja
And for jms-service-1:
08:06:34,703 ERROR [org.apache.activemq.artemis.core.client] AMQ214016: Failed to create netty connection: java.net.UnknownHostException: jms-service-0.jms-service.default.svc.cluster.local
I think it's because the DNS records are not visible until the pods are ready, but I'm not sure.
How can I solve this?
The Service definition - in your case jms-service - is incorrect. The Service that fronts the backend pods of a StatefulSet should be defined as a headless Service.
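A minimal sketch of that headless Service, keeping your names and the core/web ports (the nodePort entries are dropped here because a headless Service has no cluster IP to map them to; if you still need external access, keep a second NodePort Service alongside it):
apiVersion: v1
kind: Service
metadata:
  name: jms-service
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None
  ports:
    - port: 61616
      name: core
    - port: 8161
      name: webserver
  selector:
    app: jms-service
With clusterIP: None, DNS records like jms-service-0.jms-service.default.svc.cluster.local are created for each pod of the StatefulSet, which is exactly what the connectors in your broker.xml expect.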

Connection timed out when attempting to access any service in Kubernetes

I've created a Deployment and a Service and deployed them to Kubernetes, but when I try to access them with curl I always get a connection timed out error.
Here are my YAML files:
Deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: locations-service
  name: locations-service
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: locations-service
  template:
    metadata:
      labels:
        app: locations-service
    spec:
      containers:
        - image: dropwizard:latest
          imagePullPolicy: Never # just for testing!
          name: locations-service
          ports:
            - containerPort: 8080
              protocol: TCP
              name: app-port
            - containerPort: 8081
              protocol: TCP
              name: admin-port
          resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
Service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: locations-service
  name: locations-service
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
      protocol: TCP
    - name: "8081"
      port: 8081
      targetPort: 8081
      protocol: TCP
  selector:
    app: locations-service
I also tried adding ingress routes and hitting them, but the same problem occurs.
Note that the application is successfully deployed, and I can check its logs from the k8s dashboard.
Another example: I have the following svc
kubectl describe service webapp1-svc
Name: webapp1-svc
Namespace: default
Labels: app=webapp1
Annotations: <none>
Selector: app=webapp1
Type: NodePort
IP: 10.0.0.219
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: 172.17.0.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
and tried to access it using:
curl -v 10.0.0.219:30080
* Rebuilt URL to: 10.0.0.219:30080/
* Trying 10.0.0.219...
* connect to 10.0.0.219 port 30080 failed: Connection timed out
* Failed to connect to 10.0.0.219 port 30080: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.0.0.219 port 30080: Connection timed out
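Note that the curl above mixes two different addresses: 10.0.0.219 is the Service's cluster IP, which only answers on port 80 and only from inside the cluster, while 30080 is the NodePort, which is served on each node's own IP. A sketch of the combinations that can work (<node-ip> stands for the IP of one of your nodes):
# from inside the cluster (e.g. from another pod):
curl -v 10.0.0.219:80
# from outside, via the NodePort on a node's IP:
curl -v <node-ip>:30080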

How to expose multiple ports using a load balancer service in Kubernetes

I have created a cluster using the Google Cloud Platform (Container Engine) and deployed a pod using the following YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 1
  template:
    metadata:
      name: pod-name
      labels:
        app: app-label
    spec:
      containers:
        - name: container-name
          image: gcr.io/project-id/image-name
          resources:
            requests:
              cpu: 1
          ports:
            - name: port80
              containerPort: 80
            - name: port443
              containerPort: 443
            - name: port6001
              containerPort: 6001
Then I want to create a service that enables the pod to listen on all these ports. I know that the following YAML file works to create a service that listens on one port:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: app-label
  type: LoadBalancer
However, when I want the pod to listen on multiple ports like this, it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
    - port: 80
      targetPort: 80
    - port: 443
      targetPort: 443
    - port: 6001
      targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
How can I make my pod listen to multiple ports?
You have two options:
You could have multiple services, one for each port. As you pointed out, each service will end up with a different IP address
You could have a single service with multiple ports. In this particular case, you must give all ports a name.
In your case, the service becomes:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: something
      port: 6001
      targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
This is necessary so that endpoints can be disambiguated.
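Once applied, you can confirm that all three named ports made it onto the load balancer and map to endpoints (a quick sketch):
kubectl get service service-name        # PORT(S) should show 80, 443 and 6001 alongside the external IP
kubectl describe service service-name   # lists each named port with its endpoints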