I have deployed a MongoDB replica set (v6.0.2) using this chart: bitnami/mongodb 13.3.1.
I have enabled external access with SVC type LoadBalancer.
I have my DNS records pointing to each external SVC IP.
mongodb-0.dev -> 10.246.50.1
mongodb-1.dev -> 10.246.50.2
mongodb-2.dev -> 10.246.50.3
kubectl get pod,svc -owide
NAME READY STATUS
pod/mongodb-0 2/2 Running
pod/mongodb-1 2/2 Running
pod/mongodb-2 2/2 Running
pod/mongodb-arbiter-0 2/2 Running
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/mongodb-0-external LoadBalancer 10.43.182.101 10.246.50.1 27017:30727/TCP 6m6s app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb,statefulset.kubernetes.io/pod-name=mongodb-0
service/mongodb-1-external LoadBalancer 10.43.130.10 10.246.50.2 27017:30137/TCP 6m6s app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb,statefulset.kubernetes.io/pod-name=mongodb-1
service/mongodb-2-external LoadBalancer 10.43.86.149 10.246.50.3 27017:32246/TCP 6m6s app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb,statefulset.kubernetes.io/pod-name=mongodb-2
service/mongodb-arbiter-headless ClusterIP None <none> 27017/TCP 8m51s app.kubernetes.io/component=arbiter,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb
service/mongodb-headless ClusterIP None <none> 27017/TCP 8m51s app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb
I run a Docker container from OUTSIDE the cluster to test the client:
docker run --rm -it docker.io/bitnami/mongodb:6.0.2-debian-11-r1 bash
I CAN connect from OUTSIDE using this command:
mongosh mongodb://mongodb-0.dev:27017 --authenticationDatabase admin -u root -p root1234
Connecting to: mongodb://<credentials>@mongodb-0.dev:27017/?directConnection=true&authSource=admin
rs0 [direct: primary] test>
I CAN'T connect using this one:
mongosh mongodb://mongodb-0.dev:27017,mongodb-1.dev:27017,mongodb-2.dev:27017?replicaSet=rs0 --authenticationDatabase admin -u root -p root1234
Connecting to: mongodb://<credentials>@mongodb-0.dev:27017,mongodb-1.dev:27017,mongodb-2.dev:27017/?replicaSet=rs0&authSource=admin&appName=mongosh+1.6.0
MongoNetworkError: getaddrinfo ENOTFOUND mongodb-0.mongodb-headless.mongodb.svc.cluster.local
Of course, if I add the DNS records it works (pay attention to the prompt):
mongodb-0.mongodb-headless.mongodb.svc.cluster.local -> 10.246.50.1
mongodb-1.mongodb-headless.mongodb.svc.cluster.local -> 10.246.50.2
mongodb-2.mongodb-headless.mongodb.svc.cluster.local -> 10.246.50.3
Connecting to: mongodb://<credentials>@mongodb-0.dev:27017,mongodb-1.dev:27017,mongodb-2.dev:27017/?replicaSet=rs0&authSource=admin&appName=mongosh+1.6.0
rs0 [primary] test>
But I don't want that workaround at the DNS level; it's clearly wrong. I also don't want to hardcode IPs in /etc/hosts.
Extra TIP:
From INSIDE the k8s cluster this works:
kubectl run mongo-client-6 --rm -ti --image=docker.io/bitnami/mongodb:6.0.2-debian-11-r1 -- bash
mongosh --host mongodb-headless --authenticationDatabase admin -u root -p root1234
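For completeness, the full replica-set URI from INSIDE should also work, since the headless per-pod names resolve there; a sketch built from the names and credentials above (the mongodb namespace is taken from the FQDN in the earlier error):
mongosh "mongodb://root:root1234@mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017,mongodb-1.mongodb-headless.mongodb.svc.cluster.local:27017,mongodb-2.mongodb-headless.mongodb.svc.cluster.local:27017/?replicaSet=rs0&authSource=admin"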
Summary: I want to connect using the SECOND method (a replica set connection from OUTSIDE). Any help?
I have the same issue. I believe the core reason is that the replica set's internal host entries are cluster-internal names rather than external URLs, so when you access the server from outside, the replica set discovery still advertises the internal headless service hostnames.
I am still trying to solve this issue on my side.
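One direction I am experimenting with is reconfiguring the replica set so that its members advertise the external DNS names instead of the headless-service names. A rough mongosh sketch, using the hostnames from the question (the member order and the arbiter entry should be checked against rs.conf() first, and in-cluster clients would then need to resolve those external names too):
// connect to the primary first, e.g. mongosh mongodb://mongodb-0.dev:27017 -u root -p root1234 --authenticationDatabase admin
cfg = rs.conf()
cfg.members[0].host = "mongodb-0.dev:27017"   // verify which index is which member before changing it
cfg.members[1].host = "mongodb-1.dev:27017"
cfg.members[2].host = "mongodb-2.dev:27017"
rs.reconfig(cfg)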
Related
I have to take a pg_basebackup of my Postgres cluster, so how can I take a backup of this cluster?
I used the Bitnami Helm chart for the PostgreSQL installation. The following link is for reference:
https://phoenixnap.com/kb/postgresql-kubernetes
From that page, I followed the 2nd part: Deploy PostgreSQL by Creating Configuration from Scratch.
After the successful installation, I tried to take a pg_basebackup from localhost:
pg_basebackup -h localhost -U postgres -D /root/base_bkp -v
pg_basebackup: error: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
2023-01-03 07:16:42.889 GMT [1005845] FATAL: no pg_hba.conf entry for replication connection from host "10.XX.XXX.X", user "postgres", SSL off
2023-01-03 07:16:42.889 GMT [1005845] DETAIL: Client IP address resolved to "10-XX-XXX-X.postgres-postgresql-ha-pgpool.postgres.svc.cluster.local", forward lookup not checked.
host "10.XX.XXX.X" --> This is the pgpool node ip
root@hostname:~# kubectl get pods -n postgres -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
postgres-postgresql-ha-pgpool-6XXXXXX-fXXXX 1/1 Running 0 1d7h 10.XX.XXX.X <none> <none>
postgres-postgresql-ha-postgresql-0 1/1 Running 0 1d7h 10.XX.YYY.Y <none> <none>
postgres-postgresql-ha-postgresql-1 1/1 Running 0 1d7h 10.XX.ZZZ.Z <none> <none>
postgres-postgresql-ha-postgresql-2 1/1 Running 0 1d7h 10.XX.PPP.P <none> <none>
Can anyone help me with where I need to initiate the backup from, and how?
I run a local Kubernetes cluster (Minikube) and I am trying to connect pgAdmin to PostgreSQL; both run in Kubernetes.
What would be the connection string? Shall I access by service ip address or by service name?
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbpostgresql NodePort 10.103.252.31 <none> 5432:30201/TCP 19m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d21h
pgadmin-service NodePort 10.109.58.168 <none> 80:30200/TCP 40h
kubectl get ingress:
NAME CLASS HOSTS ADDRESS PORTS AGE
pgadmin-ingress <none> * 192.168.49.2 80 40h
kubectl get pod
NAME READY STATUS RESTARTS AGE
pgadmin-5569ddf4dd-49r8f 1/1 Running 1 40h
postgres-78f4b5db97-2ngck 1/1 Running 0 23m
I have tried with 10.103.252.31:30201 but without success.
Inside the cluster, services can refer to each other by DNS based on Service object names. So in this case you would use dbpostgresql or dbpostgresql.default.svc.cluster.local as the hostname.
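For example, in pgAdmin's connection dialog the server details would look something like this (the username is just the common default and is a placeholder, adjust to your deployment):
Host: dbpostgresql.default.svc.cluster.local
Port: 5432
Username: postgres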
Remember Minikube is running inside its own container; the NodePorts and cluster IPs you're getting back are only reachable inside Minikube. So to get Minikube's resolution of the port and IP, run: minikube service <your-service-name> --url
This will return something like http://127.0.0.1:50946 which you can use to create an external DB connection.
Another option would be to use kubectl to forward a local port to the service, e.g. kubectl port-forward service/django-service 8080:80
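Applied to the service from this question (assuming the default namespace), that would be something like the following; a client on the host could then connect to localhost:5432:
kubectl port-forward service/dbpostgresql 5432:5432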
I deployed a Kubernetes (v1.17.5) cluster on OpenStack instances using Kubespray. Those instances are CentOS 7.6.1811 qcow2 images imported in Glance.
The install was successful, and I can see my nodes and pods with kubectl commands.
I used the deploy_netchecker option to deploy NetChecker and test the network within my cluster, and set network_plugin="flannel".
I also tried kube_proxy_mode="iptables", but it doesn't seem to affect the result.
That's pretty much all the changes I did in the k8s-cluster.yml file.
All the pods are running, and the services too:
[centos@cl1-master-0 ~]$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 46h
default netchecker-service NodePort 10.233.13.213 <none> 8081:31081/TCP 46h
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 46h
kube-system dashboard-metrics-scraper ClusterIP 10.233.59.12 <none> 8000/TCP 46h
kube-system kubernetes-dashboard ClusterIP 10.233.63.20 <none> 443/TCP 46h
But the netchecker API gives the following answer:
[root@localhost ~]# curl http://X.X.X.X:31081/api/v1/connectivity_check
{"Message":"Connectivity check fails. Reason: there are absent or outdated pods; look up the payload","Absent":["netchecker-agent-hostnet-kk56x","netchecker-agent-hostnet-klldn","netchecker-agent-hostnet-r2vqs","netchecker-agent-hostnet-wqhjs"],"Outdated":["netchecker-agent-4jsgf","netchecker-agent-c9pcf","netchecker-agent-hostnet-jzbfv","netchecker-agent-vxgpf"]}
For an unknown reason, I cannot access the API from a cluster node with localhost, so I used a floating IP with OpenStack.
Here are some logs from the agent:
[centos@cl1-master-0 ~]$ sudo vi /var/log/pods/default_netchecker-agent-vjnwl_d8290268-3ea4-4e3c-acb4-295ab162a735/netchecker-agent/0.log
{"log":"I0701 13:04:01.814246 1 agent.go:135] Response status code: 200\n","stream":"stderr","time":"2020-07-01T13:04:01.81437579Z"}
{"log":"I0701 13:04:01.814272 1 agent.go:128] Sleep for 15 second(s)\n","stream":"stderr","time":"2020-07-01T13:04:01.814393199Z"}
{"log":"I0701 13:04:16.817398 1 agent.go:55] Send payload via URL: http://netchecker-service:8081/api/v1/agents/netchecker-agent-vjnwl\n","stream":"stderr","time":"2020-07-01T13:04:16.817786735Z"}
[centos@cl1-master-0 ~]$ sudo vi /var/log/pods/default_netchecker-agent-hostnet-klldn_d5fa6e72-885f-44e1-97a6-880a25e6d6d6/netchecker-agent/0.log
{"log":"E0701 13:05:22.804428 1 agent.go:133] Error while sending info. Details: Post http://netchecker-service:8081/api/v1/agents/netchecker-agent-hostnet-klldn: dial tcp 10.233.13.213:8081: i/o timeout\n","stream":"stderr","time":"2020-07-01T13:05:22.805138032Z"}
{"log":"I0701 13:05:22.804474 1 agent.go:128] Sleep for 15 second(s)\n","stream":"stderr","time":"2020-07-01T13:05:22.805190295Z"}
{"log":"I0701 13:05:37.807140 1 agent.go:55] Send payload via URL: http://netchecker-service:8081/api/v1/agents/netchecker-agent-hostnet-klldn\n","stream":"stderr","time":"2020-07-01T13:05:37.807309111Z"}
Logs from the server do not indicate any error.
I tried to check DNS resolution with the following:
[centos@cl1-master-0 ~]$ kubectl exec -it netchecker-agent-4jsgf -- /bin/sh
/ $ nslookup kubernetes.default
Server: 169.254.25.10
Address 1: 169.254.25.10
nslookup: can't resolve 'kubernetes.default'
[centos@cl1-master-0 ~]$ kubectl exec -it netchecker-agent-4jsgf -- cat /etc/resolv.conf
nameserver 169.254.25.10
search default.svc.cluster.local svc.cluster.local cluster.local openstacklocal
options ndots:5
169.254.25.10 is the IP of nodelocaldns, but it doesn't seem to forward queries to the deployed coredns service.
When I use nslookup netchecker-service.default.svc.cluster.local 10.233.0.3, with the coredns IP, I get a correct answer.
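To compare the two resolvers directly, the same lookup can be pointed at each of them explicitly from inside the agent pod (names and IPs taken from the outputs above):
nslookup netchecker-service.default.svc.cluster.local 169.254.25.10
nslookup netchecker-service.default.svc.cluster.local 10.233.0.3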
What can be wrong with my configuration?
Thanks in advance.
UPDATE: The Flannel plugin has an issue, and there is a fix to apply on all nodes of the cluster. Once that is done, the pods successfully report back to the netchecker server.
A newbie question related to k8s: I've just installed a k3d cluster.
I've deployed this helm chart:
$ helm install stable/docker-registry
It's been installed and pod is running correctly.
Nevertheless, I can't quite figure out how to access this freshly deployed service.
According to the documentation, it listens on port 5000 and uses a ClusterIP. A service is also deployed.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 42h
docker-registry-1580212517 ClusterIP 10.43.80.185 <none> 5000/TCP 19m
EDIT
I've been able to tell the chart to create an ingress:
$ kubectl get ingresses.networking.k8s.io -n default
NAME HOSTS ADDRESS PORTS AGE
docker-registry-1580214408 chart-example.local 172.20.0.4 80 10m
Nevertheless, I'm still unable to push images to the registry:
$ docker push 172.20.0.4/feedly:v1
The push refers to repository [172.20.0.4/feedly]
Get https://172.20.0.4/v2/: x509: certificate has expired or is not yet valid
Since the service type is ClusterIP, you can't access the service from the host system directly. You can run the command below to access the service from your host:
kubectl port-forward --address 0.0.0.0 svc/docker-registry-1580212517 5000:5000 &
curl <host IP/name>:5000
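Once the port-forward is running, the registry is reachable on localhost:5000, and Docker treats a registry on localhost as insecure by default, so pushing the image from the question should work roughly like this:
docker tag feedly:v1 localhost:5000/feedly:v1
docker push localhost:5000/feedly:v1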
I created a one-replica ZooKeeper + Kafka cluster with the Kafka chart from the official incubator repo:
helm install --name mykafka -f kafka.yaml incubator/kafka
This gives me two pods:
kubectl get pods
NAME READY STATUS
mykafka-kafka-0 1/1 Running
mykafka-zookeeper-0 1/1 Running
And four services (in addition to the default kubernetes service):
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
mykafka-kafka ClusterIP 10.108.143.59 <none> 9092/TCP
mykafka-kafka-headless ClusterIP None <none> 9092/TCP
mykafka-zookeeper ClusterIP 10.109.43.48 <none> 2181/TCP
mykafka-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP
If I shell into the zookeeper pod:
> kubectl exec -it mykafka-zookeeper-0 -- /bin/bash
I use the curl tool to test TCP connectivity. I expect a communications error as the server isn't using HTTP, but if curl can't even connect and I have to ctrl-C out, then the TCP connection isn't working.
I can access the local pod through curl localhost:2181:
root@mykafka-zookeeper-0:/# curl localhost:2181
curl: (52) Empty reply from server
I can access the other pod through curl mykafka-kafka:9092:
root@mykafka-zookeeper-0:/# curl mykafka-kafka:9092
curl: (56) Recv failure: Connection reset by peer
But I can't access mykafka-zookeeper:2181. That name resolves to the cluster IP, but the attempt to TCP connect hangs until I ctrl-C:
root@mykafka-zookeeper-0:/# curl -v mykafka-zookeeper:2181
* Rebuilt URL to: mykafka-zookeeper:2181/
* Trying 10.109.43.48...
^C
Similarly, I can shell into the kafka pod:
> kubectl exec -it mykafka-kafka-0 -- /bin/bash
Connecting to the Zookeeper pod by the service name works fine:
root@mykafka-kafka-0:/# curl mykafka-zookeeper:2181
curl: (52) Empty reply from server
Connecting to localhost kafka works fine:
root@mykafka-kafka-0:/# curl localhost:9092
curl: (56) Recv failure: Connection reset by peer
But connecting to the Kafka pod by the service name doesn't work and I must ctrl-C the curl attempt:
curl -v mykafka-kafka:9092
* Rebuilt URL to: mykafka-kafka:9092/
* Hostname was NOT found in DNS cache
* Trying 10.108.143.59...
^C
Can anyone explain why I can only connect to a Kubernetes service from outside the service and not from within it?
I believe what you're experiencing can be resolved by looking at how your kubelet is set up to run. There is a setting you can toggle when starting the kubelet called --hairpin-mode. By default it is set to promiscuous-bridge, with which a pod may not be able to reach itself through its own service, but you can change it to hairpin-veth, which allows a pod to connect to its own service.
There are a few issues on the topic, but this seems to be referenced the most:
https://github.com/kubernetes/kubernetes/issues/45790
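If you want to verify which mode your kubelet is actually using and switch it, here is a rough sketch (assuming a systemd-managed kubelet and the common kubeadm config path, both of which may differ on your nodes):
# See whether the flag is passed on the kubelet command line
ps aux | grep kubelet | grep -o 'hairpin-mode[^ ]*'
# Or whether it comes from a KubeletConfiguration file (field name: hairpinMode)
grep -i hairpin /var/lib/kubelet/config.yaml
# After setting hairpinMode: hairpin-veth (or --hairpin-mode=hairpin-veth), restart the kubelet
sudo systemctl restart kubelet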