I have a Minikube Kubernetes cluster running CockroachDB, which looks like this:
kubectl get pods
NAME READY STATUS RESTARTS AGE
test-cockroachdb-0 1/1 Running 17 95m
test-cockroachdb-1 1/1 Running 190 2d
test-cockroachdb-2 1/1 Running 160 2d
test-cockroachdb-init-m8rzp 0/1 Completed 0 2d
cockroachdb-client-secure 1/1 Running 0 2d
I want to get a connection string that I can use in my application.
To verify my connection string, I am using the tool DBeaver.
The database name is configured to 'defaultdb', which exists on my cluster, and I have a user with the relevant password. The port is accurate as well (the default CockroachDB port).
However, when it comes to the certificates needed to connect, I am at a loss. How do I generate/gather the certificates I need to successfully connect to my cluster, and how do I connect to it using DBeaver?
Edit:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/myname-cockroachdb-0 1/1 Running 27 156m
pod/myname-cockroachdb-1 1/1 Running 197 2d1h
pod/myname-cockroachdb-2 1/1 Running 167 2d1h
pod/myname-cockroachdb-init-m8rzp 0/1 Completed 0 2d1h
pod/myname-client-secure 1/1 Running 0 2d1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/myname-cockroachdb ClusterIP None <none> 26257/TCP,8080/TCP 2d1h
service/myname-cockroachdb-public ClusterIP 10.xxx.xxx.xx <none> 26257/TCP,8080/TCP 2d1h
service/kubernetes ClusterIP 10.xx.0.1 <none> 443/TCP 2d1h
NAME READY AGE
statefulset.apps/myname-cockroachdb 3/3 2d1h
NAME COMPLETIONS DURATION AGE
job.batch/myname-cockroachdb-init 1/1 92s 2d1h
As @FL3SH already said, you can use kubectl port-forward <pod_name> <port>.
This is nicely explained in the CockroachDB documentation, Step 4. Access the Admin UI; please use it as an example and set different ports.
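For example, a minimal sketch using the pod name from your edit (adjust names and ports to your setup):
# Forward the Admin UI (container port 8080) to local port 8081:
kubectl port-forward myname-cockroachdb-0 8081:8080
# Then open http://localhost:8081 in your browser.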
As for the certificates:
As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
Get the name of the Pending CSR for the first pod:
kubectl get csr
NAME AGE REQUESTOR CONDITION
default.node.cockroachdb-0 1m system:serviceaccount:default:default Pending
node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 4m kubelet Approved,Issued
node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 4m kubelet Approved,Issued
node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 5m kubelet Approved,Issued
If you do not see a Pending CSR, wait a minute and try again.
You can inspect the CSR with kubectl describe csr default.node.cockroachdb-0.
It might look like this:
Name: default.node.cockroachdb-0
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500
Requesting User: system:serviceaccount:default:default
Status: Pending
Subject:
Common Name: node
Serial Number:
Organization: Cockroach
Subject Alternative Names:
DNS Names: localhost
cockroachdb-0.cockroachdb.default.svc.cluster.local
cockroachdb-public
IP Addresses: 127.0.0.1
10.48.1.6
Events: <none>
If it does, you can approve the certificate using:
kubectl certificate approve default.node.cockroachdb-0
Please do follow the Orchestrate CockroachDB in a Single Kubernetes Cluster guide.
Let me know if you need any further help.
You can use kubectl port-forward service/myname-cockroachdb 26257 and in DBeaver just use localhost:26257 as a connection string.
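For the certificate side, one approach is to reuse the client certificates from the secure client pod. A minimal sketch, assuming the standard secure-client example where the certs are mounted at /cockroach-certs (adjust the pod name and path to your setup):
# Copy the client certificates out of the secure client pod;
# if kubectl cp fails because the image lacks tar, use kubectl exec ... cat instead
kubectl cp cockroachdb-client-secure:/cockroach-certs ./certs
With the port-forward above running, point DBeaver at localhost:26257, database defaultdb, and on the SSL tab set the CA certificate to certs/ca.crt and the client certificate/key to certs/client.root.crt and certs/client.root.key. Depending on the driver version, you may need to convert the key to PK8/DER format first.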
I have a DaemonSet configuration that runs on all nodes.
Every pod listens on port 34567. I want a pod on a different node to communicate with this pod. How can I achieve that?
Find the target Pod's IP address as shown below
controlplane $ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-42pq8 1/1 Running 1 5m43s 10.88.0.4 node01 <none> <none>
coredns-fb8b8dccf-f9n5x 1/1 Running 1 5m43s 10.88.0.3 node01 <none> <none>
etcd-controlplane 1/1 Running 0 4m38s 172.17.0.23 controlplane <none> <none>
katacoda-cloud-provider-74dc75cf99-2jrpt 1/1 Running 3 5m42s 10.88.0.2 node01 <none> <none>
kube-apiserver-controlplane 1/1 Running 0 4m33s 172.17.0.23 controlplane <none> <none>
kube-controller-manager-controlplane 1/1 Running 0 4m45s 172.17.0.23 controlplane <none> <none>
kube-keepalived-vip-smkdc 1/1 Running 0 5m27s 172.17.0.26 node01 <none> <none>
kube-proxy-8sxkt 1/1 Running 0 5m27s 172.17.0.26 node01 <none> <none>
kube-proxy-jdcqc 1/1 Running 0 5m43s 172.17.0.23 controlplane <none> <none>
kube-scheduler-controlplane 1/1 Running 0 4m47s 172.17.0.23 controlplane <none> <none>
weave-net-8cxqg 2/2 Running 1 5m27s 172.17.0.26 node01 <none> <none>
weave-net-s4tcj 2/2 Running 1 5m43s 172.17.0.23 controlplane <none> <none>
Next "exec" into the originating pod - kube-proxy-8sxkt in my example
kubectl -n kube-system exec -it kube-proxy-8sxkt -- sh
Next, use the destination pod's IP and port (10256 in my example) to connect. Note that you may have to install curl/telnet if your originating container's image does not include them:
# curl telnet://172.17.0.23:10256
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
You can connect via the pod's IP.
Note: this IP can only be used inside the k8s cluster.
A pod's address (IP) is an option you can use, but keep in mind that pod IPs can change from time to time due to deployment and scaling changes.
I would suggest exposing the DaemonSet using a Service of type NodePort if you have a fixed number of nodes and not much autoscaling (see the sketch after the link below).
If you want to connect to the pod on a specific node, you can use the IP of the node the pod is scheduled on together with the NodePort Service:
NodeIP:NodePort
Read more at : https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
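For example, a minimal sketch of such a NodePort Service, assuming the DaemonSet pods carry the (hypothetical) label app: my-daemon and listen on 34567:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-daemon-svc
spec:
  type: NodePort
  selector:
    app: my-daemon
  ports:
    - port: 34567
      targetPort: 34567
      nodePort: 30567   # must fall in the 30000-32767 NodePort range
EOF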
If you don't need a specific pod and any of the DaemonSet's replicas will do, you can use the Service name to connect pods with each other:
my-svc.my-namespace.svc.cluster-domain.example
Read more about Service and pod DNS:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
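For instance, with the (hypothetical) my-daemon-svc Service sketched above in place, any pod in the cluster can reach a DaemonSet replica through the Service's DNS name:
# From inside any pod in the default namespace:
curl http://my-daemon-svc.default.svc.cluster.local:34567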
I tried to deploy an nginx server using Kubernetes. I was able to create the deployment and then create the service. But when I ran the curl command, I got an error; I'm not able to curl or open the nginx webpage in the browser.
Below are the commands I used and the error I got.
kubectl get pods
NAME READY STATUS RESTARTS AGE
curl 1/1 Running 8 15d
curl-deployment-646445496f-59fs9 1/1 Running 7 15d
hello-5d448ffc76-cwzcl 1/1 Running 13 23d
hello-node-7567d9fdc9-ffdkx 1/1 Running 8 20d
my-nginx-5b6fb7fb46-bdzdq 0/1 ContainerCreating 0 15d
mytestwebapp 1/1 Running 10 21d
nginx-6799fc88d8-w76cb 1/1 Running 5 13d
nginx-deployment-66b6c48dd5-9mkh8 1/1 Running 12 23d
nginx-test-795d659f45-d9shx 1/1 Running 4 13d
rss-site-7b6794856f-9586w 2/2 Running 40 15d
rss-site-7b6794856f-z59vn 2/2 Running 78 21d
jit@jit-Vostro-15-3568:~$ kubectl logs webserver
Error from server (NotFound): pods "webserver" not found
jit@jit-Vostro-15-3568:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.104.134.171 <pending> 8080:31733/TCP 13d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d
my-nginx NodePort 10.103.114.92 <none> 8080:32563/TCP,443:32397/TCP 15d
nginx NodePort 10.110.113.60 <none> 80:30985/TCP 13d
nginx-test NodePort 10.109.16.192 <none> 8080:31913/TCP 13d
jit@jit-Vostro-15-3568:~$ curl kube-worker-1:30985
curl: (6) Could not resolve host: kube-worker-1
As you can see, you have a pod called nginx, which indicates that an nginx server is already deployed in a pod on your cluster. You don't have a pod called webserver; that's why you're getting the
Error from server (NotFound): pods "webserver" not found error.
Also, to access the nginx service, curl it via IP and port: from inside the cluster use the ClusterIP with the service port, and from outside use a node's IP with the NodePort:
$ curl 10.110.113.60:80
$ curl <node-IP>:30985
If you point a web browser to http://IP_OF_NODE:ASSIGNED_PORT (where IP_OF_NODE is an IP address of one of your nodes and ASSIGNED_PORT is the port assigned during the create service command), you should see the NGINX Welcome page!
Take a look: nginx-app-kubernetes.
I tried the above scenario locally.
Do a kubectl describe svc <svc-name>
and check whether it has any endpoints; it probably doesn't have any.
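For example, using the my-nginx service name from the question's output:
kubectl describe svc my-nginx | grep -i endpoints
# If Endpoints is empty, the Service's selector doesn't match any pod labels:
kubectl get pods --show-labels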
I'm currently learning Kubernetes and started to deploy the ELK stack on a Minikube cluster (running on a Linux EC2 instance). Though I was able to run all the objects successfully, I'm not able to access any of the tools from my Windows browser. I'm looking for some input on how to access all the exposed ports below from my Windows browser.
Cluster details:
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-deployment-5c7d5cb5fb-g55ft 1/1 Running 0 3m43s
pod/kibana-deployment-76d8744864-ddx4h 1/1 Running 0 3m43s
pod/logstash-deployment-56849fcd7b-bjlzf 1/1 Running 0 3m43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch-service ClusterIP XX.XX.XX.XX <none> 9200/TCP 3m43s
service/kibana-service ClusterIP XX.XX.XX.XX <none> 5601/TCP 3m43s
service/kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5m15s
service/logstash-service ClusterIP 10.XX.XX.XX <none> 9600/TCP,5044/TCP 3m43s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/elasticsearch-deployment 1/1 1 1 3m43s
deployment.apps/kibana-deployment 1/1 1 1 3m43s
deployment.apps/logstash-deployment 1/1 1 1 3m43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/elasticsearch-deployment-5c7d5cb5fb 1 1 1 3m43s
replicaset.apps/kibana-deployment-76d8744864 1 1 1 3m43s
replicaset.apps/logstash-deployment-56849fcd7b 1 1 1 3m43s
Note: I also tried to run all the above services as NodePort, and using the minikube ip I was able to run curl commands to check the status of the applications, but I'm still not able to access any of them via my browser.
Generally, if you want to expose anything outside the cluster, you need to use a Service of type
NodePort or LoadBalancer, or use an Ingress. If you check the Minikube documentation, you will find that Minikube supports all those types.
If you are thinking about LoadBalancer, you can use minikube tunnel.
When you are using a cloud environment and non-standard ports, you should check the firewall rules to verify that the port/traffic is open.
Regarding the error from the comment, it seems that you have an issue with Kibana port 5601.
Did you check similar threads like this or this? If those don't help, please provide your Kibana configuration.
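For example, one way to reach Kibana, as a sketch using the service name from the question (since the browser runs on a separate Windows machine, you may additionally need an SSH tunnel from Windows to the EC2 host):
# Switch the service to NodePort and let minikube print a reachable URL:
kubectl patch svc kibana-service -p '{"spec": {"type": "NodePort"}}'
minikube service kibana-service --url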
Did you try a normal port-forward instead of minikube ip and expose? Those didn't work for me either.
Something like this may help:
kubectl port-forward deployment/kibana-kibana 5601
Using Ceph v1.14.10, Rook v1.3.8 on k8s 1.16 on-premise. After 10 days without any trouble, we decided to drain some nodes; since then, all the moved pods can't attach to their PVs any more. It looks like the Ceph cluster is broken:
My ConfigMap rook-ceph-mon-endpoints is referencing 2 missing mon pod IPs:
csi-cluster-config-json: '[{"clusterID":"rook-ceph","monitors":["10.115.0.129:6789","10.115.0.4:6789","10.115.0.132:6789"]}]
But
kubectl -n rook-ceph get pod -l app=rook-ceph-mon -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rook-ceph-mon-e-56b849775-4g5wg 1/1 Running 0 6h42m 10.115.0.2 XXXX <none> <none>
rook-ceph-mon-h-fc486fb5c-8mvng 1/1 Running 0 6h42m 10.115.0.134 XXXX <none> <none>
rook-ceph-mon-i-65666fcff4-4ft49 1/1 Running 0 30h 10.115.0.132 XXXX <none> <none>
Is this normal, or must I run some kind of "reconciliation" task to update the CM with the new mon pod IPs?
(could be related to https://github.com/rook/rook/issues/2262)
I had to manually update these resources (commands sketched after this list):
secret rook-ceph-config
cm rook-ceph-mon-endpoints
cm rook-ceph-csi-config
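For example:
kubectl -n rook-ceph edit secret rook-ceph-config
kubectl -n rook-ceph edit cm rook-ceph-mon-endpoints
kubectl -n rook-ceph edit cm rook-ceph-csi-config
# Replace the stale mon IPs with the current ones from:
kubectl -n rook-ceph get pod -l app=rook-ceph-mon -o wide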
As @travisn said:
The operator owns updating that configmap and secret. It's not expected to update them manually unless there is some disaster recovery situation as described at https://rook.github.io/docs/rook/v1.4/ceph-disaster-recovery.html.
The Kubernetes v1.15.0 master is not able to reach pod IP addresses. I was able to get this working up to 1.14, but this time it's not working any more. I have been setting up k8s clusters in EC2 using kubeadm.
Please find a log below; any comments?
[ec2-user@ip-172-31-18-31 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-16-120.ap-south-1.compute.internal Ready <none> 97m v1.15.0
ip-172-31-18-31.ap-south-1.compute.internal Ready master 116m v1.15.0
[ec2-user@ip-172-31-18-31 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-deploy-7fd5fc7ff-dh9pw 1/1 Running 0 6m32s 10.44.0.3 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
hello-deploy-7fd5fc7ff-vrxbd 1/1 Running 0 6m32s 10.44.0.4 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
hello-pod1 1/1 Running 0 22m 10.44.0.1 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
[ec2-user@ip-172-31-18-31 ~]$ hostname
ip-172-31-18-31.ap-south-1.compute.internal
[ec2-user@ip-172-31-18-31 ~]$ curl http://10.44.0.4
Simply create a Service for your pod to access it within the cluster; the type of Service should be ClusterIP.
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec.
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
E.g.:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test
  ports:
    # A Service needs at least one port; port 80 here is an assumption,
    # use whatever port your container actually listens on.
    - port: 80
      targetPort: 80
Remember to match the Service's selector to the pod's labels.
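For example, after applying the manifest (the file name is a placeholder), the pods become reachable inside the cluster through the Service:
kubectl apply -f test-service.yaml
kubectl get svc test-service
# From any other pod in the cluster:
curl http://test-service.default.svc.cluster.local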