Vault authentication using Kubernetes is failing

Created a Vault and Consul cluster on Kubernetes with TLS by following
https://testdriven.io/blog/running-vault-and-consul-on-kubernetes/
and was trying to configure Kubernetes auth method using https://learn.hashicorp.com/vault/identity-access-management/vault-agent-k8s
Everything went fine up to step 3 (Verify the Kubernetes auth method configuration), but when I tested the connection I got the error "Failed to connect to vault port 8200: Connection refused".
Can anyone help me with this?
$ kubectl run --generator=run-pod/v1 tmp --rm -i --tty --serviceaccount=vault-auth --image alpine:3.7
/ # VAULT_ADDR=https://vault:8200
/ # curl -s $VAULT_ADDR/v1/sys/health | jq
/ # curl $VAULT_ADDR/v1/sys/health | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to vault port 8200: Connection refused
$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ClusterIP None <none> 8500/TCP,8443/TCP,8400/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP 177m
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 26h
vault ClusterIP 10.245.215.195 <none> 8200/TCP
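A hedged troubleshooting sketch for this symptom: "Connection refused" on a ClusterIP usually means the Service has no ready endpoints behind it (for example a sealed or not-yet-ready Vault pod), or the short name vault resolves in a different namespace than expected. The label selector and namespace below are assumptions; check them against your own manifests.
$ kubectl get endpoints vault                               # should list <pod-ip>:8200 pairs; empty means no ready backends
$ kubectl get pods -l app=vault -o wide                     # assumed label; use the selector from your vault Service
/ # nslookup vault                                          # inside the tmp pod: confirm the name resolves to 10.245.215.195
/ # curl -sk https://vault.default.svc:8200/v1/sys/health   # -k because the tutorial uses a self-signed certificate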

Not able to access Nginx from an external IP even after k8s nodeport service exposed

I am not able to access the nginx server using http://:30602 and also http://:30602
OS: Ubuntu 22
I also checked if any firewall is blocking it.
Using ufw
admin@tst-server:~$ sudo ufw status verbose
Status: inactive
Using netstat
admin@tst-server:~$ netstat -an | grep 22 | grep -i listen
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
unix 2 [ ACC ] STREAM LISTENING 354787 /run/containerd/s/9a866c6ea3a4fe1976aaed0884400cd59228d43776774cc3fad2d0b9a7c2ed7b
unix 2 [ ACC ] STREAM LISTENING 21722 /run/systemd/private
admin@tst-server:~$ netstat -an | grep 30602 | grep -i listen
Commands used for nginx deployment
Create Deployment
kubectl create deployment nginx --image=nginx
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
myapp 2/2 2 2 8d
nginx 1/1 1 1 9m50s
Create Service
kubectl create service nodeport nginx --tcp=80:80
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
nginx NodePort 10.109.112.116 <none> 80:30602/TCP 10m
Test it out
admin@tst-server:~$ hostname
tst-server.com
admin@tst-server:~$ curl tst-server.com:30602
curl: (7) Failed to connect to tst-server.com port 30602 after 10 ms: Connection refused
Got it working by getting the node IP address for Minikube using the following command
$ kubectl cluster-info
and then
curl http://<node_ip>:30602
When I curl tst-server.com:30602, why does it redirect to tst-server.kanaaritech.com?
To check whether the NodePort is working, test it against the node's IP on port 30602.
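A minimal sketch of that lookup with plain kubectl, assuming the node's InternalIP is reachable from where you run curl:
$ kubectl get nodes -o wide       # note the INTERNAL-IP / EXTERNAL-IP columns
$ NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
$ curl http://$NODE_IP:30602
On Minikube itself, minikube ip prints the same node address.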

kubernetes - nginx ingress - How to access

I could not access my application from the k8s cluster.
With nodePort everything works. If I use the ingress controller, I can see that it is created successfully. I am able to ping the IP. If I try to telnet, it says connection refused. I am also unable to access the application. What am I missing? I do not see any exceptions in the ingress pod.
kubectl get ing -n test
NAME CLASS HOSTS ADDRESS PORTS AGE
web-ingress <none> * 192.168.0.102 80 44m
ping 192.168.0.102
PING 192.168.0.102 (192.168.0.102) 56(84) bytes of data.
64 bytes from 192.168.0.102: icmp_seq=1 ttl=64 time=0.795 ms
64 bytes from 192.168.0.102: icmp_seq=2 ttl=64 time=0.860 ms
64 bytes from 192.168.0.102: icmp_seq=3 ttl=64 time=0.631 ms
^C
--- 192.168.0.102 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.631/0.762/0.860/0.096 ms
telnet 192.168.0.102 80
Trying 192.168.0.102...
telnet: Unable to connect to remote host: Connection refused
kubectl get all -n ingress-nginx
shows this
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-htvkh 0/1 Completed 0 99m
pod/ingress-nginx-admission-patch-cf8gj 0/1 Completed 0 99m
pod/ingress-nginx-controller-7fd7d8df56-kll4v 1/1 Running 0 99m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.102.220.87 <none> 80:31692/TCP,443:32736/TCP 99m
service/ingress-nginx-controller-admission ClusterIP 10.106.159.230 <none> 443/TCP 99m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 99m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-7fd7d8df56 1 1 1 99m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 7s 99m
job.batch/ingress-nginx-admission-patch 1/1 8s 99m
Answer
The IP from kubectl get ing -n test is not an externally accessible address that you should be using.
Your NGINX Ingress Controller Deployment has a Service deployed alongside it. You can use the external IP of this Service (if it has one) to hit your Ingress Controller.
Because your Service is of NodePort type (and does not show an external IP), you must address the Ingress Controller Pods through your cluster's Node IPs. You would need to track which Node the Pod is on, then find the Node's IP. Here is an example of doing this:
NODE=$(kubectl get pods --all-namespaces -o wide | grep "ingress-nginx-controller" | awk '{print $8}')   # $8 is the NODE column
NODE_IP=$(kubectl get nodes "$NODE" -o wide | grep Ready | awk '{print $7}')                              # $7 is the EXTERNAL-IP column
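With those two variables you can hit the controller's HTTP NodePort (31692 in the output above) directly, for example:
curl http://$NODE_IP:31692/
If the node shows no EXTERNAL-IP (as on bare metal here), use the INTERNAL-IP column from kubectl get nodes -o wide instead; and if your Ingress matched a specific host rather than *, you would also pass -H "Host: <that-host>".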
More Info
If your cluster is managed (e.g. GKE/Azure/AWS), you can use a LoadBalancer Service to provide an external IP to hit the Ingress Controller.

Kubespray: Netchecker connectivity check fails

I deployed a Kubernetes (v1.17.5) cluster on OpenStack instances using Kubespray. Those instances are CentOS 7.6.1811 qcow2 images imported in Glance.
The install was successful, and I can see my nodes and pods with kubectl commands.
I used the deploy_netchecker option to deploy NetChecker and test the network within my cluster, and set network_plugin="flannel".
I also tried kube_proxy_mode="iptables", but it doesn't seem to affect the result.
That's pretty much all the changes I did in the k8s-cluster.yml file.
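For reference, a minimal sketch of what those variables look like in the inventory; the inventory path below is an assumption, adjust it to your own layout:
$ grep -E 'deploy_netchecker|network_plugin|kube_proxy_mode' inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
deploy_netchecker: true
network_plugin: flannel
kube_proxy_mode: iptables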
All the pods are running, services too:
[centos@cl1-master-0 ~]$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 46h
default netchecker-service NodePort 10.233.13.213 <none> 8081:31081/TCP 46h
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 46h
kube-system dashboard-metrics-scraper ClusterIP 10.233.59.12 <none> 8000/TCP 46h
kube-system kubernetes-dashboard ClusterIP 10.233.63.20 <none> 443/TCP 46h
But the netchecker API gives the following answer:
[root@localhost ~]# curl http://X.X.X.X:31081/api/v1/connectivity_check
{"Message":"Connectivity check fails. Reason: there are absent or outdated pods; look up the payload","Absent":["netchecker-agent-hostnet-kk56x","netchecker-agent-hostnet-klldn","netchecker-agent-hostnet-r2vqs","netchecker-agent-hostnet-wqhjs"],"Outdated":["netchecker-agent-4jsgf","netchecker-agent-c9pcf","netchecker-agent-hostnet-jzbfv","netchecker-agent-vxgpf"]}
For an unknown reason, I cannot access the API from a cluster node with localhost, so I used a floating IP with OpenStack.
Here are some logs from the agent:
[centos@cl1-master-0 ~]$ sudo vi /var/log/pods/default_netchecker-agent-vjnwl_d8290268-3ea4-4e3c-acb4-295ab162a735/netchecker-agent/0.log
{"log":"I0701 13:04:01.814246 1 agent.go:135] Response status code: 200\n","stream":"stderr","time":"2020-07-01T13:04:01.81437579Z"}
{"log":"I0701 13:04:01.814272 1 agent.go:128] Sleep for 15 second(s)\n","stream":"stderr","time":"2020-07-01T13:04:01.814393199Z"}
{"log":"I0701 13:04:16.817398 1 agent.go:55] Send payload via URL: http://netchecker-service:8081/api/v1/agents/netchecker-agent-vjnwl\n","stream":"stderr","time":"2020-07-01T13:04:16.817786735Z"}
[centos@cl1-master-0 ~]$ sudo vi /var/log/pods/default_netchecker-agent-hostnet-klldn_d5fa6e72-885f-44e1-97a6-880a25e6d6d6/netchecker-agent/0.log
{"log":"E0701 13:05:22.804428 1 agent.go:133] Error while sending info. Details: Post http://netchecker-service:8081/api/v1/agents/netchecker-agent-hostnet-klldn: dial tcp 10.233.13.213:8081: i/o timeout\n","stream":"stderr","time":"2020-07-01T13:05:22.805138032Z"}
{"log":"I0701 13:05:22.804474 1 agent.go:128] Sleep for 15 second(s)\n","stream":"stderr","time":"2020-07-01T13:05:22.805190295Z"}
{"log":"I0701 13:05:37.807140 1 agent.go:55] Send payload via URL: http://netchecker-service:8081/api/v1/agents/netchecker-agent-hostnet-klldn\n","stream":"stderr","time":"2020-07-01T13:05:37.807309111Z"}
Logs from the server do not indicate any error.
I tried to check DNS resolution with the following:
[centos@cl1-master-0 ~]$ kubectl exec -it netchecker-agent-4jsgf -- /bin/sh
/ $ nslookup kubernetes.default
Server: 169.254.25.10
Address 1: 169.254.25.10
nslookup: can't resolve 'kubernetes.default'
[centos@cl1-master-0 ~]$ kubectl exec -it netchecker-agent-4jsgf -- cat /etc/resolv.conf
nameserver 169.254.25.10
search default.svc.cluster.local svc.cluster.local cluster.local openstacklocal
options ndots:5
169.254.25.10 is the IP of nodelocaldns, but it doesn't seem to forward queries to the deployed coredns service.
When I use nslookup netchecker-service.default.svc.cluster.local 10.233.0.3, with the coredns IP, I get a correct answer.
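A minimal sketch to compare the two resolvers from the same agent pod (the server IPs come from the outputs above; the nodelocaldns pod name is a placeholder):
/ $ nslookup netchecker-service.default.svc.cluster.local 169.254.25.10   # via nodelocaldns - fails here
/ $ nslookup netchecker-service.default.svc.cluster.local 10.233.0.3      # straight to coredns - works
$ kubectl -n kube-system get pods -o wide | grep nodelocaldns             # find the instance on the failing node
$ kubectl -n kube-system logs <nodelocaldns-pod-on-that-node>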
What can be wrong with my configuration?
Thanks in advance
UPDATE: The Flannel plugin has a known issue, and the issue report contains a fix to apply on all nodes of the cluster. Once done, the pods successfully report back to the netchecker server.

minikube dashboard - unable to access it from outside/internet

Here is the output of minikube dashboard
ubuntu@ip-172-31-5-166:~$ minikube dashboard
* Enabling dashboard ...
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
- http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
I have enabled port 45493 at the Security Group level and also on the Linux VM. However, when I try to access the Kube dashboard, I have no luck:
wget http://13.211.44.210:45493/
--2020-04-16 05:50:52-- http://13.211.44.210:45493/
Connecting to 13.211.44.210:45493... failed: Connection refused.
However, when I do the below, it works and produces index.html file with status code 200
wget http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--2020-04-16 05:52:55-- http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Connecting to 127.0.0.1:45493... connected.
HTTP request sent, awaiting response... 200 OK
Steps to reproduce at a high level are as below:
1. EC2 Ubuntu instance of size t2.large
2. Install minikube, then minikube start --driver=docker
3. Perform the deployment: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
4. kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-84bfdf55ff-xx8pl 1/1 Running 0 26m
kubernetes-dashboard-bc446cc64-7nl68 1/1 Running 0 26m
5. kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.102.85.110 <none> 8000/TCP 40m
kubernetes-dashboard ClusterIP 10.99.75.241 <none> 80/TCP 40m
My question is: why am I unable to access the dashboard from the internet?
This is by design: minikube is a development tool for local environments.
You can deploy an Ingress or LoadBalancer Service to expose the dashboard, if you really know what you are doing.
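If you do want to reach it from the internet anyway, a hedged sketch of two common workarounds (the port numbers and public IP are the ones from this question; any port you expose must also be opened in the Security Group):
# Option A: run the API-server proxy on all interfaces instead of 127.0.0.1
ubuntu@ip-172-31-5-166:~$ kubectl proxy --address 0.0.0.0 --port 8001 --accept-hosts '.*'
# then browse http://13.211.44.210:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
# Option B: keep it bound to localhost and tunnel from your workstation
ssh -L 45493:127.0.0.1:45493 ubuntu@13.211.44.210
# then browse http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/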

What happens when a service receives a request but has no ready pods?

Having a kubernetes service (of type ClusterIP) connected to a set of pods, but none of them are currently ready - what will happen to the request?
Will it:
fail eagerly
timeout
wait until a ready pod is available (or forever, whichever is earlier)
something else?
It will time out.
Kube-proxy picks up the IP addresses of healthy pods and sets them as the endpoints of the service (its backends). Also, note that all kube-proxy does is rewrite the iptables rules when you create, delete or modify a service.
So, when you send a request within your network and there is no one to reply, your request will time out.
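A quick way to see this from the outside is the Endpoints object for the Service (a sketch; my-nginx matches the Service used in the demo below):
$ kubectl get endpoints my-nginx                 # ENDPOINTS shows <none> while no pod is Ready
$ kubectl describe svc my-nginx | grep -i endpoints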
Deployed nginx service
[node1 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
my-nginx ClusterIP 10.100.1.134 <none> 80/TCP 9s
$ curl 10.100.1.134
curl: (7) Failed connect to 10.100.1.134:80; Connection refused
Deployed nginx deployment
$ kubectl create -f nginx-depl.yaml
$ kubectl get po
NAME READY STATUS RESTARTS AGE
my-nginx-f9945ffdd-2f77f 1/1 Running 0 1m
my-nginx-f9945ffdd-rk68v 1/1 Running 0 1m
$ curl 10.100.1.134
Welcome to nginx!
Most likely you would get a Connection refused error.