Kubernetes ACS engine: containers (pods) do not have internet access - PowerShell

I'm using a Kubernetes cluster deployed on Azure with the ACS engine. The cluster is composed of 5 nodes:
1 master (Unix VM) (v1.6.2)
2 Unix agents (v1.6.2)
2 Windows agents (v1.6.0-alpha.1.2959+451473d43a2072)
I created a Unix pod; its kubectl describe output is shown below:
Name: ping-with-unix
Node: k8s-linuxpool1-25103419-0/10.240.0.5
Start Time: Fri, 30 Jun 2017 14:27:28 +0200
Status: Running
IP: 10.244.2.6
Controllers: <none>
Containers:
ping-with-unix-2:
Container ID:
Image: willfarrell/ping
Port:
State: Running
Started: Fri, 30 Jun 2017 14:27:29 +0200
Ready: True
Restart Count: 0
Environment:
HOSTNAME: google.com
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1nrh5 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-1nrh5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-1nrh5
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: <none>
Events: <none>
This pod does not have internet access.
2017-06-30T12:27:29.512885000Z ping google.com every 300 sec
2017-06-30T12:27:29.521968000Z PING google.com (172.217.17.78): 56 data bytes
2017-06-30T12:27:39.698081000Z --- google.com ping statistics ---
2017-06-30T12:27:39.698305000Z 1 packets transmitted, 0 packets received, 100% packet loss
I created a second pod, targeting the Windows agents, with a custom Docker image. The image instantiates an HttpClient and requests an endpoint. It also has no internet access. I can open an interactive PowerShell session in the container, but I cannot resolve any DNS names (due to the lack of internet access).
PS C:\app> ping github.com
Ping request could not find host github.com. Please check the name and try again.
PS C:\app> ping 192.30.253.112
Pinging 192.30.253.112 with 32 bytes of data:
Request timed out.
Request timed out.
Ping statistics for 192.30.253.112:
Packets: Sent = 2, Received = 0, Lost = 2 (100% loss),
Control-C
What do I have to configure to allow my containers to access the internet?
Remark: I have not defined any NetworkPolicy.
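For reference, a quick way to separate a DNS failure from a raw-connectivity failure is a throwaway busybox pod; this is a generic sketch, not specific to this cluster:
kubectl run net-test --rm -it --image=busybox --restart=Never -- sh
# inside the container:
nslookup kubernetes.default   # cluster DNS (kube-dns)
nslookup google.com           # external name resolution
ping -c 3 8.8.8.8             # raw outbound connectivity, no DNS involved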

I've updated my cluster to API version '2017-07-01' and Kubernetes version '1.6.6'. Both my Unix and Windows pods now have internet access.
Note, for Windows pods:
internet becomes available only 2 or 3 minutes after the pod starts;
I can't set the dnsPolicy to "Default"; only "ClusterFirst" or "ClusterFirstWithHostNet" works.
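A minimal Windows pod spec illustrating that setting; the name and image below are placeholders, the only point is the dnsPolicy and nodeSelector fields:
apiVersion: v1
kind: Pod
metadata:
  name: ping-with-windows            # placeholder name
spec:
  dnsPolicy: ClusterFirst            # "Default" did not work here
  nodeSelector:
    beta.kubernetes.io/os: windows
  containers:
  - name: ping
    image: microsoft/windowsservercore   # placeholder image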

Related

HashiCorp Vault Enable REST Call

I am following the HashiCorp tutorial and it all looks fine until I try to launch the "webapp" pod - a simple pod whose only function is to demonstrate that it can start and mount a secret volume.
The error (permission denied on a REST call) is shown at the bottom of this command output:
kubectl describe pod webapp
Name: webapp
Namespace: default
Priority: 0
Service Account: webapp-sa
Node: docker-desktop/192.168.65.4
Start Time: Tue, 14 Feb 2023 09:32:07 -0500
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
webapp:
Container ID:
Image: jweissig/app:0.0.1
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt/secrets-store from secrets-store-inline (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5b76r (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
secrets-store-inline:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: secrets-store.csi.k8s.io
FSType:
ReadOnly: true
VolumeAttributes: secretProviderClass=vault-database
kube-api-access-5b76r:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 42m default-scheduler Successfully assigned default/webapp to docker-desktop
Warning FailedMount 20m (x8 over 40m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[secrets-store-inline kube-api-access-5b76r]: timed out waiting for the condition
Warning FailedMount 12m (x23 over 42m) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/webapp, err: rpc error: code = Unknown desc = error making mount request: couldn't read secret "db-password": Error making API request.
URL: GET http://vault.default:8200/v1/secret/data/db-pass
Code: 403. Errors:
* 1 error occurred:
* permission denied
Warning FailedMount 2m19s (x4 over 38m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[kube-api-access-5b76r secrets-store-inline]: timed out waiting for the condition
So it seems that this REST call fails: GET http://vault.default:8200/v1/secret/data/db-pass. Indeed, it fails from curl as well:
curl -vik -H "X-Vault-Token: root" http://localhost:8200/v1/secret/data/db-pass
* Trying 127.0.0.1:8200...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8200 failed: Connection refused
* Failed to connect to localhost port 8200: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8200: Connection refused
At this point I am a bit lost. I am not sure that the REST call is configured correctly, i.e. in such a way that Vault will accept it; but I am also not sure how to configure it differently.
The Vault logs show the information below, so it seems that the port and token I am using are correct:
2023-02-14 09:07:14 You may need to set the following environment variables:
2023-02-14 09:07:14 $ export VAULT_ADDR='http://[::]:8200'
2023-02-14 09:07:14 The root token is displayed below
2023-02-14 09:07:14 Root Token: root
Vault seems to be running fine in Kubernetes:
kubectl get pods
NAME READY STATUS RESTARTS AGE
vault-0 1/1 Running 1 (22m ago) 32m
vault-agent-injector-77fd4cb69f-mf66p 1/1 Running 1 (22m ago) 32m
If I try to show the Vault status:
vault status
Error checking seal status: Get "http://[::]:8200/v1/sys/seal-status": dial tcp [::]:8200: connect: connection refused
I don't think the Vault is sealed, but if I try to unseal it:
vault operator unseal
Unseal Key (will be hidden):
Error unsealing: Put "http://[::]:8200/v1/sys/unseal": dial tcp [::]:8200: connect: connection refused
Any ideas?
As far as the tutorial is concerned, it works. I'm not sure what I was doing wrong, but I ran through it all again and it worked. If I had to guess, I would suspect that some of the YAML involved in configuring the pods got malformed (since whitespace is significant).
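For anyone hitting the same 403, it usually comes from the Vault side of the tutorial rather than the pod YAML: the policy and the Kubernetes auth role have to match the secret path, service account and namespace exactly. Roughly, using the tutorial's names (a typo in any of these values reproduces the permission denied):
vault policy write internal-app - <<EOF
path "secret/data/db-pass" {
  capabilities = ["read"]
}
EOF

vault write auth/kubernetes/role/database \
    bound_service_account_names=webapp-sa \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=20m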
The vault status command works, but only from a terminal running inside the Vault pod. The Kubernetes-in-Docker (Docker Desktop) cluster does not expose any ports for these pods, so even though I have the Vault CLI installed on my PC, I cannot run vault status from outside the pods.
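Two ways around that, assuming the tutorial's pod and service names (pod vault-0, a Service called vault on port 8200):
kubectl exec -it vault-0 -- vault status

# or expose the port temporarily to the local machine
kubectl port-forward svc/vault 8200:8200
export VAULT_ADDR=http://127.0.0.1:8200
vault status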

Kubernetes api/dashboard issue

I posted this on serverfault, too, but will hopefully get more views/feedback here:
Trying to get the Dashboard UI working in a kubeadm cluster using kubectl proxy for remote access. Getting
Error: 'dial tcp 192.168.2.3:8443: connect: connection refused'
Trying to reach: 'https://192.168.2.3:8443/'
when accessing http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ via remote browser.
Looking at API logs, I see that I'm getting the following errors:
I1215 20:18:46.601151 1 log.go:172] http: TLS handshake error from 10.21.72.28:50268: remote error: tls: unknown certificate authority
I1215 20:19:15.444580 1 log.go:172] http: TLS handshake error from 10.21.72.28:50271: remote error: tls: unknown certificate authority
I1215 20:19:31.850501 1 log.go:172] http: TLS handshake error from 10.21.72.28:50275: remote error: tls: unknown certificate authority
I1215 20:55:55.574729 1 log.go:172] http: TLS handshake error from 10.21.72.28:50860: remote error: tls: unknown certificate authority
E1215 21:19:47.246642 1 watch.go:233] unable to encode watch object *v1.WatchEvent: write tcp 134.84.53.162:6443->134.84.53.163:38894: write: connection timed out (&streaming.encoder{writer:(*metrics.fancyResponseWriterDelegator)(0xc42d6fecb0), encoder:(*versioning.codec)(0xc429276990), buf:(*bytes.Buffer)(0xc42cae68c0)})
I presume this is related to not being able to get the Dashboard working, and if so am wondering what the issue with the API server is. Everything else in the cluster appears to be working.
NB, I have admin.conf running locally and am able to access the cluster via kubectl with no issue.
Also of note: this had been working when I first got the cluster up. However, I was having networking issues and had to apply the fix from "Coredns service do not work, but endpoint is ok, the other SVCs are normal only except dns" in order to get CoreDNS working, so I am wondering whether that broke the proxy service?
* EDIT *
Here is output for the dashboard pod:
[gms@thalia0 ~]$ kubectl describe pod kubernetes-dashboard-77fd78f978-tjzxt --namespace=kube-system
Name: kubernetes-dashboard-77fd78f978-tjzxt
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: thalia2.hostdoman/hostip<redacted>
Start Time: Sat, 15 Dec 2018 15:17:57 -0600
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=77fd78f978
Annotations: cni.projectcalico.org/podIP: 192.168.2.3/32
Status: Running
IP: 192.168.2.3
Controlled By: ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
kubernetes-dashboard:
Container ID: docker://ed5ff580fb7d7b649d2bd1734e5fd80f97c80dec5c8e3b2808d33b8f92e7b472
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:1d2e1229a918f4bc38b5a3f9f5f11302b3e71f8397b492afac7f273a0008776a
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Running
Started: Sat, 15 Dec 2018 15:18:04 -0600
Ready: True
Restart Count: 0
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-mrd9k (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
kubernetes-dashboard-token-mrd9k:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-mrd9k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
I checked the service:
[gms@thalia0 ~]$ kubectl -n kube-system get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.103.93.93 <none> 443/TCP 4d23h
And also of note, if I curl http://localhost:8001/api from the master node, I do get a valid response.
So, in summary, I'm not sure which if any of these errors are the source of not being able to access the dashboard.
I just upgraded my cluster to 1.13.1, in hopes that this issue would be resolved, but alas, no.
When you do kubectl proxy, the default port 8001 is reachable only from localhost. If you ssh to the machine where Kubernetes is installed, you must map this port to your laptop or whatever device you use to ssh.
You can ssh to the master node and map port 8001 to your local box with:
ssh -L 8001:localhost:8001 hostname@master_node_IP
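Putting the pieces together (user and host are placeholders), the dashboard URL from the question is then opened against the tunnelled local port:
# on the machine that has admin.conf / cluster access
kubectl proxy                      # binds to 127.0.0.1:8001 by default

# from the laptop
ssh -L 8001:localhost:8001 <user>@<master_node_IP>

# then browse:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/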
I upgraded all nodes in the cluster to version 1.13.1 and voila, the dashboard now works AND so far I have not had to apply the CoreDNS fix noted above.

Kubernetes Port Forwarding - Connection refused

I am getting the following error when forwarding a port. Can anyone help?
mjafary$ sudo kubectl port-forward sa-frontend 88:82
Forwarding from 127.0.0.1:88 -> 82
Forwarding from [::1]:88 -> 82
The error log :
Handling connection for 88
Handling connection for 88
E1214 01:25:48.704335 51463 portforward.go:331] an error occurred forwarding 88 -> 82: error forwarding port 82 to pod a017a46573bbc065902b600f0767d3b366c5dcfe6782c3c31d2652b4c2b76941, uid : exit status 1: 2018/12/14 08:25:48 socat[19382] E connect(5, AF=2 127.0.0.1:82, 16): Connection refused
Here is the description of the pod. My expectation is that when I hit localhost:88 in the browser, the request should be forwarded to the jafary/sentiment-analysis-frontend container and the application page should load.
mjafary$ kubectl describe pods sa-frontend
Name: sa-frontend
Namespace: default
Node: minikube/192.168.64.2
Start Time: Fri, 14 Dec 2018 00:51:28 -0700
Labels: app=sa-frontend
Annotations: <none>
Status: Running
IP: 172.17.0.23
Containers:
sa-frontend:
Container ID: docker://a87e614545e617be104061e88493b337d71d07109b0244b2b40002b2f5230967
Image: jafary/sentiment-analysis-frontend
Image ID: docker-pullable://jafary/sentiment-analysis-frontend@sha256:5ac784b51eb5507e88d8e2c11e5e064060871464e2c6d467c5b61692577aeeb1
Port: 82/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 14 Dec 2018 00:51:30 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mc5cn (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-mc5cn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mc5cn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
The reason the connection is refused is that there is no process listening on port 82. The Dockerfile used to create the nginx image exposes port 80, and in your pod spec you have also exposed port 82. However, nginx is configured to listen on port 80.
What this means is that your pod has two ports exposed, 80 and 82, but the nginx application is actively listening only on port 80, so only requests to port 80 work.
To make your setup work using port 82, you need to change the nginx config file so that it listens on port 82 instead of 80. You can either do this by building your own Docker image with the change baked in, or you can use a ConfigMap to replace the default config file with the settings you want.
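A rough sketch of the ConfigMap route, assuming a stock nginx image whose default server block lives at /etc/nginx/conf.d/default.conf (adjust to the actual image):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen 82;
      location / {
        root  /usr/share/nginx/html;
        index index.html;
      }
    }

and the relevant parts of the pod spec then become:
  containers:
  - name: sa-frontend
    image: jafary/sentiment-analysis-frontend
    ports:
    - containerPort: 82
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-conf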
As @Patrick W said, the connection is refused because there is no process listening on port 82; nothing inside the container is actually serving on that port.
Now, to find the port your pod is actually listening on, you can run the following commands.
NB: be sure to replace any value in <> with real values.
First, get the names of the pods in the relevant namespace: kubectl get po -n <namespace>
Then check the exposed port of the pod you'd like to forward:
kubectl get pod <pod-name> -n <namespace> --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
Now use the resulting exposed port above to run port-forward with the command
kubectl port-forward pod/<pod-name> <local-port>:<exposed-port>
where <local-port> is the port the container will be accessed on from the browser (localhost:<local-port>), and <exposed-port> is the port the container listens on, usually defined with the EXPOSE instruction in the Dockerfile.
Get more information here
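Applied to the pod in this question, where nginx actually listens on 80 inside the container, the forward that should work is presumably:
kubectl port-forward pod/sa-frontend 88:80
# http://localhost:88 then reaches nginx on container port 80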
As Patrick correctly pointed out, the ports need to line up. I had this same issue, which plagued me for two days. The steps are:
Ensure your Dockerfile is using your preferred port (EXPOSE 5000)
In your pod.yml file ensure containerPort is 5000 (containerPort: 5000)
Apply the kubectl command to reflect the above:
kubectl port-forward pod/my-name-of-pod 8080:5000
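For example, a minimal pod spec consistent with those steps (names and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: my-name-of-pod
spec:
  containers:
  - name: app
    image: my-registry/my-app:latest   # built from a Dockerfile with EXPOSE 5000
    ports:
    - containerPort: 5000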

kubernetes: Service endpoints available from within cluster, not outside

I have a Service (LoadBalancer) definition in a k8s cluster that exposes ports 80 and 443.
In the k8s dashboard, it indicates that these are the external endpoints:
(the k8s cluster has been deployed using Rancher, for what it's worth)
<some_rancher_agent_public_ip>:80
<some_rancher_agent_public_ip>:443
Here comes the weird (?) part:
From a busybox pod spawned within the cluster:
wget <some_rancher_agent_public_ip>:80
wget <some_rancher_agent_public_ip>:443
both succeed (i.e. they fetch the index.html file).
From outside the cluster:
Connecting to <some_rancher_agent_public_ip>:80... connected.
HTTP request sent, awaiting response...
2018-01-05 17:42:51 ERROR 502: Bad Gateway.
I am assuming this is not a security groups issue given that:
it does connect to <some_rancher_agent_public_ip>:80
I have also tested this by allowing all traffic from all sources in the security group that the instance with <some_rancher_agent_public_ip> belongs to
In addition, nmap-ing the above public IP shows 80 and 443 as open.
Any suggestions?
update:
$ kubectl describe svc ui
Name: ui
Namespace: default
Labels: <none>
Annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:eu-west-1:somecertid
Selector: els-pod=ui
Type: LoadBalancer
IP: 10.43.74.106
LoadBalancer Ingress: <some_rancher_agent_public_ip>, <some_rancher_agent_public_ip>
Port: http 80/TCP
TargetPort: %!d(string=ui-port)/TCP
NodePort: http 30854/TCP
Endpoints: 10.42.179.14:80
Port: https 443/TCP
TargetPort: %!d(string=ui-port)/TCP
NodePort: https 31404/TCP
Endpoints: 10.42.179.14:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
and here is the respective pod description:
kubectl describe pod <the_pod_id>
Name: <pod_id>
Namespace: default
Node: ran-agnt-02/<some_rancher_agent_public_ip>
Start Time: Fri, 29 Dec 2017 16:48:42 +0200
Labels: els-pod=ui
pod-template-hash=375086521
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"ui-deployment-7c94db965","uid":"5cea65ea-eca7-11e7-b8e0-0203f78b...
Status: Running
IP: 10.42.179.14
Created By: ReplicaSet/ui-deployment-7c94db965
Controlled By: ReplicaSet/ui-deployment-7c94db965
Containers:
ui:
Container ID: docker://some-container-id
Image: docker-registry/imagename
Image ID: docker-pullable://docker-registry/imagename@sha256:some-sha
Port: 80/TCP
State: Running
Started: Fri, 05 Jan 2018 16:24:56 +0200
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 05 Jan 2018 16:23:21 +0200
Finished: Fri, 05 Jan 2018 16:23:31 +0200
Ready: True
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8g7bv (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-8g7bv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8g7bv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Kubernetes provides different ways of exposing pods outside the cluster, mainly Services and Ingress. I'll focus on Services, since that is where you are having issues.
There are different Service types, among them:
ClusterIP: the default type. Choosing this type means that your service gets a stable IP which is reachable only from inside the cluster. Not relevant here.
NodePort: besides having a cluster-internal IP, exposes the service on a port on each node of the cluster (the same port on every node, allocated from a configured range). You'll be able to contact the service on any NodeIP:NodePort address. That's why you can contact your rancher_agent_public_ip:NodePort from outside the cluster.
LoadBalancer: besides having a cluster-internal IP and exposing the service on a NodePort, asks the cloud provider for a load balancer that exposes the service externally.
Creating a Service of type LoadBalancer makes it a NodePort service as well. That's why you can reach rancher_agent_public_ip:30854.
I have no experience with Rancher, but it seems that creating a LoadBalancer Service deploys an HAProxy to act as the load balancer. The HAProxy created by Rancher needs a public IP that's reachable from outside the cluster, and a port that redirects requests to the NodePort.
But in your service the IP (10.43.74.106) looks like an internal IP. That IP won't be reachable from outside the cluster. You need a public IP.
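For comparison, a minimal Service of type LoadBalancer matching the one above would look like this; on a cloud provider the EXTERNAL-IP column of kubectl get svc eventually shows the public address, while on a Rancher/HAProxy setup that public address has to come from whatever fronts the cluster:
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: LoadBalancer
  selector:
    els-pod: ui
  ports:
  - name: http
    port: 80
    targetPort: ui-port
  - name: https
    port: 443
    targetPort: ui-port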

Kubernetes minikube, cannot expose service on public ip range

So I've been playing around with Minikube.
I've managed to deploy a simple Python Flask container:
PS C:\Users\Will> kubectl run test-flask-deploy --image 192.168.1.201:5000/test_flask:1
deployment "test-flask-deploy" created
I've also then managed to expose the deployment as a service:
PS C:\Users\Will> kubectl expose deployment/test-flask-deploy --type="NodePort" --port 8080
service "test-flask-deploy" exposed
In the dashboard I can see that the service has a cluster IP: 10.0.0.132.
I access the dashboard on a 192.168.xxx.xxx address, so I'm hoping I can expose the service on that external IP.
Any idea how I go about this?
A separate and slightly less important question: I've got minikube talking to a Docker registry on my network. If I deploy an image that has not yet been pulled locally onto the minikube node, the deployment fails; yet when I run the docker pull command on minikube locally first, the deployment then succeeds. So minikube is able to pull Docker images, but when I deploy an image that is accessible via the registry yet not pulled locally, it fails. Any thoughts?
EDIT: More detail in response to comment:
PS C:\Users\Will> kubectl describe pod test-flask-deploy
Name: test-flask-deploy-1049547027-rgf7d
Namespace: default
Node: minikube/192.168.99.100
Start Time: Sat, 07 Oct 2017 10:19:58 +0100
Labels: pod-template-hash=1049547027
run=test-flask-deploy
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"test-flask-deploy-1049547027","uid":"b06a14b8-ab40-11e7-9714-080...
Status: Running
IP: 172.17.0.4
Created By: ReplicaSet/test-flask-deploy-1049547027
Controlled By: ReplicaSet/test-flask-deploy-1049547027
Containers:
test-flask-deploy:
Container ID: docker://577e339ce680bc5dd9388293f1f1ea62be59a6acc25be22889310761222c760f
Image: 192.168.1.201:5000/test_flask:1
Image ID: docker-pullable://192.168.1.201:5000/test_flask@sha256:d303ed635888394f69223cc0a66c5778444fd3636dfcde42295fd512be948898
Port: <none>
State: Running
Started: Sat, 07 Oct 2017 10:19:59 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5rrpm (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-5rrpm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5rrpm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events: <none>
First, check the NodePort that is assigned to your service:
$ kubectl get svc test-flask-deploy
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-flask-deploy 10.0.0.76 <nodes> 8080:30341/TCP 4m
Now you should be able to access it at 192.168.xx.xx:30341, or whatever your minikube IP and NodePort are.
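Alternatively, minikube can print the NodePort URL for you, which avoids looking it up by hand:
minikube service test-flask-deploy --url
# prints something like http://192.168.99.100:30341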