k0s kubectl exec and kubectl port-forwarding are broken - kubernetes

I have a simple nginx pod and a k0s cluster set up with the k0s binary. Now I want to connect to that pod, but I get this error:
$ kubectl port-forward frontend-deployment-786ddcb47-p5kkv 7000:80
error: error upgrading connection: error dialing backend: rpc error: code = Unavailable
desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: connect: connection refused"
I don't understand why this happens, or why it tries to access /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock, which does not exist on my machine.
Do I have to add my local dev machine to the cluster with k0s?
Extract from pod describe:
Containers:
  frontend:
    Container ID:   containerd://897a8911cd31c6d58aef4b22da19dc8166cb7de713a7838bc1e486e497e9f1b2
    Image:          nginx:1.16
    Image ID:       docker.io/library/nginx@sha256:d20aa6d1cae56fd17cd458f4807e0de462caf2336f0b70b5eeb69fcaaf30dd9c
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 28 Jan 2021 14:20:58 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m43s  default-scheduler  Successfully assigned remove-me/frontend-deployment-786ddcb47-p5kkv to k0s-worker-2
  Normal  Pulling    3m42s  kubelet            Pulling image "nginx:1.16"
  Normal  Pulled     3m33s  kubelet            Successfully pulled image "nginx:1.16" in 9.702313183s
  Normal  Created    3m32s  kubelet            Created container frontend
  Normal  Started    3m32s  kubelet            Started container frontend
deployment.yml and service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.16
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

A workaround is to just remove the file /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock and restart the server.
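In shell form, that workaround is roughly the following (a sketch; it assumes the controller runs as the k0scontroller systemd service, so adjust the restart step to however you start k0s):
sudo rm /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock
sudo systemctl restart k0scontroller   # or restart the k0s server process you started by hand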
Currently my GitHub issue is still open:
https://github.com/k0sproject/k0s/issues/665

I had a similar issue where /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock was not getting created (in my case the path was /run/k0s/konnectivity-server/konnectivity-server.sock).
I changed my host configuration, which finally fixed this issue.
Hostname: the hostnames on my nodes were in uppercase, but k0s somehow expected them to be lowercase. You can override the hostname in the configuration file, but that still did not fix the konnectivity sock issue, so I had to reset the hostnames on all nodes to lowercase.
This change finally fixed my issue: on the first attempt I saw the .sock file being created, but it still didn't have the right permissions. Then I followed the suggestion given above by TecBeast, and that fixed the issue permanently.
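For reference, the hostname change itself can be done roughly like this on each node (a sketch, assuming systemd-based hosts; restart k0s on the node afterwards):
sudo hostnamectl set-hostname "$(hostname | tr '[:upper:]' '[:lower:]')"
hostname   # verify it is now all lowercase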
This issue was then raised as an issue in the k0s GitHub repository and fixed in a pull request.

Related

Unable to connect to my Docker container running inside a single-node Kubernetes cluster

Kubernetes newbie here.
First, let me tell you about the functionality of my Node.js sample application. It is a simple web server that responds with the text "Hello from Node" in response to a GET request to the root (/) route. Also, when the server starts, it outputs the text "Server listening on port 8000".
Currently, the app is running inside a container on a single-node Kubernetes cluster. (I am using Minikube)
When I run the command kubectl logs web-server, I get the desired response. web-server is the name of the running pod.
But when I try to connect to the application using the command curl 192.168.59.100:31515, I get the response: Connection refused. I should see the response: "Hello from node" instead.
Note: in my terminal, k & m are aliases for kubectl & minikube respectively.
My YAML files are as follows:
node-server-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  labels:
    web: server
spec:
  containers:
  - name: web-server-container
    image: sundaray/node-server:v1
    ports:
    - containerPort: 3000
node-server-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-server-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 3000
    nodePort: 31515
  selector:
    web: server
What am I doing wrong?
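One way to narrow down where the connection is refused, using the names from the YAML above (a sketch, not a definitive answer):
kubectl get endpoints web-server-port          # should list <pod-ip>:3000; empty means the selector does not match
kubectl port-forward pod/web-server 8080:3000  # then: curl localhost:8080
minikube service web-server-port --url         # prints the URL minikube actually exposes
If the port-forward also gets connection refused, it is worth checking whether the app in the container really listens on containerPort 3000, since the startup log quoted above mentions port 8000.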

EKS, Windows node. networkPlugin cni failed

Per https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html, I ran the command eksctl utils install-vpc-controllers --cluster <cluster_name> --approve.
My EKS version is v1.16.3. I tried to deploy Windows Docker images to a Windows node and got the error below.
Warning FailedCreatePodSandBox 31s kubelet, ip-west-2.compute.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ab8001f7b01f5c154867b7e" network for pod "mrestapi-67fb477548-v4njs": networkPlugin cni failed to set up pod "mrestapi-67fb477548-v4njs_ui" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4Address
$ kubectl logs vpc-resource-controller-645d6696bc-s5rhk -n kube-system
I1010 03:40:29.041761 1 leaderelection.go:185] attempting to acquire leader lease kube-system/vpc-resource-controller...
I1010 03:40:46.453557 1 leaderelection.go:194] successfully acquired lease kube-system/vpc-resource-controller
W1010 23:57:53.972158 1 reflector.go:341] pkg/mod/k8s.io/client-go@v0.0.0-20180910083459-2cefa64ff137/tools/cache/reflector.go:99: watch of *v1.Pod ended with: too old resource version: 1480444 (1515040)
It complains that the resource version is too old. How do I upgrade the version?
I removed the Windows nodes and re-created them with a different instance type, but it did not work.
I removed the Windows node group and re-created it. It did not work.
Finally, I removed the entire EKS cluster and re-created it. The command kubectl describe node <windows_node> now gives me the output below.
vpc.amazonaws.com/CIDRBlock 0 0
vpc.amazonaws.com/ENI 0 0
vpc.amazonaws.com/PrivateIPv4Address 1 1
I deployed windows-server-iis.yaml and it works as expected. The root cause of the problem remains a mystery.
To troubleshoot this I would...
First list the components to make sure they're running:
$ kubectl get pod -n kube-system | grep vpc
vpc-admission-webhook-deployment-7f67d7b49-wgzbg 1/1 Running 0 38h
vpc-resource-controller-595bfc9d98-4mb2g 1/1 Running 0 29
If they are running, check their logs:
kubectl logs <vpc-yadayada> -n kube-system
Make sure the instance type you are using has enough available IPs per ENI, because in the Windows world only one ENI is used and it is limited to the maximum number of IPs per ENI minus one for the primary IP address. I have run into this error before when I exceeded the number of IPs available on my ENI.
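For example, to check how many IPv4 addresses per ENI a given instance type supports (the instance type here is only a placeholder):
aws ec2 describe-instance-types --instance-types m5.large \
  --query "InstanceTypes[].NetworkInfo.Ipv4AddressesPerInterface"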
Confirm that the nodeSelector of your pod is right:
nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64
As an anecdote, I have done the steps mentioned under the "To enable Windows support for your cluster with a macOS or Linux client" section of the doc you linked on a few clusters to date, and they have worked well.
What is your output for
kubectl describe node <windows_node>
?
If it's like:
vpc.amazonaws.com/CIDRBlock: 0
vpc.amazonaws.com/ENI: 0
vpc.amazonaws.com/PrivateIPv4Address: 0
then you need to re-create the nodegroup with a different instance type, for example as sketched below.
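A sketch with eksctl (cluster name, nodegroup name, and instance type are placeholders; flags may differ between eksctl versions):
eksctl delete nodegroup --cluster <cluster_name> --name windows-ng
eksctl create nodegroup --cluster <cluster_name> --name windows-ng \
  --node-ami-family WindowsServer2019FullContainer --node-type m5.xlarge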
Then try to deploy this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-server-iis-test
  namespace: default
spec:
  selector:
    matchLabels:
      app: windows-server-iis-test
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: windows-server-iis-test
        tier: backend
        track: stable
    spec:
      containers:
      - name: windows-server-iis-test
        image: mcr.microsoft.com/windows/servercore:1809
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
        command:
        - powershell.exe
        - -command
        - "Add-WindowsFeature Web-Server; Invoke-WebRequest -UseBasicParsing -Uri 'https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe' -OutFile 'C:\\ServiceMonitor.exe'; echo '<html><body><br/><br/><marquee><H1>Hello EKS!!!<H1><marquee></body><html>' > C:\\inetpub\\wwwroot\\default.html; C:\\ServiceMonitor.exe 'w3svc'; "
        resources:
          limits:
            cpu: 256m
            memory: 256Mi
          requests:
            cpu: 128m
            memory: 100Mi
      nodeSelector:
        kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: windows-server-iis-test
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: windows-server-iis-test
    tier: backend
    track: stable
  sessionAffinity: None
  type: ClusterIP
kubectl proxy
Then open http://localhost:8001/api/v1/namespaces/default/services/http:windows-server-iis-test:80/proxy/default.html in a browser; it should show a webpage with the "Hello EKS!!!" text.
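Equivalently, from a terminal on the machine running kubectl proxy:
curl http://localhost:8001/api/v1/namespaces/default/services/http:windows-server-iis-test:80/proxy/default.html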

readiness probe fails with connection refused

I am trying to set up K8s to work with two Windows nodes (2019). Everything seems to be working well and the containers are working and accessible using a k8s Service. But once I introduce configuration for readiness (or liveness) probes, everything fails. The exact error is:
Readiness probe failed: Get http://10.244.1.28:80/test.txt: dial tcp 10.244.1.28:80: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
When I try the URL from the k8s master, it works and I get 200. However, I read that the kubelet is the one executing the probe, and indeed when trying from the Windows node it cannot be reached (which seems weird because the container is running on that same node). Therefore I assume the problem is related to some network configuration.
I have a HyperV with External network Virtual Switch configured. K8S is configured to use flannel overlay (vxlan) as instructed here: https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/network-topologies.
Any idea how to troubleshoot and fix this?
UPDATE: providing the yaml:
apiVersion: v1
kind: Service
metadata:
  name: dummywebapplication
  labels:
    app: dummywebapplication
spec:
  ports:
  # the port that this service should serve on
  - port: 80
    targetPort: 80
  selector:
    app: dummywebapplication
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: dummywebapplication
  name: dummywebapplication
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: dummywebapplication
      name: dummywebapplication
    spec:
      containers:
      - name: dummywebapplication
        image: <my image>
        readinessProbe:
          httpGet:
            path: /test.txt
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 30
          timeoutSeconds: 60
      nodeSelector:
        beta.kubernetes.io/os: windows
And one more update. In this doc (https://kubernetes.io/docs/setup/windows/intro-windows-in-kubernetes/) it is written:
My Windows node cannot access NodePort service
Local NodePort access from the node itself fails. This is a known
limitation. NodePort access works from other nodes or external
clients.
I don't know if this is related or not as I could not connect to the container from a different node as stated above. I also tried a service of LoadBalancer type but it didn't provide a different result.
The network configuration assumption was correct. It seems that for 'overlay', by default, the kubelet on the node cannot reach the IP of the container. So it keeps returning timeouts and connection refused messages.
Possible workarounds:
Insert an 'exception' into the 'OutBoundNAT' ExceptionList in C:\k\cni\config on the nodes (see the sketch after this list). This is somewhat tricky if you start the node with start.ps1, because it overwrites this file every time; I had to tweak the 'Update-CNIConfig' function in c:\k\helper.psm1 to re-insert the exception, similar to the 'l2bridge' case in that file.
Use the 'l2bridge' configuration. It seems that 'overlay' runs in more secure isolation, while 'l2bridge' does not.
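For illustration, the OutBoundNAT policy inside C:\k\cni\config generally looks roughly like the snippet below. This is only a sketch: the exact file depends on your flannel/start.ps1 version, and the subnets shown are common defaults plus a placeholder for whatever range you need to exempt, not necessarily your values.
"policies": [
  {
    "name": "EndpointPolicy",
    "value": {
      "Type": "OutBoundNAT",
      "ExceptionList": [
        "10.244.0.0/16",
        "10.96.0.0/12",
        "<subnet-to-exempt>"
      ]
    }
  }
]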

Cannot connect to a Mongodb pod in Kubernetes (Connection refused)

I have a few remote virtual machines, on which I want to deploy some Mongodb instances and then make them accessible remotely, but for some reason I can't seem to make this work.
These are the steps I took:
I started a Kubernetes pod running Mongodb on a remote virtual machine.
Then I exposed it through a Kubernetes NodePort service.
Then I tried to connect to the Mongodb instance from my laptop, but it
didn't work.
Here is the command I used to try to connect:
$ mongo host:NodePort
(by "host" I mean the Kubernetes master).
And here is its output:
MongoDB shell version v4.0.3
connecting to: mongodb://host:NodePort/test
2018-10-24T21:43:41.462+0200 E QUERY [js] Error: couldn't connect to server host:NodePort, connection attempt failed: SocketException:
Error connecting to host:NodePort :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:257:13
@(connect):1:6
exception: connect failed
From the Kubernetes master, I made sure that the Mongodb pod was running. Then I ran a shell in the container and checked that the Mongodb server was working properly. Moreover, I had previously granted remote access to the Mongodb server by specifying the "--bind_ip" option as "0.0.0.0" in its yaml description. To make sure that this option had been applied, I ran this command inside the Mongodb instance, from the same shell:
db._adminCommand({getCmdLineOpts: 1})
And here is the output:
{
    "argv" : [
        "mongod",
        "--bind_ip",
        "0.0.0.0"
    ],
    "parsed" : {
        "net" : {
            "bindIp" : "0.0.0.0"
        }
    },
    "ok" : 1
}
So the Mongodb server should actually be accessible remotely.
I can't figure out whether the problem is caused by Kubernetes or by Mongodb.
As a test, I followed exactly the same steps by using MySQL instead, and that worked (that is, I ran a MySQL pod and exposed it with a Kubernetes service, to make it accessible remotely, and then I successfully connected to it from my laptop). This would lead me to think that the culprit is Mongodb here, but I'm not sure. Maybe I'm just making a silly mistake somewhere.
Could someone help me shed some light on this? Or tell me how to debug this problem?
EDIT:
Here is the output of the kubectl describe deployment <mongo-deployment> command, as per your request:
Name:                   mongo-remote
Namespace:              default
CreationTimestamp:      Thu, 25 Oct 2018 06:31:24 +0000
Labels:                 name=mongo-remote
Annotations:            deployment.kubernetes.io/revision=1
Selector:               name=mongo-remote
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  name=mongo-remote
  Containers:
   mongocontainer:
    Image:        mongo:4.0.3
    Port:         5000/TCP
    Host Port:    0/TCP
    Command:
      mongod
      --bind_ip
      0.0.0.0
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type       Status  Reason
  ----       ------  ------
  Available  True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mongo-remote-655478448b (1/1 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---  ----                   -------
  Normal  ScalingReplicaSet  15m  deployment-controller  Scaled up replica set mongo-remote-655478448b to 1
For the sake of completeness, here is the yaml description of the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-remote
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo-remote
    spec:
      containers:
      - name: mongocontainer
        image: mongo:4.0.3
        imagePullPolicy: Always
        command:
        - "mongod"
        - "--bind_ip"
        - "0.0.0.0"
        ports:
        - containerPort: 5000
          name: mongocontainer
      nodeSelector:
        kubernetes.io/hostname: xxx
I found the mistake (and as I suspected, it was a silly one).
The problem was in the yaml description of the deployment. Since no port was specified in the mongod command, MongoDB was listening on its default port (27017), while the containerPort declared for the container was a different port (5000).
So the solution is either to set the containerPort to MongoDB's default port, like so:
command:
- "mongod"
- "--bind_ip"
- "0.0.0.0"
ports:
- containerPort: 27017
  name: mongocontainer
Or to make mongod listen on the port declared as the containerPort, like so:
command:
- "mongod"
- "--bind_ip"
- "0.0.0.0"
- "--port"
- "5000"
ports:
- containerPort: 5000
  name: mongocontainer
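Either way, a quick check that the Service now points at the port mongod actually listens on (the service and pod names below are placeholders for the ones in your cluster):
kubectl get endpoints <mongo-service>   # should show <pod-ip>:27017 (or :5000 for the second variant)
kubectl exec -it <mongo-pod> -- mongo --port 27017 --eval "db.runCommand({ ping: 1 })"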

What is the meaning of this kubernetes UI error message?

I am running 3 Ubuntu server VMs on my local machine and am trying to manage them with Kubernetes.
The UI does not start by itself when using the start script, so I tried to start up the UI manually using:
kubectl create -f addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
The first command succeeds then I get the following for the second command:
error validating "addons/kube-ui/kube-ui-svc.yaml": error validating
data: [field nodePort: is required, field port: is required]; if you
choose to ignore these errors, turn validation off with
--validate=false
So I tried editing the default kube-ui-svc.yaml file by adding nodePort to the config:
apiVersion: v1
kind: Service
metadata:
  name: kube-ui
  namespace: kube-system
  labels:
    k8s-app: kube-ui
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeUI"
spec:
  selector:
    k8s-app: kube-ui
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30555
But after adding nodePort I get another error:
The Service "kube-ui" is invalid. spec.ports[0].nodePort: invalid
value '30555': cannot specify a node port with services of type
ClusterIP
I cannot get the UI running at my master node's IP. kubectl get nodes returns correct information. Thanks.
I believe you're running into https://github.com/kubernetes/kubernetes/issues/8901 with the first error; can you set nodePort to 0? Setting a nodePort with service.Type=ClusterIP doesn't make sense, so the second error is legitimate.
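If NodePort access to the UI is actually wanted, a sketch of the alternative (reusing the create commands from the question) is to make the service itself a NodePort service rather than forcing a nodePort onto a ClusterIP one:
kubectl delete svc kube-ui --namespace=kube-system          # if the service already exists
# add "type: NodePort" under spec: in kube-ui-svc.yaml, then:
kubectl create -f addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system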