Rancher Desktop port forwarding not working - kubernetes

I created a service exposing port 8000/TCP, and I am trying to get that port forwarded to my Mac.
$ kubectl describe svc ddb
Name: ddb
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=local-ddb
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.215.24
IPs: 10.43.215.24
Port: <unset> 8000/TCP
TargetPort: ddb-http-port/TCP
Endpoints: 10.42.0.91:8000
Session Affinity: None
Events: <none>
The port does indeed seem to be open on my Mac (note: I found the local port number by looking at the Rancher Desktop GUI):
lsof | grep 56107
Rancher 50831 victorbarbu 84u IPv4 0x8f04a06a4fb87541 0t0 TCP localhost:56107 (LISTEN)
Doing curl http://localhost:56107 just hangs; however, running:
$ kubectl run bb -i --tty --image alpine
# curl http://ddb:8000
{"__type":"com.[TRUNCATED]
Works as expected. Why could that be? How do I fix it?
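One workaround worth trying (a sketch, not verified here): bypass Rancher Desktop's own forwarder entirely and let kubectl do the port forwarding. The local port 8000 is an arbitrary choice:
$ kubectl port-forward svc/ddb 8000:8000
The forward runs in the foreground; from a second terminal, curl http://localhost:8000 should then reach the service through kubectl's tunnel instead of Rancher Desktop's.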

Related

How to reach the external-IP in a K8s namespace (installing JupyterHub)

I am following the instructions here:
https://z2jh.jupyter.org/en/stable/jupyterhub/installation.html
to install JupyterHub locally with Kubernetes and minikube.
It's almost done, as can be seen in the screenshot.
the namespace is called k8s-namespace-jose
I had to run the command:
kubectl --namespace k8s-namespace-jose get service proxy-public --output jsonpath='{.status.loadBalancer.ingress[].ip}'
In order to get the EXTERNAL-IP shown above.
The thing is that going to:
http://104.196.41.97 does not work (server not responding)
nor does the following work:
http://104.196.41.97:80
(the error screenshot is omitted here)
What can I do to get my JupyterHub working on my local server?
EDIT:
To give all the info about the LoadBalancer service:
Name: proxy-public
Namespace: k8s-namespace-jose
Labels: app=jupyterhub
app.kubernetes.io/managed-by=Helm
chart=jupyterhub-2.0.0
component=proxy-public
heritage=Helm
release=helm-release-name-jose
Annotations: meta.helm.sh/release-name: helm-release-name-jose
meta.helm.sh/release-namespace: k8s-namespace-jose
Selector: component=proxy,release=helm-release-name-jose
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.131.233
IPs: 10.103.131.233
External IPs: 104.196.41.97
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32297/TCP
Endpoints: 10.244.0.13:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Due to the minikube installation, the address shown is probably the minikube cluster's address and not a reachable External-IP.
Can you try hitting the minikube IP instead of the External-IP?
sgrigori@sgrigori-vbox:~/Dev/jupyterhub$ minikube ip
192.168.49.2
and use your node port 32297
http://192.168.49.2:32297
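If that still doesn't respond, minikube can also construct the URL for you, or open a tunnel so LoadBalancer services get a reachable IP (both are standard minikube commands; the namespace comes from the question):
$ minikube service proxy-public -n k8s-namespace-jose --url
$ minikube tunnel
The first prints a http://<minikube-ip>:<nodeport> URL; the second keeps running in its own terminal and assigns the LoadBalancer a locally routable IP.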

Minio Deployment in Kubernetes : Console getting redirected

I made a MinIO deployment in my 2-node Kubernetes cluster using YAML files.
I used an NFS server for the corresponding PersistentVolume and the PVC associated with it.
Once the pod was running, I created a service to access the console from the browser.
But when I tried the URL "http://<host-ip-address:nodePort>", it was redirected to port 45893 with the message "This site cannot be reached."
After many tries, I got a solution with the help of a friend.
We created a copy of the service, changed its port to the one the MinIO console was being redirected to, and set its nodePort to a random port allowed in the firewall. This resolved the issue.
service.yaml
type: LoadBalancer
ports:
- port: 9000
  nodePort: 32767
  protocol: TCP
selector:
service_copy.yaml
ports:
- port: 45893
  nodePort: 32766
  protocol: TCP
selector:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP X.X.X.X <none> 443/TCP 25d
minio-xxx-service NodePort X.X.X.X <none> 9000:32767/TCP 3d23h
minio-xxxx-service-cp NodePort X.X.X.X <none> 45893:32766/TCP 146m
After doing the same, I was able to access the console.
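For context: the MinIO server apparently picks a random port for its embedded console unless one is pinned, which is why the redirect target looked arbitrary. An alternative sketch, assuming a reasonably recent MinIO image that supports --console-address: pin the console port in the container args so a single service suffices:
args:
- server
- /data
- --console-address
- ":9090"
Then expose 9090 alongside 9000 in the existing service instead of maintaining a copy.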

Why can Kubernetes services not be resolved?

In my namespace I have services
k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blue-service NodePort 10.107.127.118 <none> 80:32092/TCP 60m
demo ClusterIP 10.111.134.22 <none> 80/TCP 3d
I added blue-service to /etc/hosts
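The entry maps the name to the ClusterIP (as the wget output below confirms):
10.107.127.118 blue-service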
It fails again:
wget -O- blue-service
--2022-06-13 11:11:32-- http://blue-service/
Resolving blue-service (blue-service)... 10.107.127.118
Connecting to blue-service (blue-service)|10.107.127.118|:80... failed: Connection timed out.
Retrying.
I decided to check with describe:
Name: blue-service
Namespace: default
Labels: app=blue
Annotations: <none>
Selector: app=blue
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.107.127.118
IPs: 10.107.127.118
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32092/TCP
Endpoints: 172.17.0.39:8080,172.17.0.40:8080,172.17.0.41:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Why?
The services you are referring to do not have an external IP (the External IP field is empty), so you cannot reach them from outside the cluster.
If you want to access those services, you either need to
make them a LoadBalancer service type, which will give them an external IP,
or
use kubectl port-forward to connect a local port on your machine to the service, then use localhost:xxxx to access the service (see the sketch below).
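A minimal sketch of the port-forward option (the local port 8080 is an arbitrary choice):
$ kubectl port-forward svc/blue-service 8080:80
$ wget -O- http://localhost:8080
The forward keeps running in the foreground, so run the wget from a second terminal.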
If you want to map a DNS name to the service, you should look at the ExternalDNS project, which will allow you to create DNS entries in your provider's DNS service (if you are running the cluster on a managed platform).
Or, use nip.io if you're only testing.

Why can I not use port 80 when using K3s Kubernetes

I have a simple NodeJS project running on a K3s cluster on a Raspberry Pi 4. The cluster has a service to expose it. The code is as follows...
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
I want to try and use port 80 instead of 3000 so I try...
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
But it can't use the port.
Warning FailedScheduling 5m 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
Why am I having issues?
Update
Per the answer I tried...
pi@raspberrypi:~ $ sudo netstat -tulpn | grep :80
pi@raspberrypi:~ $ sudo ss -tulpn | grep :80
pi@raspberrypi:~ $
My guess is this is a K3s or Pi limitation.
Update 2
When I run kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 24d
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 24d
kube-system metrics-server ClusterIP 10.43.48.200 <none> 443/TCP 24d
kube-system traefik-prometheus ClusterIP 10.43.89.96 <none> 9100/TCP 24d
kube-system traefik LoadBalancer 10.43.65.154 192.168.x.xxx 80:31065/TCP,443:32574/TCP 24d
test-namespace app-tier LoadBalancer 10.43.190.179 192.168.x.xxx 3000:31500/TCP 4d
k3s comes with a pre-installed Traefik ingress controller which binds to ports 80, 443 and 8080 on the host, although you should have seen that with ss or netstat.
You should see this service if you run:
kubectl get service --all-namespaces
Although, you should have seen it with netstat or ss if something were actually using the port. But maybe this service also failed to deploy and yet somehow still blocks k3s from taking the port.
Another thing I can think of: Are you running the experimental rootless setup?
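If the bundled Traefik turns out to be the culprit and you don't need it, a sketch of the usual fix (assuming a standard k3s install; on older k3s versions the flag is --no-deploy traefik instead):
$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
After that, re-applying the service with port: 80 should schedule, since nothing else claims the host port.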

Kubernetes ExternalName service not visible in DNS

I'm trying to expose a single database instance as a service in two Kubernetes namespaces. Kubernetes version 1.11.3 running on Ubuntu 16.04.1. The database service is visible and working in the default namespace. I created an ExternalName service in a non-default namespace referencing the fully qualified domain name in the default namespace as follows:
kind: Service
apiVersion: v1
metadata:
  name: ws-mysql
  namespace: wittlesouth
spec:
  type: ExternalName
  externalName: mysql.default.svc.cluster.local
  ports:
  - port: 3306
The service is running:
eric$ kubectl describe service ws-mysql --namespace=wittlesouth
Name: ws-mysql
Namespace: wittlesouth
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ExternalName
IP:
External Name: mysql.default.svc.cluster.local
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
If I check whether the service can be found by name from a pod running in the wittlesouth namespace, this service name does not resolve, but other services in that namespace (e.g. Jira) do:
root@rs-ws-diags-8mgqq:/# nslookup mysql.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 10.99.120.208
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth.svc.cluster.local: No answer
root@rs-ws-diags-8mgqq:/# nslookup jira.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: jira.wittlesouth.svc.cluster.local
Address: 10.105.30.239
Any thoughts on what might be the issue here? For the moment I've worked around it by updating the applications that need the database to reference the fully qualified domain name of the service running in the default namespace, but I'd prefer to avoid that. My eventual intent is for each namespace to have its own database instance, and I would like to deploy apps configured that way now, in advance of actually standing up the second instance.
This doesn't work for me either, with Kubernetes 1.11.2, CoreDNS and Calico. It works only if you reference the target service directly, by its fully qualified name in whichever namespace it runs:
$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
mysql-0 2/2 Running 0 17m
mysql-1 2/2 Running 0 16m
$ kubectl get pods -n wittlesouth
NAME READY STATUS RESTARTS AGE
ricos-dummy-pod 1/1 Running 0 14s
kubectl exec -it ricos-dummy-pod -n wittlesouth -- bash
root@ricos-dummy-pod:/# ping mysql.default.svc.cluster.local
PING mysql.default.svc.cluster.local (192.168.1.40): 56 data bytes
64 bytes from 192.168.1.40: icmp_seq=0 ttl=62 time=0.578 ms
64 bytes from 192.168.1.40: icmp_seq=1 ttl=62 time=0.632 ms
64 bytes from 192.168.1.40: icmp_seq=2 ttl=62 time=0.628 ms
^C--- mysql.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.578/0.613/0.632/0.025 ms
root@ricos-dummy-pod:/# ping ws-mysql
ping: unknown host
root@ricos-dummy-pod:/# exit
$ kubectl get svc mysql
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP None <none> 3306/TCP 45d
$ kubectl describe svc mysql
Name: mysql
Namespace: default
Labels: app=mysql
Annotations: <none>
Selector: app=mysql
Type: ClusterIP
IP: None
Port: mysql 3306/TCP
TargetPort: 3306/TCP
Endpoints: 192.168.1.40:3306,192.168.2.25:3306
Session Affinity: None
Events: <none>
The ExternalName service feature is only supported using kube-dns, as per the docs, and Kubernetes 1.11.x defaults to CoreDNS. You might want to try changing from CoreDNS to kube-dns, or possibly changing the config of your CoreDNS deployment. I expect this to be available at some point using CoreDNS.
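If you go the CoreDNS-config route, the Corefile lives in a ConfigMap in kube-system (a minimal way to inspect and edit it, assuming a standard kubeadm-style cluster):
$ kubectl -n kube-system get configmap coredns -o yaml
$ kubectl -n kube-system edit configmap coredns
CoreDNS picks up Corefile changes on its own if its reload plugin is enabled; otherwise restart the coredns pods after editing.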