How to access a server on kubernetes from your local machine? - kubernetes

My company has a kubernetes platform on which I have a pod named ingress running in a namespace my_namespace:
apiVersion: v1
kind: Pod
metadata:
  name: ingress
  labels:
    app: ingress
spec:
  containers:
  - name: ingress
    image: docker:5000/mma/neurotec-ingress
    imagePullPolicy: Always
kubectl get pods -n my_namespace
NAME READY STATUS RESTARTS AGE
ingress 1/1 Running 1 11d
The pod is a server that listens on port 8080.
I also have a service defined that exposes the pod to the outside:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  labels:
    app: ingress
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: ingress
  type: LoadBalancer
kubectl describe service -n my_namespace ingress
Name: ingress
Namespace: my_namespace
Labels: app=ingress
Selector: app=ingress
Type: LoadBalancer
IP: 10.104.95.96
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.16.1.232:8080
I now want to send messages to the server from my local computer. The first thing I want to do is to make sure that its IP address is reachable. However, a simple host command returns errors:
host 10.16.1.232
Host 10.16.1.232 not found: 3(NXDOMAIN)
host ingress.my_namespace.nt // .nt is the company's domain suffix
Host ingress.my_namespace not found: 3(NXDOMAIN)
and if I try to run telepresence it also returns an error:
Looks like there's a bug in our code. Sorry about that!
Traceback (most recent call last):
  File "/usr/bin/telepresence/telepresence/cli.py", line 135, in crash_reporting
    yield
  File "/usr/bin/telepresence/telepresence/main.py", line 65, in main
    remote_info = start_proxy(runner)
  File "/usr/bin/telepresence/telepresence/proxy/__init__.py", line 138, in start_proxy
    run_id=run_id,
  File "/usr/bin/telepresence/telepresence/proxy/remote.py", line 144, in get_remote_info
    runner, deployment_name, deployment_type, run_id=run_id
  File "/usr/bin/telepresence/telepresence/proxy/remote.py", line 91, in get_deployment_json
    return json.loads(output)["items"][0]
IndexError: list index out of range
Question: how to reach a server on kubernetes from your local machine?

I think you should use a Service of type: LoadBalancer (not ClusterIP).
From the Kubernetes documentation:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the
cluster. This is the default ServiceType.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which
the external load balancer routes, are automatically created.
Below is my sample LoadBalancer service file. I exposed port 80 to the internet and mapped it to port 8000 on pods with the label run: my-web-portal.
apiVersion: v1
kind: Service
metadata:
  name: my-web-portal-svc
  labels:
    run: my-web-portal-svc
spec:
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
    name: port-80
  selector:
    run: my-web-portal
  type: LoadBalancer
And here is my deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-portal
spec:
  selector:
    matchLabels:
      run: my-web-portal
  replicas: 1
  template:
    metadata:
      labels:
        run: my-web-portal
    spec:
      containers:
      - name: backend
        image: xxxxxx-backend
To get the endpoints, I run the command
kubectl describe service my-web-portal-svc
The command will print out the endpoints used to connect to your pods:
Type: LoadBalancer
IP: 10.100.108.141
LoadBalancer Ingress: xxxxxxxxxxxxxx.us-west-2.elb.amazonaws.com
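From your local machine you can then hit the service on the load balancer's address (the redacted ELB hostname above); a minimal check, assuming the load balancer has finished provisioning, would be:
curl http://xxxxxxxxxxxxxx.us-west-2.elb.amazonaws.com/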
Leave comments if you have any questions.

Related

Pods can't communicate in Kubernetes

I have two pods. One pod (main) has an endpoint that queries the other pod (other) to get the result. Both pods have Services of type ClusterIP, and the main pod also has an Ingress. The main pod is not able to connect to the other pod to query the given endpoint.
The / endpoint works, but the /other endpoint fails.
Below are the config files:
# main-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: main-service
  labels:
    name: main-service-label
spec:
  selector:
    app: main # label selector of pod, not the deployment
  type: ClusterIP
  ports:
  - port: 8001
    protocol: TCP
    targetPort: 8001
# other-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: other-service
  labels:
    name: other-service-label
spec:
  selector:
    app: other # label selector of pod, not the deployment
  type: ClusterIP
  ports:
  - port: 8002
    protocol: TCP
    targetPort: 8002
All the Docker images, deployment files, ingress, etc. are made available at this repo.
Note:
I entered the other pod using kubectl exec, and I am able to make a curl request to the main pod, but not vice versa. Not sure what is going wrong.
All pods, services are in default namespace.
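For reference, that check goes along these lines (the pod names are placeholders; the service names and ports come from the manifests above):
# from inside the "other" pod, the main pod is reachable through its Service
kubectl exec -it <other-pod-name> -- curl http://main-service:8001/
# the reverse direction, which is the one failing here
kubectl exec -it <main-pod-name> -- curl http://other-service:8002/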

Cannot connect to Kubernetes NodePort Service

I have a running pod that was created with the following pod-definition.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
I then created a Service using the following service-definition.yaml:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 30008
    targetPort: 2019
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
I then ran kubectl describe node minikube to find the Node IP I should be connecting to -- which yielded:
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
But I get no response when I run the following curl command:
curl 192.168.49.2:30008
The request also times out when I try to access 192.168.49.2:30008 from a browser.
The pod logs show that the container is up and running. Why can't I access my Service?
The problem is that you are trying to access your service at the value of the port parameter, which is the port at which the service is exposed inside the cluster, even when using the NodePort type.
The parameter you were looking for is called nodePort, which can optionally be specified together with port and targetPort. Quoting the documentation:
By default and for convenience, the Kubernetes control plane will
allocate a port from a range (default: 30000-32767)
Since you didn't specify the nodePort, one in that range was automatically picked. You can check which one by:
kubectl get svc -owide
And then access your service externally at that port.
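For illustration, the allocated node port shows up in the PORT(S) column; the value below is made up, use whatever your cluster reports (the node IP is the one from kubectl describe node minikube):
kubectl get svc -owide
# NAME                      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE   SELECTOR
# microserviceone-service   NodePort   10.96.123.45   <none>        30008:31537/TCP   5m    app=microservice-one-app-label
curl http://192.168.49.2:31537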
As an alternative, you can change your service definition to be something like:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 30008
    targetPort: 2019
    nodePort: 30008
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
But bear in mind that you may need to delete your service and create it again in order to change the allocated nodePort.
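A sketch of that recreate step, assuming the definition still lives in service-definition.yaml as in the question:
kubectl delete service microserviceone-service
kubectl apply -f service-definition.yaml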
I think you missed the Port in your service:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
and your service should be like this:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 2019
    targetPort: 2019
    nodePort: 30008
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
You can also reach your app through an Ingress after enabling the Minikube ingress addon, if you want to try Ingress with Minikube:
minikube addons enable ingress
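A minimal Ingress for that setup could look like the sketch below (the host name is hypothetical and would need an /etc/hosts entry pointing at the Minikube IP; the backend port is the service port from the question's definition):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microserviceone-ingress
spec:
  rules:
  - host: microserviceone.local   # hypothetical host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: microserviceone-service
            port:
              number: 30008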

Bare-metal k8s ingress with nginx-ingress

I can't apply an ingress configuration.
I need to access a jupyter-lab service by its DNS name
http://jupyter-lab.local
It's deployed to a 3 node bare metal k8s cluster
node1.local (master)
node2.local (worker)
node3.local (worker)
Flannel is installed as the Network controller
I've installed nginx ingress for bare metal like this
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
When deployed, the jupyter-lab pod is on node2 and the NodePort service responds correctly at http://node2.local:30004 (see below).
I'm expecting that the ingress-nginx controller will expose the ClusterIP service by its DNS name... that's what I need. Is that wrong?
This is the ClusterIP (CIP) service, defined with symmetrical ports (8888) to keep it as simple as possible (is that wrong?):
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8888
    targetPort: 8888
  selector:
    app: jupyter-lab
The DNS name jupyter-lab.local resolves to the IP address range of the cluster, but the request times out with no response: Failed to connect to jupyter-lab.local port 80: No route to host
firewall-cmd --list-all shows that port 80 is open on each node
This is the ingress definition for HTTP into the cluster (any node) on port 80 (is that wrong?):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io: /
spec:
  rules:
  - host: jupyter-lab.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-lab-cip
            port:
              number: 80
This is the deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-lab-dpt
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-lab
  template:
    metadata:
      labels:
        app: jupyter-lab
    spec:
      volumes:
      - name: jupyter-lab-home
        persistentVolumeClaim:
          claimName: jupyter-lab-pvc
      containers:
      - name: jupyter-lab
        image: docker.io/jupyter/tensorflow-notebook
        ports:
        - containerPort: 8888
        volumeMounts:
        - name: jupyter-lab-home
          mountPath: /var/jupyter-lab_home
        env:
        - name: "JUPYTER_ENABLE_LAB"
          value: "yes"
I can successfully access jupyter-lab by its NodePort http://node2:30004 with this definition:
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 10003
    targetPort: 8888
    nodePort: 30004
  selector:
    app: jupyter-lab
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
the command kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission returns:
ingress-nginx-controller-admission 10.244.2.4:8443 15m
Am I misconfiguring ports?
Are my "selector:appname" definitions wrong?
Am I missing a part?
How can I debug what's going on?
Other details
I was getting this error when applying an ingress kubectl apply -f default-ingress.yml
Error from server (InternalError): error when creating "minnimal-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
This command kubectl delete validatingwebhookconfigurations --all-namespaces
removed the validating webhook ... was that wrong to do?
I've opened port 8443 on each node in the cluster
Your Ingress is invalid; try the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jupyter-lab.local
    http: # <- removed the -
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # name: jupyter-lab-cip
            name: jupyter-lab-nodeport
            port:
              number: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8888
    targetPort: 8888
  selector:
    app: jupyter-lab
If I understand correctly, you are trying to expose jupyter-lab through the nginx ingress proxy and make it accessible on port 80.
Run the following command to check which NodePort is used by the nginx ingress service:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.96.240.73 <none> 80:30816/TCP,443:31475/TCP 3h30m
In my case that is port 30816 (for http) and 31475 (for https).
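Once a working Ingress is applied, the app would in principle already be reachable through the controller's NodePort by supplying the Ingress host header; a quick check (node name from the question, port from the output above) would be:
curl -H "Host: jupyter-lab.local" http://node2.local:30816/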
With the NodePort type you can only use ports in the range 30000-32767 (k8s docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). You can change that with the kube-apiserver flag --service-node-port-range, setting it to e.g. 80-32767, and then set nodePort: 80 in your ingress-nginx-controller service:
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    helm.sh/chart: ingress-nginx-3.23.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 80 # <- HERE
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 443 # <- HERE
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
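For context, on a kubeadm-based cluster that flag is typically added to the API server's static pod manifest; a rough sketch (the path is the kubeadm default and may differ on your nodes):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=80-32767   # added flag; existing flags stay as they are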
Although it is generally not advised to change service-node-port-range, since you may encounter issues if you use ports that are already open on the nodes (e.g. port 10250, which is opened by the kubelet on every node).
What might be a better solution is to use MetalLB.
EDIT:
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
Assuming you don't need a failure-tolerant solution, download the https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml file and change the ports: section of the Deployment object as follows:
ports:
- name: http
  containerPort: 80
  hostPort: 80 # <- add this line
  protocol: TCP
- name: https
  containerPort: 443
  hostPort: 443 # <- add this line
  protocol: TCP
- name: webhook
  containerPort: 8443
  protocol: TCP
and apply the changes:
kubectl apply -f deploy.yaml
Now run:
$ kubectl get po -n ingress-nginx ingress-nginx-controller-<HERE PLACE YOUR HASH> -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-controller-67897c9494-c7dwj 1/1 Running 0 97s 172.17.0.6 <node_name> <none> <none>
Notice the <node_name> in the NODE column. This is the name of the node where the pod got scheduled. Now take this node's IP and add it to your /etc/hosts file.
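For example, the /etc/hosts entry would look like this (the IP is a placeholder for the scheduled node's address):
# /etc/hosts
192.168.1.12   jupyter-lab.local   # placeholder: use the IP of <node_name>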
It should work now (go to http://jupyter-lab.local to check it), but this solution is fragile: if the nginx ingress controller pod gets rescheduled to another node it will stop working (and it will stay like this until you change the IP in the /etc/hosts file). It's also generally not advised to use the hostPort: field unless you have a very good reason to do so, so don't abuse it.
If you need failure tolerant solution, use MetalLB and create a service of type LoadBalancer for nginx ingress controller.
I haven't tested it but the following should do the job, assuming that you correctly configured MetalLB:
kubectl delete svc -n ingress-nginx ingress-nginx-controller
kubectl expose deployment -n ingress-nginx ingress-nginx-controller --type LoadBalancer

MetalLB External IP to Internet

I can't access the public IP assigned by the MetalLB load balancer.
I created a Kubernetes cluster in Contabo. It's 1 master and 2 workers. Each one has its own public IP.
I did it with kubeadm + flannel. Later I installed MetalLB to use load balancing.
I used this manifest for installing nginx:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
It works, pods are running. I see the external IP address after:
kubectl get services
From each node/host I can curl to that ip and port and I can get nginx's:
<h1>Welcome to nginx!</h1>
So far, so good. BUT:
What I'm still missing is access to that service (nginx) from my computer.
I can try to access each node (master + 2 workers) by its IP:PORT and nothing happens. The final goal is to have a domain that points to that service, but I can't tell which IP I should use.
What am I missing?
Should MetalLB just expose my 3 possible IPs?
Should I add something else on each server as a reverse proxy?
I'm asking this here because all articles/tutorials on bare metal/VPS (non-AWS, GKE, etc.) do this with a cluster on localhost and miss this basic issue.
Thanks.
I am having the very same hardware layout:
a 3-node Kubernetes cluster, here with the 3 IPs:
123.223.149.27
22.36.211.68
192.77.11.164
running on (different) VPS providers (joined to a running cluster via JOIN, of course)
Target: "expose" the nginx via MetalLB, so I can access my web app from outside the cluster via browser, via the IP of one of my VPSes
Problem: I do not have a "range of IPs" I could declare for MetalLB
Steps done:
create one .yaml file for the Loadbalancer, the kindservicetypeloadbalancer.yaml
create one .yaml file for the ConfigMap, containing the IPs of the 3 nodes, the kindconfigmap.yaml
### start of the kindservicetypeloadbalancer.yaml
### for ensuring a unique name: loadbalancer name nginxloady
apiVersion: v1
kind: Service
metadata:
  name: nginxloady
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Below is the second .yaml file to be added to the cluster:
# start of the kindconfigmap.yaml
## info: the "production-public-ips" can be found
## within the annotations section of the kind: Service type: loadbalancer / the kindservicetypeloadbalancer.yaml
## as well... ...namespace: metallb-system & protocol: layer2
## note: as you can see, I added a /32 after every one of my node IPs
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production-public-ips
      protocol: layer2
      addresses:
      - 123.223.149.27/32
      - 22.36.211.68/32
      - 192.77.11.164/32
add the LoadBalancer:
kubectl apply -f kindservicetypeloadbalancer.yaml
add the ConfigMap:
kubectl apply -f kindconfigmap.yaml
Check the status of the namespace (-n) metallb-system:
kubectl describe pods -n metallb-system
PS:
actually it is all there:
https://metallb.universe.tf/installation/
and here:
https://metallb.universe.tf/usage/#requesting-specific-ips
What you are missing is a routing policy.
Your external IP addresses must belong to the same network as your nodes; alternatively, you can add a route to your external address at the default-gateway level and use a static NAT for each address.
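As a rough sketch of that second option on a Linux box acting as the gateway (all addresses here are illustrative placeholders, not values from the question):
# route the extra public address towards the node that should receive it
ip route add 203.0.113.10/32 via 192.0.2.21
# static NAT: forward traffic arriving for that address to the node
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 192.0.2.21
iptables -t nat -A POSTROUTING -s 192.0.2.21 -j SNAT --to-source 203.0.113.10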

Unable to access exposed port on kubernetes

I have built a custom tcserver image exposing ports 80, 8080 and 8443. Basically there is an Apache, and inside its configuration a ProxyPass forwards requests to the tcserver Tomcat.
EXPOSE 80 8080 8443
After that I created a Kubernetes YAML to create the pod, exposing only port 80.
apiVersion: v1
kind: Pod
metadata:
  name: tcserver
  namespace: default
spec:
  containers:
  - name: tcserver
    image: tcserver-test:v1
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
And the service along with it.
apiVersion: v1
kind: Service
metadata:
  name: tcserver-svc
  labels:
    app: tcserver
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: tcserver
But the problem is that I'm unable to access it.
If I log into the pod (kubectl exec -it tcserver -- /bin/bash), I'm able to do a curl -k -v http://localhost and it replies.
I believe I'm doing something wrong with the service, but I don't know what.
Any help will be appreciated.
SVC change
As suggested by sfgroups, I added targetPort: 80 to the svc, but it is still not working.
When I try to curl the IP, I get a No route to host
[root@testmaster tcserver]# curl -k -v http://172.30.62.162:30080/
* About to connect() to 172.30.62.162 port 30080 (#0)
* Trying 172.30.62.162...
* No route to host
* Failed connect to 172.30.62.162:30080; No route to host
* Closing connection 0
curl: (7) Failed connect to 172.30.62.162:30080; No route to host
This is the describe from the svc:
[root@testmaster tcserver]# kubectl describe svc tcserver-svc
Name: tcserver-svc
Namespace: default
Labels: app=tcserver
Annotations: <none>
Selector: app=tcserver
Type: NodePort
IP: 172.30.62.162
Port: <unset> 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
When you look at the kubectl describe service output, you'll see it's not actually attached to any pods:
Endpoints: <none>
That's because you say in the service spec that the service will attach to pods labeled with app: tcserver
spec:
  selector:
    app: tcserver
But, in the pod spec's metadata, you don't specify any labels at all
metadata:
  name: tcserver
  namespace: default
  # labels: {}
And so the fix here is to add the appropriate label to the pod spec:
metadata:
  labels:
    app: tcserver
Also note that it's a little unusual in practice to deploy a bare pod. Usually they're wrapped up in a higher-level controller, most often a deployment, that actually creates the pods. The deployment spec has a template pod spec and it's the pod's labels that matter.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcserver
  # Labels here are useful, but the service doesn't look for them
spec:
  template:
    metadata:
      labels:
        # These labels are what the service cares about
        app: tcserver
    spec:
      containers: [...]
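Once the label is in place, one way to confirm the wiring (the output shown is illustrative) is to check that the service picked up endpoints:
kubectl get endpoints tcserver-svc
# NAME           ENDPOINTS       AGE
# tcserver-svc   10.244.1.7:80   2m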
I see targetPort is missing; can you add the target port and test?
apiVersion: v1
kind: Service
metadata:
  name: tcserver-svc
  labels:
    app: tcserver
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    targetPort: 80
  selector:
    app: tcserver