I'm also trying to expose a MySQL server instance on a local Kubernetes installation (1 master and 1 node, both on Oracle Linux), but I am not able to access the pod.
The pod configuration is this:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 1
    image: docker.io/mariadb
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "123456"
    ports:
    - containerPort: 3306
      name: mysql
And the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306
  selector:
    name: mysql
I can see that the pod is running:
# kubectl get pod mysql
NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 3d
And the service is connected to an endpoint:
# kubectl describe service mysql
Name: mysql
Namespace: default
Labels: name=mysql
Selector: name=mysql
Type: NodePort
IP: 10.254.200.20
Port: <unset> 3306/TCP
NodePort: <unset> 30306/TCP
Endpoints: 11.0.14.2:3306
Session Affinity: None
No events.
I can see on netstat that kube-proxy is listening on port 30306 for all incoming connections.
tcp6 6 0 :::30306 :::* LISTEN 53039/kube-proxy
But somehow I don't get a response from mysql even on the localhost.
# telnet localhost 30306
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Whereas a normal MySQL installation responds with something like the following:
$ telnet [REDACTED] 3306
Trying [REDACTED]...
Connected to [REDACTED].
Escape character is '^]'.
N
[REDACTED]-log�gw&TS(gS�X]G/Q,(#uIJwmysql_native_password^]
Notice the mysql part in the last line.
On a final note there is this kubectl output:
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 9d
mysql 10.254.200.20 nodes 3306/TCP 1h
But I don't understand what "nodes" mean in the EXTERNAL-IP column.
So what I want is to open access to the mysql service through the master IP (preferably). How do I do that, and what am I doing wrong?
I'm still not sure how to make clients connect to a single server that transparently routes all connections to the minions.
-> To do this you need a load balancer, which unfortunately is not a default Kubernetes building block.
You need to set up a reverse proxy that will send the traffic to the minion, like an nginx pod, and a service using hostPort: <port> that will bind the port on the host. That means the pod needs to stay on that node; to do that you would want to use a DaemonSet, for example one that uses the node name as a selector.
Obviously, this is not very fault tolerant, so you can set up multiple reverse proxies and use DNS round-robin resolution to forward traffic to one of the proxy pods.
Somewhere, at some point, you need a fixed IP to talk to your service over the internet, so you need to ensure there is a static pod somewhere to handle that.
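A minimal sketch of such a reverse proxy as a DaemonSet (the names and image are assumptions, not from your setup, and the nginx stream configuration that actually forwards to the mysql service would have to be mounted separately):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mysql-proxy
spec:
  selector:
    matchLabels:
      name: mysql-proxy
  template:
    metadata:
      labels:
        name: mysql-proxy
    spec:
      containers:
      - name: nginx
        image: docker.io/nginx
        # binds port 3306 on every node; nginx must be configured
        # to proxy the TCP stream to the mysql service
        ports:
        - containerPort: 3306
          hostPort: 3306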
The NodePort is exposed on each Node in your cluster via the kube-proxy service. To connect, use the IP of that host (e.g. Node01):
telnet [IpOfNode] 30306
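For example (a sketch; the node name and IP are placeholders you would look up in your own cluster):

# see which node the pod landed on, and that node's IP
kubectl get pod mysql -o wide
kubectl get nodes -o wide
# then connect to the NodePort on that node's IP
telnet <node-ip> 30306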
I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawened for each user.
Main services are two Python applications MothershipService and Ship. MothershipService's purpose is to keep track of ship-per-user, do health checks, etc. Ship is running some (untrusted) user code.
MothershipService         Ship-user1
|                | ------- |        |---vol1
|................| -----.  |--------|
                         \
                          \  Ship-user2
                           '-|        |---vol2
                             |--------|
I can get the ship service up and running without problems:
> kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ship-0 1/1 Running 0 7d 10.244.0.91 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ship ClusterIP None <none> 8000/TCP 7d app=ship
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d <none>
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/ship 1/1 7d ship ship
My question is, how do I go about testing this via curl or a browser? These are all backend services, so NodePort doesn't seem like the right approach since none of this should be accessible to the public. Eventually I will build a test suite for all this and deploy on GKE.
ship.yml (pseudo-spec)
kind: Service
metadata:
  name: ship
spec:
  ports:
  - port: 8000
    name: ship
  clusterIP: None # headless service
  ..
---
kind: StatefulSet
metadata:
  name: ship
spec:
  serviceName: "ship"
  replicas: 1
  template:
    spec:
      containers:
      - name: ship
        image: ship
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          name: ship
  ..
One possibility is to use the kubectl port-forward command to expose the pod port locally on your system. For example, if I use this deployment to run a simple web server listening on port 8000:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - args:
        - --port
        - "8000"
        image: docker.io/alpinelinux/darkhttpd
        name: web
        ports:
        - containerPort: 8000
          name: http
I can expose that on my local system by running:
kubectl port-forward deploy/example 8000:8000
As long as that port-forward command is running, I can point my browser (or curl) at http://localhost:8000 to access the service.
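For example, while the port-forward is running in one terminal, in another terminal:

curl http://localhost:8000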
Alternatively, I can use kubectl exec to run commands (like curl or wget) inside the pod:
kubectl exec -it deploy/example -- wget -O- http://127.0.0.1:8000
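Applied to your setup, a sketch (assuming the ship container really listens on port 8000, as the spec suggests) would be to port-forward the StatefulSet pod directly:

kubectl port-forward pod/ship-0 8000:8000
# then, from another terminal on your machine:
curl http://localhost:8000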
Example process for creating a Kubernetes Service object that exposes an external IP address:
**Creating a service for an application running in five pods:**
Run a Hello World application in your cluster:
kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
The preceding command creates a Deployment object and an associated ReplicaSet object. The ReplicaSet has five Pods, each of which runs the Hello World application.
Display information about the Deployment:
kubectl get deployments hello-world
kubectl describe deployments hello-world
Display information about your ReplicaSet objects:
kubectl get replicasets
kubectl describe replicasets
Create a Service object that exposes the deployment:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
Display information about the Service:
kubectl get services my-service
The output is similar to this:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service 10.3.245.137 104.198.205.71 8080/TCP 54s
Note: If the external IP address is shown as <pending>, wait for a minute and enter the same command again.
Display detailed information about the Service:
kubectl describe services my-service
The output is similar to this:
Name: my-service
Namespace: default
Labels: run=load-balancer-example
Selector: run=load-balancer-example
Type: LoadBalancer
IP: 10.3.245.137
LoadBalancer Ingress: 104.198.205.71
Port: <unset> 8080/TCP
NodePort: <unset> 32377/TCP
Endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
Session Affinity: None
Events:
Make a note of the external IP address exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port. In this example, the port is 8080.
In the preceding output, you can see that the service has several endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more. These are internal addresses of the pods that are running the Hello World application. To verify these are pod addresses, enter this command:
kubectl get pods --output=wide
The output is similar to this:
NAME ... IP NODE
hello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a
hello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc
Use the external IP address to access the Hello World application:
curl http://<external-ip>:<port>
where <external-ip> is the external IP address of your Service, and <port> is the value of Port in your Service description.
The response to a successful request is a hello message:
Hello Kubernetes!
Please refer to How to Use external IP in GKE and Exposing an External IP Address to Access an Application in a Cluster for more information.
I have some questions regarding my minikube cluster: specifically, why there needs to be a tunnel, what the tunnel actually is, and where the port numbers come from.
Background
I'm obviously a total kubernetes beginner...and don't have a ton of networking experience.
Ok. I have the following docker image which I pushed to docker hub. It's a hello express app that just prints out "Hello world" at the / route.
Dockerfile:
FROM node:lts-slim
RUN mkdir /code
COPY package*.json server.js /code/
WORKDIR /code
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
I have the following pod spec:
web-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: kahunacohen/hello-kube:latest
    ports:
    - containerPort: 3000
The following service:
web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
  - port: 8080
    targetPort: 3000
    protocol: TCP
    name: http
And the following deployment:
web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
      service: web-service
  template:
    metadata:
      labels:
        app: web-pod
        service: web-service
    spec:
      containers:
      - name: web
        image: kahunacohen/hello-kube:latest
        ports:
        - containerPort: 3000
          protocol: TCP
All the objects are up and running and look good after I create them with kubectl.
I do this:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h5m
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
Then, as per a book I'm reading, if I do:
$ curl $(minikube ip):8080 # or :32177, # or :3000
I get no response.
However, I found that when I do this, I can access the app by going to http://127.0.0.1:52650/:
$ minikube service web-service
|-----------|-------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|---------------------------|
| default | web-service | http/8080 | http://192.168.49.2:32177 |
|-----------|-------------|-------------|---------------------------|
🏃 Starting tunnel for service web-service.
|-----------|-------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|------------------------|
| default | web-service | | http://127.0.0.1:52472 |
|-----------|-------------|-------------|------------------------|
Questions
what this "tunnel" is and why we need it?
what the targetPort is for (8080)?
What this line means when I do kubectl get services:
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
Specifically, what does that port mapping mean and where does 32177 come from?
Is there some kind of problem with simply mapping the internal port to the same port number externally, e.g. 3000:3000? If so, do we specifically have to provide this mapping?
Let me answer all your questions.
0 - There's no need to create pods separately (unless it's just for testing); this should be done by creating Deployments (or StatefulSets, depending on the app and its needs), which create a ReplicaSet responsible for keeping the right number of pods running. (You can get familiar with Deployments in the Kubernetes documentation.)
1 - The tunnel is used to expose the service from inside the VM where minikube is running to the host machine's network. It works with the LoadBalancer service type. Please refer to access applications in minikube.
1.1 - The reason the application is not accessible on localhost:NodePort is that the NodePort is exposed within the VM where minikube is running, not on your local machine.
You can find the minikube VM's IP by running minikube ip and then curl <minikube-ip>:<NodePort>. You should get a response from your app.
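For instance, with your web-service (NodePort 32177 taken from your kubectl get services output):

curl http://$(minikube ip):32177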
2 - targetPort indicates the port on the pod (container) to which the service should forward connections. Please refer to defining a service.
In the minikube output it may be confusing, since the URL points to the service port, not to the targetPort defined within the service. I think the idea was to indicate on which port the service is accessible within the cluster.
3 - As for this question, the column headers can be read literally. For instance:
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-service NodePort 10.106.206.158 <none> 80:30001/TCP 21m app=web-pod
NodePort comes from your web-service.yaml for the service object. The type is explicitly specified and therefore a NodePort is allocated. If you don't specify the type of the service, it will be created as the ClusterIP type and will be accessible only within the Kubernetes cluster. Please refer to Publishing Services (ServiceTypes).
When a service is created with the ClusterIP type, there won't be a NodePort in the output. E.g.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-service ClusterIP 10.106.206.158 <none> 80/TCP 23m
EXTERNAL-IP will be populated when the LoadBalancer service type is used. Additionally, for minikube the address will appear once you run minikube tunnel in a different shell. After that, your service will be accessible on your host machine via the EXTERNAL-IP + service port.
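If you wanted to go that route, a sketch of web-service declared as LoadBalancer instead of NodePort (an assumption about how you might adapt your web-service.yaml, not your current setup):

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer   # minikube tunnel can then assign an external IP
  selector:
    app: web-pod
  ports:
  - port: 8080
    targetPort: 3000
    protocol: TCP
    name: http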
4 - There are no issues with such a mapping. Moreover, this is the default behaviour for Kubernetes:
Note: A Service can map any incoming port to a targetPort. By default
and for convenience, the targetPort is set to the same value as the
port field.
Please refer to defining a service.
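For example, a minimal sketch of your service with a plain 3000:3000 mapping; targetPort can even be omitted because it defaults to the value of port:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
  - port: 3000        # targetPort defaults to 3000 as well
    protocol: TCP
    name: http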
Edit:
Depending on the minikube driver (usually VirtualBox or Docker; on a Linux VM this can be checked in .minikube/profiles/minikube/config.json), minikube can have different port forwardings. E.g. I have a minikube based on the docker driver and I can see some mappings:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebcbc898b557 gcr.io/k8s-minikube/kicbase:v0.0.23 "/usr/local/bin/entr…" 5 days ago Up 5 days 127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp minikube
For instance, port 22 is forwarded for SSH into the minikube VM. This may explain why you got a response from http://127.0.0.1:52650/.
I've deployed a few services and found one service behaving differently from the others. I configured it to listen on port 8090 (which maps to 8443 internally), but requests only work if I send them on port 8080. Here's my yaml file for the service (stripped down to essentials); there is a deployment which encapsulates the service and container.
apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
  labels:
    helm.sh/chart: foo-1
    app.kubernetes.io/name: foo
    app.kubernetes.io/instance: rb-foo
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8090
    targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc
After installing the Helm chart, when I run kubectl get svc, I get the following output:
fooaccess ClusterIP None <none> 8888/TCP 119m
fooset ClusterIP None <none> 8080/TCP 119m
foobus ClusterIP None <none> 6379/TCP 119m
uisvc ClusterIP None <none> 8090/TCP 119m
However, when I ssh into one of the other running containers and issue a curl request on 8090, I get "Connection refused". If I curl "http://uisvc:8080", then I get the right response. The container is running a Spring Boot application which by default listens on 8080. The only explanation I could come up with is that somehow the port/targetPort is being ignored in this config and other pods are reaching the Spring service inside directly.
Is this behaviour correct? Why is it not listening on 8090? How should I make it work this way?
Edit: Output for kubectl describe svc uisvc
Name: uisvc
Namespace: default
Labels: app.kubernetes.io/instance=foo-rba
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=rba
helm.sh/chart=rba-1
Annotations: meta.helm.sh/release-name: foo
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/component=uisvc
Type: ClusterIP
IP: None
Port: http 8090/TCP
TargetPort: 8080/TCP
Endpoints: 172.17.0.8:8080
Session Affinity: None
Events: <none>
This is expected behavior, since you used a headless service.
Headless Services are used as a service discovery mechanism: instead of returning a single DNS A record, the DNS server returns multiple A records for your service, each pointing to the IP of an individual pod backing the service. So you do a simple DNS A record lookup and get the IPs of all of the pods that are part of the service.
Since a headless service doesn't create iptables rules but creates DNS records instead, you interact directly with your pod instead of going through a proxy. So if you resolve <servicename>:<port> you will get <podN_IP>:<port>, and your connection goes to the pod directly. As long as all of this is in the same namespace, you don't have to resolve it by the full DNS name.
With several pods, DNS will give you all of them, just in random order (or in round-robin order). The order depends on the DNS server implementation and settings.
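You can see this from inside any pod in the same namespace; a quick sketch (the pod name is a placeholder, and this assumes its image has nslookup and curl available):

kubectl exec -it <some-pod> -- nslookup uisvc
# should return the backing pod IP(s), e.g. 172.17.0.8
# the connection then goes straight to the pod on its own port:
kubectl exec -it <some-pod> -- curl http://uisvc:8080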
For more reading please visit:
Services networking / headless services
This Stack Overflow question with a great answer explaining how headless services work
I am trying to refresh my K8s knowledge and am following this tutorial, but am running into some problems.
I want to expose that server via kubectl expose pod kubia --type=LoadBalancer --name kubia-http.
Problem: According to my K8s dashboard, kubia-http gets stuck on startup.
Debugging:
kubectl describe endpoints kubia-http gives me
Name: kubia-http
Namespace: default
Labels: run=kubia
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2020-11-20T15:41:29Z
Subsets:
Addresses: 172.17.0.5
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 8080 TCP
Events: <none>
When debugging I tried to answer the following questions:
1.) Is my service missing an endpoint?
kubectl get pods --selector=run=kubia gives me one kubia pod. So, I am not missing an endpoint.
2.) Does my service try to access the wrong port when communicating with the pod?
From my pod yaml:
containers:
- name: kubia
  ports:
  - containerPort: 8080
    protocol: TCP
From my service yaml:
ports:
- protocol: TCP
  port: 8080
  targetPort: 8080
  nodePort: 32689
The service tries to access the correct port.
What is a good approach to debug this problem?
What does the output of the below commands look like?
kubectl get services kubia-http
kubectl describe services kubia-http
Does everything look normal there?
I think you are facing a similar issue to the one mentioned in this question.
So if kubectl get services kubia-http looks good, except for the known expected behavior of the external IP staying pending on minikube, you should be able to access the service using the NodePort or ClusterIP.
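For example (a sketch; NodePort 32689 is taken from your service yaml):

kubectl get services kubia-http
# either via the NodePort on the minikube VM's IP:
curl http://$(minikube ip):32689
# or let minikube resolve the URL (and tunnel if needed):
minikube service kubia-http --url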
Kubernetes version:
v1.10.3
Docker version:
17.03.2-ce
Operating system and kernel:
Centos 7
Steps to Reproduce:
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
Results:
[root@rd07 rd]# kubectl describe services example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations:
Selector: run=load-balancer-example
Type: NodePort
IP: 10.108.214.162
Port: 9090/TCP
TargetPort: 9090/TCP
NodePort: 31105/TCP
Endpoints: 192.168.1.23:9090,192.168.1.24:9090
Session Affinity: None
External Traffic Policy: Cluster
Events:
Expected:
To be able to curl the cluster IP defined in the Kubernetes service.
I'm not exactly sure which IP is the so-called "public-node-ip", so I tried every related IP address; only when using the master IP as the "public-node-ip" does it show "No route to host".
I used netstat to check whether the port is being listened on.
I tried "https://github.com/rancher/rancher/issues/6139" to flush my iptables, and it did not help at all.
I tried "https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/"; "nslookup hostnames.default" is not working.
The services seem to be working perfectly fine, but they still cannot be accessed.
I'm using "calico", and "flannel" was also tried.
I followed many tutorials on creating services; none of them could be accessed.
I'm new to Kubernetes, so I'd appreciate any help.
If you are on a public cloud, you are not supposed to see the public IP address in the output of the ip a command. But even so, the port will be exposed on 0.0.0.0:31105.
Here is a sample file you can check your configuration against:
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: app-name
  name: bss
  namespace: default
spec:
  externalIPs:
  - 172.16.2.2
  - 172.16.2.3
  - 172.16.2.4
  externalTrafficPolicy: Cluster
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: bss
  sessionAffinity: ClientIP
  type: LoadBalancer
status:
  loadBalancer: {}
Just replace <private-ip> under externalIPs: with your nodes' private IPs, and then curl your public IP with your node port.
If you are deploying the application on a cloud provider, also verify in the cloud security groups/firewall that the port is open.
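For example (NodePort 31105 is taken from your service description; the node IP is a placeholder):

curl http://<public-node-ip>:31105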
Hope this may help.
Thank you!
My k8s cluster is 1 master and 1 node.
The service pod is running on the node.
So I used http://nodeip:31105, and it shows "Hello Kubernetes!".
But http://masterip:31105 is still not working; is that supposed to work?
I checked, and port 31105 is being listened on on the master as well.